Image Title

Search Results for map:

Keynote Analysis with theCUBE | AWS re:Invent 2022


 

(bright music) >> Hello, everyone. Welcome back to live coverage day two or day one, day two for theCUBE, day one for the event. I'm John Furrier, host of theCUBE. It's the keynote analysis segment. Adam just finished coming off stage. I'm here with Dave Vellante and Zeus Kerravala, with principal analyst at ZK Research, Zeus, it's great to see you. Dave. Guys, the analysis is clear. AWS is going NextGen. You guys had a multi-day analyst sessions in on the pre-briefs. We heard the keynote, it's out there. Adam's getting his sea legs, so to speak, a lot of metaphors around ocean. >> Yeah. >> Space. He's got these thematic exploration as he chunked his keynote out into sections. Zeus, a lot of networking in there in terms of some of the price performance, specialized instances around compute, this end-to-end data services. Dave, you were all over this data aspect going into the keynote and obviously, we had visibility into this business transformation theme. What's your analysis? Zeus, we'll start with you. What's your take on what Amazon web service is doing this year and the keynote? What's your analysis? >> Well, I think, there was a few key themes here. The first one is I do think we're seeing better integration across the AWS portfolio. Historically, AWS makes a lot of stuff and it's not always been easy to use say, Aurora and Redshift together, although most customers buy them together. So, they announce the integration of that. It's a lot tighter now. It's almost like it could be one product, but I know they like to keep the product development separately. Also, I think, we're seeing a real legitimization of AWS in a bunch of areas where people said it wasn't possible before. Last year, Nasdaq said they're running in the cloud. The Options Exchange today announced that they're going to be moving to the cloud. Contact centers running the cloud for a lot of real time voice. And so, things that we looked at before and said those will never move to the cloud have now moved to the cloud. And I think, my third takeaway is just AWS is changing and they're now getting into areas to allow customers to do things they couldn't do before. So, if you look at what they're doing in the area of AI, a lot of their AI and ML services before were prediction. And I'm not saying you need an AI, ML to do prediction, was certainly a lot more accurate, but now they're getting into generative data. So, being able to create data where data didn't exist before and that's a whole new use case for 'em. So, AWS, I think, is actually for all the might and power they've had, it's actually stepping up and becoming a much different company now. >> Yeah, I had wrote that post. I had a one-on-one day, got used of the transcript with Adam Selipsky. He went down that route of hey, we going to change NextGen. Oh, that's my word. AWS Classic my word. The AWS Classic, the old school cloud, which a bunch of Lego blocks, and you got this new NextGen cloud with the ecosystems emerging. So, clearly, it's Amazon shifting. >> Yeah. >> But Dave, your breaking analysis teed out the keynote. You went into the whole cost recovery. We heard Adam talk about macro at the beginning of his keynote. He talked about economic impact, sustainability, big macro issues. >> Yeah. >> And then, he went into data and spent most of the time on the keynote on data. Tools, integration, governance, insights. You're all over that. You had that, almost your breaking analysis almost matched the keynote, >> Yeah. >> thematically, macro, cost savings right-sizing with the cloud. And last night, I was talking to some of the marketplace people, we think that the marketplace might be the center where people start managing their cost better. This could have an impact on the ecosystem if they're not in in the marketplace. So, again, so much is going on. >> What's your analogy? >> Yeah, there's so much to unpack, a couple things. One is we get so much insight from theCUBE community plus your sit down 101 with Adam Selipsky allowed us to gather some nuggets, and really, I think, predict pretty accurately. But the number one question I get, if I could hit the escape key a bit, is what's going to be different in the Adam Selipsky era that was different from the Jassy era. Jassy was all about the primitives. The best cloud. And Selipsky's got to double down on that. So, he's got to keep that going. Plus, he's got to do that end-to-end integration and he's got to do the deeper business integration, up the stack, if you will. And so, when you're thinking about the keynote and the spirit of keynote analysis, we definitely heard, hey, more primitives, more database features, more Graviton, the network stuff, the HPC, Graviton for HPC. So, okay, check on that. We heard some better end-to-end integration between the elimination of ETL between Aurora and Redshift. Zeus and I were sitting next to each other. Okay, it's about time. >> Yeah. >> Okay, finally we got that. So, that's good. Check. And then, they called it this thing, the Amazon data zones, which was basically extending Redshift data sharing within your organization. So, you can now do that. Now, I don't know if it works across regions. >> Well, they mentioned APIs and they have the data zone. >> Yep. And so, I don't know if it works across regions, but the interesting thing there is he specifically mentioned integration with Snowflake and Tableau. And so, that gets me to your point, at the end of the day, in order for Amazon, and this is why they win, to succeed, they've got to have this ecosystem really cranking. And that's something that is just the secret sauce of the business model. >> Yeah. And it's their integration into that ecosystem. I think, it's an interesting trend that I've seen for customers where everybody wanted best of breed, everybody wanted disaggregated, and their customers are having trouble now putting those building blocks together. And then, nobody created more building blocks than AWS. And so, I think, under Adam, what we're seeing is much more concerted effort to make it easier for customers to consume those building blocks in an easy way. And the AWS execs >> Yeah. >> I talked to yesterday all committed to that. It's easy, easy, easy. And I think that's why. (Dave laughing) Yeah, there's no question they've had a lead in cloud for a long time. But if they're going to keep that, that needs to be upfront. >> Well, you're close to this, how easy is it? >> Yeah. >> But we're going to have Adrian Cockcroft (Dave laughing) on at the end of the day today, go into one analysis. Now, that- >> Well, less difficult. >> How's that? (indistinct) (group laughing) >> There you go. >> Adrian retired from Amazon. He's a CUBE analyst retiree, but he had a good point. You can buy the bag of Lego blocks if you want primitives >> Yeah. >> or you can buy the toy that's glued together. And it works, but it breaks. And you can't really manage it, and you buy a new one. So, his metaphor was, okay, if the primitives allow you to construct a durable solutions, a lot harder relative to rolling your own, not like that, but also the simplest out-of-the box capability is what people want. They want solutions. We call Adam the solutions CEO. So, I think, you're going to start to see this purpose built specialized services allow the ecosystem to build those toys, so that the customers can have an out-of-the box experience while having the option for the AWS Classic, which is if you want durability, you want to tune it, you want to manage it, that's the way to go for the hardcore. Now, can be foundational, but I just see the solutions things being very much like an out-of-the-box. Okay, throw away, >> Yeah. >> buy a new toy. >> More and more, I'm saying less customers want to be that hardcore assembler of building blocks. And obviously, the really big companies do, but that line is moving >> Yeah. >> and more companies, I think, just want to run their business and they want those prebuilt solutions. >> We had to cut out of the keynote early. But I didn't hear a lot about... The example that they often use is Amazon Connect, the call center solution. >> Yeah. >> I didn't hear a lot to that in the keynote. Maybe it's happening right now, but look, at the end of the day, suites always win. The best of breed does well, (John laughing) takes off, generate a couple billion, Snowflake will grow, they'll get to 10 billion. But you look at Oracle, suites work. (laughs) >> Yeah. >> What I found interesting about the keynote is that he had this thematic exploration themes. First one was space that was like connect the dot, the nebula, different (mumbles) lens, >> Ocean. >> ask the right questions. (Dave laughing) >> Ocean was security which bears more, >> Yeah. >> a lot more needed to manage that oxygen going deep. Are you snorkeling? Are you scuba diving? Barely interesting amount of work. >> In Antarctica. >> Antarctica was the performance around how you handle tough conditions and you've got to get that performance. >> Dave: We're laughing, but it was good. >> But the day, the Ocean Day- >> Those are very poetic. >> I tweeted you, Dave, (Dave laughing) because I sit on theCUBE in 2011. I hate hail. (Dave laughing) It's the worst term ever. It's the day the ocean's more dynamic. It's a lot more flowing. Maybe 10 years too soon, Dave. But he announces the ocean theme and then says we have a Security Lake. So, like lake, ocean, little fun on words- >> I actually think the Security Lake is pretty meaningful, because we were listening to talk, coming over here talking about it, where I think, if you look at a lot of the existing solutions, security solutions there, I describe 'em as a collection of data ponds that you can view through one map, but they're not really connected. And the amount of data that AWS holds now, arguably more than any other company, if they're not going to provide the Security Lake, who is? >> Well, but staying >> Yeah. >> on security for a second. To me, the big difference between Azure and Amazon is the ecosystem. So, CrowdStrike, Okta, Zscaler, name it, CyberArk, Rapid7, they're all part of this ecosystem. Whereas Microsoft competes with all of those guys. >> Yes. Yeah. >> So it's a lot more white space than the Amazon ecosystem. >> Well, I want to get you guys to take on, so in your reaction, because I think, my vision of what what's happening here is that I think that whole data portion's going to be data as code. And I think, the ecosystem harvests the data play. If you look at AWS' key announcements here, Security Lake, price performance, they're going to optimize for those kinds of services. Look at security, okay, Security Lake, GuardDuty, EKS, that's a Docker. Docker has security problems. They're going inside the container and looking at threat detection inside containers with Kubernetes as the runtime. That's a little nuance point, but that's pretty significant, Dave. And they're now getting into, we're talking in the weeds on the security piece, adding that to their large scale security footprint. Security is going to be one of those things where if you're not on the inside of their security play, you're probably going to be on the outside. And of course, the price performance is going to be the killer. The networking piece surprise me. Their continuing to innovate on the network. What does that mean for Cisco? So many questions. >> We had Ajay Patel on yesterday for VMware. He's an awesome middleware guy. And I was asking about serverless and architectures. And he said, "Look, basically, serverless' great for stateless, but if you want to run state, you got to have control over the run time." But the point he made was that people used to think of running containers with straight VMs versus Fargate or Knative, if you choose, or serverless. They used to think of those as different architectures. And his point was they're all coming together. And it's now you're architecting and calling, which service you need. And that's how people are thinking about future architectures, which I think, makes a lot of sense. >> If you are running managed Kubernetes, which everyone's doing, 'cause no one's really building it in-house themselves. >> No. >> They're running it as managed service, skills gaps and a variety of other reasons. This EKS protection is very interesting. They're managing inside and outside the container, which means that gives 'em visibility on both sides, under the hood and inside the application layer. So, very nuanced point, Zeus. What's your reaction to this? And obviously, the networking piece, I'd love to get your thought. >> Well, security, obviously, it's becoming a... It's less about signatures and more of an analytics. And so, things happen inside the container and outside the container. And so, their ability to look on both sides of that allows you to happen threats in time, but then also predict threats that could happen when you spin the container up. And the difficulty with the containers is they are ephemeral. It's not like a VM where it's a persistent workload that you can do analysis on. You need to know what's going on with the container almost before it spins up. >> Yeah. >> And that's a much different task. So, I do think the amount of work they're doing with the containers gives them that entry into that and I think, it's a good offering for them. On the network side, they provide a lot of basic connectivity. I do think there's a role still for the Ciscos and the Aristas and companies like that to provide a layer of enhanced network services that connects multicloud. 'Cause AWS is never going to do that. But they've certainly, they're as legitimate network vendor as there is today. >> We had NetApp on yesterday. They were talking about latency in their- >> I'll tell you this, the analyst session, Steven Armstrong said, "You are going to hear us talk about multicloud." Yes. We're not going to necessarily lead with it. >> Without a mention. >> Yeah. >> But you said it before, never say never with Amazon. >> Yeah. >> We talk about supercloud and you're like, Dave, ultimately, the cloud guys are going to get into supercloud. They have to. >> Look, they will do multicloud. I predict that they will do multicloud. I'll tell you why. Just like in networking- >> Well, customers are asking for it. >> Well, one, they have the, not by design, but by defaulter and multiple clouds are in their environment. They got to deal with that. I think, the supercloud and sky cloud visions, there will be common services. Remember networking back in the old days when Cisco broke in as a startup. There was no real shortest path, first thinking. Policy came in after you connected all the routers together. So, right now, it's going to be best of breed, low latency, high performance. But I think, there's going to be a need in the future saying, hey, I want to run my compute on the slower lower cost compute. They already got segmentation by their announcements today. So, I think, you're going to see policy-based AI coming in where developers can look at common services across clouds and saying, I want to lock in an SLA on latency and compute services. It won't be super fast compared to say, on AWS, with the next Graviton 10 or whatever comes out. >> Yeah. >> So, I think, you're going to start to see that come in. >> Actually, I'm glad you brought Graviton up too, because the work they're doing in Silicon, actually I think, is... 'Cause I think, the one thing AWS now understands is some things are best optimized in Silicon, some at software layers, some in cloud. And they're doing work on all those layers. And Graviton to me is- >> John: Is a home run. >> Yeah. >> Well- >> Dave, they've got more instances, it's going to be... They already have Gravitons that's slower than the other versions. So, what they going to do, sunset them? >> They don't deprecate anything ever. So, (John laughing) Amazon paid $350 million. People believe that it's a number for Annapurna, which is like one of the best acquisitions in history. (group laughing) And it's given them, it's put them on an arm curve for Silicon that is blowing away Intel. Intel's finally going to get Sapphire Rapids out in January. Meanwhile, Amazon just keeps spinning out new Gravitons and Trainiums. >> Yeah. >> And so, they are on a price performance curve. And like you say, no developer ever wants to run on slower hardware, ever. >> Today, if there's a common need for multicloud, they might say, hey, I got the trade off latency and performance on common services if that's what gets me there. >> Sure. >> If there's maybe a business case to do that. >> Well, that's what they're- >> Which by the way, I want to.... Selipsky had strong quote I thought was, "If you're looking to tighten your belt, the cloud is the place >> Yeah. >> to do it." I thought >> I tweeted that. >> that was very strong. >> Yeah. >> Yeah. >> And I think, he's right. And then, the other point I want to make on that is, I think, I don't have any data on this, but I believe believe just based on some of the discussions I've had that most of Amazon's revenue is on demand. Paid by the drink. Those on demand customers are at risk, 'cause they can go somewhere else. So, they're trying to get you into optimized pricing, whether it's reserved instances or one year or three-year subscriptions. And so, they're working really hard at doing that. >> My prediction on that is that's a great point you brought up. My prediction is that the cost belt tightening is going to come in the marketplace, is going to be a major factor as companies want to get their belts tighten. How they going to do that, Dave? They're going to go in the marketplace saying, hey, I already overpaid a three-year commitment. Can I get some cohesively in there? Can I get some of this or that and the other thing? >> Yep. >> You're going to start to see the vendors and the ecosystem. If they're not in the marketplace, that's where I think, the customers will go. There are other choices to either cut their supplier base or renegotiate. I think, it's going to happen in the marketplace. Let's watch. I think, we're going to watch that grow. >> I actually think the optimization services that AWS has to help customers lower spend is a secret sauce for them that they... Customers tell me all the time, AWS comes in, they'll bring their costs down and they wind up spending more with them. >> Dave: Yeah. >> And the other cloud providers don't do that. And that has been almost a silver bullet for them to get customers to stay with them. >> Okay. And this is always the way. You drop the price of storage, you drop the price of memory, you drop the price of compute, people buy more. And in the question, long term is okay. And does AWS get commoditized? Is that where they're going? Or do they continue to thrive up the stack? John, you're always asking people about the bumper sticker. >> Hold on. (John drowns out Dave) Before we get the bumper sticker, I want to get into what we missed, what they missed on the keynote. >> Yeah, there are some blind spots. >> I think- >> That's good call. >> Let's go around the horn and think what did they miss? I'll start, I think, they missed the developer productivity angle. Supply chain software was not talked about at all. We see that at all the other conferences. I thought that could have been weaved in. >> Dave: You mean security in the supply chain? >> Just overall developer productivity has been one of the most constant themes I've seen at events. Who are building the apps? Who are the builders? What are they actually doing? Maybe Werner will bring that up on his last day, but I didn't hear Adam talk about it all, developer productivity. What's your take in this? >> Yeah, I think, on the security side, they announced security data lake. I think, the other cloud providers do a better job of providing insights on how they do security. With AWS, it's almost a black hole. And I know there's a careful line they walk between what they do, what their partners do. But I do think they could be a little clearer on how they operate, much like Azure and GCP. They announce a lot of stuff on how their operations works and things like that. >> I think, platform across cloud is definitely a blind spot for these guys. >> Yeah. >> I think, look at- >> But none of the cloud providers have embraced that, right? >> It's true. >> Yeah. >> Maybe Google a little bit >> Yeah. >> and Microsoft a little bit. Certainly, AWS hasn't at this point in time, but I think, they perceive the likes of Mongo and Snowflake and Databricks, and others as ISVs and they're not. They're platform players that are building across clouds. They're leveraging, they're building superclouds. So, I think that's an opportunity for the ecosystem. And very curious to see how Amazon plays there down the stream. So, John, what do you think is the bumper sticker? We're only in day one and a half here. What do you think so far the bumper sticker is for re:Invent 2022? >> Well, to me, the day one is about infrastructure performance with the whole what's in the data center? What's at the chip level? Today was about data, specialized services, and security. I think that was the key theme here. And then, that's going to sequence into how they're going to reorganize their ecosystem. They have a new leader, Ruba Borno, who's going to be leading the charge. They've integrated all their bespoke fragmented partner network pieces into one leadership. That's going to be really important to hear that. And then, finally, Werner for developers and event-based services, micro services. What that world's going on, because that's where the developers are. And ultimately, they build the app. So, you got infrastructure, data, specialized services, and security. Machine learning with Swami is going to be huge. And again, how do developers code it all up is going to be key. And is it the bag of Legos or the glued toy? (Dave chuckles) So, what do you want? Out-of-the-box or you want to build your own? >> And that's the bottom line is connecting those dots. All they got to be is good enough. I think, Zeus, to your point, >> Yep. >> if they're just good enough, less complicated, the will keep people on the base. >> Yeah. I think, the bumper stickers, the more you buy, the more you're saving. (John laughing) Because from an operational perspective, they are trying to bring down the complexity level. And with their optimization services and the way their credit model works, I do think they're trending down that path. >> And my bumper sticker's ecosystem, ecosystem, ecosystem. This company has 100,000 partners and that is a business model secret weapon. >> All right, there it is. The keynote announced. More analysis coming up. We're going to have the leader of (indistinct) coming up next, here on to break down their perspective, you got theCUBE's analyst perspective here. Thanks for watching. Day two, more live coverage for the next two more days, so stay with us. I'm John Furrier with Dave Vellante and Zeus Kerravala here on theCUBE. Be right back. (bright music)

Published Date : Nov 29 2022

SUMMARY :

in on the pre-briefs. going into the keynote is actually for all the The AWS Classic, the old school cloud, at the beginning of his keynote. and spent most of the time This could have an impact on the ecosystem and the spirit of keynote analysis, And then, they called it this and they have the data zone. And so, that gets me to your And the AWS execs But if they're going to keep on at the end of the day You can buy the bag of Lego blocks allow the ecosystem to build those toys, And obviously, the and more companies, I think, the call center solution. but look, at the end of about the keynote ask the right questions. a lot more needed to around how you handle tough conditions But he announces the ocean theme And the amount of data that AWS holds now, and Amazon is the ecosystem. space than the Amazon ecosystem. And of course, the price performance But the point he made If you are running managed Kubernetes, And obviously, the networking piece, And the difficulty and the Aristas and companies like that We had NetApp on yesterday. the analyst session, But you said it before, the cloud guys are going I predict that they will do on the slower lower cost compute. to start to see that come in. And Graviton to me is- that's slower than the other versions. Intel's finally going to get And like you say, got the trade off latency business case to do that. the cloud is the place to do it." on some of the discussions I've had and the other thing? I think, it's going to happen Customers tell me all the time, And the other cloud And in the question, long term is okay. I want to get into what we missed, We see that at all the other conferences. Who are building the apps? on the security side, I think, platform across is the bumper sticker? And is it the bag of Legos And that's the bottom line on the base. stickers, the more you buy, and that is a business for the next two more

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Adrian CockcroftPERSON

0.99+

Steven ArmstrongPERSON

0.99+

AdamPERSON

0.99+

AWSORGANIZATION

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

AdrianPERSON

0.99+

AmazonORGANIZATION

0.99+

Adam SelipskyPERSON

0.99+

JohnPERSON

0.99+

CiscoORGANIZATION

0.99+

Ruba BornoPERSON

0.99+

2011DATE

0.99+

John FurrierPERSON

0.99+

one yearQUANTITY

0.99+

AWS'ORGANIZATION

0.99+

ZK ResearchORGANIZATION

0.99+

three-yearQUANTITY

0.99+

AntarcticaLOCATION

0.99+

MicrosoftORGANIZATION

0.99+

Last yearDATE

0.99+

10 billionQUANTITY

0.99+

Zeus KerravalaPERSON

0.99+

JanuaryDATE

0.99+

Ajay PatelPERSON

0.99+

NasdaqORGANIZATION

0.99+

$350 millionQUANTITY

0.99+

CiscosORGANIZATION

0.99+

100,000 partnersQUANTITY

0.99+

yesterdayDATE

0.99+

GoogleORGANIZATION

0.99+

SelipskyPERSON

0.99+

Zeus KerravalaPERSON

0.99+

Options ExchangeORGANIZATION

0.99+

AristasORGANIZATION

0.99+

DatabricksORGANIZATION

0.99+

MongoORGANIZATION

0.99+

TodayDATE

0.99+

todayDATE

0.99+

Nadir Izrael, Armis | CUBE Converstion


 

(bright upbeat music) >> Hello, everyone, and welcome to this #CUBEConversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE." We have the co-founder and CTO of Armis here, Nadir Izrael. Thanks for coming on. Appreciate it. Armis is hot company, RSA, we just happened. Last week, a lot of action going on. Thanks for coming on. >> Thank you for having me. Sure. >> I love CTOs and co-founders. One, you have the entrepreneurial DNA, also technical in a space with cyber security, that is the hottest most important area. It's always been important, but now more than ever, as the service areas are everywhere, tons of attacks, global threats. You got national security at every level, and you got personal liberties for privacy, and other things going on for average citizens. So, important topic. Talk about Armis? Why did you guys start this company? What was the motivation? Give a quick commercial what you guys do, and then we'll get into some of the questions around, who you guys are targeting. >> Sure, so yeah, I couldn't agree more about the importance of cybersecurity, especially I think in these days. And given some of the geopolitical changes happening right now, more than ever, I would say that if we go back 6.5 years or so, when Armis was founded, we at the time talked to dozens of different CIOs, CSOs, it managers. And every single one of them told us the same thing. And this was at least to me surprising at the time. We have no idea what we have. We have no idea what the assets that are connected to our network, or our environment are. At the time, when we started Armis, we thought this was simply, let's call it the other devices. IOT, OT, all kinds of different buzzwords that were kind of flying around at the time, and really that's, what we should focus on. But with time, what we understood, it's actually a problem of scale. Organizations are growing massively. The diversity of different assets they have to deal with is incredible. And if 6.5 or 7 years ago, it was all about just growth of actual physical devices, these days it's virtual, it's containerized, it's cloud-based. It's actually quite insane. And organizations find themselves really quickly dealing with billions of assets within their environment, but no real way to see, account for them, and be able to manage them. That's what Armis is here to solve. It's here to bring back visibility and order into the mix. It's here to bring a complete map of everything within the organization, and the ability to manage different security processes on top of that. And it couldn't have come, I think at a better time for organizations, because the ability to manage these days, the attack surface of an organization, understand where are different weak spots, what way to invest in? They start and end with a complete asset map, and that's really what we're here to solve. >> As I look at your story and understand what you guys are doing, certainly, a lot of great momentum at RSA. But also digging under the hood, you guys really crack the code with on the scale side as well. And also it's lockstep with the environment. If you look at the trends that we've been covering on "theCUBE," system on chip, you're seeing a lot of Silicon action going on, on all the hyperscalers. You're starting to see, again, you mentioned IOT devices and OT, IP enabled processors. I mean, that's basically you can run multi-threaded applications on a light bulb, basically. So, you have these new things going on that are just popping in into the environment. Just people are hanging them on the network. So, anything on the network is risk and that's happening massively, so I see that. But also you guys have this contextualization capability, scope the problem statement for us? How hard is it to do this? Because you got tons of challenges. What's the scale of the problem that you guys have been solving? 'Cause it's not easy. I mean, it's not network management, not just doing auto discovery, there's a lot of secret sauce there, scope the problem? >> Okay, so first of all, just to get a measure of how difficult this is, organizations have been trying to solve this for the better part of the last two decades. I think even when the problem was way smaller, they've still been struggling with being able to do this. It's an age old problem, that for the most part, I got to say that when I describe the problem the way that I did, usually, what the reaction from clients are, "Yes, I'd love for you to solve that." "I just heard this pitch from like five other vendors and I've yet to solve this problem. So, how do you do it?" So, as I kind of scope this, it's also a measure of just basically, how do you go about solving a complex situation where, to kind of list out some of the bold claims here in what I said. Number one, it's the ability to just fingerprint and be able to understand what your assets are. Secondly, being able to do it with very dirty data, if you will. I would say, in many cases, solutions that exist today, basically tell clients, or tell the users, were as good as the data that you provide us. And because the data isn't very good, the results aren't very good. Armis aspires to do something more than that. It aspires to create a logically perfect map of your assets despite being hindered by incomplete and basically wrong data, many times. And third, the ability to infer things about the environment where no source data even exists. So, to all of that, really Armis' approach is pretty straightforward, and it relies on something that we call our collective intelligence. We basically use the power and scale of these masses to our advantage, and not just as a shortcoming. What I mean by that, is Armis today tracks overall, over 2 billion assets worldwide. That's an astounding number. And it thanks to the size of some of the organization that we work with. Armis proudly serves today, for instance, over 35 of Fortune 100. Some of those environments, let me tell you, are huge. So, what Armis basically does, is really simple. It uses thousands, tens of thousands, hundreds of thousands sometimes, of instances of the same device and same assets to basically figure out what it is. Figure out how to fingerprint it best. Figure out how to marry conflicting data sources about it and figure out what's the right host name? What's the right IP address? What are all the different details that you should know about it? And be able to basically find the most minimalist fingerprints for different attributes of an asset in a changing environment. It's something that works really, really well. It's something that we honestly, may have applied to this problem, but it's not something that we fully invented. It's been used effectively to solve other problems as well. For instance, if you think about any kind of mapping software. And I use that analogy a lot. But if you think about mapping software, I happened to work for Google in the past, and specifically on Google Map. So, I know quite a bit about how to solve similar problems. But I can tell you that you think about something like a mapping software, it takes very dirty, incomplete data from lots of different sources, and creates not a pixel perfect map, but a logically perfect map for the use cases you need it to be. And that's exactly what Armis strives to do. Build the Google Maps, if you will, of your organization, or the kind of real time map of everything, and be able to supply that or project that for different business processes. >> Yeah, I love the approach, and I love that search analogy. Discover is a big part of mapping as you know, and reasoning in there with the metadata you have and the dirty data is critical. And by the way, we love bold statements on "theCUBE," because as long as you can back 'em up, then we'll dig into that. But let's back up some of those bold claims. Okay, you have a lot of devices, you've got the collective intelligence. How do you manage the real time nature of devices changing in real time? 'Cause if you do fingerprint on it, and you got some characteristics of the assets in the map, what happens in real time? How fast are you guys managing that? What's the process for that? >> So, very quickly, I think another quick analogy I like to use, because I think it orients people around kind of how Armis operates, is imagine that Armis is kind of like a Shazam for assets. We take different attributes coming from your environment, and we match it up, that collective intelligence to figure out what that asset is. So, we recognize an asset based off of its behavioral fingerprint, or based off of different attributes, figure out what it is. Now, if you take something that recognizes tunes on the radio or anything like that, it's built pretty similarly. Once you have access to different sources. Once we see real environments that introduce new devices or new assets, Armis is immediately learning. It's immediately taking those different queues, those different attributes and learning from them. And to your point, even if something changes its behavioral fingerprint. For instance, it gets updated, a new patch rolls out, something that changes a meaningful aspect of how that asset operates, Armis sees so many environments, and so much these days that it reacts in almost real time to the introduction of these new things. A patch rolls out, it starts changing multiple devices and multiple different environments around the world, Armis is already learning and adapting this model for the new type of asset and device out there. It works very quickly, and it's part of the effectiveness of being able to operate at the scale that we do. >> Well, Nadir, you guys got a great opportunity there at Armis. And as co-founder, you must be pretty pumped, actually working hard, stay up to date, and got a great, great opportunity there. How was RSA this year? And what's your take on the landscape? Because you're kind of in this, I call the new category of lockstep with an environment. Obviously, there's no perimeter, everyone knows that. Service area is the whole internet, basically, distributed computing paradigms and understanding things like discovery and mapping data that you guys are doing. And it's a data problem as well. It's a lot of problems that you guys are solving. But the industry's got some old beggars, as I still hear endpoint protection, zero trust. I hear trust, if you're talking about supply chain, software supply chain, S bombs, you mentioned in a previous interview. You got software supply chain issues with open source, 'cause everything's open source now on infrastructure, so that's happening. How do you manage all that? I mean, is it zero trust or is it trust? 'Cause as you hear, I hear you talking about Armis, it's like, you got to have trusted components in there and you got to trust the data. So, that's not zero trust, that's trust. So, where zero trust and trust solve? What's your take on that? How do you resolve? What's your reaction to that? >> Usually, I wait for someone else to bring up the zero trust buzzword before I touch on that. So, because to your point, it's such an overused buzzword. But let me try and tackle that for a second. First of all, I think that Armis treats assets in a way as, let's call it the vessels of everything. And what I mean by that, is that at a very atomic aspect, assets are the atoms of the environment. They're the vessels of everything. They're the vessels of vulnerabilities. There's the vessels of actual attacks. Like something, some asset needs to exist for something to happen. And every aspect of trust or zero trust, or anything like that applies to basically assets. Now, to your point, Armis, ironically, or like a lot of security tools, I think it assists greatly or even manages a zero trust policy within the environment. It provides the asset intelligence into the mix of how to manage an effective zero trust policy. But in essence, you need to trust Armis, right? I mean, Armis is a critical function now within your environment. And there has to be a degree of trust, but I would say, trust but verified. And that's something that I think the security industry as a whole is evolving into quite a bit, especially post events like solar, winds, or other things that happened in recent years. Armis is a SaaS platform. And in being a SaaS platform, there is an inherent aspect of trust and risk that you take on as a security organization. I think anyone who says differently, is either lying or mistaken. I mean, there are no foolproof, a 100% systems out there. But to mitigate some of that risk, we adhere to a very strict risk in security policy on our end. What that means, is we're incredibly transparent about every aspect of our own environment. We publish to our clients our latest penetration test reports. We publish our security controls and policies. We're very transparent about the different aspects we're involve in our own environment. We give our clients access to our own internal security organization, our own CSO, to be able to provide them with all the security controls they need. And we take a very least privileged approach in how we deploy Armis within an environment. No need for extra permissions. Everything read-only unless there is an explicit reason to do else... I think differently within the environment. And something that we take very seriously, is also anything that we deploy within the environment, should be walled off, except for whatever lease privilege that we need. On top of that, I'd add one more thing that adds, I think a lot of peace of mind to our clients. We are FeRAMP ready, and soon to be certified, We work with DOD clients within the U.S kind of DOD apparatus. And I think that this gives a lot of peace of mind to our clients, even commercial clients, because they know that we need to adhere to hundreds of different security controls that are monitored and government by U.S federal agencies. And that I think gives a lot of extra security measures, a lot of knowledge that this risk is being mitigated and controlled, and governed by different agencies. >> Good stuff there. Also at RSA, you kind of saw people come back together face-to-face, which is great. A lot of kind of similar, everyone kind of knows each other in the security business, but it's getting bigger. What was the big takeaways from you for the folks watching here that didn't get to go to RSA this year? What was the most important stories that came out of RSA this year? Just generally across the industry, from your perspective that people should pay attention to? >> First of all, I think that people were just really happy to get back together. I think it was a really fun RSA. I think that people had a lot of energy and excitement, and they love just walking around. I am obviously, somewhat biased here, but I will say, I've heard from other people too, that our event there, and the formal party that was there was by far the kind of the the talk of the show. And we were fortunate to do that with Sentinel One. with Torque who are both great partners of ours, and, of course, Insight partners. I think a lot of the themes that have come up during RSA, are really around some of the things that we already talked about, visibility as a driver for business processes. The understanding of where do assets and tax surfaces, and things like that play in. But also, I think that everything was, in light of macroeconomics and geopolitics that are kind of happening in the background, that no one can really avoid that. On the one hand, if we look at macroeconomics, obviously, markets are going through quite a shake up right now. And especially, when you talk about tech, the one thing that was really, really evident though, is it's cybersecurity is, I think market-wise just faring way better than others because the demand is absolutely there. I think that no one has slowed down one bit on buying and arming themselves, I'd say, with defensive solutions for cybersecurity. And the reason, is that the threats are there. I mean, we're all very, very much aware of that. And even in situations where companies are spending less on other things, they're definitely spending on cybersecurity, because the toll on the industry is going up significantly year by year, which really ties into also the geopolitics. One of the themes that I've heard significantly, is all the buzz around different initiatives coming from both U.S federal agencies, as well as different governing bodies around anything, from things like shields up in critical infrastructure, all the way to different governance aspects of the TSA. Or even the SCC on different companies with regards to what are they doing on cyber? If some of the initiatives coming from the SCC on public companies come out the way that they are right now, cyber security companies will elevate... Well, sorry, companies in general, would actually elevate cyber security to board level discussions on a regular basis. And everyone wants to be ready to answer effectively, different questions there. And then on top of all of that, I think we're all very aware of, I think, and not to be too doom and gloom here, but the geopolitical aspect of things. It's very clear that we could be facing a very significant and very different cyber warfare aspect than anything that we've seen before in the coming months and years. I think that one of the things you could hear a lot of companies and clients talk about, is the fact that it used to be that you could say, "Look, if a nation state is out to get me, then a nation state is out to get me, and they're going to get me. And I am out to protect myself from common criminals, or cybersecurity criminals, or things like that." But it's no longer the case. I mean, you very well might be attacked by a nation state, and it's no longer something that you can afford to just say, "Yeah, we'll just deal with that if that happens." I think some of the attacks on critical infrastructure in particular have proven to us all, that this is a very, very important topic to deal with. And companies are paying a lot of attention to what can give them visibility and control over their extended attack surface, and anything in between. >> Well, we've been certainly ringing the bell for years. I've been a hawk on this for many, many years, saying we're at cyber war, well below everyone else. So, we've been pounding our fist on the table saying, it's not just a national security issue. Finally, they're waking up and kind of figuring out countermeasures. But private companies don't have their own, they should have their own militia basically. So, what's the role of government and all this? So, all this is about competency and actually understanding what's going on. So, the whole red line, lowering that red line, the adversaries have been operating onside our infrastructure for years. So, the industrial IOT side has been aware of this for years, now it's being streamed, right? So, what do we do? Is the government going to come in and help, and bring some cyber militia to companies to protect their business? I mean, if troops dropped on our shores, I'm sure the government would react, right? So, where is that red line, Nadir? Where do you see the gap being filled? Certainly, people will defend their companies, they have assets obviously. And then, you critical infrastructure on the industrial side is super important, that's the national security issue. What do we do? What's the action here? >> That is such a difficult question. Such a good question I think to tackle, I think, there are similarities and there are differences, right? On the one hand, we do and should expect the government to do more. I think it should do more in policy making. I mean, really, really work to streamline and work much faster on that. And it would do good to all of us because I think that ultimately, policy can mean that the third party vendors that we use are more secure, and in turn, our own organizations are more secure in how they operate. But also, they hold our organizations accountable. And in doing so, consumers who use different services feel safer as well because basically, companies are mandated to protect data, to protect themselves, and do everything else. On the other hand, I'd say that government's support on this is difficult. I think the better way to look at this, is imagine for a second, no troops landing on our kind of shores, if you will. But imagine instead, a situation where Americans are spread all over the world and expect the government to protect them in any country, or in any situation they're at. I think that depicts maybe a little better, how infrastructure looks like today. If you look at multinational companies, they have offices everywhere. They have assets spread out everywhere. They have people working from everywhere around the world. It's become an attack surface, that I think you said this earlier, or in a different interview as well. There's no more perimeter to speak of. There are no more borders to this virtual country, if you will. And so, on the one hand, we do expect our government to do a lot. But on the other hand, we also need to take responsibility as companies, and as vendors, and as suppliers of services, we need to take accountability and take responsibility for the assets that we deploy and put in place. And we should have a very security conscious mind in doing this. >> Yeah. >> So, I think tricky government policy aspect to tackle. I think the government should be doing more, but on the other hand, we should absolutely be pointing internally at where can we do better as companies? >> And the asset understanding the context of what's critical asset too, can impact how you protect it, defend it, and ensure it, or manage it. I mean, this is what people want. It's a data problem in flight, at rest, and in action. So, Armis, you guys are doing a great job there. Congratulations, Nadir on the venture, on your success. I love the product, love the approach. I think it scales nicely with the industry where it's going. So, especially with the intelligent edge booming, and it's just so much happening, you guys are in the middle of it. Thanks for coming on "theCUBE." Appreciate it. >> Thank you so much. As I like to say, it takes a village, and there's so many people in the company who make this happen. I'm just the one who gets to take credit for it. So, I appreciate the time today and the conversation. And thank you for having me. >> Well, we'll check in with you. You guys are right there with us, and we'll be in covering you guys pretty deeply. Thanks for coming on. Appreciate it. Okay, it's #CUBEConversation here in Palo Alto. I'm John Furrier. Thanks for watching. Clear. (bright upbeat music)

Published Date : Jun 17 2022

SUMMARY :

We have the co-founder and CTO Thank you for having me. that is the hottest most important area. and the ability to manage and understand what you guys are doing, of the organization that we work with. And by the way, we love bold at the scale that we do. and mapping data that you guys are doing. a lot of peace of mind to our clients, that didn't get to go to RSA this year? And I am out to protect Is the government going to come in and expect the government to but on the other hand, I love the product, love the approach. So, I appreciate the time you guys pretty deeply.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Nadir IzraelPERSON

0.99+

Palo AltoLOCATION

0.99+

John FurrierPERSON

0.99+

ArmisORGANIZATION

0.99+

NadirPERSON

0.99+

thousandsQUANTITY

0.99+

hundreds of thousandsQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

RSAORGANIZATION

0.99+

Last weekDATE

0.99+

100%QUANTITY

0.99+

tens of thousandsQUANTITY

0.99+

bothQUANTITY

0.99+

todayDATE

0.99+

billionsQUANTITY

0.99+

zero trustQUANTITY

0.99+

FirstQUANTITY

0.99+

GoogleORGANIZATION

0.98+

thirdQUANTITY

0.98+

6.5DATE

0.98+

over 2 billion assetsQUANTITY

0.98+

Google MapsTITLE

0.98+

dozensQUANTITY

0.98+

Google MapTITLE

0.98+

this yearDATE

0.97+

ArmisPERSON

0.97+

five other vendorsQUANTITY

0.97+

TorquePERSON

0.97+

over 35QUANTITY

0.96+

hundredsQUANTITY

0.96+

SCCORGANIZATION

0.96+

OneQUANTITY

0.96+

SecondlyQUANTITY

0.96+

7 years agoDATE

0.94+

oneQUANTITY

0.94+

ArmisTITLE

0.94+

U.SORGANIZATION

0.93+

FeRAMPORGANIZATION

0.92+

RSAEVENT

0.92+

U.SLOCATION

0.9+

Armis'ORGANIZATION

0.89+

one thingQUANTITY

0.89+

6.5 yearsQUANTITY

0.88+

assetsQUANTITY

0.86+

yearsQUANTITY

0.85+

ShazamORGANIZATION

0.84+

Sentinel OneORGANIZATION

0.82+

theCUBEORGANIZATION

0.81+

security controlsQUANTITY

0.8+

DODORGANIZATION

0.8+

last two decadesDATE

0.79+

one bitQUANTITY

0.77+

one more thingQUANTITY

0.73+

Rajiv Mirani and Thomas Cornely, Nutanix | .NEXTConf 2021


 

(upbeat electronic music plays) >> Hey everyone, welcome back to theCube's coverage of .NEXT 2021 Virtual. I'm John Furrier, hosts of theCube. We have two great guests, Rajiv Mirani, who's the Chief Technology Officer, and Thomas Cornely, SVP of Product Management. Day Two keynote product, the platform, announcements, news. A lot of people, Rajiv, are super excited about the, the platform, uh, moving to a subscription model. Everything's kind of coming into place. How are the customers, uh, seeing this? How they adopted hybrid cloud as a hybrid, hybrid, hybrid, data, data, data? Those are the, those are the, that's the, that's where the puck is right now. You guys are there. How are customers seeing this? >> Mirani: Um, um, great question, John, by the way, great to be back here on theCube again this year. So when we talk to our customers, pretty much, all of them agreed that for them, the ideal state that they want to be in is a hybrid world, right? That they want to essentially be able to run both of those, both on the private data center and the public cloud, and sort of have a common platform, common experience, common, uh, skillset, same people managing, managing workloads across both locations. And unfortunately, most of them don't have that that tooling available today to do so, right. And that's where the platform, the Nutanix platform's come a long way. We've always been great at running in the data center, running every single workload, we continue to make great strides on our core with the increased performance for, for the most demanding, uh, workloads out there. But what we have done in the last couple of years has also extended this platform to run in the public cloud and essentially provide the same capabilities, the same operational behavior across locations. And that's when you're seeing a lot of excitement from our customers because they really want to be in that state, for it to have the common tooling across work locations, as you can imagine, we're getting traction. Customers who want to move workloads to public cloud, they don't want to spend the effort to refactor them. Or for customers who really want to operate in a hybrid mode with things like disaster recovery, cloud bursting, workloads like that. So, you know, I think we've made a great step in that direction. And we look forward to doing more with our customers. >> Furrier: What is the big challenge that you're seeing with this hybrid transition from your customers and how are you solving that specifically? >> Mirani: Yeah. If you look at how public and private operate today, they're very different in the kind of technologies used. And most customers today will have two separate teams, like one for their on-prem workloads, using a certain set of tooling, a second completely different team, managing a completely different set of workloads, but with different technologies. And that's not an ideal state in some senses, that's not true hybrid, right? It's like creating two new silos, if anything. And our vision is that you get to a point where both of these operate in the same manner, you've got the same people managing all of them, the same workloads anyway, but similar performance, similar SaaS. So they're going to literally get to point where applications and data can move back and forth. And that's, that's, that's where I think the real future is for hybrid >> Furrier: I have to ask you a personal question. As the CTO, you've got be excited with the architecture that's evolving with hybrid and multi-cloud, I mean, I mean, it's pretty, pretty exciting from a tech standpoint, what is your reaction to that? >> Mirani: %100 and it's been a long time coming, right? We have been building pieces of this over years. And if you look at all the product announcements, Nutanix has made over the last few years and the acquisitions that made them and so on, there's been a purpose behind them. That's been a purpose to get to this model where we can operate a customer's workloads in a hybrid environment. So really, really happy to see all of that come together. Years and years of work finally finally bearing fruit. >> Furrier: Well, we've had many conversations in the past, but it congratulates a lot more to do with so much more action happening. Thomas, you get the keys to the kingdom, okay, and the product management you've got to prioritize, you've got to put it together. What are the key components of this Nutanix cloud platform? The hybrid cloud, multi-cloud strategy that's in place, because there's a lot of headroom there, but take us through the key components today and then how that translates into hybrid multi-cloud for the future. >> Cornely: Certainly, John, thank you again and great to be here, and kind of, Rajiv, you said really nicely here. If you look at our portfolio at Nutanix, what we have is great technologies. They've been sold as a lot of different products in the past, right. And what we've done last few months is we kind of bring things together, simplify and streamline, and we align everything around a cloud platform, right? And this is really the messaging that we're going after is look, it's not about the price of our solutions, but business outcomes for customers. And so are we focusing on pushing the cloud platform, which we encompasses five key areas for us, which we refer to as cloud infrastructure, no deficiencies running your workloads. Cloud management, which is how you're going to go and actually manage, operate, automate, and get governance. And then services on top that started on all around data, right? So we have unified storage, finding the objects, data services. We have database services. Now we have outset of desktop services, which is for EMC. So all of this, the big change for us is this is something that, you know, you can consume in terms of solutions and consume on premises. As Rajiv discussed, you know, we can take the same platform and deploy it in public cloud regions now, right? So you can now get no seamless hybrid cloud, same operating model. But increasingly what we're doing is taking your solutions and re-targeting issues and problems at workers running native public clouds. So think of this as going, after automating more governance, security, you know, finding objects, database services, wherever you're workload is running. So this is taking this portfolio and reapplying it, and targeting on prem at the edge in hybrid and in christening public cloud in ATV. >> Furrier: That's awesome. I've been watching some of the footage and I was noticing quite a lot of innovation around virtualized, networking, disaster, recovery security, and data services. It's all good. You guys were, and this is in your wheelhouse. I know you guys are doing this for many, many years. I want to dive deeper into that because the theme right now that we've been reporting on, you guys are hitting right here what the keynote is cloud scale is about faster development, right? Cloud native is about speed, it's about not waiting for these old departments, IT or security to get back to them in days or weeks and responding to either policy or some changes, you got to move faster. And data, data is critical in all of this. So we'll start with virtualized networking because networking again is a key part of it. The developers want to go faster. They're shifting left, take us through the virtualization piece of how important that is. >> Mirani: Yeah, that's actually a great question as well. So if you think about it, virtual networking is the first step towards building a real cloud like infrastructure on premises that extends out to include networking as well. So one of the key components of any cloud is automation. Another key component is self service and with the API, is it bigger on virtual networking All of that becomes much simpler, much more possible than having to, you know, qualify it, work with someone there to reconfigure physical networks and slots. We can, we can do that in a self service way, much more automated way. But beyond that, the, the, the notion of watching networks is really powerful because it helps us to now essentially extend networks and, and replicate networks anywhere on the private data center, but in the public cloud as well. So now when customers move their workloads, we'd already made that very simple with our clusters offering. But if you're only peek behind the layers a little bit, it's like, well, yea, but the network's not the same on the side. So now it, now it means that a go re IP, my workloads create new subnets and all of that. So there was a little bit of complication left in that process. So to actual network that goes away also. So essentially you can repeat the same network in both locations. You can literally move your workloads, no redesign of your network acquired and still get that self service and automation capabilities of which cookies so great step forward, it really helps us complete the infrastructure as a service stack. We had great storage capabilities before, we create compute capabilities before, and sort of networking the third leg and all of that. >> Furrier: Talk about the complexity here, because I think a lot of people will look at dev ops movement and say, infrastructure is code when you go to one cloud, it's okay. You can, you can, you know, make things easier. Programmable. When, when you start getting into data center, private data centers, or essentially edges now, cause if it's distributed cloud environment or cloud operations, it's essentially one big cloud operation. So the networks are different. As you said, this is a big deal. Okay. This is sort of make infrastructure as code happen in multiple environments across multiple clouds is not trivial. Could you talk about the main trends and how you guys see this evolving and how you solve that? >> Mirani: Yeah. Well, the beauty here is that we are actually creating the same environment everywhere, right? From, from, from point of view of networking, compute, and storage, but also things like security. So when you move workloads, things with security, posture also moves, which is also super important. It's a really hard problem, and something a lot of CIO's struggle with, but having the same security posture in public and private clouds reporting as well. So with this, with this clusters offering and our on-prem offering competing with the infrastructure service stack, you may not have this capability where your operations really are unified across multicloud hybrid cloud in any way you run. >> Furrier: Okay, so if I have multiple cloud vendors, there are different vendors. You guys are creating a connection unifying those three. Is that right? >> Mirani: Essentially, yes, so we're running the same stack on all of them and abstracting away the differences between the clouds that you can run operations. >> Furrier: And when the benefits, the benefits of the customers are what? What's the main, what's the main benefit there? >> Mirani: Essentially. They don't have to worry about, about where their workloads are running. Then they can pick the best cloud for their workloads. It can seamlessly move them between Cloud. They can move their data over easily, and essentially stop worrying about getting locked into a single, into a single cloud either in a multi-cloud scenario or in a hybrid cloud scenario, right. There many, many companies now were started on a cloud first mandate, but over time realized that they want to move workloads back to on-prem or the other way around. They have traditional workloads that they started on prem and want to move them to public cloud now. And we make that really simple. >> Furrier: Yeah. It's kind of a trick question. I wanted to tee that up for Thomas, because I love that kind of that horizontal scales, what the cloud's all about, but when you factor data into it, this is the sweet spot, because this is where, you know, I think it gets really exciting and complicated too, because, you know, data's got, can get unwieldy pretty quickly. You got state got multiple applications, Thomas, what's your, what can you share the data aspect of this? This is super, super important. >> Absolutely. It's, you know, it's really our core source of differentiation, when you think about it. That's what makes Nutanix special right? In, in the market. When we talk about cloud, right. Actually, if you've been following Nutanix for years, you know, we've been talking a lot about making infrastructure invisible, right? The new way for us to talk about what we're doing, with our vision is, is to make clouds invisible so that in the end, you can focus on your own business, right? So how do you make Cloud invisible? Lots of technology is at the application layer to go and containerize applications, you know, make them portable, modernize them, make them cloud native. That's all fine when you're not talking of state class containers, that the simplest thing to move around. Right. But as we all know, you know, applications end of the day, rely on data and measure the data across all of these different locations. I'm not even going to go seconds. Cause that's almost a given, you're talking about attribution. You can go straight from edge to on-prem to hybrid, to different public cloud regions. You know, how do you go into the key control of that and get consistency of all of this, right? So that's part of it is being aware of where your data is, right? But the other part is that inconsistency of set up data services regardless of where you're running. And so this is something that we look at the cloud platform, where we provide you the cloud infrastructure go and run the applications. But we also built into the cloud platform. You get all of your core data services, whether you have to consume file services, object services, or database services to really support your application. And that will move with your application, that is the key thing here by bringing everything onto the same platform. You now can see all operations, regardless of where you're running the application. The last thing that we're adding, and this is a new offering that we're just launching, which is a service, it's called, delete the dead ends. Which is a solution that gives you visibility and allow you to go and get better governance around all your data, wherever it may live, across on-prem edge and public clouds. That's a big deal again, because to manage it, you first have to make sense of it and get control over it. And that's what data answer's is going to be all about. >> Furrier: You know, one of the things we've we've been reporting on is data is now a competitive advantage, especially when you have workflows involved, um, super important. Um, how do you see customers going to the edge? Because if you have this environment, how does the data equation, Thomas, go to the edge? How do you see that evolving? >> Cornely: So it's yeah. I mean, edge is not one thing. And that's actually the biggest part of the challenge of defining what the edge is depending on the customer that you're working with. But in many cases you get data ingesting or being treated at the edge that you then have to go move to either your private cloud or your public cloud environment to go and basically aggregate it, analyze it and get insights from it. Right? So this is where a lot of our technologies, whether it's, I think the object's offering built in, we'll ask you to go and make the ingest over great distances over the network, right? And then have your common data to actually do an ethics audit over our own object store. Right? Again, announcements, we brought into our storage solutions here, we want to then actually organize it then actually organize it directly onto the objects store solution. Nope. Using things, things like or SG select built into our protocols. So again, make it easy for you to go in ingest anywhere, consolidate your data, and then get value out of it. Using some of the latest announcements on the API forms. >> Furrier: Rajiv databases are still the heart of most applications in the enterprise these days, but databases are not just the data is a lot of different data. Moving around. You have a lot a new data engineering platforms coming in. A lot of customers are scratching their head and, and they want to kind of be, be ready and be ready today. Talk about your view of the database services space and what you guys are doing to help enterprise, operate, manage their databases. >> Mirani: Yeah, it's a super important area, right? I mean, databases are probably the most important workload customers run on premises and pretty close on the public cloud as well. And if you look at it recently, the tooling that's available on premises, fairly traditional, but the clouds, when we integrate innovation, we're going to be looking at things like Amazon's relational database service makes it an order of magnitude simpler for our customers to manage the database. At the same time, also a proliferation of databases and we have the traditional Oracle and SQL server. But if you have open source Mongo, DB, and my SQL, and a lot of post-grads, it's a lot of different kinds of databases that people have to manage. And now it just becomes this cable. I have the spoke tooling for each one of them. So with our Arab product, what we're doing is essentially creating a data management layer, a database management layer that unifies operations across your databases and across locations, public cloud and private clouds. So all the operations that you need, you do, which are very complicated in, in, in, in with traditional tooling now, provisioning of databases backing up and restoring them providing a true time machine capabilities, so you can pull back transactions. We can copy data management for your data first. All of that has been tested in Era for a wide variety of database engines, your choice of database engine at the back end. And so the new capabilities are adding sort of extend that lead that we have in that space. Right? So, so one of the things we announced at .Next is, is, is, is one-click storage scaling. So one of the common problems with databases is as they grow over time, it's not running out of storage capacity. Now re-provisions to storage for a database, migrate all the data where it's weeks and months of look, right? Well, guess what? With Era, you can do that in one click, it uses the underlying AOS scale-out architecture to provision more storage and it does it have zero downtime. So on the fly, you can resize your databases that speed, you're adding some security capabilities. You're adding some capabilities around resilience. Era continues to be a very exciting product for us. And one of the things, one of the real things that we are really excited about is that it can really unify database operations between private and public. So in the future, we can also offer an aversion of Era, which operates on native public cloud instances and really excited about that. >> Furrier: Yeah. And you guys got that two X performance on scaling up databases and analytics. Now the big part point there, since you brought up security, I got to ask you, how are you guys talking about security? Obviously it's embedded in from the beginning. I know you guys continue to talk about that, but talk about, Rajiv, the security on, on that's on everyone's mind. Okay. It goes evolving. You seeing ransomware are continuing to happen more and more and more, and that's just the tip of the iceberg. What do you guys, how are you guys helping customers stay secure? >> Mirani: Security is something that you always have to think about as a defense in depth when it comes to security, right? There's no one product that, that's going to do everything for you. That said, what we are trying to do is to essentially go with the gamut of detection, prevention, and response with our security, and ransom ware is a great example of that, right. We've partnered with Qualys to essentially be able to do a risk assessment of your workloads, to basically be able to look into your workloads, see whether they have been bashed, whether they have any known vulnerabilities and so on. To try and prevent malware from infecting your workloads in the first place, right? So that's, that's the first line of defense. Now not systems will be perfect. Some, some, some, some malware will probably get in anyway But then you detect it, right. We have a database of all the 4,000 ransomware signatures that you can use to prevent ransomware from, uh, detecting ransom ware if it does infect the system. And if that happens, we can prevent it from doing any damage by putting your fire systems and storage into read-only mode, right. We can also prevent lateral spread of, of your ransomware through micro-segmentation. And finally, if you were, if you were to invade, all those defenses that you were actually able to encrypt data on, on, on a filer, we have immutable snapshots, they can recover from those kinds of attacks. So it's really a defense in depth approach. And in keeping with that, you know, we also have a rich ecosystem of partners while this is one of them, but older networks market sector that we work with closely to make sure that our customers have the best tooling around and the simplest way to manage security of their infrastructure. >> Furrier: Well, I got to say, I'm very impressed guys, by the announcements from the team I've been, we've been following Nutanix in the beginning, as you know, and now it's back in the next phase of the inflection point. I mean, looking at my notebook here from the announcements, the VPC virtual networking, DR Observability, zero trust security, workload governance, performance expanded availability, and AWS elastic DR. Okay, we'll get to that in a second, clusters on Azure preview cloud native ecosystem, cloud control plane. I mean, besides all the buzzword bingo, that's going on there, this is cloud, this is a cloud native story. This is distributed computing. This is virtualization, containers, cloud native, kind of all coming together around data. >> Cornely: What you see here is, I mean, it is clear that it is about modern applications, right? And this is about shifting strategy in terms of focusing on the pieces where we're going to be great at. And a lot of these are around data, giving you data services, data governance, not having giving you an invisible platform that can be running in any cloud. And then partnering, right. And this is just recognizing what's going on in the world, right? People want options, customers and options. When it comes to cloud, they want options to where they're running the reports, what options in terms of, whether it be using to build the modern applications. Right? So our big thing here is providing and being the best platform to go and actually support for Devers to come in and build and run their new and modern applications. That means that for us supporting a broad ecosystem of partners, entrepreneur platform, you know, we announced our partnership with Red Hat a couple of months ago, right? And this is going to be a big deal for us because again, we're bringing two leaders in the industry that are eminently complimentary when it comes to providing you a complete stack to go and build, run, and manage your client's applications. When you do that on premises, utilizing like the preferred ATI environment to do that. Using the Red Hat Open Shift, or, you're doing this open to public cloud and again, making it seamless and easy, to move the applications and their supporting data services around, around them that support them, whether they're running on prem in hybrid winter mechanic. So client activity is a big deal, but when it comes to client activity, the way we look at this, it's all about giving customers choice, choice of that from services and choice of infrastructure service. >> Furrier: Yeah. Let's talk to the red hat folks, Rajiv, it's you know, it's, they're an operating system thinking company. You know, you look at the internet now in the cloud and edge, and on-premise, it's essentially an operating system. you need your backup and recovery needs to disaster recovery. You need to have the HCI, you need to have all of these elements part of the system. It's, it's, it's, it's building on top of the existing Nutanix legacy, then the roots and the ecosystem with new stuff. >> Mirani: Right? I mean, it's, in fact, the Red Hat part is a great example of, you know, the perfect marriage, if you will, right? It's, it's, it's the best in class platform for running the cloud-native workloads and the best in class platform with a service offering in there. So two really great companies coming together. So, so really happy that we could get that done. You know, the, the point here is that cloud native applications still need infrastructure to run off, right? And then that infrastructure, if anything, the demands on that and growing it since it's no longer that hail of, I have some box storage, I have some filers and, you know, just don't excite them, set. People are using things like object stores, they're using databases increasingly. They're using the Kafka and Map Reduce and all kinds of data stores out there. And back haul must be great at supporting all of that. And that's where, as Thomas said, earlier, data services, data storage, those are our strengths. So that's certainly a building from platform to platform. And then from there onwards platform services, great to have right out of the pocket. >> Furrier: People still forget this, you know, still hardware and software working together behind the scenes. The old joke we have here on the cube is server less is running on a bunch of servers. So, you know, this is the way that is going. It's really the innovation. This is the infrastructure as code truly. This is what's what's happened is super exciting. Rajiv, Thomas, thank you guys for coming on. Always great to talk to you guys. Congratulations on an amazing platform. You guys are developing. Looks really strong. People are giving it rave reviews and congratulations on, on, on your keynotes. >> Cornely: Thank you for having us >> Okay. This is theCube's coverage of.next global virtual 2021 cube coverage day two keynote review. I'm John Furrier Furrier with the cube. Thanks for watching.

Published Date : Sep 22 2021

SUMMARY :

How are the customers, uh, seeing this? the effort to refactor them. the same workloads anyway, As the CTO, you've got be excited with the And if you look at all get the keys to the kingdom, of different products in the because the theme right now So one of the key components So the networks are different. the beauty here is that we Is that right? between the clouds that you They don't have to the data aspect of this? Lots of technology is at the application layer to go and one of the things we've the edge that you then have are still the heart of So on the fly, you can resize Now the big part point there, since you of all the 4,000 ransomware of the inflection point. the way we look at this, now in the cloud and edge, the perfect marriage, if you will, right? Always great to talk to you guys. This is theCube's coverage

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CornelyPERSON

0.99+

MiraniPERSON

0.99+

JohnPERSON

0.99+

ThomasPERSON

0.99+

Thomas CornelyPERSON

0.99+

RajivPERSON

0.99+

NutanixORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

John FurrierPERSON

0.99+

Red HatORGANIZATION

0.99+

twoQUANTITY

0.99+

AWSORGANIZATION

0.99+

QualysORGANIZATION

0.99+

two separate teamsQUANTITY

0.99+

Rajiv MiraniPERSON

0.99+

bothQUANTITY

0.99+

first stepQUANTITY

0.99+

4,000 ransomwareQUANTITY

0.99+

two leadersQUANTITY

0.99+

oneQUANTITY

0.99+

one clickQUANTITY

0.99+

both locationsQUANTITY

0.98+

first lineQUANTITY

0.98+

red hatORGANIZATION

0.98+

first mandateQUANTITY

0.98+

todayDATE

0.98+

this yearDATE

0.98+

firstQUANTITY

0.98+

threeQUANTITY

0.97+

SQLTITLE

0.97+

one-clickQUANTITY

0.96+

one thingQUANTITY

0.96+

each oneQUANTITY

0.96+

secondQUANTITY

0.96+

two great guestsQUANTITY

0.96+

KafkaTITLE

0.96+

AzureTITLE

0.95+

two new silosQUANTITY

0.95+

EMCORGANIZATION

0.95+

both locationsQUANTITY

0.94+

Map ReduceTITLE

0.94+

one cloudQUANTITY

0.93+

DeversORGANIZATION

0.91+

AOSTITLE

0.91+

third legQUANTITY

0.91+

Day TwoQUANTITY

0.91+

singleQUANTITY

0.9+

five key areasQUANTITY

0.89+

ArabOTHER

0.88+

single cloudQUANTITY

0.87+

great companiesQUANTITY

0.86+

couple of months agoDATE

0.85+

2021DATE

0.84+

MongoTITLE

0.82+

Sudheesh Nair, ThoughtSpot | CUBE Conversation


 

>>mhm >>Hello welcome to this cube conversation here in Palo alto California and john for with the cube we had a great conversation around the rise of the cloud and the massive opportunities and challenges around analytics data ai suggestion. Air ceo of thought spot is here with me for conversation. Great to see you. Welcome back to the cube. How are you? >>Well john it is so good to be back. I wish that we could do one of those massive set up that you have and do this face to face but zoom is not bad. >>You guys are doing very well. We have been covering you guys been covering the progress um great technology enabled business. You're on the wave of this cloud analytics you're seeing, we've seen massive changes and structural changes for the better. It's a tailwind for anyone in the cloud data business. And you also on the backdrop of all that the Covid and now the covid is looking at coming out of covid with growth strategies. People are building modern or modernizing their infrastructure and data is not just a department, it's everywhere. You guys are in the middle of this. Take us through what's the update on thought spot. What are you guys doing? What do you see the market right now? Honestly, delta variants coming coming strong but we think will be out of this soon. Where where are >>we look I think it all starts with the users like you said the consumers are demanding more and more from the business they are interacting with. You're no longer happy with being served like uh I'm gonna put you all in a bucket and then Delaware services to you. Everyone's like look look at me, I have likes and dislikes that is probably going to be different from someone that you think are similar to me. So unless you get to know me and deliver bespoke services to me, I'm gonna go somewhere else who does that And the call that the way you do that is through the data that I'm giving to you. So the worst thing you can do is to take my data and still treat me like an average and numbers and what's happening with the cloud is that it is now possible and it wasn't okay. So I grew up in India where newspapers will always have stock market summary on like one full page full of takers and prices and the way it used to work is that you wake up in the morning you look at the newspaper, I don't know if you have had the same thing and then you call your broker is based on in place of that. Can you imagine doing that now? I mean the information is at your fingertips. Hurricane IDa either is actually going to enter in Louisiana somewhere. What good is it? Yesterday morning state on this morning state if I'm trying to make a decision on whether I should pack my stuff and move away or you know finding to from home depot supply chain manager. I shouldn't figure out what should I be doing for Louisiana in the next two days, this is all about the information that's available to you. If you plan to use it and deliver better services for your consumer cloud makes it possible. >>You know, it's interesting you mentioned that the old way things were it seems so slow, then you got the 15 minute quotes, then there's now a real time. Everything has to be real time. And clearly there's two major things happening at the same time which makes exciting the business model and the competitive advantages for leaders and business to use data is critical but also on the developer side where apps are being developed if you don't have the data access, the machine learning won't work well. So as machine learning becomes really courted driving ai this modern analytics cloud product that you guys announced brings to bear kind of two major lifts the developer app modernization as well as competitive advantage for the companies that need to deploy this. So you guys have announced this modern approach analytics cloud, so to speak. What are some of the challenges that companies are having? Because you gotta, if you hit both of those you're gonna right a lot of value. What are some of the challenges for people who want to do this modern cloud? >>I think the challenge is basically all inside in the company. If you ask companies why are they failing to modernize? They will point to what's inside, it's not outside the technology is there the stack is the vendors are there, It is sometimes lack of courage at the leadership level which is a huge problem. I'll give an example. Uh, we have recently announced what we call thoughts part everywhere, which is our way of looking at how to modernize and bring the data inside that you're looking forward to where you are because Lord knows we all have enough apps on our Octa or a single sign on. The last thing you need is one more how no matter how good it is, they don't want to log into yet under their tool, whether it's thought spot or not. But the insights that you are talking about needs to be there when you need. And the difference is uh, the fundamental approach of data analytics was built on embedded model. You know what we are proposing is what we call data apps. So the difference between data apps and the typical dashboard being embedded into your analytics model is sort of like think of it. Uh newspapers telephones and the gap in between. So there is newspapers radio that is walkie talkie and telephone. They're all different and newspapers get printed and it comes to you and you read in the morning, you can talk back to it, you can drag and drop, you can change it right walkie talkies on the other hand, you know, you could have one conversation then come back to that. Whereas phone, you can have true direction conversation? They're all different if you think of embedding it is sort of like the newspaper, the information that you can't talk back. So somebody resembling something that came out monday, you're going to a board meeting on Wednesday and you look at that and make decisions. That is not enough in the new world, you just can't do that. It's not about what a lot of tools can actually answer what the real magic the real value for customers are unlocked when you ask three subsequent questions and answer them and they will come down to when you hear what you have to know. So what? Right and then what if and then the last is what next Imagine you can answer those three questions every business person every time no matter how powerful the dashboard is, they will always have the next question. What? So what? Okay the business customers are turning so what is it good, is it bad? Is it normal or the next question is like now what what do I do with it two, the ability to take all these three questions so what and what a fun. Now what? That requires true interactivity, you know, start with an intent and with an action and that is what we are actually proposing with the data apps which is only possible if you're sitting on top of a snowflake or red shift kind of really powerful and massive cloud data warehouse where the data comes and moves with agility. >>So how has this cloud data model rewritten the rules of business? Because what you're bringing up is essentially now full interactivity really getting in, getting questions that are iterating and building on context to each other. But with all this massive cloud data, people are really excited by this. How is it changing business than the rules of business? >>Yeah. So think about, I mean topical things like there is a hurricane able to enter, hit the cost of the United States. It's a moving target. No one knows exactly where it is going to be. There is only 15 models from here. 10, 10 models from Europe that's going to predict which way it's going to take every millimeter change in that map is going to have significant consequences for lives and resources and money. Right. This is true for every business. What cloud does this? Uh you have your proprietary data for example, let's say you're a bank and you have proprietary data, you're launching a new product And the propriety data was 2025 extremely valuable. But what what's not proprietary but what is available to you? Which could make that data so much more relevant if you layer them on top census data, this was a census here. The census data is updated. Do you not want that vaccination leader? We clearly know that purchasing power parity will vary based on vaccinations and county by county. But is that enough? You need to have street by street is county data enough. If you're going to open startup, Mr Starbucks? No, you probably want to know much more granular data. You wanna know traffic. Is the traffic picking up business usually an office space where people are not coming to office or is it more of a shopping mall where people are still showing all of these data is out there for you? What cloud is making it possible? Unlike the old era where you know, your data is an SFP oracle or carry later in your data center, it's available for you with a matter of clicks. What thought sport modern analytics. Cloud is a simple thing. We are the front end to bring all of this data and make sense of it. You can sit on top of any cloud data and then interact with a complete sort of freedom without compromising on security, compliance or relevance. And what happens is the analysts, the people who are responsible for bringing the data and then making sure that it is secure and delivered. They are no longer doing incremental in chart updates and dashboard updates. What they're doing is solving business problems, business people there freely interacting and making bigger decisions. That actually adds value to their consumers. This is what your customers are looking for, your users are looking for and if you're not doing it, your competitor will do that. So this is why cloud is not a choice for you. It's not an option for you. It is the only way and if you fail to take that back the other way is taking the world out of a cliff. >>Yeah, that's I love it. But I want to get this uh topic of thoughts about anywhere, but I want to just close out on this whole idea of modern cloud scale analytics. What technology under the hood do you guys see that customers should pay attention to with thought spot and in general because the scale there. So is it just machine learning? We hear data lakes, you know, you know different configurations of that. Machine learning is always thrown around like a buzzword. What new technology capability should every executive by your customer look for when it comes to really doing analytics, modern in the cloud >>analytics has to be near real time, Which means what two things speed at scale, make sure it's complex, it can deal with complexity in data structure. Data complexity is a huge problem. Now imagine doing that at scale and then delivering with performance. That means you have to rethink Look Tableau grew out of excellent worksheets that is the market leader, it is a $40 billion dollar market with the largest company having only a billion dollars in revenue. This is a massive place where the problems need to be solved differently. So the underlying technology to me are like I said, these three things, number one cannot handle the cloud scale, you will have hundreds of billions of rows of data that you brought. But when you talk about social media sentiment of customers, analysis of traffic and weather patterns, all of these publicly available valuable data. We're talking trillions of rows of data. So that is scale. Now imagine complexity. So financial sector for example, there is health care where you know some data is visible, some data is not visible, some some is public assumption not or you have to take credit data and let it on top of your marketing data. So it becomes more complex. And the last is when you answer ask a question, can you deliver with absolute confidence that you're giving the right answer With extremely high performance and to do that you have to rebuild the entire staff. You cannot take your, you know, stack that was built in 1990s and so now we can do search So search that is built for these three things with the machine learning and ai essentially helping at every step of the way so that you're not throwing all this inside directly to a human, throw it to a i engine and the ai engine curates what is relevant to you, showing it to you. And then based on your interaction with that inside, I improve my own logic so that the next interaction, the next situation is going to be significantly better. My point is you cannot take a triple a map and then try to act like this google maps. One is built presuming and zoom out and learn from you. The other one is built to give you rich information but doesn't talk back. So the staff has to be fundamentally rebuilt for the club. That's what he's doing. >>I love I love to buy direction. I love the interactivity. This topic of thought spot everywhere, which you mentioned at the beginning of this conversation, you mentioned data apps which by the way I love that concept. I want to do a drill down on that. Uh I saw data marketplace is coming somewhat working but I think it's going to get it better. I love that idea of an app um, and using as developers but you also mentioned embedded analytics. You made a comment about that. So I gotta ask you what's the difference between data apps and embedded analytics? >>Embedded analytics means that uh you know the dashboards that you love but the one that doesn't talk back to you is going to be available inside the app that you built for your other So if a supply chain app that was built by let's say accenture inside that you haven't had your dashboard without logging into tablet. Great. But what you do, what's the big deal? It is the same thing. My point is like I said every time a business user sees a chart. The questions are going to come up. The next 10 question is where the values on earth for example on Yelp imagine if you will piece about I'm hungry. I want to find a restaurant and it says go to this burrito place. It doesn't work like that. It's not good enough. The reason why yell towards is because I start with an intent. I'm hungry. Okay show me all restaurants. Okay I haven't had about it for a while. Let me see the photos. Let me read the reviews. Let me see if my friends have eaten, let me see some menu. Can I walk there? I do all of this but just what underneath it. There is a rich set of data that probably helped have their own secret source and reviews and then you have google map powering some of them. But I don't care all of that is coming together to deliver a seamless experience that satisfies my hunger. Which will be very different from if you use the same map at the same place you might go to an italian place. I go to bed right. That is the power of a data app in business people are still sitting with this. I am hungry. I gotta eat burrito. That's not how it should be in the new world. A business user should have the freedom to add exactly what the customers require looking for and solve that problem without delay. That means every application should be power and enriched with the data where you can interact and customized. That is not something that enterprise customers are actually used to and to do that you need like I said a I and search powering like the google map underneath it, but you need an app like a yelp like app, that's what we deliver. So for example, uh just last week we delivered a service now app on snowflake. You know, it just changes the game. You are thinking about customer cases. You're a large company, you have support coming from Philippines and India some places the quality is good. Some places bad dashboards are not good enough saying that okay, 17% of our customers are unhappy but we are good. That's not the world we live in. That is the tyranny of >>average, >>17% were unhappy. You got to solve for them. >>You mentioned snowflake and they had their earnings. David and I were commenting about how some of the analysts got it all wrong. And you bring up a really good point that kind of highlights the real trend. Not so much how many new customers they got. But there do what customers are doing more. Right? So, so what's happening is that you're starting to see with data apps, it does imply Softwares in there because it's it's application. So the software wrapping around data. This is interesting because people that are using the snowflakes of the world and thought spot your software and your platform, they're doing more with data. So it's not so much. I use snowflake, I use snowflake now I'm going to do more with it. That's the scale kicking. So this is an opportunity to look at that more equation. How do you talk >>with >>when you see that? Because that's the real thing is like, okay, that's I bought software as a service. But what's the more that's happening? What do you see >>that is such an important point? Even I haven't thought about it that john but you're absolutely right. That is sometimes people think of snowflake is taking care of it and no. Yeah, yes, Sarah later used to store once and zeros and they're moving it into club. That is not the point. Like I said, marketplace as an example when you are opening it up for for example, bringing the entire world's data with one click accessible to you securely. That is something you couldn't do on number two. You can have like 100 suppliers and all of a sudden you can now take a single copy of data and then make it available to all of them without actually creating multiple copies and control it differently. That's not something without cloudy, potentially could do. So things like that are fundamentally different. It is much more than like one plus one equals two. It is one plus one is 33. Like our view is that when you are re platform ng like that, you have to think from customer first. What does the customer do? The customer care that you meant from Entre into cloud or event from Teradata snowflake. No, they will care if their lives are better. Are they able to get better services are able to get it faster. That's what it is. So to me it is very simple. The destiny of an insight or data information is action, right? Imagine you're driving a car and if your car updates the gas tank every monday morning, imagine how you know, stressful your life will be for the whole week. I have to wait until next monday wanting to figure out what, whether I have enough gas or not, that's not the new world, that information is there, you need to have it real time and act on it. If you go through the Tesla you realize now that you know, I'm never worried about mileage because it is going to take me to the supercharger because it knows what I need to get to, it knows how long it is going to be, how bad the traffic is. It is synthesizing all of that to give me peace of mind. >>So this is a great >>conversation. That's a >>great question. It's a great conversation because it's really kind of brings in kind of what's happening, you see successful companies that are working with cloud scale and data like you're talking about, it's you get in there, you get the data, the data apps and all of a sudden you hit it, you hit the value equation and it's like almost like discovering oil all of a sudden you have a gusher and then people just see massive increase in value. It's not like the outcome, it's kind of there, you've got to kind of get in there and this is the scale piece and you see people having strategies to do that, they say okay we're gonna get in there, we're going to use the data to iterate but also watch the data learn where's that value, This is that more trend and and there's a successful of the developing. So I have to ask you when you, when you talk about people and culture, um that's not the way it used to be, used to be like okay I'm buying an outcome. I deployed some software mechanisms and at the end of the day there's some value there. Maybe I write it off maybe I, you know, overtime charges and some accounting thing. All changed the culture and the people in charge now are transforming the management techniques. What do you see as a successful mindset for a customer as they managed through these new paradigms and new new success formulas. >>I see a fork in leadership when it comes to courage. There are people with the spine and there are people without the spine and the ones with the spine are absolutely killing it. They are unafraid. They are not saying, look, I'm just going to stick with the incumbents that I've known for the last 20 years. Look, I used to drive a Toyota forever because I love the Toyota. And then you know after Nutanix IPO went to Lexus still Toyota because it's reliable. I don't, I'm not a huge card person. It works. But guess what? I knew they were missing Patrick and I care about the environment. I don't want to keep pushing hydrocarbons out there. It's not politics. I just don't like burning stuff into the earth atmosphere. So when Tesla came out, it's not like I love the quality I don't personally like alone mask, you know after that Thailand fiasco of cave rescue and all of that. But I can clearly see that Toyota is not going to catch up to Tesla in the next 10 years. And guess what? My loyalty is much more to doing the right thing for my family and to the world. And I switched this is what business leaders need to know. They can't simply say, well, tabloid as search to. They're not as good as thought sports. We'll just stick with them because they have done with us. That's what weak leaders do and customers suffer for that. What I see like the last two weeks ago when I was in new york. I met with them. A business leader for one of the largest banks in the world with 25,000 people reporting to him. The person walks into the room wearing shorts and t shirts uh, and was so full of energy and so full of excitement. I thought I'm going to learn from him and he was asking questions about how we do our business in bed and learning from me. I was humbled, I was flawed and I realized that's what a modern business leader looks like. Even if it is one of the largest and oldest banks in the world, that's the kind of people are making big difference and it doesn't matter how all the companies, how old their data is they have mainframes or not. I hear this excuses all the type of er, mainframes, we can't move, we have COBOL going on. And guess what? You keep talking about that and hear leaders like him are going to transform those companies And next thing you know, there are some of the most modern companies in the world. >>Well certainly they, we know that they don't have any innovation strategy or any kind of R and D or anything going on that could be caught flat footed in the companies that didn't have that going on, didn't have the spine or the, the, the vision to, to at least try the cloud before Covid when Covid hit, those companies are really either going out of business or they're hurting the people who were in the cloud really move their teams into the cloud quicker to take advantage of uh, the environment that they had to. So this became a skill issue. So, so this is a big deal. This is a big deal. And having the right skills are people skilled, it will be a, I both be running everything for them. What is your take on that? >>This is an important question. You can't just say you got to do more things or new things and not take care of all things. You know, there's only 89, 10 hours so you can work in their uh, analysts in the Atlantic species constantly if your analysts are sitting there and making incremental dashboards and reports change every day and then backlog is growing for 56 days and the users are unhappy because you're not getting answers and then you ask them to go to new things. It's just not going to be enough and you can hire your way out of it. You have to make sure that if you say that I have 20 100 x product already, I don't want 21st guess what? Sometimes to be five products, you need to probably go to 21 you got to do new things to actually take away the gunk off the old and in that context, the re skilling starts with unburdening, unburdening of menial task, unburned routine task. There is nothing more frustrating than making reports and dashboards that people don't even use And 90% of the time analysts, they're amazing experiences completely wasted when they're making incremental change to tabloid reports. I kind of believe thought spot and self service on top of cloud data takes away all of that without compromising security and then you invest the experienced people. Business experience is so critical. So don't just go and hire university students and say, okay, they'll go come and quote everything the experience that they have in knowing what the business is about and what it matters to their users, that domain experience and then uplevel them res kill them and then bring fresh energy to challenge that and then make sure there is a culture that allows that to happen. These three things. That's why I said leadership is not just about hiring event of firing another, it's about cultivating a culture and living that value by saying, look if I am wrong, call me, call me out in public because I want to show you how I deal with conflict. So this is I love this thing because when I see these large companies where they're making these massive changes so fast, it inspires you to say you know what if they can do it, anyone can do it. But then I also see if the top leadership is not aligned to that. They are just trying to retire without the stock tanking too much and let me just get through two more years. The entire company suffers. >>So that's great to chat with you got great energy, love your business, love the energy, love the focus. Um it's a new wave you're on. It's a big wave um and it's it's relevant, it's cool and relevant and it's the modern way and people have to have a spine to be successful if not for the faint of heart, but the rewards are there if you get this right. This is what I I love about this new environment. Um so I gotta ask you just to kind of close it out. How would you plug the company for the folks watching that might want to engage with you guys. What's the elevator pitch? What's the positioning? How would you describe thought spot in a bumper sticker or in a positioning statement. Take a minute to talk about that. >>Remember martin Anderson said that software is eating the world, I think it is now time to update that data is eating everything including software. If you don't have a way to turn data into bespoke action for your customers. Guess what? Your customers are gonna go somewhere where they that's happening right? You may not be in the data business but the data company is going to take your business. Thought spot is very simple. We want to be the friend tent for all cloud data when it comes to structured because that's where business value numbers is world satisfaction and dissatisfaction for reduces allying it is important to move data to action and thought Spot is the pioneer in doing that through search and I >>I really think you guys want something very powerful. Looking forward to chatting with you at the upcoming eight of a startup showcase. I think data is a developer mindset. It's an app, it's part of everything. It will. Everyone's a data company, everyone is a media company. Data is everything you guys are on something really big and people got a program it with it, make experiences whether it's simple scripts, point and click. That is a new kind of developer out there. You guys are tapping into it. Great stuff. Thank >>you for coming on. Thank you john it's good to talk to you. >>Okay. It's a cube conversation here in Palo alto California were remote. We're virtual. That's the cube virtual. I'm sean for your host. Thanks for watching. Mhm. Mhm

Published Date : Sep 7 2021

SUMMARY :

around the rise of the cloud and the massive opportunities and challenges around analytics data you have and do this face to face but zoom is not bad. that the Covid and now the covid is looking at coming out of covid with growth strategies. So the worst thing you can do is to take my data and still treat me like an average and numbers but also on the developer side where apps are being developed if you don't have the data access, sort of like the newspaper, the information that you can't talk back. How is it changing business than the rules of business? It is the only way and if you fail to take that you guys see that customers should pay attention to with thought spot and in general because the I improve my own logic so that the next interaction, the next situation is going to be significantly better. which you mentioned at the beginning of this conversation, you mentioned data apps which by the but the one that doesn't talk back to you is going to be available inside the app that you built for You got to solve for them. And you bring up a really good point that kind of highlights the real trend. What do you see and all of a sudden you can now take a single copy of data and then make it available to all of them That's a So I have to ask you when you, when you talk about people and culture, um that's not the way it used to be, leaders like him are going to transform those companies And next thing you know, in the cloud really move their teams into the cloud quicker to take advantage It's just not going to be enough and you can hire your way out of it. So that's great to chat with you got great energy, love your business, love the energy, You may not be in the data business but the data company is going to take your business. Looking forward to chatting with you at the upcoming eight of a startup showcase. Thank you john it's good to talk to you. That's the cube virtual.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

ToyotaORGANIZATION

0.99+

LouisianaLOCATION

0.99+

PhilippinesLOCATION

0.99+

IndiaLOCATION

0.99+

LexusORGANIZATION

0.99+

WednesdayDATE

0.99+

TeslaORGANIZATION

0.99+

SarahPERSON

0.99+

15 minuteQUANTITY

0.99+

10QUANTITY

0.99+

1990sDATE

0.99+

PatrickPERSON

0.99+

EuropeLOCATION

0.99+

90%QUANTITY

0.99+

three questionsQUANTITY

0.99+

2025DATE

0.99+

five productsQUANTITY

0.99+

25,000 peopleQUANTITY

0.99+

martin AndersonPERSON

0.99+

56 daysQUANTITY

0.99+

new yorkLOCATION

0.99+

twoQUANTITY

0.99+

United StatesLOCATION

0.99+

Sudheesh NairPERSON

0.99+

Yesterday morningDATE

0.99+

17%QUANTITY

0.99+

mondayDATE

0.99+

oneQUANTITY

0.99+

last weekDATE

0.99+

NutanixORGANIZATION

0.99+

100 suppliersQUANTITY

0.99+

33QUANTITY

0.99+

two more yearsQUANTITY

0.99+

one clickQUANTITY

0.99+

21stQUANTITY

0.99+

johnPERSON

0.99+

google mapTITLE

0.99+

next mondayDATE

0.98+

bothQUANTITY

0.98+

YelpORGANIZATION

0.98+

three thingsQUANTITY

0.98+

last two weeks agoDATE

0.97+

CovidPERSON

0.97+

15 modelsQUANTITY

0.97+

zerosQUANTITY

0.97+

TeradataORGANIZATION

0.97+

this morningDATE

0.97+

Palo alto CaliforniaLOCATION

0.97+

delta variantsOTHER

0.96+

DelawareLOCATION

0.95+

three subsequent questionsQUANTITY

0.95+

firstQUANTITY

0.94+

10 questionQUANTITY

0.94+

single copyQUANTITY

0.94+

one conversationQUANTITY

0.94+

earthLOCATION

0.93+

21QUANTITY

0.93+

two thingsQUANTITY

0.92+

10 modelsQUANTITY

0.92+

COBOLORGANIZATION

0.92+

google mapsTITLE

0.91+

onceQUANTITY

0.9+

next 10 yearsDATE

0.9+

ThailandLOCATION

0.9+

AtlanticLOCATION

0.89+

two major thingsQUANTITY

0.89+

ThoughtSpotORGANIZATION

0.87+

$40 billion dollarQUANTITY

0.85+

hundreds of billions of rows of dataQUANTITY

0.85+

Hurricane IDaEVENT

0.83+

20 100 xQUANTITY

0.82+

trillions of rows ofQUANTITY

0.81+

OneQUANTITY

0.8+

Derek Manky, Fortinet | CUBEConversation


 

>> Welcome to this Cube Conversation, I'm Lisa Martin. I'm joined by Derek Manky next, the Chief Security Insights and Global Threat Alliances at Fortiguard Labs. Derek, welcome back to the program. >> Hey, it's great to be here again. A lot of stuff's happened since we last talked. >> So Derek, one of the things that was really surprising from this year's Global Threat Landscape Report is a 10, more than 10x increase in ransomware. What's going on? What have you guys seen? >> Yeah so this is massive. We're talking over a thousand percent over a 10x increase. This has been building Lisa, So this has been building since December of 2020. Up until then we saw relatively low high watermark with ransomware. It had taken a hiatus really because cyber criminals were going after COVID-19 lawyers and doing some other things at the time. But we did see a seven fold increase in December, 2020. That has absolutely continued this year into a momentum up until today, it continues to build, never subsided. Now it's built to this monster, you know, almost 11 times increase from, from what we saw back last December. And the reason, what's fueling this is a new verticals that cyber criminals are targeting. We've seen the usual suspects like telecommunication, government in position one and two. But new verticals that have risen up into this third and fourth position following are MSSP, and this is on the heels of the Kaseya attack of course, that happened in 2021, as well as operational technology. There's actually four segments, there's transportation, automotive, manufacturing, and then of course, energy and utility, all subsequent to each other. So there's a huge focus now on, OT and MSSP for cyber criminals. >> One of the things that we saw last year this time, was that attackers had shifted their focus away from enterprise infrastructure devices, to home networks and consumer grade products. And now it looks like they're focusing on both. Are you seeing that? >> Yes, absolutely. In two ways, so first of all, again, this is a kill chain that we talk about. They have to get a foothold into the infrastructure, and then they can load things like ransomware on there. They can little things like information stealers as an example. The way they do that is through botnets. And what we reported in this in the first half of 2021 is that Mirai, which is about a two to three-year old botnet now is number one by far, it was the most prevalent botnet we've seen. Of course, the thing about Mirai is that it's an IOT based botnet. So it sits on devices, sitting inside consumer networks as an example, or home networks, right. And that can be a big problem. So that's the targets that cyber criminals are using. The other thing that we saw that was interesting was that one in four organizations detected malvertising. And so what that means Lisa, is that cyber criminals are shifting their tactics from going just from cloud-based or centralized email phishing campaigns to web born threats, right. So they're infecting sites, waterhole attacks, where, you know, people will go to read their daily updates as an example of things that they do as part of their habits. They're getting sent links to these sites that when they go to it, it's actually installing those botnets onto those systems, so they can get a foothold. We've also seen scare tactics, right. So they're doing new social engineering lures, pretending to be human resource departments. IT staff and personnel, as an example, with popups through the web browser that look like these people to fill out different forms and ultimately get infected on home devices. >> Well, the home device use is proliferate. It continues because we are still in this work from home, work from anywhere environment. Is that, you think a big factor in this increase from 7x to nearly 11x? >> It is a factor, absolutely. Yeah, like I said, it's also, it's a hybrid of sorts. So a lot of that activity is going to the MSSP angle, like I said to the OT. And to those new verticals, which by the way, are actually even larger than traditional targets in the past, like finance and banking, is actually lower than that as an example. So yeah, we are seeing a shift to that. And like I said, that's, further backed up from what we're seeing on with the, the botnet activity specifically with Mirai too. >> Are you seeing anything in terms of the ferocity, we know that the volume is increasing, are they becoming more ferocious, these attacks? >> Yeah, there is a lot of aggression out there, certainly from, from cyber criminals. And I would say that the velocity is increasing, but the amount, if you look at the cyber criminal ecosystem, the stakeholders, right, that is increasing, it's not just one or two campaigns that we're seeing. Again, we're seeing, this has been a record cases year, almost every week we've seen one or two significant, cyber security events that are happening. That is a dramatic shift compared to last year or even, two years ago too. And this is because, because the cyber criminals are getting deeper pockets now. They're becoming more well-funded and they have business partners, affiliates that they're hiring, each one of those has their own methodology, and they're getting paid big. We're talking up to 70 to 80% commission, just if they actually successfully, infect someone that pays for the ransom as an example. And so that's really, what's driving this too. It's a combination of this kind of perfect storm as we call it, right. You have this growing attack surface, work from home environments and footholds into those networks, but you have a whole bunch of other people now on the bad side that are orchestrating this and executing the attacks too. >> So what can organizations do to start- to slow down or limit the impacts of this growing ransomware as a service? >> Yeah, great question. Everybody has their role in this, I say, right? So if we look at, from a strategic point of view, we have to disrupt cyber crime, how do we do that? It starts with the kill chain. It starts with trying to build resilient networks. So things like ZTA and a zero trust network access, SD-WAN as an example for protecting that WAN infrastructure. 'Cause that's where the threats are floating to, right. That's how they get the initial footholds. So anything we can do on the preventative side, making networks more resilient, also education and training is really key. Things like multi-factor authentication are all key to this because if you build that preventatively and it's a relatively small investment upfront Lisa, compared to the collateral damage that can happen with these ransomware paths, the risk is very high. That goes a long way, it also forces the attackers to- it slows down their velocity, it forces them to go back to the drawing board and come up with a new strategy. So that is a very important piece, but there's also things that we're doing in the industry. There's some good news here, too, that we can talk about because there's things that we can actually do apart from that to really fight cyber crime, to try to take the cyber criminals offline too. >> All right, hit me with the good news Derek. >> Yeah, so a couple of things, right. If we look at the botnet activity, there's a couple of interesting things in there. Yes, we are seeing Mirai rise to the top right now, but we've seen big problems of the past that have gone away or come back, not as prolific as before. So two specific examples, EMOTET, that was one of the most prolific botnets that was out there for the past two to three years, there is a take-down that happened in January of this year. It's still on our radar but immediately after that takedown, it literally dropped to half of the activity it had before. And it's been consistently staying at that low watermark now at that half percentage since then, six months later. So that's very good news showing that the actual coordinated efforts that were getting involved with law enforcement, with our partners and so forth, to take down these are actually hitting their supply chain where it hurts, right. So that's good news part one. Trickbot was another example, this is also a notorious botnet, takedown attempt in Q4 of 2020. It went offline for about six months in our landscape report, we actually show that it came back online in about June this year. But again, it came back weaker and now the form is not nearly as prolific as before. So we are hitting them where it hurts, that's that's the really good news. And we're able to do that through new, what I call high resolution intelligence that we're looking at too. >> Talk to me about that high resolution intelligence, what do you mean by that? >> Yeah, so this is cutting edge stuff really, gets me excited, keeps me up at night in a good way. 'Cause we we're looking at this under the microscope, right. It's not just talking about the what, we know there's problems out there, we know there's ransomware, we know there's a botnets, all these things, and that's good to know, and we have to know that, but we're able to actually zoom in on this now and look at- So we, for the first time in the threat landscape report, we've published TTPs, the techniques, tactics, procedures. So it's not just talking about the what, it's talking about the how, how are they doing this? What's their preferred method of getting into systems? How are they trying to move from system to system? And exactly how are they doing that? What's the technique? And so we've highlighted that, it's using the MITRE attack framework TTP, but this is real time data. And it's very interesting, so we're clearly seeing a very heavy focus from cyber criminals and attackers to get around security controls, to do defense innovation, to do privilege escalation on systems. So in other words, trying to be common administrator so they can take full control of the system. As an example, lateral movement, there's still a preferred over 75%, 77 I believe percent of activity we observed from malware was still trying to move from system to system, by infecting removable media like thumb drives. And so it's interesting, right. It's a brand new look on these, a fresh look, but it's this high resolution, is allowing us to get a clear image, so that when we come to providing strategic guides and solutions in defense, and also even working on these takedown efforts, allows us to be much more effective. >> So one of the things that you said in the beginning was we talked about the increase in ransomware from last year to this year. You said, I don't think that we've hit that ceiling yet, but are we at an inflection point? Data showing that we're at an inflection point here with being able to get ahead of this? >> Yeah, I would like to believe so, there is still a lot of work to be done unfortunately. If we look at, there's a recent report put out by the Department of Justice in the US saying that, the chance of a criminal to be committing a crime, to be caught in the US is somewhere between 55 to 60%, the same chance for a cyber criminal lies less than 1%, well 0.5%. And that's the bad news, the good news is we are making progress in sending messages back and seeing results. But I think there's a long road ahead. So, there's a lot of work to be done, We're heading in the right direction. But like I said, they say, it's not just about that. It's, everyone has their role in this, all the way down to organizations and end users. If they're doing their part of making their networks more resilient through this, through all of the, increasing their security stack and strategy. That is also really going to stop the- really ultimately the profiteering that wave, 'cause that continues to build too. So it's a multi-stakeholder effort and I believe we are getting there, but I continue to still, I continue to expect the ransomware wave to build in the meantime. >> On the end-user front, that's always one of the vectors that we talk about, it's people, right? There's so much sophistication in these attacks that even security folks and experts are nearly fooled by them. What are some of the things that you're saying that governments are taking action on some recent announcements from the White House, but other organizations like Interpol, the World Economic Forum, Cyber Crime Unit, what are some of the things that governments are doing that you're seeing that as really advantageous here for the good guys? >> Yeah, so absolutely. This is all about collaboration. Governments are really focused on public, private sector collaboration. So we've seen this across the board with Fortiguard Labs, we're on the forefront with this, and it's really exciting to see that, it's great. There's always been a lot of will to work together, but we're starting to see action now, right? Interpol is a great example, they recently this year, held a high level forum on ransomware. I actually spoke and was part of that forum as well too. And the takeaways from that event were that we, this was a message to the world, that public, private sector we need. They actually called ransomware a pandemic, which is what I've referred to it as before in itself as well too. Because it is becoming that much of a problem and that we need to work together to be able to create action, action against this, measure success, become more strategic. The World Economic Forum were leading a project called the Partnership Against Cyber Crime Threat Map Project. And this is to identify, not just all this stuff we talked about in the threat landscape report, but also looking at, things like, how many different ransomware gangs are there out there. What do the money laundering networks look like? It's that side of the supply chain to map out, so that we can work together to actually take down those efforts. But it really is about this collaborative action that's happening and it's innovation and there's R&D behind this as well, that's coming to the table to be able to make it impactful. >> So it sounds to me like ransomware is no longer a- for any organization in any industry you were talking about the expansion of verticals. It's no longer a, "If this happens to us," but a matter of when and how do we actually prepare to remediate, prevent any damage? >> Yeah, absolutely, how do we prepare? The other thing is that there's a lot of, with just the nature of cyber, there's a lot of connectivity, there's a lot of different, it's not just always siloed attacks, right. We saw that with Colonial obviously, this year where you have attacks on IT, that can affect consumers, right down to consumers, right. And so for that very reason, everybody's infected in this. it truly is a pandemic I believe on its own. But the good news is, there's a lot of smart people on the good side and that's what gets me excited. Like I said, we're working with a lot of these initiatives. And like I said, some of those examples I called up before, we're actually starting to see measurable progress against this as well. >> That's good, well never a dull day I'm sure in your world. Any thing that you think when we talk about this again, in a few more months of the second half of 2021, anything you predict crystal ball wise that we're going to see? >> Yeah, I think that we're going to continue to see more of the, I mean, ransomware, absolutely, more of the targeted attacks. That's been a shift this year that we've seen, right. So instead of just trying to infect everybody for ransom, as an example, going after some of these new, high profile targets, I think we're going to continue to see that happening from the ransomware side and because of that, the average costs of these data breaches, I think they're going to continue to increase, it already did in 2021 as an example, if we look at the cost of a data breach report, it's gone up to about $5 million US on average, I think that's going to continue to increase as well too. And then the other thing too is, I think that we're going to start to see more, more action on the good side like we talked about. There was already a record amount of takedowns that have happened, five takedowns that happened in January. There were arrests made to these business partners, that was also new. So I'm expecting to see a lot more of that coming out towards the end of the year too. >> So as the challenges persist, so do the good things that are coming out of this. Where can folks go to get this first half 2021 Global Threat Landscape? What's the URL that they can go to? >> Yeah, you can check it out, all of our updates and blogs including the threat landscape reports on blog.fortinet.com under our threat research category. >> Excellent, I read that blog, it's fantastic. Derek, always a pleasure to talk to you. Thanks for breaking this down for us, showing what's going on. Both the challenging things, as well as the good news. I look forward to our next conversation. >> Absolutely, it was great chatting with you again, Lisa. Thanks. >> Likewise for Derek Manky, I'm Lisa Martin. You're watching this Cube Conversation. (exciting music)

Published Date : Aug 31 2021

SUMMARY :

Welcome to this Cube Hey, it's great to be here again. So Derek, one of the things Now it's built to this monster, you know, One of the things that So that's the targets that Well, the home device So a lot of that activity but the amount, if you look at that we can talk about because with the good news Derek. of the activity it had before. So it's not just talking about the what, So one of the things that 'cause that continues to build too. What are some of the things And this is to identify, So it sounds to me like And so for that very reason, that we're going to see? more of the targeted attacks. so do the good things that including the threat landscape I look forward to our next conversation. chatting with you again, Lisa. Likewise for Derek

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DerekPERSON

0.99+

Lisa MartinPERSON

0.99+

JanuaryDATE

0.99+

InterpolORGANIZATION

0.99+

Fortiguard LabsORGANIZATION

0.99+

Derek MankyPERSON

0.99+

Derek MankyPERSON

0.99+

2021DATE

0.99+

December, 2020DATE

0.99+

oneQUANTITY

0.99+

last yearDATE

0.99+

10QUANTITY

0.99+

December of 2020DATE

0.99+

White HouseORGANIZATION

0.99+

LisaPERSON

0.99+

0.5%QUANTITY

0.99+

blog.fortinet.comOTHER

0.99+

Department of JusticeORGANIZATION

0.99+

77QUANTITY

0.99+

USLOCATION

0.99+

World Economic ForumORGANIZATION

0.99+

thirdQUANTITY

0.99+

twoQUANTITY

0.99+

7xQUANTITY

0.99+

this yearDATE

0.99+

five takedownsQUANTITY

0.99+

BothQUANTITY

0.99+

bothQUANTITY

0.99+

less than 1%QUANTITY

0.99+

first timeQUANTITY

0.99+

todayDATE

0.99+

two waysQUANTITY

0.98+

two years agoDATE

0.98+

six months laterDATE

0.98+

about $5 millionQUANTITY

0.98+

two specific examplesQUANTITY

0.98+

Global Threat AlliancesORGANIZATION

0.98+

last DecemberDATE

0.98+

COVID-19OTHER

0.98+

Cyber Crime UnitORGANIZATION

0.98+

Global Threat Landscape ReportTITLE

0.98+

60%QUANTITY

0.97+

over 75%QUANTITY

0.97+

fourth positionQUANTITY

0.97+

four segmentsQUANTITY

0.97+

January of this yearDATE

0.97+

OneQUANTITY

0.97+

two campaignsQUANTITY

0.96+

four organizationsQUANTITY

0.96+

second half of 2021DATE

0.95+

this yearDATE

0.95+

55QUANTITY

0.95+

over a thousand percentQUANTITY

0.94+

EMOTETORGANIZATION

0.94+

each oneQUANTITY

0.93+

ColonialORGANIZATION

0.93+

three-year oldQUANTITY

0.92+

firstQUANTITY

0.91+

half percentageQUANTITY

0.91+

about six monthsQUANTITY

0.9+

June this yearDATE

0.89+

three yearsQUANTITY

0.88+

almost 11 timesQUANTITY

0.87+

up to 70QUANTITY

0.85+

more than 10x increaseQUANTITY

0.83+

first half of 2021DATE

0.83+

seven fold increaseQUANTITY

0.82+

pandemicEVENT

0.82+

Global Threat LandscapeTITLE

0.81+

position oneQUANTITY

0.8+

MiraiORGANIZATION

0.79+

FortinetORGANIZATION

0.79+

80% commissionQUANTITY

0.78+

Kumar Sreekanti & Robert Christiansen, HPE | HPE Discover 2020


 

>>from around the globe. It's the Cube covering HP. Discover Virtual Experience Brought to you by HP >>Everyone welcome to the Cube studios here in Palo Alto, California We here for remote conversation. Where for HP Discover virtual experience. 2020. We would Kumar, Sri Context, chief technology officer and head of Software Cube alumni. We've been following Kumar since he started Blue Data. Now he's heading up the software team and CTO at HP and Robert Christensen, VP of Strategy of Office of the CTO Robert Both Cube alumni's Robert, formerly with CTP, now part of the team that's bringing the modernization efforts around enterprises in this fast changing world that's impacting the operating models for businesses. We're seeing that playing out in real time with Covert 19 as customers are modernizing the efforts. Guys, thanks for coming on. Taking the time. >>You're welcome, John. Good to be back here, >>Kumar. First I have to ask you, I have to ask you your new role at HP sent it up to CTO but also head of the software. How >>do you >>describe that role Because you're CTO and also heading up? This offers a general manager. Could you take him in to explain this new role and why It's important. >>Thank you. Thank you, John. And so good to be back. You get two for one with me and Robert didn't. Yeah, it's very exciting to be here as the CTO of HB. And as Antonio described in in his announcement, we consider software will be very key, essential part of the our people as a service. And, uh, we want we see that it's an opportunity for not only layer division but help drive the execution of that reason. Both organic them in our. So we we see we want to have a different change of software that helps the customers, too, to get us to the workloads optimized, or are there specific solutions? >>You guys were both on the Cube in November, Pre cove it with the minimum John Troyer talking about the container platform news, leveraging the acquisitions you guys have done at HP Kumar, your company Blue Data map, our CTP, Robert, the group. You're there really talking about the strategies around running these kinds of workloads. And if you think about Cove in 19 this transformation, it's really changing work. Workforces, workplaces, workloads, work flows everything to do with work and people are at home. That's an extension of the on premise environment. VPN provisions were under provisional hearing all these stories, exposing all the things that need to be worked on because no one ever saw this kind of direction. It highlights the modern efforts that a lot of your customers are going through rubber. Can you explain? And Kumar talk about this digital transformation in this cove it and then when we come out of it, the growth strategies that need to be put in place and the projects take a minute to explain. >>Go ahead. Robert Cover has been spending a lot of time with our customers, and I would like to go ahead. >>Yeah, thank you so much. It's Ah, uh, accelerators. What's happened? Many of our clients have been forced into the conversation about how do I engage our customers, and how do we engage our broad constituents, including our employees and colleagues, in a more rapid and easier way? And many of the systems that were targeted to make their way to a public cloud digital transformation process did not get the attention just because of their size and breadth and depth effort. So that's really put an accelerator down on what are we gonna do? So we have to be able to bring a platform into our clients organizations that have the same behavior characteristics or what we call you know, the same cloud experiences that people are expecting public. Bring it close to our client's data and their applications without having that you don't have a platform by which you can have an accelerated digital transformation because it's historically a public cloud. But the only path to get that done, what we're really considering, what we introduced a while ago was platform near our clients applications. That data that gives them that ability to move quicker and respond to these industries, situations and specifically, what's happened with company really pushes it harder for real solutions Now that they can act on >>Kumar, your thoughts on this pre coded >>Yeah, yeah, this is the piece of acceleration for the digital transformation is just is a longer dynamically multiplied the code. But I think as you pointed out, John the remote working and the VPN is the security. We were as an edge to the Cloud platform company we were already in that space, so it's actually very, very. As Robert pointed out, it's actually nice to see that transformation is his transition or rapidly getting into the digitization. But one thing that is very interesting to note here is you can you can lift and shift of data has gravity. And you actually saw we actually see the war. All the distributor cloud. We see that we're glad to see what we've seen we've been talking about prior to the Kool Aid. And recently even the industry analysts are talking about we believe there is a computer can happen where the data is on. But this is actually an interesting point for me to say. This is why we have actually announced our new software platform, which we as well, which is our our key differentiator pillar for our as a service people that companies are facing. >>Could you talk about what this platform is? You guys are announcing the capabilities and what customers can expect from this. Is that a repackaging? Is there something new here? What's is it something different, Making something better? What? Can you just give us a quick taste of what this is and what it means. >>Good love alive. >>Yeah, so yeah, that's a great question. Is it repackage There's actually something. Well, I'm happy to say. It's a combination of a lot of existing assets that come together in the ecosystem, I think a platform that is super unique. You know, you look at what the Blue data container Onda adoption of communities holistically is a control plane as well as our data fabric of motion to the market with Matt Bahr and you combine that with our network experiences and our other platform very specific platform solutions and your clients data that all comes together in intellectual property that we have that we packed together and make it work together. So there's a lot of new stuff in there, But more importantly, we have a number of other close partners that we've brought together to form out our as moral platform. We have a new, really interesting combination of security and authentication. Piece is through our site L organization that came underneath with us a few months back and are aggressive motion towards bringing in strong networking service that complexity as well. So these all come together and I'm sure leaving a few out there are specifically with info site software to continue to build out a Dr solution on premises that provides that world class of services that John >>Sorry, Johnny, was the question at the beginning is, what is that? Why the software role is This is exactly what I was waiting for that that that moment where Robert pointed out, our goal is we have a lots of good assets. In addition to a lot of good partnerships, we believe the market is the customers want outcome based solutions. Best motion not. I want peace meal. So we have an opportunity to provide the customers the solution from the top to the bottom we were announced, or the Discover ML ops as a service which is actually total top to the bottom and grow, and customers can build ml solutions on the top of the Green lake. This is built on HP is moral, so it's not. I wouldn't use the word repackaging, but it is actually a lot of the inorganic organic technologies that have come together that building the solution. >>You know, I don't think it's ah, negative package something up in >>Toto. So I wouldn't >>I didn't think >>negative, but I was just saying that it is. It's Ah, it's a lot of new stuff, but also, as Robert said included, or you built a very powerful container platform. As you said, you just mentioned it that you've gone. We announced the well. >>One of the things I liked about your talk on November was that the company is kind of getting in the weeds, but stateless versus State. Full data's a big part >>of >>it, but you don't get the cloud and public cloud and horizontal scalability. No one wants Peace meal, that word you guys just mentioned or these siloed tools and about the workforce workplace transformation with Cove it it's exposing the edge, everybody. It's not just a nightie conversation. You need to have software that traverses the environment. So you now looking at not so much point solutions best to breed but you guys have had in the past, but saying Okay, I got to look at this holistically and say, How do I make sure I make sure security, which is the new perimeter, is the home right or wherever is no perimeter anymore is everywhere, So >>this is now >>just a architectural concept. Not so much a point solution, right? I mean, is that kind of how you're thinking about it? >>That's correct. In fact, as you said, the data is generated at the edge and you take the compute and it's been edge to the cloud platform. What we have, actually what we are actually demonstrating is we want to give a complete solution no matter where the processing needs are. And with HP, you have no that cloud like experience both as UNP prime as well as what we call a hybrid. I think let's be honest, the world is going to be hybrid and you can actually see the changes that is happening even from the public cloud vendors. They're trying to come on pram. So HP is being established player in this, and with this technology I think provides that solution, you can process where the data is. >>Yeah, I would agree it's hybrid. I would say Multi cloud is also, you know, code word for multi environment, right? And Robert, I want todo as you mentioned in your talk with stew minimum in November, consistency across environments. So when you talk to customers. Robert. What are they saying? Because I can imagine them in zoom meetings right now or teleconferencing saying, Look it, we have to have an operating model that spans public on premise. Multiple environments, whether it's edge or clouds. I don't wanna have different environments and being managed separately and different data modeling. I won't have a control plane, and this is architectural. I mean, it's kind of complex, but customers are dealing with this right now. What are you hearing from customers? How are they handling and they doubling down on certain projects? Are they reshaping some of their investments? I mean, what's the mindset of the customer >>right now? The mindset is that the customers, under extreme pressure to control costs and improve automation and governance across all their platforms, the business, the businesses that we deal with have established themselves in a public cloud, at least to some extent, with what they call their systems of engagement. Those are all the lot of the elastic systems, the hype ones that the hyper scale very well, and then they have all of their existing on premises, stuff that you typically heavily focused on. A VM based mindset which is being more more viewed as legacy, actually, and so they're looking for that next decade of operating. While that spans both the public and the private cloud on Premises World and what's risen up, that operating model is the open source kubernetes orchestration based operating model, where they gives them the potential of walking into another operating model that's holistic across both public and private but more importantly, as a way for their existing platforms to move into this new operating model. That's what you're talking about, using state full applications that are more legacy minded, monolithic but still can run in the container based platform and move to a new ballistic operating model. Nobody's under the impression, by the way, that the existing operating model we have today on premises is compatible with the cloud operating model. Those two are not compatible in any shape. Before we have to get to an operating model that holistic in nature. We see that, >>and that's a great tee up for the software question Robert, I want to go to. Come on, I want to get thoughts because I know you personally and I've been following your career. Certainly you know. Well, well, well, deep in computer science and software. So I think it's a good role for you. But if you look at what the future is, this is the conversation we're having with CIOs and customers on the Cube is when I get back to work postcode. But I've gotta have a growth strategy. I need to reset, reinvent and have growth strategy. And all the conversations come back to the APS that they have to redevelop or modernize, right? So workloads or whatever. So what that means is they really want true agility, not just as a punch line or cliche. They gotta move security into the Dev Ops pipeline ing. They got to make the application environment. Dev Ops and Dev Ops was kind of a fringe industry thing for about a decade. And now that's implement. That's influencing I T ops, security ops and network ops. These are operational systems, not just, you know, Hey, let's sling some kubernetes and service meshes around. This is like really nuts and bolts business operations. So, you know, I t Ops has impacted SEC ops isn't impacted. They're working us not for the faint of Heart Dev Ops I get that now it's coming everywhere. What's your thoughts on that? What's your reaction? >>We see those things coming together, John. So again, going back to the Israel were the world we believe this innovative software is. It can run on any infrastructure to start with, whether it's HP hardware knowledge we are with. It's called Hybrid. And as we said we talked about, it is it is, um it's whether it is an edge already where the processing is. We also committed to providing integrated, optimized, secure, elastic and automate our solutions. Right. This is, I think, your question of are it's not just appealing to the one segment of the organization. I think there's going to be a I cannot just say I'm only giving you the devil ops solution, but it has to have a security built into. This is why we are actually committed to making our solutions more elastic, more scalable. We're investing in building a complete runtime stack and making sure it has the all the fleet compose. It's not only optimized for the work solution which we call the work runtime stack, it's also has this is our Green Lake solution that that brings these two pieces together. Robert? Yeah. Sorry. Go ahead. >>Robert, you mentioned automation earlier. This is where the automation dream comes in. The Mission ml ops service. What you're really getting at is program ability for the developer across the board, right? Is that kind of what you're thinking? Or? >>Well, there's two parts of that. This is really important. The developer community is looking for a set of tools that they could be very creative and movement right. They don't want to have to be worried about provisioning managing, maintaining any kind of infrastructure. And so there's this bridge between that automation and the actual getting things done. So that's number one. But more importantly, I think this is hugely important, as you look about pushing into the on premises world for for H, P E or anybody else to succeed in that space, you have to have a high degree of automation that takes care of potential problems that humans would otherwise have to get involved with. And that's when they cost. So you have to drive in a commercial. I'm gonna fleet controls of Fleet management services that automate their behavior and give them an S L A that are custom to public cloud. So you've got two sets of automation that you really have to be dealing with. Not only are you talking about Dev ops, the second stage you just talked about, but you gotta have a corresponding automation bake back into drive. A higher user experience at both levels >>and Esmeraldas platforms is cool. I get that. I hear that. So the question next question on that Kumar is platforms have to enable value. What are you guys enabling for the value when you talk to customers? Because who everyone sees the platform play as the as the architecture, but it has to create disruptive, enabling value. What do you >>Yeah, that I'll go on as a starter, I think way pointed out to you. This is the when we announced the container platform, it's off, the very unique. It's not only it's open source Cuban it is. It has a persistent one of the best underlying persistent stories integrated the original map or a file system, as I pointed out, drones one of the world's largest databases, and we can actually allow the customers to run both both state full and stateless workloads. And as I said a few minutes ago, we are committed to having the run times off they run and both which we are. We're not a hardware, so the customers have the choice on. In addition to all of that, I think we're in a very unique solutions. We're offering is ML ops as we talked about and this is only beginning, and we have lots of other examples of Robert is working on a solution. Hopefully, we'll announce sometime soon, which is similar to that. Some of the key elements that we're seeing in the marketplace, the various solutions that goes from the top of the bar >>Robert to you on the same question. What's in it for me in the customer? Bottom line. What's the what's in it for me? >>Well, so I think, just the ease of simplicity. What we are ultimately want to provide for a client is one opportunity to solve a bunch of problems that otherwise have to stitch together myself. It's really about value and speed to value. If I have to solve the same computer vision problem in manufacturing facility and I need a solution and I don't have the resource of the wherewithal stacks like that, but I got to bring a bigger solution. I want a company that knows how to deliver a computer vision solution there or within an airport or wherever, where I don't need to build out sophisticated infrastructure or people are technologies necessary, is point on my own or have some third party product that doesn't have a vested interest in the whole stack. H P E is purposely have focused on delivering that experience with one organization from both hardware and software up to the stack, including the applications that we believe with the highest value to the client We want to be. That organization will be an organization on premises. >>I think that's great, consistent with what we're hearing if you can help take the heavy lifting away and have them focus on their business and the creativity. And I think the application renaissance and transformation is going to be a big focus both on the infrastructure side but also just straight up application developers. That's gonna be really critical path for a lot of these companies to come out of this. So congratulations on that love that love the formula final conclusion question for both you guys. This is something that a lot of people might be asking at HP. Discover virtual experience, or in general, as they have to plan and get back to work and reset, reinvent and grow their organizations. Where is HP heading? How do you see HP heading? How would you answer that question? If the customers like Kumar Robert, where's HP heading? How would you answer that? >>Go ahead, Robert. And then I can >>Yeah, yeah. Uh huh, Uh huh. I see us heading into the true distributed hybrid platform play where that they would look to HP of handling and providing all of their resource is and solutions needs as they relate to technology further and further into what their specific edge locations would look like. So edge is different for everybody. And what HP is providing is a holistic view of compute and our storage and our solutions all the way up through whether they be very close to the edge. Locations are all the way through the data center and including the integration with our public cloud partners out there. So I see HP is actually solving real value business problems in a way that's turnkey and define it for our clients. Really value >>John. I think I'll start with the word Antonio shared. We are edge to the cloud, everything as a service company and I think the we're actually sending is HPE is Valley Legend, and it's actually honored to be part of the such a great company. I think what we have to change with the market transformation the customer needs and what we're doing is we're probably in the customers that innovative solution that you don't have to. You don't have to take your data where the computers, as opposed to you, can take the compute where the data is and we provide you the simplified, automated, secure solutions no matter where you very rare execution needs are. And that is through the significant innovation of the software, both for as Model and the Green Lake. >>That's awesome. And, you know, for all of us, have been through multiple ways of innovation. We've seen this movie before. It's essentially distributive computing, re imagine and re architected with capability is the new scale. I mean, it's almost back to the old days of network operating systems and networking and Os is and it's a you know, >>I that's a very, very good point. And I will come through the following way, right? I mean, it is, It's Ah, two plus two is four no matter what university, Gordo. But you have to change with the market forces. I think the market is what is happening in the marketplace. As you pointed out, there was a shadow I t There's a devil Ops and his idea off the network ops and six years. So now I think we see that all coming together I call this kubernetes is the best equalizer of the past platform. The reason why it became popular is because it's provided that abstraction layer on. I think what we're trying to do is okay, if that is where the customers want and we provide a solution that helps you to build that very quickly without having to lock into any specific platform. >>I think you've got a good strategy there. I would agree with you. I would call that I call it the old TCP I p. What that did networking back in the day. Kubernetes is a unifying, disruptive enabler, and I think it enables things like a runtime stack. Things that you're mentioning. These are the new realities. I think Covad 19 has exposed this new architectures of the world. >>Yeah, one last year, we were saying >>once, if not having something in place >>started. So the last thing I would say is it we're not bolting coolness to anything. Old technologies. It's a fresh it's built in. It's an open source. And it is as a salaries, it can run on any platform that you choose to run. Now. >>Well, next time we get together, we'll refund, observe ability and security and all that good stuff, because that's what's coming next. All the basic in guys. Thank you so much, Kumar. Robert. Thanks for spending the time. Really appreciate it here for the HP Discover Virtual Spirits Cube conversation. Thanks for Thanks for joining me today. >>Thank you very much. >>I'm John Furrier with Silicon Angle. The Cube. We're here in our remote studios getting all the top conversations for HP Discover virtual experience. Thanks for watching. Yeah, >>yeah, yeah.

Published Date : Jun 23 2020

SUMMARY :

Discover Virtual Experience Brought to you by HP at HP and Robert Christensen, VP of Strategy of Office of the CTO Robert it up to CTO but also head of the software. Could you take him in to explain a different change of software that helps the customers, too, about the container platform news, leveraging the acquisitions you guys have done at HP Robert Cover has been spending a lot of time with our customers, and I would like to go ahead. that have the same behavior characteristics or what we call you know, the same cloud experiences But I think as you pointed out, John the remote working and the VPN is the security. You guys are announcing the capabilities and with Matt Bahr and you combine that with our network experiences and our other platform the solution from the top to the bottom we were announced, or the Discover ML We announced the well. One of the things I liked about your talk on November was that the company is kind of getting in the weeds, that word you guys just mentioned or these siloed tools and about the workforce workplace I mean, is that kind of how you're thinking the world is going to be hybrid and you can actually see the changes that is happening I would say Multi cloud is also, you know, code word for multi environment, the business, the businesses that we deal with have established themselves in a public and customers on the Cube is when I get back to work postcode. I think there's going to be a I cannot just say I'm only giving you the devil ops solution, Is that kind of what you're thinking? the second stage you just talked about, but you gotta have a corresponding automation bake back into enabling for the value when you talk to customers? This is the when we announced Robert to you on the same question. and I don't have the resource of the wherewithal stacks like that, but I got to bring a bigger solution. I think that's great, consistent with what we're hearing if you can help take the heavy lifting away and have them focus And then I can the data center and including the integration with our public cloud partners in the customers that innovative solution that you don't have to. I mean, it's almost back to the old days of network operating systems and that helps you to build that very quickly without having to lock into What that did networking back in the day. And it is as a salaries, it can run on any platform that you choose to run. Thanks for spending the time. We're here in our remote studios getting all the top conversations for

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RobertPERSON

0.99+

AntonioPERSON

0.99+

JohnPERSON

0.99+

KumarPERSON

0.99+

Robert ChristiansenPERSON

0.99+

JohnnyPERSON

0.99+

John TroyerPERSON

0.99+

HPORGANIZATION

0.99+

Kumar SreekantiPERSON

0.99+

NovemberDATE

0.99+

twoQUANTITY

0.99+

two piecesQUANTITY

0.99+

Palo Alto, CaliforniaLOCATION

0.99+

John FurrierPERSON

0.99+

2020DATE

0.99+

Robert ChristensenPERSON

0.99+

six yearsQUANTITY

0.99+

two partsQUANTITY

0.99+

FirstQUANTITY

0.99+

Kumar RobertPERSON

0.99+

bothQUANTITY

0.99+

HPEORGANIZATION

0.99+

Kool AidORGANIZATION

0.99+

oneQUANTITY

0.98+

Covert 19ORGANIZATION

0.98+

BothQUANTITY

0.98+

second stageQUANTITY

0.98+

HP KumarORGANIZATION

0.98+

Sri ContextPERSON

0.98+

Blue Data mapORGANIZATION

0.98+

IsraelLOCATION

0.98+

todayDATE

0.98+

two setsQUANTITY

0.98+

fourQUANTITY

0.97+

one opportunityQUANTITY

0.97+

Matt BahrPERSON

0.96+

one thingQUANTITY

0.96+

Robert CoverPERSON

0.96+

H P EORGANIZATION

0.96+

both levelsQUANTITY

0.95+

one organizationQUANTITY

0.95+

CubeCOMMERCIAL_ITEM

0.94+

Software CubeORGANIZATION

0.93+

Dave Brown, Amazon | AWS Summit Online 2020


 

>> Narrator: From theCUBE studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is theCUBE conversation. >> Everyone, welcome to the Cube special coverage of the AWS Summit San Francisco, North America all over the world, and most of the parts Asia, Pacific Amazon Summit is the hashtag. This is part of theCUBE Virtual Program, where we're going to be covering Amazon Summits throughout the year. I'm John Furrier, host of theCUBE. And of course, we're not at the events. We're here in the Palo Alto Studios, with our COVID-19 quarantine crew. And we got a great guest here from AWS, Dave Brown, Vice President of EC2, leads the team on elastic compute, and its business where it's evolving and most importantly, what it means for the customers in the industry. Dave, thanks for spending the time to come on theCUBE virtual program. >> Hey John, it's really great to be here, thanks for having me. >> So we got the summit going down. It's new format because of the shelter in place. They're going virtual or digital, virtualization of events. And I want to have a session with you on EC2, and some of the new things they're going on. And I think the story is important, because certainly around the pandemic, and certainly on the large scale, SaaS business models, which are turning out to be quite the impact from a positive standpoint, with people sheltering in place, what is the role of data in all this, okay? And also, there's a lot of pressure financially. We've had the payroll loan programs from the government, and to companies really looking at their bottom lines. So two major highlights going on in the world that's directly impacted. And you have some products, and news around this, I want to do a deep dive on that. One is AppFlow, which is a new integration service by AWS, that really talks about taking the scale and value of AWS services, and integrating that with SaaS Applications. And the migration acceleration program for Windows, which has a storied history of database. For many, many years, you guys have been powering most of the Windows workloads, ironic that you guys are not Microsoft, but certainly had success there. Let's start with the AppFlow. Okay, this was recently announced on the 22nd of April. This is a new service. Can you take us through why this is important? What is the service? Why now, what was the main driver behind AppFlow? >> Yeah, absolutely. So with the launcher AppFlow, what we're really trying to do is make it easy for organizations and enterprises to really control the flow of their data, between the number of different applications that they use on premise, and AWS. And so the problem we started to see was, enterprises just had this data all over the place, and they wanted to do something useful with it. Right, we see many organizations running Data Lakes, large scale analytics, Big Machine Learning on AWS, but before you can do all of that, you have to have access to the data. And if that data is sitting in an application, either on-premise or elsewhere in AWS, it's very difficult to get out of that application, and into S3, or Redshift, or one of those services, before you can manipulate it, that was the challenge. And so the journey kind of started a few years ago, we actually launched a service on the EC2 network, inside Private Link. And it was really, it provided organizations with a very secure way to transfer network data, both between VPCs, and also between VPC, and on-prem networks. And what this highlighted to us, is organizations say that's great, but I actually don't have the technical ability, or the team, to actually do the work that's required to transform the data from, whether it's Salesforce, or SAP, and actually move it over Private Link to AWS. And so we realized, while private link was useful, we needed another layer of service that actually provided this, and one of the key requirements was an organization must be able to do this with no code at all. So basically, no developer required. And I want to be able to transfer data from Salesforce, my Salesforce database, and put that in Redshift together with some other data, and then perform some function on that. And so that's what AppFlow is all about. And so we came up with the idea about a little bit more than a year ago, that was the first time I sat down, and actually reviewed the content for what this was going to be. And the team's been hard at work, and launched on the 22nd of April. And we actually launched with 14 partners as well, that provide what we call connectors, which allow us to access these various services, and companies like Salesforce and ServiceNow, Slack, Snowflake, to name a few. >> Well, certainly you guys have a great ecosystem of SaaS partners, and that's you know well documented in the industry that you guys are not going to be competing directly with a lot of these big SaaS players, although you do have a few services for customers who want end to end, Jassy continues to pound that home on my Cube interviews. But I think this, >> Absolutely. is notable, and I want to get your thoughts on this, because this seems to be the key unlocking of the value of SaaS and Cloud, because data traversal, data transfer, there's costs involved, also moving traffic over the internet is unsecure, and unreliable. So a couple questions I wanted to just ask you directly. One is did the AppFlow come out of the AWS Private Link piece of it? And two, is it one directional or bi-directional? How is that working? Because I'm guessing that you had Private Link became successful, because no one wants to move on the internet. They wanted direct connects. Was there something inadequate about that service? Was there more headroom there? And is it bi-directional for the customer? >> So let me take the second one, it's absolutely bi-directional. So you can transfer that data between an on-premise application and AWS, or AWS and the on-premise application. Really, anything that has a connector can support the data flow in both directions. And with transformations, and so data in one data source, may need to be transformed, before it's actually useful in a second data source. And so AppFlow takes care of all that transformation as well, in both directions, And again, with no requirement for any code, on behalf of the customer. Which really unlocks it for a lot of the more business focused parts of an organization, who maybe don't have immediate access to developers. They can use it immediately, just literally with a few transformations via the console, and it's working for you. In terms of, you mentioned sort of the flow of data over the internet, and the need for security of data. It's critically important, and as we look at just what had happened as a company does. We have very, very strict requirements around the flow of data, and what services we can use internally. And where's any of our data going to be going? And I think it's a good example of how many enterprises are thinking about data today. They don't even want to trust even HTTPS, and encryption of data on the internet. I'd rather just be in a world where my data never ever traverses the internet, and I just never have to deal with that. And so, the journey all started with Private Link there, and probably was an interesting feature, 'cause it really was changing the way that we asked our customers to think about networking. Nothing like Private Link has ever existed, in the sort of standard networking that an enterprise would normally have. It's kind of only possible because of what VPC allows you to do, and what the software defined network on AWS gives you. And so we built Private Link, and as I said, customers started to adopt it. They loved the idea of being able to transfer data, either between VPCs, or between on-premise. Or between their own VPC, and maybe a third party provider, like Snowflake, has been a very big adopter of Private Link, and they have many customers using it to get access to Snowflake databases in a very secure way. And so that's where it all started, and in those discussions with customers, we started to see that they wanted us to up level a little bit. They said, "We can use Private Link, it's great, "but one of the problems we have is just the flow of data." And how do we move data in a very secure, in a highly available way, with no sort of bottlenecks in the system. And so we thought Private Link was a great sort of underlying technology, that empowered all of this, but we had to build the system on top of that, which is AppFlow. That says we're going to take care of all the complexity. And then we had to go to the ecosystem, and say to all these providers, "Can you guys build connectors?" 'Cause everybody realized it's super important that data can be shared, and so that organizations can really extract the value from that data. And so the 14 of them at launch, we have many, many more down the road, have come to the party with with connectors, and full support of what AppFlow provides. >> Yeah us DevOps purists always are pounding the fist on the table, now virtual table, API's and connectors. This is the model, so people are integrating. And I want to get your thoughts on this. I think you said low code, or no code on the developer simplicity side. Is it no code, or low code? Can you just explain quickly and clarify that point? >> It's no code for getting started literally, for the kind of, it's basic to medium complexity use case. It's not code, and a lot of customers we spoke to, that was a bottleneck. Right, they needed something from data. It might have been the finance organization, or it could have been human resources, somebody else in organization needed that. They don't have a developer that helps them typically. And so we find that they would wait many, many months, or maybe even never get the project done, just because they never ever had access to that data, or to the developer to actually do the work that was required for the transformation. And so it's no code for almost all use cases. Where it literally is, select your data source, select the connector, and then select the transformations. And some basic transformations, renaming of fields, transformation of data in simple ways. That's more than sufficient for the vast majority of use cases. And then obviously through to the destination, with the connector on the other side, to do the final transformation, to the final data source that you want to migrate the data to. >> You know, you have an interesting background, was looking at your history, and you've essentially been a web services kind of guy all your life. From a code standpoint software environment, and now I'll say EC2 is the crown jewel of AWS, and doing more and more with S3. But what's interesting, as you build more of these layers services in there, there's more flexibility. So right now, in most of the customer environments, is a debate around, do I build something monolithic, and or decoupled, okay? And I think there's a world where there's a mutually, not mutually exclusive, I mean, you have a mainframe, you have a big monolithic thing, if it does something. But generally people would agree that a decoupled environment is more flexible, and more agile. So I want to kind of get to the customer use case, 'cause I can really see this being really powerful, AppFlow with Private Link, where you mentioned Snowflake. I mean, Snowflake is built on AWS, they're doing extremely, extremely well, like any other company that builds on AWS. Whether it's theCUBE Cloud, or it's Snowflake. As we tap those services, customers, we might have people who want to build on our platform on top of AWS. So I know a bunch of startups that are building within the Snowflake ecosystem, a customer of yours. >> Yeah. >> So they're technically a customer of Amazon, but they're also in the ecosystem of say, Snowflake. >> Yes. >> So this brings up an interesting kind of computer science problem, which is architecturally, how do I think about that? Is this something where AppFlow could help me? Because I certainly want to enable people to build on a platform, that I build if I'm doing that, if I'm not going to be a pure SaaS turnkey application. But if I'm going to bring partners in, and do integration, use the benefits of the goodness of an API or Connector driven architecture, I need that. So explain to me how this helps me, or doesn't help me. Is this something that makes sense to you? Does this question make sense? How do you react to that? >> I think so, I think the question is pretty broad. But I think there's an element in which I can help. So firstly, you talk about sort of decoupled applications, right? And I think that is certainly the way that we've gone at Amazon, and been very, very successful for us. I think we started that journey back in 2003, when we decoupled the monolithic application that was amazon.com. And that's when our service journey started. And a lot of that sort of inspired AWS, and how we built what we built today. And we see a lot of our customers doing that, moving to smaller applications. It just works better, it's easier to debug, there's ownership at a very controlled level. So you can get all your engineering teams to have very clear and crisp ownership. And it just drives innovation, right? 'Cause each little component can innovate without the burden of the rest of the ecosystem. And so that's what we really enjoy. I think the other thing that's important when you think about design, is to see how much of the ecosystem you can leverage. And so whether you're building on Snowflake, or you're building directly on top of AWS, or you're building on top of one of our other customers and partners. If you can use something that solves the problem for you, versus building it yourself. Well that just leaves you with more time to actually go and focus on the stuff that you need to be solving, right? The product you need to be building. And so in the case of AppFlow, I think if there's a need for transfer of data, between, for example, Snowflake and some data warehouse, that you as an organisation are trying to build on a Snowflake infrastructure. AppFlow is something you could potentially look at. It's certainly not something that you could just use for, it's very specific and focused to the flow of data between services from a data analytics point of view. It's not really something you could use from an API point of view, or messaging between services. It's more really just facilitating that flow of data, and the transformation of data, to get it into a place that you can do something useful with it. >> And you said-- >> But like any of our services-- (speakers talk over each other) Couldn't be using any layer in the stack. >> Yes, it's a level of integration, right? There's no code to code, depending on how you look at it, cool. Customer use cases, you mentioned, large scale analytics, I thought I heard you say, machine learning, Data Lakes. I mean, basically, anyone who's using data is going to want to tap some sort of data repository, and figure out how to scale data when appropriate. There's also contextual, relevant data that might be specific to say, an industry vertical, or a database. And obviously, AI becomes the application for all this. >> Exactly. >> If I'm a customer, how does AppFlow relate to that? How does that help me, and what's the bottom line? >> So I think there's two parts to that journey. And depending on where customers are, and so there's, we do have millions of customers today that are running applications on AWS. Over the last few years, we've seen the emergence of Data Lakes, really just the storage of a large amount of data, typically in S3. But then companies want to extract value out of, and use in certain ways. Obviously, we have many, many tools today, from Redshift, Athena, that allow you to utilize these Data Lakes, and be able to run queries against this information. Things like EMR, and one of our oldest services in the space. And so doing some sort of large scale analytics, and more recently, services like SageMaker, are allowing us to do machine learning. And so being able to run machine learning across an enormous amount of data that we have stored in AWS. And there's some stuff in the IoT, workload use space as well, that's emerging. And many customers are using it. There's obviously many customers today that aren't using it on AWS, potential customers for us, that are looking to do something useful with data. And so the one part of the journey is taking up all of that infrastructure, and we have a lot of services that make it really easy to do machine learning, and do analytics, and that sort of thing. And then the other problem, the other side of the problem, which is what AppFlow is addressing is, how do I get that data to S3, or to Redshift, to actually go and run that machine learning workload? And that's what it's really unlocking for customers. And it's not just the one time transfer of data, the other thing that AppFlow actually supports, is the continuous updating of data. And so if you decide that you want to have that view of your data in S3, for example, and Data Lake, that's kept up to date, within a few minutes, within an hour, you can actually configure AppFlow to do that. And so the data source could be Salesforce, it could be Slack, it could be whatever data source you want to blend. And you continuously have that flow of data between those systems. And so when you go to run your machine learning workload, or your analytics, it's all continuously up to date. And you don't have this problem of, let me get the data, right? And when I think about some of the data jobs that I've run, in my time, back in the day as an engineer, on early EC2, a small part of it was actually running the job on the data. A large part of it was how do I actually get that data, and is it up to date? >> Up to date data is critical, I think that's the big feature there is that, this idea of having the data connectors, really makes the data fresh, because we go through the modeling, and you realize why I missed a big patch of data, the machine learnings not effective. >> Exactly. >> I mean, it's only-- >> Exactly, and the other thing is, it's very easy to bring in new data sources, right? You think about how many companies today have an enormous amount of data just stored in silos, and they haven't done anything with it. Often it'll be a conversation somewhere, right? Around the coffee machine, "Hey, we could do this, and we can do this." But they haven't had the developers to help them, and haven't had access to the data, and haven't been able to move the data, and to put it in a useful place. And so, I think what we're seeing here, with AppFlow, really unlocking of that. Because going from that initial conversation, to actually having something running, literally requires no code. Log into the AWS console, configure a few connectors, and it's up and running, and you're ready to go. And you can do the same thing with SageMaker, or any of the other services we have on the other side that make it really simple to run some of these ideas, that just historically have been just too complicated. >> Alright, so take me through that console piece. Just walk me through, I'm in, you sold me on this. I just came out of meeting with my company, and I said, "Hey, you know what? "We're blowing up this siloed approach. "We want to kind of create this horizontal data model, "where we can mix "and match connectors based upon our needs." >> Yeah. >> So what do I do? I'm using SageMaker, using some data, I got S3, I got an application. What do I do? I'm connecting what, S3? >> Yeah, well-- >> To the app? >> So the simplest thing is, and the simplest place to find this actually, is on Jeff Bezos blog, that he did for the release, right? Jeff always does a great job in demonstrating how to use our various products. But it literally is going into the standard AWS console, which is the console that we use for all of our services. I think we have 200 of them, so it is getting kind of challenging to find the ball in that console, as we continue to grow. And find AppFlow. AppFlow is a top level service, and so you'll see it in the console. And the first thing you got to do, is you got to configure your Source-Connect. And so it's a connector that, where's the data coming from? And as I said, we had 14 partners, you'll be able to see those connectors there, and see what's supported. And obviously, there's the connectivity. Do you have access to that data, or where is the data running? AppFlow runs within AWS, and so you need to have either VPN, or direct connect back to the organization, if the data source is on-premise. If the data source happens to be in AWS, and obviously be in a VPC, and you just need to configure some of that connectivity functionality. >> So no code if the connectors are there, but what if I want to build my own connector? >> So building your own connector, that is something that we working with third parties with right now. I could be corrected, but not 100% sure whether that's available. It's certainly something I think we would allow customers to do, is to extend sort of either the existing connectors, or to add additional transformations as well. And so you'd be able to do that. But the transformations that the vast majority of our customers are using are literally just in the console, with the basic transformations. >> It comes bigger apps that people have, and just building those connectors. How does a partner get involved? You got 14 partners now, how do you extend the partner base contact in Amazon Partner Manager, or you send an email to someone? How does someone get involved? What are you recommending? >> So there are a couple of ways, right? We have an extensive partner ecosystem that the vast majority of these ISVs are already integrated with. And so, we have the 14 we launched with, we also pre announced SAP, which is going to be a very critical one for the vast majority of our customers. Having deep integration with SAP data, and being able to bring that seamlessly into AWS. That'll be launching soon. And then there's a long list of other ones, that we're currently working on. And they're currently working on them themselves. And then the other one is going to be, like with most things that Amazon, feedback from customers. And so what we hear from customers, and very often you'll hear from third party partners as well, who'll come and say, "Hey, my customers are asking me "to integrate with the AppFlow, what do I need to do?" And so, you know, just reaching out to AWS, and letting them know that you'd be interested in integrating, that you're not part of the partner program. The team would be happy to engage, and bring you on board, so-- >> (mumbles) on playbook, get the top use cases nailed down, listen to customers, and figure it out. >> Exactly. >> Great stuff Dave, we really appreciate it. I'm looking forward to digging in AppFlow, and I'll check on Jeff Bezos blog. Sure, it's April 22, was the launch day, probably had up there. One of the things that want to just jump into, now moving into the next topic, is the cost structure. A lot of pressure on costs. This is where I think this Migration Acceleration Program for Windows is interesting. Andy Jassy always likes to boast on stage at Reinvent, about the number of workloads of Windows running on Amazon Web Services. This has been a big part of the customers, I think, for over 10 years, that I can think of him talking about this. What is this about? Are you still seeing uptake on Windows workloads, or, I mean,-- >> Absolutely. >> Azure has got some market share, >> Absolutely. >> but now you, doesn't really kind of square in my mind, what's going on here. Tell us about this migration service. >> Yeah, absolutely, on the migration side. So Windows is absolutely, we still believe AWS is the best place to run a Windows workload. And we have many, many happy Windows customers today. And it's a very big, very large, growing point of our business today, it used to be. I was part of the original team back in 2008, that launched, I think it was Windows 2008, back then on EC2. And I remember sort of working out all the details, of how to do all the virtualization with Windows, obviously back then we'd done Linux. And getting Windows up and running, and working through some of the challenges that Windows had as an operating system in the early days. And it was October 2008 that we actually launched Windows as an operating system. And it's just been, we've had many, many happy Windows customers since then. >> Why is Amazon so peak to run workloads from Windows so effectively? >> Well, I think, sorry what did you say peaked? >> Why is Amazon so in well positioned to run the Windows workloads? >> Well, firstly, I mean, I think Windows is really just the operating system, right? And so if you think about that as the very last little bit of your sort of virtualization stack, and then being able to support your applications. What you really have to think about is, everything below that, both in terms of the compute, so performance you're going to get, the price performance you're going to get. With our Nitro Hypervisor, and the Nitro System that we developed back in 2018, or launched in 2018. We really are able to provide you with the best price performance, and have the very least overhead from a hypervisor point of view. And then what that means is you're getting more out of your machine, for the price that you pay. And then you think about the rest of the ecosystem, right? Think about all the other services, and all the features, and just the breadth, and the extensiveness of AWS. And that's critically important for all of our Windows customers as well. And so you're going to have things like Active Directory, and these sort of things that are very Windows specific, and we can absolutely support all of those, natively. And in the Windows operating system as well. We have things like various agents that you can run inside the Windows box to do more maintenance and management. And so I think we've done a really good job in bringing Windows into the larger, and broader ecosystem of AWS. And it really is just a case of making sure that Windows runs smoothly. And that's just the last little bit on top of that, and so many customers enterprises run Windows today. When I started out my career, I was developing software in the banking industry, and it was a very much a Windows environment. They were running critical applications. And so we see it's critically important for customers who run Windows today, to be able to bring those Windows workloads to AWS. >> Yeah, and that's certainly-- >> We are seeing a trend. Yeah, sorry, go ahead. >> Well, they're certainly out there from a market share standpoint, but this is a cost driver, you guys are saying, and I want you to just give an example, or just illustrate why it costs less. How is it a cost savings? Is it just services, cycle times on EC2? I mean what's the cost savings? I'm a customer like, "Okay, so I'm going to go to Amazon with my workloads." Why is it a cost saving? >> I think there are a few things. The one I was referring to in my previous comment was the price performance, right? And so if I'm running on a system, where the hypervisor is using a significant portion of the physical CPU that I want to use as well. Well there's an overhead to that. And so from a price performance point of view, I look at, if I go and benchmark a CPU, and I look at how much I pay for that per unit of that benchmark, it's better on AWS. Because with our natural system, we're able to give you 100% of the floor. And so you get a performance then. So that's the first thing is price performance, which is different from this price. But there's a saving there as well. The other one is a large part, and getting into the migration program as well. A large part of what we do with our customers, when they come to AWS, is supposed to be, we take a long look at their license strategy. What licenses do they have? And a key part of bringing in Windows workloads AWS, is license optimization. What can we do to help you optimize the licenses that you're using today for Windows, for SQL Server, and really try and find efficiencies in that. And so we're able to secure significant savings for many of our customers by doing that. And we have a number of tools that they use as part of the migration program to do that. And so that helps save there. And then finally, we have a lot of customers doing what we call modernization of their applications. And so it really embraced Cloud, and some of the benefits that you get from Cloud. Especially elasticities, so being able to scale for demand. It's very difficult to do that when you bound by license for your operating system, because every box you run, you have to have a license for it. And so tuning auto scaling on, you've got to make sure you have enough licenses for all these Windows boxes you've seen. And so the push the Cloud's bringing, we've seen a lot of customers move Windows applications from Windows to Linux, or even move SQL Server, from SQL server to SQL Server on Linux, or another database platform. And do a modernization there, that already allows them to benefit from the elasticity that Cloud provides, without having to constantly worry about licenses. >> So final question on this point, migration service implies migration from somewhere else. How do they get involved? What's the onboarding process? Can you give a quick detail on that? >> Absolutely, so we've been helping customers with migrations for years. We've launched a migration program, or Migration Acceleration Program, MAP. We launched it, I think about 2016, 2017 was the first part of that. It was really just a bringing together of the various, the things we'd learned, the tools we built, the best strategies to do a migration. And we said, "How do we help customers looking "to migrate to the Cloud." And so that's what MAP's all about, is just a three phase, we'll help you assess the migration, we'll help you do a lot of planning. And then ultimately, we help you actually do the migration. We partner with a number of external partners, and ISVs, and GSIs, who also worked very closely with us to help customers do migrations. And so what we launched in April of this year, with the Windows migration program, is really just more support for Windows workload, as part of the broader Migration Acceleration Program. And there's benefits to customers, it's a smoother migration, it's a faster migration in almost all cases, we're doing license assessments, and so there's cost reduction in that as well. And ultimately, there's there's other benefits as well that we offer them, if they partner with us in bringing the workload to AWS. And so getting involved is really just reaching out to one of our AWS sales folks, or one of your account managers, if you have an account manager, and talk to them about workloads that you'd like to bring in. And we even go as far as helping you identify which applications are easiest to migrate. And so that you can kind of get going with some of the easier ones, while we help you with some of the more difficult ones. And strategies' about removing those roadblocks to bring your services to AWS. >> Takes the blockers away, Dave Brown, Vice President of EC2, the crown jewel of AWS, breaking down AppFlow, and the migration to Windows services. Great insights, appreciate the time. >> Thanks. >> We're here with Dave Brown, VP of EC2, as part of the virtual Cube coverage. Dave, I want to get your thoughts on an industry topic. Given what you've done with EC2, and the success, and with COVID-19, you're seeing that scale problem play out on the world stage for the entire population of the global world. This is now turning non-believers into believers of DevOps, web services, real time. I mean, this is now a moment in history, with the challenges that we have, even when we come out of this, whether it's six months or 12 months, the world won't be the same. And I believe that there's going to be a Cambrian explosion of applications. And an architecture that's going to look a lot like Cloud, Cloud-native. You've been doing this for many, many years, key architect of EC2 with your team. How do you see this playing out? Because a lot of people are going to be squirreling in rooms, when this comes back. They're going to be video conferencing now, but when they have meetings, they're going to look at the window of the future, and they're going to be exposed to what's failed. And saying, "We need to double down on that, "we have to fix this." So there's going to be winners and losers coming out of this pandemic, really quickly. And I think this is going to be a major opportunity for everyone to rally around this moment, to reset. And I think it's going to look a lot like this decoupled, this distributed computing environment, leveraging all the things that we've talked about in the past. So what's your advice, and how do you see this evolving? >> Yeah, I completely agree. I mean, I think, just the speed at which it happened as well. And the way in which organizations, both internally and externally, had to reinvent themselves very, very quickly, right? We've been very fortunate within Amazon, moving to working from home was relatively simple for the vast majority of us. Obviously, we have a number of our employees that work in data centers, and performance centers that have been on the front lines, and be doing a great job. But for the rest of us, it's been virtual video conferencing, right? All about meetings, and being able to use all of our networking tools securely, either over the VPN, or the no VPN infrastructure that we have. And many organizations had to do that. And so I think there are a number of different things that have impacted us right now. Obviously, virtual desktops has been a significant sort of growth point, right? Folks don't have access to the physical machine anymore, they're now all having to work remote, and so service like Workspaces, which runs on EC2, as well, has being a critical service data to support many of our largest customers. Our client VPN service, so we have within EC2 on the networking side, has also been critical for many large organizations, as they see more of their staff working everyday remotely. It has also seen, been able to support a lot of customers there. Just more broadly, what we've seen with COVID-19, is we've seen some industries really struggle, obviously travel industry, people just aren't traveling anymore. And so there's been immediate impact to some of those industries. They've been other industries that support functions like the video conferencing, or entertainment side of the house, has seen a bit of growth, over the last couple of months. And education has been an interesting one for us as well, where schools have been moving online. And behind the scenes in AWS, and on EC2, we've been working really hard to make sure that our supply chains are not interrupted in any way. The last thing we want to do is have any of our customers not be able to get EC2 capacity, when they desperately need it. And so we've made sure that capacity is fully available, even all the way through the pandemic. And we've even been able to support customers with, I remember one customer who told me the next day, they're going to have more than hundred thousand students coming online. And they suddenly had to grow their business, by some crazy number. And we were able to support them, and give them the capacity, which is way outside of any sort of demand--. >> I think this is the Cambrain explosion that I was referring to, because a whole new set of new things have emerged. New gaps in businesses have been exposed, new opportunities are emerging. This is about agility. It's real time now. It's actually happening for everybody, not just the folks on the inside of the industry. This is going to create a reinvention. So it's ironic, I've heard the word reinvent mentioned more times now, over the past three months, than I've heard it representing to Amazon. 'Cause that's your annual conference, Reinvent, but people are resetting and reinventing. It's actually a tactic, this is going on. So they're going to need some Clouds. So what do you say to that? >> So, I mean, the first thing is making sure that we can continue to be highly available, continue to have the capacity. The worst scenario is not being able to have the capacity for our customers, right? We did see that with some providers, and that honesty on outside is just years and years of experience of being able to manage supply chain. And the second thing is obviously, making sure that we remain available, that we don't have issues. And so, you know, with all of our stuff going remote and working from home, all my teams are working from home. Being able to support AWS in this environment, we haven't missed a beat there, which has been really good. We were well set up to be able to absorb this. And then obviously, remaining secure, which was our highest priority. And then innovating with our customers, and being able to, and that's both products that we're going to launch over time. But in many cases, like that education scenario I was talking about, that's been able to find that capacity, in multiple regions around the world, literally on a Sunday night, because they found out literally that afternoon, that Monday morning, all schools were virtual, and they were going to use their platform. And so they've been able to respond to that demand. We've seen a lot more machine learning workloads, we've seen an increase there as well as organizations are running more models, both within the health sciences area, but also in the financial areas. And also in just general business, (mumbles), yes, wherever it might be. Everybody's trying to respond to, what is the impact of this? And better understand it. And so machine learning is helping there, and so we've been able to support all those workloads. And so there's been an explosion. >> I was joking with my son, I said, "This world is interesting." Amazon really wins, that stuff's getting delivered to my house, and I want to play video games and Twitch, and I want to build applications, and write software. Now I could do that all in my home. So you went all around. But all kidding aside, this is an opportunity to define agility, so I want to get your thoughts, because I'm a bit a big fan of Amazon. As everyone knows, I'm kind of a pro Amazon person, and as other Clouds kind of try to level up, they're moving in the same direction, which is good for everybody, good competition and all. But S3 and EC2 have been the crown jewels. And building more services around those, and creating these abstraction layers, and new sets of service to make it easier, I know has been a top priority for AWS. So can you share your vision on how you're going to make EC2, and all these services easier for me? So if I'm a coder, I want literally no code, low code, infrastructure as code. I need to make Amazon more programmable and easier. Can you just share your vision on, as we talk about the virtual summits, as we cover the show, what's your take on making Amazon easier to consume and use? >> It's been something we thought a lot over the years, right? When we started out, we were very simple. The early days of EC2, it wasn't that rich feature set. And it's been an interesting journey for us. We've obviously become a lot more, we've written, launched local features, which narrative brings some more complexity to the platform. We have launched things like Lightsail over the years. Lightsail is a hosting environment that gives you that EC2 like experience, but it's a lot simpler. And it's also integrated with a number of other services like RDS and ELB as well, basic load balancing functionality. And we've seen some really good growth there. But what we've also learned is customers enjoy the richness of what ECU provides, and what the full ecosystem provides, and being able to use the pieces that they really need to build their application. From an S3 point of view, from a board ecosystem point of view. It's providing customers with the features and functionality that they really need to be successful. From the compute side of the house, we've done some things. Obviously, Containers have really taken off. And there's a lot of frameworks, whether it's EKS, or community service, or a Docker-based ECS, has made that a lot simpler for developers. And then obviously, in the serverless space, Landers, a great way of consuming EC2, right? I know it's serverless, but there's still an EC2 instance under the hood. And being able to bring a basic function and run those functions in serverless is, a lot of customers are enjoying that. The other complexity we're going after is on the networking side of the house, I find that a lot of developers out there, they're more than happy to write the code, they're more than happy to bring their reputation to AWS. But they struggle a little bit more on the networking side, they really do not want to have to worry about whether they have a route to an internet gateway, and if their subnets defined correctly to actually make the application work. And so, we have services like App Mesh, and the whole mesh server space is developing a lot. To really make that a lot simpler, where you can just bring your application, and call it on an application that just uses service discovery. And so those higher level services are definitely helping. In terms of no code, I think that App Mesh, sorry not App Mesh, AppFlow is one of the examples for already given organizations something at that level, that says I can do something with no code. I'm sure there's a lot of work happening in other areas. It's not something I'm actively thinking on right now , in my role in leading EC2, but I'm sure as the use cases come from customers, I'm sure you'll see more from us in those areas. They'll likely be more specific, though. 'Cause as soon as you take code out of the picture, you're going to have to get pretty specific in the use case. You already get the depth, the functionality the customers will need. >> Well, it's been super awesome to have your valuable time here on the virtual Cube for covering Amazon Summit, Virtual Digital Event that's going on. And we'll be going on throughout the year. Really appreciate the insight. And I think, it's right on the money. I think the world is going to have in six to 12 months, surge in reset, reinventing, and growing. So I think a lot of companies who are smart, are going to reset, reinvent, and set a new growth trajectory. Because it's a Cloud-native world, it's Cloud-computing, this is now a reality, and I think there's proof points now. So the whole world's experiencing it, not just the insiders, and the industry, and it's going to be an interesting time. So really appreciate that, they appreciate it. >> Great, >> Them coming on. >> Thank you very much for having me. It's been good. >> I'm John Furrier, here inside theCUBE Virtual, our virtual Cube coverage of AWS Summit 2020. We're going to have ongoing Amazon Summit Virtual Cube. We can't be on the show floor, so we'll be on the virtual show floor, covering and talking to the people behind the stories, and of course, the most important stories in silicon angle, and thecube.net. Thanks for watching. (upbeat music)

Published Date : May 13 2020

SUMMARY :

leaders all around the world, and most of the parts Hey John, it's really great to be here, and certainly on the large And so the problem we started to see was, in the industry that you guys And is it bi-directional for the customer? and encryption of data on the internet. And I want to get your thoughts on this. and a lot of customers we spoke to, And I think there's a world in the ecosystem of say, Snowflake. benefits of the goodness And so in the case of AppFlow, of our services-- and figure out how to scale And so the one part of the really makes the data fresh, Exactly, and the other thing is, and I said, "Hey, you know what? So what do I do? And the first thing you got to do, that the vast majority and just building those connectors. And then the other one is going to be, the top use cases nailed down, One of the things that doesn't really kind of square in my mind, of how to do all the And in the Windows We are seeing a trend. and I want you to just give an example, And so the push the Cloud's bringing, What's the onboarding process? And so that you can kind of get going and the migration to Windows services. And I believe that there's going to And the way in which organizations, inside of the industry. And the second thing is obviously, But S3 and EC2 have been the crown jewels. and the whole mesh server and it's going to be an interesting time. Thank you very much for having me. and of course, the most important stories

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave BrownPERSON

0.99+

DavePERSON

0.99+

14QUANTITY

0.99+

AWSORGANIZATION

0.99+

JohnPERSON

0.99+

100%QUANTITY

0.99+

John FurrierPERSON

0.99+

AmazonORGANIZATION

0.99+

October 2008DATE

0.99+

JeffPERSON

0.99+

Palo AltoLOCATION

0.99+

2003DATE

0.99+

2018DATE

0.99+

Andy JassyPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

April 22DATE

0.99+

14 partnersQUANTITY

0.99+

six monthsQUANTITY

0.99+

two partsQUANTITY

0.99+

22nd of AprilDATE

0.99+

WindowsTITLE

0.99+

SnowflakeTITLE

0.99+

12 monthsQUANTITY

0.99+

AppFlowTITLE

0.99+

firstQUANTITY

0.99+

SQL ServerTITLE

0.99+

SQLTITLE

0.99+

LinuxTITLE

0.99+

EC2TITLE

0.99+

UNLIST TILL 4/1 - Putting Complex Data Types to Work


 

hello everybody thank you for joining us today from the virtual verdict of BBC 2020 today's breakout session is entitled putting complex data types to work I'm Jeff Healey I lead vertical marketing I'll be a host for this breakout session joining me is Deepak Magette II technical lead from verdict engineering but before we begin I encourage you to submit questions and comments during the virtual session you don't have to wait just type your question or comment and the question box below the slides and click Submit it won't be a Q&A session at the end of the presentation we'll answer as many questions were able to during that time any questions we don't address we'll do our best to answer them offline alternatively visit Vertica forms that formed up Vertica calm to post your questions there after the session engineering team is planning to join the forms conversation going and also as a reminder that you can maximize your screen by clicking a double arrow button in the lower right corner of the slides yes this virtual session is being recorded and will be available to view on demand this week we'll send you a notification as submits ready now let's get started over to you Deepak thanks yes make sure you talk about the complex a textbook they've been doing it wedeck R&D without further delay let's see why and how we should put completely aside to work in your data analytics so this is going to be the outline or overview of my talk today first I'm going to talk about what are complex data types in some use cases I will then quickly cover some file formats that support these complex website I will then deep dive into the current support for complex data types in America finally I'll conclude with some usage considerations and what is coming in are 1000 release and our future roadmap and directions for this project so what are complex stereotypes complex data types are nested data structures composed of tentative types community types are nothing but your int float and string war binary etc the basic types some examples of complex data types include struct also called row are a list set map and Union composite types can also be built by composing other complicated types computer types are very useful for handling sparse data we also make samples on this presentation on that use case and also they help simplify analysis so let's look at some examples of complex data types so the first example on the left you can see a simple customer which is of type struc with two fields namely make a field name of type string and field ID of type integer structs are nothing but a group of fields and each field is a type of its own the type can be primitive or another complex type and on the right we have some example data for this simple customer complex type so it's basically two fields of type string and integer so in this case you have two rows where the first row is Alex with name named Alex and ID 1 0 and the second row has name Mary with ID 2 0 0 2 the second complex type on the left is phone numbers of type array of data has the element type string so area is nothing but a collection of elements the elements could be again a primitive type or another complex type so in this example the collection is of type string which is a primitive type and on the right you have some example of this collection of a fairy type called phone numbers and basically each row has a set or the list or a collection of phone numbers on the first we have two phone numbers and second you have a single phone number in that array and the third type on the slide is the map data type map is nothing but a collection of key value pairs so each element is actually a key value and you have a collection of such elements the key is usually a primitive type however the value is can be a primitive or complex type so in this example the both the key and value are of type string and then if you look on the right side of the slide you have some sample data here we have HTTP requests where the key is the header type and the value is the header value so the for instance on the first row we have a key type pragma with value no cash key type host with value some hostname and similarly on the second row you have some key value called accept with some text HTML because yeah they actually have a collection of elements allison maps are commonly called as collections as a to talking to in mini documents so we saw examples of a one-level complex steps on this slide we have nested complex there types on the right we have the root complex site called web events of type struct script has a for field a session ID of type integer session duration of type timestamp and then the third and the fourth fields customer and history requests are further complex types themselves so customer is again a complex type of type struct with three fields where the first two fields name ID are primitive types however the third field is another complex type phone numbers which we just saw in the previous slide similarly history request is also the same map type that we just saw so in this example each complex types is independent and you can reuse a complex type inside other complex types for example you can build another type called orders and simply reuse the customer type however in a practical implementation you have to deal with complexities involving security ownership and like sets lifecycle dependencies so keeping complex types as independent has that advantage of reusing them however the complication with that is you have to deal with security and ownership and lifecycle dependencies so this is on this slide we have another style of declaring a nested complex type do is call inlined complex data type so we have the same web driven struct type however if you look at the complex sites that embedded into the parent type definition so customer and HTTP request definition is embedded in lined into this parent structure so the advantage of this is you won't have to deal with the security and other lifecycle dependency issues but with the downside being you can't reuse them so it's sort of a trade-off between the these two so so let's see now some use cases of these complex types so the first use case or the benefit of using complex stereotypes is that you'll be able to express analysis mode naturally compute I've simplified the expression of analysis logic thereby simplifying the data pipelines in sequel it feels as if you have tables inside table so let's look at an example on and say you want to list all the customers with more than one thousand website events so if you have complex types you can simply create a table called web events and with one column of type web even which is a complex step so we just saw that difference it has four fields station customer and HTTP request so you can basically have the entire schema or in one type if you don't have complex types you'll have to create four tables one essentially for each complex type and then you have to establish primary key foreign key dependencies across these tables now if you want to achieve your goal of of listing all the customers in more than thousand web requests if you have complex types you can simply use the dot notation to extract the name the contact and also use some special functions for maps that will give you the count of all the HTTP requests grid in thousand however if you don't have complex types you'll have to now join each table individually extract the results from sub query and again joined on the outer query and finally you can apply a predicate of total requests which are greater than thousand to basically get your final result so it's a complex steps basically simplify the query writing part also the execution itself is also simplified so you don't have to have joins if you have complex you can simply have a load step to load the map type and then you can apply the function on top of it directly however if you have separate tables you have to join all these data and apply the filter step and then finally another joint to get your results alright so the other advantage of complex types is that you can cross this semi structured data very efficiently for example if you have data from clique streams or page views the data is often sparse and maps are very well suited for such data so maps or semi-structured by nature and with this support you can now actually have semi structured data represented along with structured columns in in any database so maps have this nice of nice feature to cap encapsulated sparse data as an example the common fields of a kick stream click stream or page view data are pragma host and except if you don't have map types you will have to end up creating a column for each of this header or field types however if you have map you can basically embed as key value pairs for all the data so on the left here on the slide you can see an example where you have a separate column for each field you end up with a lot of nodes basically the sparse however if you can embed them into in a map you can put them into a single column and sort of yeah have better efficiency and better representation of spots they imagine if you have thousands of fields in a click stream or page view you will have thousands of columns you will need thousands of columns represent data if you don't have a map type correct so given these are the most commonly used complexity types let's see what are the file formats that actually support these complex data types so most of file formats popular ones support complex data types however they have different serve variations so for instance if you have JSON it supports arrays and objects which are complex data types however JSON data is schema-less it is row oriented and this text fits because it is Kimmel s it has to store it in encase on every job the second type of file format is Avro and Avro has records enums arrays Maps unions and a fixed type however Avro has a schema it is oriented and it is binary compressed the third category is basically the park' and our style of file formats where the columnar so parquet and arc have support for arrays maps and structs the hewa schema they are column-oriented unlike Avro which is oriented and they're also binary compressed and they support a very nice compression and encoding types additionally so the main difference between parquet and arc is only in terms of how they represent complex types parquet includes the complex type hierarchy as reputation deflation levels however orc uses a separate column at every parent of the complex type to basically the prisons are now less so that apart from that difference in how they represent complex types parking hogs have similar capabilities in terms of optimizations and other compression techniques so to summarize JSON has no schema has no binary format in this columnar so it is not columnar Avro has a schema because binary format however it is not columnar and parquet and art are have a schema have a binary format and are columnar so let's see how we can query these different kinds of complex types and also the different file formats that they can be present in in how we can basically query these different variations in Vertica so in Vertica we basically have this feature called flex tables to where you can load complex data types and analyze them so flex tables use a binary format called vemma to store data as key value pairs clicks tables are schema-less they are weak typed and they trade flexibility for performance so when I mean what I mean by schema-less is basically the keys provide the field name and each row can potentially have different keys and it is weak type because there's no type information at the column level we have some we will see some examples of of this week type in the following slides but basically there's no type information so so the data is stored in text format and because of the week type and schema-less nature of flex tables you can implement some optimum use cases like if you can trivially implement needs like schema evolution or keep the complex types types fluid if that is your use case then the weak tightness and schema-less nature of flex tables will help you a lot to get give you that flexibility however because you have this weak type you you have a downside of not getting the best possible performance so if you if your use case is to get the best possible performance you can use a new feature of the strongly-typed complex types that we started to introduce in Vertica so complex types here are basically a strongly typed complex types they have a schema and then they give you the best possible performance because the optimizer now has enough information from the schema and the type to implement optimization system column selection or all the nice techniques that Vertica employs to give you the best possible color performance can now be supported even for complex types so and we'll see some of the examples of these two types in these slides now so let's use a simple data called restaurants a restaurant data - as running throughout this poll excites to basically see all the different variations of flex and complex steps so on this slide you have some sample data with four fields and essentially two rows if you sort of loaded in if you just operate them out so the four fields are named cuisine locations in menu name in cuisine or of type watch are locations is essentially an array and menu array of a row of two fields item and price so if you the data is in JSON there is no schema and there is no type information so how do we process that in Vertica so in Vertica you can simply create a flex table called restaurants you can copy the restaurant dot J's the restaurants of JSON file into Vertica and basically you can now start analyzing the data so if you do a select star from restaurants you will see that all the data is actually in one column called draw and it also you have the other column called identity which is to give you some unique row row ID but the row column base again encapsulates all the data that gives in the restaurant so JSON file this tall column is nothing but the V map format the V map format is a binary format that encodes the data as key value pairs and RAW format is basically backed by the long word binary column type in Vertica so each key essentially gives you the field name and the values the field value and it's all in its however the values are in the text text representation so see now you want to get better performance of this JSON data flex tables has these nice functions to basically analyze your data or try to extract some schema and type information from your data so if you execute compute flex table keys on the restaurants table you will see a new table called public dot restaurants underscore keys and then that will give you some information about your JSON data so it was able to automatically infer that your data has four fields namely could be name cuisine locations in menu and could also get that the name in cuisine or watch are however since locations in menu are complex types themselves one is array and one is area for row it sort of uses the same be map format as ease to process them so it has four columns to two primitive of type watch R and 2 R P map themselves so now you can materialize these columns by altering the table definitions and adding columns of that particular type it inferred and then you can get better performance from this materialized columns and yeah it's basically it's not in a single column anymore you have four columns for the fare your restaurant data and you can get some column selection and other optimizations on on the data that Whittaker provides all right so that is three flex tables are basically helpful if you don't have a schema and if you don't have any type of permission however we saw earlier that some file formats like Parker and Avro have schema and have some type information so in those cases you don't have to do the first step of inputting the type so you can directly create the type external table definition of the type and then you can target it to the park a file and you can load it in by an external table in vertical so the same restaurants dot JSON if you call if you transfer it to a translations or park' format you can basically get the fields with look however the locations and menu are still in the B map format all right so the V map format also allows you to explode the data and it has some nice functions to yeah M extract the fields from P map format so you have this map items so the same restaurant later if you want to explode and you want to apply predicate on the fields of the RS and the address of pro you can have map items to export your data and then you can apply predicates on a particular field in the complex type data so on this slide is basically showing you how you can explode the entire data the menu items as well as the locations and basically give you the elements of each of these complex types up so as I mentioned the menus so if you go back to the previous slide the locations and menu items are still the bond binary or the V map format so the question is if you want what if you want to get perform better on the V map data so for primitive types you could materialize into the primitive style however if it's an array and array of row we will need some first-class complex type constructs and that is what we will see that are added in what is right now so Vertica has started to introduce complex stereotypes with where these complex types is sort of a strongly typed complex site so on this slide you have an example of a row complex type where so we create an external table called customers and you have a row type of twit to fields name and ID so the complex type is basically inlined into the tables into the column definition and on the second example you can see the create external table items which is unlisted row type so it has an item of type row which is so fast to peals name and the properties is again another nested row type with two fixed quantities label so these are basically strongly typed complex types and then the optimizer can now give you a better performance compared to the V map using the strongly typed information in their queries so we have support for pure rows and extra draws in external tables for power K we have support for arrays and nested arrays as well for external tables in power K so you can declare an external table called contacts with a flip phone number of array of integers similarly you can have a nested array of items of type integer we can declare a column with that strongly typed complex type so the other complex type support that we are adding in the thinner liz's support for optimized one dimensional arrays and sets for both ross and as well as RK external table so you can create internal table called phone numbers with a one-dimensional array so here you have phone numbers of array of type int you can have one dimensional you can have sets as well which is also one color one dimension arrays but sets are basically optimized for fast look ups they are have unique elements and they are ordered so big so you can get fast look ups using sets if that is a use case then set will give you very quick lookups for elements and we also implemented some functions to support arrays sets as well so you have applied min apply max which are scale out that you can apply on top of an array element and you can get the minimum element and so on so you can up you have support for additional functions as well so the other feature that is coming in ten o is the explored arrays of functionality so we have a implemented EU DX that will allow you to similar similar to the example you saw in the math items case you can extract elements from these arrays and you can apply different predicates or analysis on the elements so for example if you have this restaurant table with the column name watch our locations of each an area of archer and menu again an area watch our you can insert values using the array constructor into these columns so here we inserting three values lilies feed the with location with locations cambridge pittsburgh menu items cheese and pepperoni again another row with name restaurant named bob tacos location Houston and totila salsa and Patty on the third example so now you can basically explode the both arrays into and extract the elements out from these arrays so you can explode the location array and extract the location elements which is which are basically Houston Cambridge Pittsburgh New Jersey and also you can explode the menu items and extract individual elements and now you can sort of apply other predicates on the extruded data Kollek so so so let's see what are some usage considerations of these complex data types so complex data types as we saw earlier are nice if you have sparse data so if your data has clickstream or has some page view data then maps are very nice to have to represent your data and then you can sort of efficiently represent the in the space wise fashion for sparse data use a map types and compensate that as we saw earlier for the web request count query it will help you simplify the analysis as well you don't have to have joins and it will simplify your query analysis as I just mentioned if your use cases are for fast look ups then you can use a set type so arrays are nice but they have the ordering on them however if your primary use case to just look up for certain elements then we can use the set type also you can use the B map or the Flex functionality that we have in Vertica if you want flexibility in your complex set data type schema so like I mentioned earlier you can trivially implement needs like scheme evolution or even keep the complex types fluid so if you have multiple iterations of unit analysis and each iteration we are changing the fields because you're just exploring the data then we map and flex will give you that nice ease to change the fields within the complex type or across files and we can load fluid complex you can load complexity types with bit fluids is basically different fields in different Rho into V map and flex tables easily however if you're once you basically treated over your data you figured out what are the fields and the complex types that you really need you can use the strongly typed complex data types that we started to introduce in Vertica so you can use the array type the struct type in the map type for your data analysis so that's sort of the high level use cases for complex types in vertical so it depends on a lot on where your data analysis phase is fear early then your data is usually still fluid and you might want to use V Maps and flex to explore it once you finalize your schema you can use the strongly typed complex data types and to get the best possible performance holic so so what's coming in the following releases of Vertica so antenna which is coming in sometime now so yeah so we are adding which is the next release of vertical basically we're adding support for loading Park a complex data types to the V map format so parquet is a strongly typed file format basically it has the schema it also has the type information for each of the complex type however if you are exploring your data then you might have different park' files with different schemes so you can load them to the V map format first and then you can analyze your data and then you can switch to the strongly typed complex types we're also adding one dimensional optimized arrays and sets in growth and for parquet so yeah the complex sets are not just limited to parquet you can also store them in drawers however right now you only support one dimension arrays and set in rows we're also adding the Explorer du/dx for one-dimensional arrays in the in this release so you can as you saw in the previous example you can explode the data for of arrays in arrays and you can apply predicates on individual elements for the erase data so you can in it'll apply for set so you can cause them to milli to erase and Clinics code sets as well so what are the plans paths that you know release so we are going to continue both for strongly-typed computer types right now we don't have support for the full in the tail release we won't have support for the full all the combinations of complex types so we only have support for nested arrays sorriness listed pure arrays or nested pure rows and some are only limited to park a file format so we will continue to add more support for sub queries and nested complex sites in the following in the in following releases and we're also planning to add this B map data type so you saw in the examples that the V map data format is currently backed by the long word binary data format or the other column type because of this the optimizer really cannot distinguish which is a which is which data is actually a long wall binary or which is actually data and we map format so if we the idea is to basically add a type called V map and then the optimizer can now implement our support optimizations or even syntax such as dot notation and yeah if your data is columnar such as Parque then you can implement optimizations just keep push down where you can push the keys that are actually querying in your in your in your analysis and then only those keys should be loaded from parquet and built into the V map format so that way you get sort of the column selection optimization for complex types as well and yeah that's something you can achieve if you have different types for the V map format so that's something on the roadmap as well and then unless join is basically another nice to have feature right now if you want to explode and join the array elements you have to explode in the sub query and then in the outer query you have to join the data however if you have unless join till I love you to explode as well as join the data in the same query and on the fly you can do both and finally we are also adding support for this new feature called UD vector so that's on the plan too so our work for complex types is is essentially chain the fundamental way Vertica execute in the sense of functions and expression so right now all expressions in Vertica can return only a single column out acceptance in some cases like beauty transforms and so on but the scalar functions for instance if you take aut scalar you can get only one column out of it however if you have some use cases where you want to compute multiple computation so if you also have multiple computations on the same input data say you have input data of two integers and you want to compute both addition and multiplication on those two columns this is for example but in many many machine learning example use cases have similar patterns so say you want to do both these computations on the data at the same time then in the current approach you have to have one function for addition one function for multiplication and both of them will have to load the data once basically loading data twice to get both these computations turn however with the Uni vector support you can perform both these computations in the same function and you can return two columns out so essentially saving you the loading loading these columns twice you can only do it once and get both the results out so that's sort of what we are trying to implement with all the changes that we are doing to support complex data types in Vertica and also you don't have to use these over Clause like a uni transform so PD scale just like we do scalars you can have your a vector and you can have multiple columns returned from your computations so that sort of concludes my talk so thank you for listening to my presentation now we are ready for Q&A

Published Date : Mar 30 2020

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
AmericaLOCATION

0.99+

Jeff HealeyPERSON

0.99+

second rowQUANTITY

0.99+

MaryPERSON

0.99+

two rowsQUANTITY

0.99+

two fieldsQUANTITY

0.99+

first rowQUANTITY

0.99+

two rowsQUANTITY

0.99+

two typesQUANTITY

0.99+

each rowQUANTITY

0.99+

two integersQUANTITY

0.99+

DeepakPERSON

0.99+

one functionQUANTITY

0.99+

three fieldsQUANTITY

0.99+

fourth fieldsQUANTITY

0.99+

each elementQUANTITY

0.99+

each fieldQUANTITY

0.99+

thirdQUANTITY

0.99+

more than thousand web requestsQUANTITY

0.99+

second exampleQUANTITY

0.99+

todayDATE

0.99+

each keyQUANTITY

0.99+

each tableQUANTITY

0.99+

four fieldsQUANTITY

0.99+

third fieldQUANTITY

0.99+

first exampleQUANTITY

0.99+

Deepak Magette IIPERSON

0.99+

two columnsQUANTITY

0.99+

third categoryQUANTITY

0.99+

two columnsQUANTITY

0.99+

two fieldsQUANTITY

0.99+

HoustonLOCATION

0.99+

first stepQUANTITY

0.99+

twiceQUANTITY

0.99+

thousands of columnsQUANTITY

0.98+

three valuesQUANTITY

0.98+

this weekDATE

0.98+

more than one thousand website eventsQUANTITY

0.98+

third typeQUANTITY

0.98+

each iterationQUANTITY

0.98+

bothQUANTITY

0.98+

greater than thousandQUANTITY

0.98+

cambridgeLOCATION

0.98+

JSONTITLE

0.98+

both arraysQUANTITY

0.97+

one columnQUANTITY

0.97+

thousands of fieldsQUANTITY

0.97+

secondQUANTITY

0.97+

third exampleQUANTITY

0.97+

twoQUANTITY

0.97+

single columnQUANTITY

0.96+

thousandQUANTITY

0.96+

AlexPERSON

0.96+

firstQUANTITY

0.96+

BBC 2020ORGANIZATION

0.96+

VerticaTITLE

0.96+

four columnsQUANTITY

0.95+

onceQUANTITY

0.95+

one typeQUANTITY

0.95+

V MapsTITLE

0.94+

one colorQUANTITY

0.94+

second typeQUANTITY

0.94+

one dimensionQUANTITY

0.94+

first two fieldsQUANTITY

0.93+

four tablesQUANTITY

0.91+

eachQUANTITY

0.91+

Jon Hirschtick, Onshape Inc. | Actifio Data Driven 2019


 

>> from Boston, Massachusetts. It's the queue covering active eo 2019. Data driven you by activity. >> Welcome back to Boston. Everybody watching the Cube, the leader and on the ground tech coverage money was David wanted here with my co host. A student of John for is also in the house. This is active FiOS data driven 19 conference. They're second year, John. Her stick is here is the co founder and CEO of on shape John. Thanks for coming in the Cube. Great to have you great to be here. So love the cofounder. I always ask your father. Why did you start the company? Well, we found it on shape because >> we saw an opportunity to improve how every product on Earth gets developed. Let people who develop products do it faster, B'more, innovative, and do it through a new generation software platform based in the cloud. That's our vision for on shape, That's why. Okay, >> so that's great. You start with the widened. The what is just new generation software capabilities to build the great products visualized actually create >> way took the power of cloud web and mobile and used it to re implement a lot of the classic tools for product development. Three d cad Data management Workflow Bill of Materials. He's may not mean anything to you, but they mean a lot to product developers, and we believe by by moving in the cloud by rethinking them for the cloud we can give people capabilities they've never had before. >> John, bring us in tight a little bit. So you know, I think I've heard a lot the last few years. It's like, Well, I could just do everything a simulation computer simulation. We can have all these models. They could make their three D printings changing the way I build prototypes. So what's kind of state of the state and in your fields? So >> the state of the Art R field is to model product in three dimensions in the computer before you build it for lots of reasons. For simulation for three D printing, you have to have a CAD model to do it, to see how it'll look, how parts fit together, how much it will cost. Really, every product today is built twice. First, it's built in the computer in three dimensions, is a digital model, then it's built in the real world, and what we're trying to do is make those three D modeling and data management collaboration tools to take them to a whole nother level to turbo charge it, if you will, so that teams can can work together even if they're distribute around the world. They work faster. They don't have to pay a tax to install and Karen feed for these systems. You're very complicated, a whole bunch of other benefits. So we talk about the cloud model >> you're talking about a sass model, a subscription model of different customer experience, all of the above, all of the above. Yeah, it's definitely a sass model we do on Ly SAS Way >> hosted and, uh, Amazon. Eight of us were all in with Amazon. It's a it's a subscription model, and we provide a much better, much more modern, better, more productive experience for the user CIA disrupting the traditional >> cad business. Is that Is that right? I mean more than cat cat Plus because there's no such thing as a cad company anymore. We're essentially disrupting the systems that we built because I've been in this business 30 38 years now. I've been doing this. I feel like I'm about half done. Really, really talking about >> your career. Way to start out. Well, I grew up in Chicago. I went to M I t and majored in mechanical engineering and knew howto program computers. And I go to get an internship in 1981 and they say computers, mechanical injury. You need to work on CAD. And I haven't stopped since, you know, because Because we're not done, you know, still still working here. You would >> have me, right? You can't let your weight go dynamic way before we get off on the M I t. Thing you were part of, you know, quite well known group. And Emmet tell us a little bit >> about what you're talking about. The American society of Mechanical Engineer >> has may I was actually an officer and and as any I know your great great events, but the number 21 comes to >> mind you're talking about the MIT blackjack team? Yes, I was, ah, player on the MIT blackjack team, and it's the team featured in movies, TV shows and all that. Yeah, very exciting thing to be doing while I was working at the cath lab is a grad student, you know, doing pursuing my legitimate career. There is also also, uh, playing blackjack. Okay, so you got to add some color to that. So where is the goal of the M I T. Blackjack team? What did you guys do? The goal of the M I t blackjack team was honestly, to make money using legal means of skill to Teo obtain an edge playing blackjack. And that's what we did using. Guess what? The theme of data which ties into this data driven conference and what active Eo is doing. I wish we had some of the data tools of today. I wish we had those 30 years ago. We could have We could have done even more, but it really was to win money through skill. Okay, so So you you weren't wired. Is that right? I mean, it was all sort of No, at the time, you could not use a computer in the casino. Legally, it was illegal to use a computer, so we didn't use it. We use the computer to train ourselves to analyze data. To give a systems is very common. But in the casino itself, we were just operating with good old, you know, good. This computer. Okay. And this computer would what you would you would you would count cards you would try to predict using your yeah, count cards and predict in card. Very good observation there. Card counting is really essentially prediction. In a sense, it's knowing when the remaining cards to be dealt are favorable to the player. That's the goal card counting and other systems we used. We had some proprietary systems to that were very, very not very well known. But it was all about knowing when you had an edge and when you did betting a lot of money and when you didn't betting less double doubling down on high probability situations, so on, So did that proceed Or did that catalyze like, you know, four decks, eight decks, 12 12 decks or if they were already multiple decks. So I don't think we drove them to have more decks. But we did our team. Really. Some of the systems are team Pioneer did drive some changes in the game, which are somewhat subtle. I could get into it, you know, I don't know how much time we have that they were minor changes that our team drove. The multiple decks were already are already well established. By the time my team came up, how did you guys do you know it was your record? I like to say we won millions of dollars during the time I was associated with the team and pretty pretty consistently won. We didn't win every day or every weekend, but we'd run a project for, say, six months at a time. We called it a bank kind of like a fund, if you will, into no six months periods we never lost. We always won something, sometimes quite a bit, where it was part of your data model understanding of certain casinos where there's certain casinos that were more friendly to your methodology. Yes, certain casinos have either differences in rules or, more commonly, differences in what I just call conditions like, for instance, obviously there's a lot of people betting a lot of money. It's easier to blend in, and that's a good thing for us. It could be there there. Their aggressiveness about trying to find card counters right would vary from casino to casino, those kinds of factors and occasionally minor rule variations to help us out. So you're very welcome at because he knows is that well, I once that welcome, I've actually been been Bardet many facilities tell us about that. Well, you get, you get barred, you get usually quite politely asked toe leave by some big guy, sometimes a big person, but sometimes just just honestly, people who like you will just come over and say, Hey, John, we'd rather you not play blackjack here, you know that. You know, we only played in very upstanding professional kind of facilities, but still, the message was clear. You know, you're not welcome here in Las Vegas. They're allowed to bar you from the premises with no reason given in Las Vegas. It's just the law there in Atlantic City. That was not the law. But in Vegas they could bar you and just say you're not welcome. If you come back, we'll arrest you for trespassing. Yeah, And you really think you said everything you did was legal? You know, we kind of gaming the system, I guess through, you know, displaying well probabilities and playing well. But this interesting soothe casinos. Khun, rig the system, right? They could never lose, but the >> players has ever get a bet against the House. >> How did >> you did you at all apply that experience? Your affinity to data to you know, Let's fast forward to where you are now, so I think I learned a lot of lessons playing blackjack that apply to my career and design software tools. It's solid works my old company and now death. So System, who acquired solid words and nowt on shape I learned about data and rigor, could be very powerful tools to win. I learned that even when everyone you know will tell you you can't win, you still can win. You know that a lot of people told me Black Jack would never work. A lot of people told me solid works. We never worked. A lot of people told me on shape would be impossible to build. And you know, you learn that you can win even when other people tell you, Can't you learn that in the long run is a long time? People usually think of what you know, Black Jack. You have to play thousands of hands to really see the edge come out. So I've learned that in business sometimes. You know, sometimes you'll see something happened. You just say, Just stay the course. Everything's gonna work out, right? I've seen that happen. >> Well, they say in business oftentimes, if people tell you it's impossible, you're probably looking at a >> good thing to work on. Yeah. So what's made it? What? What? What was made it ostensibly impossible. How did you overcome that challenge? You mean, >> uh, on >> shape? Come on, Shake. A lot of people thought that that using cloud based tools to build all the product development tools people need would be impossible. Our software tools in product development were modeling three D objects to the precision of the real world. You know that a laptop computer, a wristwatch, a chair, it has to be perfect. It's an incredibly hard problem. We work with large amounts of data. We work with really complex mathematics, huge computing loads, huge graphic loads, interactive response times. All these things add up to people feeling Oh, well, that would never be possible in the cloud. But we believe the opposite is true. We believe we're going to show the world. And in the future, people say, you know We don't understand how you do it without the cloud because there's so much computing require. >> Yeah, right. It seems you know where we're heavy in the cloud space. And if you were talking about this 10 years ago, I could understand some skepticism in 10 2019. All of those things that you mentioned, if I could spin it up, I could do it faster. I can get the resources I need when I needed a good economics. But that's what the clouds built for, as opposed to having to build out. You know, all of these resource is yourself. So what >> was the what was the big technical challenge? Was it was it? Was it latent? See, was it was tooling. So performance is one of the big technical challenges, As you'd imagine, You know, we deliver with on shape we deliver a full set of tools, including CAD formal release management with work flow. If that makes sense to you. Building materials, configurations, industrial grade used by professional companies, thousands of companies around the world. We do that all in a Web browser on any Mac Windows machine. Chromebook Lennox's computer iPad. I look atyou. I mean, we're using. We run on all these devices where the on ly tools in our industry that will run on all these devices and we do that kind of magic. There's nothing install. I could go and run on shape right here in your browser. You don't need a 40 pound laptop, so no, you don't need a 40 pound laptop you don't need. You don't need to install anything. It runs like the way we took our inspiration from tools like I Work Day and Sales Force and Zen Desk and Nets. Sweet. It's just we have to do three D graphics and heavy duty released management. All these complexities that they didn't necessarily have to do. The other thing that was hard was not only a technical challenge like that, but way had to rethink how workflow would happen, how the tools could be better. We didn't just take the old tools and throw him up in a cloud window, we said, How could we make a better way of doing workflow, release management and collaboration than it's ever been done before? So we had to rethink the user experience in the paradigms of the systems. Well, you know, a lot of talk about the edge and if it's relevant for your business. But there's a lot of concerns about the cloud being able to support the edge. But just listening to you, John, it's It's like, Well, everybody says it's impossible. Maybe it's not impossible, but maybe you can solve the speed of light problem. Any thoughts on that? Well, I think all cloud solutions use edge to some degree. Like if you look at any of the systems. I just mentioned sales for us workday, Google Maps. They're using these devices. I mean, it's it's important that you have a good client device. You have better experience. They don't just do everything in the cloud. They say There, there. To me, they're like a carefully orchestrated symphony that says We'll do these things in the core of the cloud, these things near the engineer, the user, and then these things will do right in the client device. So when you're moving around your Google map or when you're looking this big report and sales force you're using the client to this is what are we have some amazing people on her team, like R. We have the fellow who was CTO of Blade Logic. Robbie Ready. And he explains these concepts to make John Russo from Hey came to us from Verizon. These are people who know about big systems, and they helped me understand how we would distribute these workloads. So there's there's no such thing is something that runs completely in the cloud. It has to send something down. So, uh, talk aboutthe company where you're at, you guys have done several raises. You've got thousands of customers. You maybe want to add a couple of zeros to that over time is what's the aspirations? Yeah, correct. We have 1000. The good news is we have thousands of customer cos designing everything you could imagine. Some things never would everything from drones two. We have a company doing nuclear counter terrorism equipment. Amazing stuff. Way have people doing special purpose electric vehicles. We have toys way, have furniture, everything you'd imagined. So that's very gratifying. You us. But thousands of companies is still a small part of the world. This is a $10,000,000,000 a year market with $100,000,000,000 in market cap and literally millions of users. So we have great aspirations to grow our number of users and to grow our tool set capability. So let's talk to him for a second. So $10,000,000,000 current tam are there. Jason sees emerging with all these things, like three D printing and machine intelligence, that that actually could significantly increase the tam when you break out your binoculars or even your telescope. Yes, there are. Jason sees their increasing the tam through. Like you say, new areas drive us So So obviously someone is doing more additive manufacturing. More generative design. They're goingto have more use for tools like ours. Cos the other thing that I observed, if I can add one, it's my own observations. I think design is becoming a greater component of GDP, if you will, like if you look at how much goods in the world are driven by design value versus a decade or two or when I was a child, you know, I just see this is incredible amount, like products are distinguished by design more and more, and so I think that we'll see growth also through through the growth in design as an element of GDP on >> Jonah. I love that observation actually felt like, you know, my tradition. Engineering education. Yeah, didn't get much. A lot of design thing. It wasn't until I was in industry for years. That had a lot of exposure to that. And it's something that we've seen huge explosion last 10 years. And if you talk about automation versus people, it's like the people that designed that creativity is what's going to drive into the >> absolutely, You know, we just surveyed almost 1000 professionals product development leaders. Honestly, I think we haven't published our results yet, So you're getting it. We're about to publish it online, and we found that top of mind is designed process improvements over any particular technology. Be a machine learning, You know, the machine learning is a school for the product development. How did it manufacturers a tool to develop new products, but ultimately they have to have a great process to be competitive in today's very competitive markets. Well, you've seen the effect of the impact that Apple has had on DH sort of awakening people to know the value of grace. Desire absolutely have to go back to the Sony Walkman. You know what happened when I first saw one, right? That's very interesting design. And then, you know, Dark Ages compared to today, you know, I hate to say it. Not a shot at Sony with Sony Wass was the apple? Yeah, era. And what happened? Did they drop the ball on manufacturing? Was it cost to shoot? No. They lost the design leadership poll position. They lost that ability to create a world in pox. Now it's apple. And it's not just apple. You've got Tesla who has lit up the world with exciting design. You've got Dyson. You know, you've got a lot of companies that air saying, you know, it's all about designing those cos it's not that they're cheaper products, certainly rethinking things, pushing. Yeah, the way you feel when you use these products, the senses. So >> that's what the brand experience is becoming. All right. All right, John, thanks >> so much for coming on. The Cuban sharing your experiences with our audience. Well, thank you for having me. It's been a pleasure, really? Our pleasure. All right, Keep right. Everybody stupid demand. A volonte, John Furry. We've been back active, eo active data driven 19 from Boston. You're watching the Cube. Thanks

Published Date : Jun 18 2019

SUMMARY :

Data driven you by activity. Great to have you great to be here. software platform based in the cloud. to build the great products visualized actually create of the classic tools for product development. So you know, I think I've heard a lot the last few years. the state of the Art R field is to model product in three dimensions in the computer before all of the above, all of the above. It's a it's a subscription model, and we provide a much better, We're essentially disrupting the systems that we built you know, because Because we're not done, you know, still still working here. before we get off on the M I t. Thing you were part of, about what you're talking about. By the time my team came up, how did you guys do you know it was your record? you know, Let's fast forward to where you are now, so I think I learned a lot of lessons playing blackjack that How did you overcome that challenge? And in the future, people say, you know We don't understand how you do it without All of those things that you that that actually could significantly increase the tam when you break out your binoculars I love that observation actually felt like, you know, my tradition. Yeah, the way you feel when you use these products, the senses. that's what the brand experience is becoming. Well, thank you for having me.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JasonPERSON

0.99+

DavidPERSON

0.99+

Atlantic CityLOCATION

0.99+

JohnPERSON

0.99+

ChicagoLOCATION

0.99+

$100,000,000,000QUANTITY

0.99+

$10,000,000,000QUANTITY

0.99+

12QUANTITY

0.99+

VegasLOCATION

0.99+

1981DATE

0.99+

six monthsQUANTITY

0.99+

AmazonORGANIZATION

0.99+

EightQUANTITY

0.99+

Jon HirschtickPERSON

0.99+

40 poundQUANTITY

0.99+

AppleORGANIZATION

0.99+

Las VegasLOCATION

0.99+

BostonLOCATION

0.99+

John FurryPERSON

0.99+

John RussoPERSON

0.99+

eight decksQUANTITY

0.99+

1000QUANTITY

0.99+

TeslaORGANIZATION

0.99+

SonyORGANIZATION

0.99+

four decksQUANTITY

0.99+

second yearQUANTITY

0.99+

twiceQUANTITY

0.99+

Blade LogicORGANIZATION

0.99+

iPadCOMMERCIAL_ITEM

0.99+

FirstQUANTITY

0.99+

VerizonORGANIZATION

0.99+

Boston, MassachusettsLOCATION

0.99+

CIAORGANIZATION

0.99+

EarthLOCATION

0.99+

Onshape Inc.ORGANIZATION

0.99+

Robbie ReadyPERSON

0.99+

FiOSORGANIZATION

0.99+

millions of dollarsQUANTITY

0.99+

EmmetPERSON

0.99+

todayDATE

0.98+

thousands of handsQUANTITY

0.98+

thousands of companiesQUANTITY

0.98+

10 years agoDATE

0.98+

ChromebookCOMMERCIAL_ITEM

0.98+

a decadeQUANTITY

0.97+

Google mapTITLE

0.97+

10 2019DATE

0.97+

KarenPERSON

0.97+

MacCOMMERCIAL_ITEM

0.97+

30 years agoDATE

0.97+

PioneerORGANIZATION

0.97+

firstQUANTITY

0.96+

almost 1000 professionalsQUANTITY

0.96+

appleORGANIZATION

0.96+

19QUANTITY

0.95+

oneQUANTITY

0.95+

blackjackTITLE

0.95+

millions of usersQUANTITY

0.94+

JonahPERSON

0.94+

thousands of customerQUANTITY

0.93+

WalkmanCOMMERCIAL_ITEM

0.92+

thousands of customersQUANTITY

0.91+

twoQUANTITY

0.9+

Sony WassORGANIZATION

0.89+

$10,000,000,000 a yearQUANTITY

0.89+

30 38 yearsQUANTITY

0.85+

12 decksQUANTITY

0.85+

R.PERSON

0.85+

three dimensionsQUANTITY

0.82+

M I T.ORGANIZATION

0.82+

DysonPERSON

0.81+

I Work DayTITLE

0.8+

cath labORGANIZATION

0.8+

M I tORGANIZATION

0.8+

CubeORGANIZATION

0.78+

zerosQUANTITY

0.78+

MIT blackjack teamORGANIZATION

0.77+

Doug VanDyke, Enquizit | AWS Public Sector Summit 2019


 

>> live from Washington, D. C. It's the Cube covering a ws public sector summit I wrote to you by Amazon Web services. Welcome >> back, everyone. You are watching the Cube and we are here in our nation's capital at the A. W s Public sector summit. I'm your host, Rebecca Night hosting alongside John Furrier. We're joining Cuba LEM Doug Van Dyke, CEO of Inquisitor to our show. Thanks so much for coming back on. >> Well, thank you for having me back. It's good to be here. >> So as I said, You're a Cuba LEM. You're also a nails on alum. And there's a story there, so >> we'll just do a quick rehash of last year. So I started a day ws in 2,012 with the federal business helped the federal business grow started. The eight of US nonprofit Vertical was invited by John and in stew last year to be on the Cube. The video is a great discussion. The video is seen by some of our best partners and inquisitor who happens to be one of the best partners that I had in public sector. We started some discussions and later I was hired to be the CEO. So, John, >> thank you. I didn't know this was >> going to be a career opportunity >> for you. You're the one who's got the jobs. You through the interviews? Well, political, absolutely appreciated community. Great to have you on. Good. Thank you. Thank you for meeting with Theresa. You've known Therese for many, many years. Microsoft Public Sector Game is certainly on fire. You got Andy chassis on the fireside chat. Kind of bring in. You see the frustration like he's got problems and he's never known any for many, many years. For him to be that animated with his opinion means that it's critical more more than ever. Now, where is public sector opportunity right now? Because it seems to be clouds validated, are we? There is just a turning moment for the whole public sector community, >> yet we're so we're absolutely seeing that and inquisitive fact inquisitor. One of the things I like most about inquisitor is it is focused exclusively on the public sector, so our background is in education. If you know, a student is graduating from high school now and applying to one of the many colleges and universities they use the common application We worked with the common app to help build that system that graduating students can apply to multiple universities as opposed to when I was a graduating high school student, had to fill out the form, send in a check, wait for it to come back in the mail. Now that's all done online. You can apply to multiple colleges at the same time. So I look at that as one of the first innovations that happened in the public sector on a ws inquisitor was a part of it. It was one of the things that attracted me to inquisitor, but the innovations that was in two thousand 92 1,010 it was the beginning. We are just hitting that hockey stick that Andy has talked about in public sector, where you know, the federal business. You talked a little bit about the Intel business and how when the agency moved onto a ws, it really validated security. I think we've seen the government go in. I think we've seen education and nonprofits, so I think this is the time that public sector is really going to take off in the clouds >> about the company that you're leading is the chief now, and the product is using common app. You tell what the common app that my high school's graduates had to fill out. Okay, it's send okay. Is that it? >> That's it. That's it. So I >> got some issues with this thing. >> So follow up that was >> definitely on love on different you. Heavy lifting when filling out applications. Automate is great, but it increases the MAWR schools you can apply to, so creates more inbound applications to schools. It does. I'm sure there's some challenges there that's on the horizon with you guys is solving them that creates more. I won't say span because this legit, but a lot of schools are like people throwing in 17 applications now. 20 applications. >> Well, it's automated. I >> mean technology. So, yes, there's more automation, but there's more background. There's more data and these surgeries going on database decision. So sure we'll let me start with inquisitor. You asked about inquisitive 2,000 to quiz it's started and doing application development. It was in two thousand nine that we really saw the light to move Teo a Ws, and it was through the work that we were doing with the common app that we realised the scale of handling all these applications, that the paper based way isn't an easier. In fact, it really restricts the number of colleges that students can apply, and it restricts the number of applicants that colleges get. So with more students applying to more universities and universities receiving more applications, they can be really selective. They have more data sources, more information aboutthe people. They're going to bring on and have a very inclusive and representative university. We have students applying from China and Europe, too, United States University. So we're getting a lot of diversity, and I think you know, there's probably a little bit more volume, but that's what technology >> today is the first digital data. So that's why I appreciate that. But there's gotta be more automation machine learning going in because now you have a relationship with a student and a school. What, what's next? What happens next? >> Well, it's so Sky's the limit, and you can do once you've got data. So data reporting is basically limited by the quality of the input data. So you have more students applying with more background information, and you could get really personal. So we helped a large Ivy League university in the Northeast migrate all into a ws. And this was after we worked with common app to build the common application way helped this university migrate all into a ws and we realized that there were benefits and challenges along the way. Some of the challenges we saw were repeatable, so we built a proprietary product called Sky Map. And what sky map does is it helps the full migration. So it integrates with your discovery applications like a risk network. It integrates with a ws cloud endure and we were working with cloud endure before a ws acquired them. So we have a p I's there, it manages the whole migration. And your question was, you get all this information about an organization's infrastructure, what do you do with it? Will use the next up is a M l. So we've used some of the higher level services that a bit Amazon Web services has with artificial intelligence. We were using Lambda Server lis and we could go there because I think that's and you've >> got to hand over their 80 must educate. >> Oh, yeah, >> you know, you're great. Get a common app over there. Any university coming soon >> I would Did he mention that I saw he was >> on the show before? >> And I just think that it was You got a huge inbound educational thing going on. So education seems to be a big part of the whole themes here. >> Well, that's our legacy, and we're working with a lot of universities were seeing. So you asked, Where is the cloud going? And in the future, we're seeing large universities move all in on a WS because of they're going to get more flexibility. The costs are going to go down. They're going to have more information on the students. They're going to be able to provide better learning. >> When you're talking to your client of this this big Ivy league in the Northeast, what are its pain points? Because I mean, college admissions is a controversial topic in the United States, and its been there's been scandal this year. What? When? When you were talking with this company and they said, Well, we want to do this. But what was the problem they were trying to solve? I mean, what what were they? What were their pain points. >> Well, one of the first pain points is they were located in a major city and their data center was in the major city. And this is expensive real estate. And so to use expensive real estate that you for date us, you know, for servers, etcetera for data center instead of using it for education is a cost to the university. So very simply put, moving out of that data center opening that space up for education and moving into a ws cloud saved it gave them more space for education. It helped them with cost avoidance, and way had a bunch of lessons learned along the way. So way at the time could move about five servers a week, which may seem like a good number. But now, with the automation that we get through sky map our product, we're working with the large a group of private universities as well as Wharton University. And with this large group of private universities, we found we could do on average over 20 the best week we had 37 servers migrate, hire >> a housefly. They like to be on the cutting edge, but still there public sector. Where's the modernisation Progress on that? Because now you're you've been on both sides of the table. You were Amazon Web services. Now years leading is the CEO of this company in higher ed. How's that modernization going? What's your perspective? What's your observation around? >> Sure, So you know. First of all, I had the opportunity to go work it with the university that's local here last week. And what I love seeing is with this access to the cloud you've got, everyone in the university now has access to nearly unlimited resource is for education. They were staffing their own help desk with their students. And I love seeing that kind of experience being brought from, You know, someone who used to be an IT professional is now being brought down to a student because of thes new technologies are so readily accessible to everybody. >> So so what's that? Tell us some other things that you're seeing that you're hearing. They're they're exciting innovations to you in the in the sector. >> Yeah, well, another opportunity that were working with is we worked with the Small Business Administration, and that was pretty rewarding. For us is a small business and three of the applications that we worked on their were. So we are a small a day, and it used to take our founder TC Ratna pur e about two months. Oh, and we had to hire an outside consultant to apply for our small business accreditation. So he was doing the paperwork and all the, you know, the old school application certification. After we built this application with the Small Business Administration, it took him several hours. He did it by himself. We applied. Got the accreditation. So thes modernizations air happening both in universities as well as in the federal government. >> So what's your business plan? You're the CEO now. What's the company's plan? Which your goals. >> So there's so many things I could talk about ill talk about one or two. We see in the next 1 2 3 to 5 years in public sector that these organizations are going to migrate all in on the cloud. And so we're building up a group. That's what Sky map is mainly addressing is way. Want to make sure that organizations are able tto orchestrate their move to the cloud and we're using? We're going to start exposing the tool that we use for our own internal resource is we're gonna start exposing that, leaving that with universities in the federal government and anyone else who's willing to use it to help them get all in on the cloud. Then we think there's probably going to be a wave where they're trying. Teo, learn the cloud and howto operate It will help them is a manage service provider. And then where I'm excited is you go to server lists and I mentioned were already using Lambda for our sky map product that we see in the future after the M S P V organisations. They're going to be servant lis and they'll be running into no ops environments. >> The classic example of sometimes you your business evolves areas you don't know based off on the wave You're on you guys, we're very proficient at migrating We are now You got sky map which is you're gonna take that those learnings and pay it forward bringing >> that are bringing them to the market that >> we don't have to do that themselves by build kind of thing. >> Well, and it's a little bit like you're doing here, John. And what a ws >> is the only one I get up. I tell everybody that, like >> a ws did eight of us start is away for Amazon to manage their internal servers. And, you know, eventually they realized everyone else in the market can use thes same innovations that they've got. And, >> well, I think this proves the point that if you assassin based model with open AP eyes, you Khun offer and pretty much anything is a service. If you get the speed and agility equation right, someone might say why she is not a court company. Why should I buy? I'll just use that service. I hope so. It's the sad, small hopes up. >> Yeah, and sorry. >> I was going to say you were on the inside. Now you're on the outside of that. This conference. What are your impressions? What are you What kind of conversations are you having that you are going to take back to inquisitor and say, Hey, I learned this at the summit. Are these people over here working on something cool? We got to get this in >> here. Well, it's been really fun for me is a change of perspective. For the last seven years, I've been helping plan and organize the event. Make sure it >> goes off this time. I'm a guest. You know, e I look a little bit >> more relaxed than last year is because, you know, I'm a guest now, but the takeaways are really You know, the innovation is continuing at A W s. And, you know, as a partner of Amazon Web services, I've got to make sure that my team and I stay up to date with all of the services that are being released and simplify those. And, like John was asking earlier, you know, make sure that there's a strategy for migration support and then continuing to re factor what they're doing. >> Well, congratulations on the new job. Get a great tale. When, with cloud growth adoption just early days, public sector continuing toe astonished with numbers. Next, she'll be 38,000 people. A lawsuit is like reinvent size, only 30,000 people. >> This is huge. It's a pleasure to be here. I'm sure you guys are enjoying it as well. >> Yeah, I know. It's been great, Doug. Thanks so much for returning to the Q B. I your two time >> alone. Thank you. Thank >> you. I'm Rebecca Knight for John Furrier. We will have more from the Amazon, Uh, a ws public sector, something coming up in just a little bit.

Published Date : Jun 12 2019

SUMMARY :

a ws public sector summit I wrote to you by Amazon Web services. We're joining Cuba LEM Doug Van Dyke, CEO of Inquisitor to our show. It's good to be here. So as I said, You're a Cuba LEM. be one of the best partners that I had in public sector. I didn't know this was Great to have you on. I like most about inquisitor is it is focused exclusively on the public sector, about the company that you're leading is the chief now, and the product is using common app. So I but it increases the MAWR schools you can apply to, so creates more inbound applications I of colleges that students can apply, and it restricts the number of applicants that colleges learning going in because now you have a relationship with a student and Well, it's so Sky's the limit, and you can do once you know, you're great. So education seems to be a big part of the whole themes here. And in the future, we're seeing large universities When you were talking with this Well, one of the first pain points is they were located in a major city and their data They like to be on the cutting edge, but still there public sector. First of all, I had the opportunity to go work it with the university that's They're they're exciting innovations to you and all the, you know, the old school application certification. You're the CEO now. We see in the next 1 2 3 to 5 years in public sector that these organizations are going to migrate all in on And what a ws is the only one I get up. And, you know, eventually they realized everyone else in the market can use thes same innovations It's the sad, small hopes up. I was going to say you were on the inside. For the last seven years, I've been helping plan and organize I'm a guest. And, like John was asking earlier, you know, make sure that there's a strategy for migration support Well, congratulations on the new job. It's a pleasure to be here. Thanks so much for returning to the Q B. I your two time Thank you. Uh, a ws public sector, something coming up in just a little bit.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

TheresaPERSON

0.99+

Doug VanDykePERSON

0.99+

Rebecca NightPERSON

0.99+

Doug Van DykePERSON

0.99+

AndyPERSON

0.99+

AmazonORGANIZATION

0.99+

Rebecca KnightPERSON

0.99+

United StatesLOCATION

0.99+

DougPERSON

0.99+

37 serversQUANTITY

0.99+

Washington, D. C.LOCATION

0.99+

threeQUANTITY

0.99+

Ivy LeagueORGANIZATION

0.99+

John FurrierPERSON

0.99+

20 applicationsQUANTITY

0.99+

last yearDATE

0.99+

EuropeLOCATION

0.99+

17 applicationsQUANTITY

0.99+

Wharton UniversityORGANIZATION

0.99+

last weekDATE

0.99+

TheresePERSON

0.99+

ChinaLOCATION

0.99+

38,000 peopleQUANTITY

0.99+

last yearDATE

0.99+

twoQUANTITY

0.99+

80QUANTITY

0.99+

VerticalORGANIZATION

0.99+

Andy chassisPERSON

0.99+

oneQUANTITY

0.99+

Small Business AdministrationORGANIZATION

0.99+

30,000 peopleQUANTITY

0.99+

todayDATE

0.98+

bothQUANTITY

0.98+

LambdaTITLE

0.98+

MicrosoftORGANIZATION

0.98+

Sky MapTITLE

0.98+

OneQUANTITY

0.98+

United States UniversityORGANIZATION

0.98+

FirstQUANTITY

0.98+

two timeQUANTITY

0.98+

both sidesQUANTITY

0.98+

two thousand nineQUANTITY

0.97+

sky mapTITLE

0.97+

USLOCATION

0.97+

IntelORGANIZATION

0.97+

Amazon WebORGANIZATION

0.96+

this yearDATE

0.96+

5 yearsQUANTITY

0.95+

eightQUANTITY

0.95+

over 20QUANTITY

0.94+

WSORGANIZATION

0.94+

A. W s Public sector summitEVENT

0.94+

first painQUANTITY

0.93+

about two monthsQUANTITY

0.92+

CubaLOCATION

0.9+

EnquizitORGANIZATION

0.9+

NortheastLOCATION

0.9+

AWS Public Sector Summit 2019EVENT

0.88+

first innovationsQUANTITY

0.88+

two thousand 92 1,010QUANTITY

0.87+

a dayQUANTITY

0.87+

first digitalQUANTITY

0.86+

last seven yearsDATE

0.86+

about five servers a weekQUANTITY

0.85+

SkyORGANIZATION

0.84+

Sky mapTITLE

0.84+

CubeTITLE

0.82+

InquisitorORGANIZATION

0.8+

TC Ratna pur ePERSON

0.79+

2,012QUANTITY

0.79+

Ivy leagueORGANIZATION

0.77+

CEOPERSON

0.67+

MAWRORGANIZATION

0.64+

GameORGANIZATION

0.61+

2,000QUANTITY

0.59+

1QUANTITY

0.59+

KhunORGANIZATION

0.5+

Bridget Kromhout, Microsoft | KubeCon + CloudNativeCon EU 2019


 

(upbeat techno music) >> Live from Barcelona Spain, it's theCUBE. Covering KubeCon CloudNativeCon Europe 2019. Brought to you by Red Hat, The Cloud Native Computing Foundation and Ecosystem Partners. >> Welcome back, this is The Cube's coverage of KubeCon CloudNativeCon 2019. I'm Stu Miniman with Corey Quinn as my cohost, even though he says kucon. And joining us on this segment, we're not going debate how we pronounce certain things, but I will try to make sure that I get Bridget Kromhout correct. She is a Principle Cloud Advocate at Microsoft. Thank you for coming back to The Cube. >> Thank you for having me again. This is fun! >> First of all I do have to say, the bedazzled shirt is quite impressive. We always love the sartorial, ya know, view we get at a show like this because there are some really interesting shirts and there is one guy in a three-piece suit. But ya know-- >> There is, it's the high style, got to have that. >> Oh, absolutely. >> Bringing some class to the joint. >> Wearing a suit is my primary skill. (laughing) >> I will tell you that, yes, they sell this shirt on the Microsoft company store. And yes, it's only available in unisex fitted. Which is to say much like Alice Goldfuss likes to put it, ladies is gender neutral. So, all of the gentleman who say, but I have too much dad bod to wear that shirt! I say, well ya know get your bedazzlers out. You too can make your own shirt. >> I say it's not dad bod, it's a father figure, but I digress. (laughing) >> Exactly! >> Alright, so Bridget you're doing some speaking at the conference. You've been at this show a few times. Tell us, give us a bit of an overview of what you're doing here and your role at Microsoft these days. >> Absolutely. So, my talk is tomorrow and I think that, I'm going to go with its a vote of confidence that they put your talk on the last day at 2:00 P.M. instead of the, oh gosh, are they trying to bury it? But no, it's, I have scheduled enough conferences myself that I know that you have to put some stuff on the last day that people want to go to, or they're just not going to come. And my talk is about, and I'm co-presenting with my colleague, Jessica Deen, and we're talking about Helm 3. Which is to say, I think a lot of times it would, with these open-sourced shows people say, oh, why do you have to have a lot of information about the third release of your, third major release of your project? Why? It's just an iterative release. It is, and yet there are enough significant differences that it's kind of valuable to talk about, at least the end user experience. >> Yeah, so it actually got an applause in the keynote, ya know. (Bridget laughing) There are certain shows where people are hootin' and hollerin' for every, different compute instance that that is released and you look at it a little bit funny. But at the keynote there was a singular moment where it was the removal of Tiller which Corey and I have been trying to get feedback from the community as to what this all means. >> It seems, from my perspective, it seemed like a very strange thing. It's, we added this, yay! We added this other thing, yay! We're taking this thing and ripping it out and throwing it right into the garbage and the crowd goes nuts. And my two thoughts are first, that probably doesn't feel great if that was the thing you spent a lot of time working on, but secondly, I'm not as steep in the ecosystem as perhaps I should be and I don't really know what it does. So, what does it do and why is everyone super happy to con sine it to the dub rubbish bin of history? >> Right, exactly. So, first of all, I think it's 100% impossible to be an expert on every single vertical in this ecosystem. I mean, look around, KubeCon has 7,000 plus people, about a zillion vendor booths. They're all doing something that sounds slightly, overlapping and it's very confusing. So, in the Helm, if you, if people want to look we can say there's a link in the show notes but there, we can, people can go read on Helm.sh/blog. We have a seven part, I think, blog series about exactly what the history and the current release is about. But the TLDR, the too long didn't follow the link, is that Helm 1 was pretty limited in scope, Helm 2 was certainly more ambitious and it was born out of a collaboration between Google actually and a few other project contributors and Microsoft. And, the Tiller came in with the Google folks and it really served a need at that specific time. And it was, it was a server-side component. And this was an era when the Roll by Stacks has control and Kubernetes was, well nigh not existent. And so there were a lot of security components that you kind of had to bolt on after the fact, And once we got to, I think it was Kubernetes 1.7 or 1.8 maybe, the security model had matured enough that instead of it being great to have this extra component, it became burdensome to try to work around the extra component. And so I think that's actually a really good example of, it's like you were saying, people get excited about adding things. People sometimes don't get excited about removing things, but I think people are excited about the work that went into, removing this particular component because it ends up reducing the complexity in terms of the configuration for anyone who is using this system. >> It felt very spiritually aligned in some ways, with the announcement of Open Telemetry, where you're taking two projects and combining them into one. >> Absolutely. >> Where it's, oh, thank goodness, one less thing that-- >> Yes! >> I have to think about or deal with. Instead of A or B I just mix them together and hopefully it's a chocolate and peanut butter moment. >> Delicious. >> One of the topics that's been pretty hot in this ecosystem for the last, I'd say two years now it's been service matched, and talk about some complexity. And I talk to a guy and it's like, which one of these using? Oh I'm using all three of them and this is how I use them in my environment. So, there was an announcement spearheaded by Microsoft, the Service Mesh Interface. Give us the high level of what this is. >> So, first of all, the SMI acronym is hilarious to me because I got to tell you, as a nerdy teenager I went to math camp in the summertime, as one did, and it was named SMI. It was like, Summer Mathematics Institute! And I'm like, awesome! Now we have a work project that's named that, happy memories of lots of nerdy math. But my first Unix system that I played with, so, but what's great about that, what's great about that particular project, and you're right that this is very much aligned with, you're an enterprise. You would very much like to do enterprise-y things, like being a bank or being an airline or being an insurance company, and you super don't want to look at the very confusing CNCF Project Map and go, I think we need something in that quadrant. And then set your ships for that direction, and hopefully you'll get to what you need. And it's especially when you said that, you mentioned that, this, it basically standardizes it, such that whichever projects you want to use, whichever of the N, and we used to joke about JavaScript framework for the week, but I'm pretty sure the Service Mesh Project of the week has outstripped it in terms of like speed, of new projects being released all the time. And like, a lot of end user companies would very much like to start doing something and have it work and if the adorable start-up that had all the stars on GitHub and the two contributors ends up, and I'm not even naming a specific one, I'm just saying like there are many projects out there that are great technically and maybe they don't actually plan on supporting your LTS. And that's fine, but if we end up with this interface such that whatever service mesh, mesh, that's a hard word. Whatever service mesh technology you choose to use, you can be confident that you can move forward and not have a horrible disaster later. >> Right, and I think that's something that a lot of developers when left to our own devices and in my particular device, the devices are pretty crappy. Where it becomes a, I want to get this thing built, and up and running and working, and then when it finally works I do a happy dance. And no one wants to see that, I promise. It becomes a very different story when, okay, how do you maintain this? How do you responsibly keep this running? And it's, well I just got it working, what do you mean maintain it? I'm done, my job is done, I'm going home now. It turns out that when you have a business that isn't being the most clever person in the room, you sort of need to have a longer term plan around that. >> Yeah, absolutely. >> And it's nice to see that level of maturation being absorbed into the ecosystem. >> I think the ecosystem may finally be ready for it. And this is, I feel like, it's easy for us to look at examples of the past, people kind of shake their heads at OpenStack as a cautionary tale or of Sprawl and whatnot. But this is a thriving, which means growing, which means changing, which means very busy ecosystem. But like you're pointing out, if your enterprises are going to adapt some of this technology, they look at it and everyone here was, ya know, eating cupcakes or whatever for the Kubernetes fifth birthday, to an enterprise just 'cause that launched in 2014, June 2014, that sounds kind of new. >> Oh absolutely. >> Like, we're still, we're still running that mainframe that is still producing business value and actually that's fine. I mean, I think this maybe is one of the great things about a company like Microsoft, is we are our customers. Like we also respect the fact that if something works you don't just yolo a new thing out into production to replace it for what reason? What is the business value of replacing it? And I think for this, that's why this, kind of Unix philosophy of the very modular pieces of this ecosystem and we were talking about Helm a little earlier, but there's also, Draft, Brigade, etc. Like the Porter, the CNET spec implementation stuff, and this Cloud Native application bundles, that's a whole mouthful. >> Yes, well no disrespect to your sparkly shirt, but chasing the shiny thing, and this is new and exciting is not necessarily a great thing. >> Right? >> I heard some of the shiny squad that were on the show floor earlier, complaining a little bit about the keynotes, that there haven't been a whole lot of new service and feature announcements. (Bridget laughing) And my opinion on that is feature not bug. I, it turns out most of us have jobs that aren't keeping up with every new commit to an open-source project. >> I think what you were talking about before, this idea of, I'm the developer, I yolo'd out this co-load into production, or I yolo'd this out into production. It is definitely production grade as long as everything stays on the happy path, and nothing unexpected happens. And I probably have air handling, and, yay! We had the launch party, we're drinkin' and eatin' and we're happy and we don't really care that somebody is getting paged. And, it's probably burning down. And a lot of human misery is being poured into keeping it working. I like to think that, considering that we're paying attention to our enterprise customers and their needs, they're pretty interested in things that don't just work on day one, but they work on day two and hopefully day 200 and maybe day 2000. And like, that doesn't mean that you ship something once and you're like, okay, we don't have to change it for three years. It's like, no, you ship something, then you keep iterating on it, you keep bug fixing, you keep, sure you want features, but stability is a feature. And customer value is a feature. >> Well, Bridget I'm glad you brought that up. Last thing I want to ask you 'cause Microsoft's a great example, as you say, as a customer, if you're an Azure customer, I don't ask you what version of Azure you're running or whether you've done the latest security patch that's in there because Microsoft takes care of you. Now, your customers that are pulled between their two worlds is, oh, wait, I might have gotten rid of patch Tuesdays, but I still have to worry and maintain that environment. How are they dealing with, kind of that new world and still have, certain things that are going to stay the old way that they have been since the 90's or longer? >> I mean, obviously it's a very broad question and I can really only speak to the Kubernetes space, but I will say that the customers really appreciate, and this goes for all the Cloud providers, when there is something like the dramatic CVE that we had in December for example. It's like, oh, every Kubernetes cluster everywhere is horribly insecure! That's awesome! I guess, your API gateway is also an API welcome mat for everyone who wants to, do terrible things to your clusters. All of the vendors, Microsoft included, had their managed services patched very quickly. They're probably just like your Harple's of the world. If you rolled your own, you are responsible for patching, maintaining, securing your own. And this is, I feel like that's that tension. That's that continuum we always see our customers on. Like, they probably have a data center full of ya know, veece, fear and sadness, and they would very much like to have managed happiness. And that doesn't mean that they can easily pickup everything in the data center, that they have a lease on and move it instantly. But we can work with them to make sure that, hey, say you want to run some Kubernetes stuff in your data center and you also want to have AKS. Hey, there's this open-source project that we instantiated, that we worked on with other organizations called Vertual Kubelet. There was actually a talk happening about it I think in the last hour, so people can watch the video of that. But, we have now offered, we now have Virtual Node, our product version of it in GA. And I think this is kind of that continuum. It's like, yes of course, you're early adapters want the open-source to play with. Your enterprises want it to be open-source so they can make sure that their security team is happy having reviewed it. But, like you're saying, they would very much like to consume a service so they can get to business value. Like they don't necessarily want to, take, Kelsey's wonderful Kubernetes The Hard Way Tutorial and put that in production. It's like, hmm, probably not, not because they can't, these are smart people, they absolutely could do that. But then they spent their, innovation tokens as, the McKinley blog post puts it, the, it's like, choose boring technology. It's not wrong. It's not that boring is the goal, it's that you want the exciting to be in the area that is producing value for your organization. Like that's where you want most of your effort to go. And so if you can use well vetted open-source that is cross industry standard, stuff like SMI that is going to help you use everything that you chose, wisely or not so wisely, and integrate it and hopefully not spend a lot of time redeveloping. If you redevelop the same applications you already had, its like, I don't think at the end of the quarter anybody is getting their VP level up. If you waste time. So, I think that is, like, one of the things that Microsoft is so excited about with this kind of open-source stuff is that our customers can get to value faster and everyone that we collaborate with in the other clouds and with all of these vendor partners you see on the show floor, can keep the ecosystem moving forward. 'Cause I don't know about you but I feel like for a while we were all building different things. I mean like, instead of, for example, managed services for something like Kubernetes, I mean a few jobs that would go out was that a start up that we, we built our own custom container platform, as one did in 2014. And, we assembled it out of all the LEGOs and we built it out of I think Docker and Packer and Chef and, AWS at the time and, a bunch of janky bash because like if someone tells you there's no janky bash underneath your home grown platform, they are lying. >> It's always a lie, always a lie. >> They're lying. There's definitely bash in there, they may or may not be checking exit codes. But like, we all were doing that for a while and we were all building, container orchestration systems because we didn't have a great industry standard, awesome! We're here at KubeCon. Obviously Kubernetes is a great industry standard, but everybody that wants to chase the shiny is like but surface meshes. If I review talks for, I think I reviewed talks for KubeCon in Copenhagen, and it was like 50 or 60 almost identical service mesh talk proposals. And it's like, and then now, like so that was last year and now everyone is like server lists and its like, you know you still have servers. Like you don't add sensation to them, which is great, but you still have them. I think that that hype train is going to keep happening and what we need to do is make sure that we keep it usable for what the customers are trying to accomplish. Does that make sense? >> Bridget, it does, and unfortunately, we're going to have to leave it there. Thank you so much for sharing everything with our audience here. For Corey, I'm Stu, we'll be back with more coverage. Thanks for watching The Cube. (upbeat techno music)

Published Date : May 22 2019

SUMMARY :

Brought to you by Red Hat, Thank you for coming back to The Cube. Thank you for having me again. We always love the sartorial, There is, it's the high style, Wearing a suit is my primary skill. I will tell you that, yes, they sell this shirt I say it's not dad bod, at the conference. that they put your talk on the last day at 2:00 P.M. from the community as to what this all means. doesn't feel great if that was the thing you And this was an era when the Roll by Stacks has It felt very spiritually aligned in some ways, I have to think about or deal with. And I talk to a guy and it's like, And it's especially when you said that, clever person in the room, you sort of need to And it's nice to see that level of maturation And this is, I feel like, And I think for this, sparkly shirt, but chasing the shiny thing, I heard some of the shiny squad that were on I think what you were talking about Last thing I want to ask you 'cause Microsoft's a SMI that is going to help you use everything Like you don't add sensation to them, which is great, Thank you so much for sharing everything with

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jessica DeenPERSON

0.99+

MicrosoftORGANIZATION

0.99+

Bridget KromhoutPERSON

0.99+

DecemberDATE

0.99+

Corey QuinnPERSON

0.99+

2014DATE

0.99+

CoreyPERSON

0.99+

GoogleORGANIZATION

0.99+

Red HatORGANIZATION

0.99+

Stu MinimanPERSON

0.99+

three yearsQUANTITY

0.99+

Summer Mathematics InstituteORGANIZATION

0.99+

two projectsQUANTITY

0.99+

100%QUANTITY

0.99+

GALOCATION

0.99+

Vertual KubeletORGANIZATION

0.99+

Alice GoldfussPERSON

0.99+

tomorrowDATE

0.99+

KelseyPERSON

0.99+

BridgetPERSON

0.99+

third releaseQUANTITY

0.99+

last yearDATE

0.99+

KubeConEVENT

0.99+

CNETORGANIZATION

0.99+

firstQUANTITY

0.99+

CopenhagenLOCATION

0.99+

three-pieceQUANTITY

0.99+

one guyQUANTITY

0.99+

two yearsQUANTITY

0.99+

Helm 3TITLE

0.99+

seven partQUANTITY

0.99+

60QUANTITY

0.99+

50QUANTITY

0.99+

AWSORGANIZATION

0.99+

Ecosystem PartnersORGANIZATION

0.98+

OpenStackORGANIZATION

0.98+

StuPERSON

0.98+

two contributorsQUANTITY

0.98+

Barcelona SpainLOCATION

0.98+

two worldsQUANTITY

0.98+

two thoughtsQUANTITY

0.98+

KubernetesTITLE

0.98+

threeQUANTITY

0.98+

June 2014DATE

0.98+

2:00 P.M.DATE

0.97+

OneQUANTITY

0.97+

oneQUANTITY

0.97+

Kubernetes The Hard Way TutorialTITLE

0.97+

day oneQUANTITY

0.95+

McKinleyORGANIZATION

0.95+

SprawlTITLE

0.95+

7,000 plus peopleQUANTITY

0.95+

JavaScriptTITLE

0.94+

day twoQUANTITY

0.94+

third major releaseQUANTITY

0.94+

90'sDATE

0.94+

LEGOsORGANIZATION

0.93+

GitHubORGANIZATION

0.92+

Helm 2TITLE

0.9+

DockerORGANIZATION

0.9+

KubernetesPERSON

0.9+

AzureTITLE

0.89+

fifth birthdayQUANTITY

0.89+

HarpleORGANIZATION

0.88+

CloudNativeCon EU 2019EVENT

0.88+

The CubeTITLE

0.88+

The Cloud Native Computing FoundationORGANIZATION

0.87+

VirtualORGANIZATION

0.87+

AKSORGANIZATION

0.85+

about a zillion vendor boothsQUANTITY

0.85+

Helm.sh/blogOTHER

0.85+

FirstQUANTITY

0.83+

secondlyQUANTITY

0.83+

Helm 1TITLE

0.81+

SMIORGANIZATION

0.8+

Darryl Sladden, Cisco | DevNet Create 2019


 

>> Live from Mountain View, California, it's theCUBE covering DevNet Create 2019 brought to you by Cisco. >> Hello everyone, welcome back to theCUBE's live coverage here in Mountain View, California for the theCUBE's coverage of Cisco DevNet Create. It's a small, intimate event where we're bringing the cloud native creation world with the DevNet community within Cisco and of course building applications, programming networks, that's the theme. I'm John Furrier, your host, our next guest is Darryl Sladden, senior technical product manager at Cisco, 20 year veteran, built voice over IP systems. He's a coder, he's a builder, he's a creator. Great to see you, thanks for coming on. >> Thank you so much, I'm glad to be here. >> And you're a fan? >> I love being on theCUBE. Because-- >> And the trivia behind that? Share the context, you had a product, you built one? >> Yes, the first product management job at Cisco was building the Cisco Unified Border Element and of course, that became the Cube, so any time you mention Cube inside of Cisco, that's going to be my product. >> The renaissance within Cisco theCUBE is back and we're embedded in there. Of course we're breaking all the borders down, getting the data. Tell us what's going on in your world? Obviously you've seen a lot of waves. I mean voice over IP that you were involved in? >> Yeah. >> That took, that old PBX telephone-- >> Right. >> Got digital, created massive innovation. That's an inflection point moment. We're seeing a few of those big waves happening now. One of them's an architectural changes around IoT, Wi-fi 6, 5G, cloud computing all coming together. This is an interesting opportunity. What's your focus? Where do you fit into all that? >> Yeah, where I fit in is this is a massive change and one of the problem sets that hasn't been solved yet is how do I understand where I am indoors? There's been great solutions that have unlocked huge amount of value with the GPS system outdoors. You always know where you are, a lot of way to find out exactly the right, it always amazes me at how accurate they are at how long it's going to take me to get to the Computer Museum. But how do I know once I've got into the museum that theCUBE is in the upstairs, in the back corner? That's where we need to solve that problem and I think we're at the crux of that. >> Waze is a great example because one of the things I'm amazed by with Waze is how fast they report the incidents that are going on. People are so actively rapid of adding, inputting the data. You got data junkies adding it and there's been some side effects. The side streets are always clogged. (laughing) >> Police always know-- >> So in physical locations where Wi-fi 6 for instance comes out? >> Yeah. >> You're going to have new capabilities in bandwidth and throughput and coverage areas, these dense areas. It's going to create a navigation opportunity for either machines to machines, machines to humans, humans to machines, humans to humans, within a physical construct. >> Yeah. >> How do you see that evolving? Use cases? What's the pattern? >> Right. What I really see evolving is taking advantage of some of the capabilities that have already existed in wi-fi, meaning ranging from individual IPs but some of the new things that are coming with Wi-fi 6 is Wi-fi 6 creates a great baseline but there are new things where, 802.11mc for example, which is an extension of Wi-fi 6, has what's called fine timing measurement. I can now, with these super accurate chip sets, know the speed of light is about one nanosecond to go about three feet. If I have an accurate clock, now I can know how far I am from the APs. >> Yeah. >> And I can solve that in indoor locations. >> So a lot of physics involved? >> A lot of rates of physics involved. >> Alright, so what products are you working on now to make all this happen. Take us through some of the things that are out there that you've got your fingers on. >> Yeah, so what I'm working on is Cisco's new location platform, it's called Cisco DNA Spaces and so what we're focusing on is digitizing that indoor space. So people spend of their economic activity are indoors. Whether it's in a hotel, where they're selling the rooms, or a restaurant where they're selling food inside the spaces, but what goes on in that physical space? People don't have that same level of knowledge that you do on the web, right? When I go to a webpage and I shop for outdoor furniture? The next two weeks I'm followed by ads about outdoor furniture. But if I go to Home Depot and I spend an hour in the outdoor furniture aisle, they don't know about that. Now, it allows you to digitize that indoor space and provide that context for other types of applications. >> So the value, I mean I'm not saying, now they're going to know you actually shopped at Home Depot, now your ad go to Home Depot. (laughing) But the value is not so much in the advertising. It's really in the efficiencies around work, play, office. These are the things that are going to be impacted because, you know, take healthcare for instance? Manufacturing? How people do work? How services are delivered? Just like in the consumer side, we all relate to the iPhone days when oh my god, I can have GPS on a phone. Now I do a mash up on a Google Map. >> Right. >> Are you saying the same thing for buildings? You're going to import like architectural drawings? How do you get all of this built out? What's the playbook? >> Yeah. The playbook really will be starting at the larger buildings that will be put into Google Maps or put into other places where it can start to get really accurate indoor locations and then never losing things, right? Be able to know where you are indoors. Being able to always find your stuff, not only where you are but maybe I put a tag on some of my assets and I always know where they are? The idea of nurses becoming more efficient because they're going to know where that wheelchair is if I need to find a wheelchair to move a patient out of an office. All of these things just become a little bit more efficient but that just builds on a huge scale when that happens at scale. >> Darryl, talk about the impact of this because you built and deployed disruptive technology in the past. For the folks watching, whether it's an enterprise architect or CIO or CEO or facilities manager, whoever, what is the impact of these new location based services to their business? How should they be thinking about it, holistically? >> Yeah. >> What's your view? >> My real view is that you want to look at it from a platform, so you're not going to have one company. Even at Cisco, we're not going to solve every application but what you do want to do is build a platform that's extensible, right? We'll take in data from multiple sources, whether it APs or video cameras, other things, create a platform that normalizes that location, and then opens that up. So that's what happened as the mainframes transitioned to client server computing. Once you start breaking things up? That's really the value and so I think the CIOS and architects out there, shouldn't be looking at point products as much as understanding that a location platform will help them unlock the value moving forward. >> Talk about the data. How is the data traversing through this? Because obviously you mentioned connecting things like cameras and other things? It could be medical equipment, it could be anything. IoT's going to be a tsunami of opportunity, applications that are going to create a lot of opportunity. How should I think about the data flow? And the role of machine learning and data in all of this? Is that going to be a key part of this? >> Absolutely, the way that we're looking at it is there's kind of two groups. There's the ones that are all in on the cloud, and we are offering this as a software as a subscription service so you buy it on a subscription basis and you let Cisco deal with the problems. Of course with a regulated environment of access to the data and backing it up and restoring it and making sure it's well curated. Or you can decide, yeah I want to run it on premises. If you want it on prem you have to understand you're going to have to deal with those same problems of back up, the data will get really large as you start to collect more and more location and how are you going to best extract value from that data? So I think you really want to look at that this is something that's going to continue to expand and do I want to make that a core competence by running it myself? Or maybe turn that over to cloud service? >> So in terms of what's real and not real or what's coming and what's real today? So you mentioned there's some location services as a SAS. Talk about what's available now from your customer standpoint. >> Yeah. >> What can they get going on and what's coming around the corner? >> Yeah, so what they can get going on today is that location services, Cisco DNA Spaces. So if you go to ciscodnaspaces.com there's free trials available, it's a great sort of application. But more importantly, it provides you that initial start, right? What's coming is more and more applications will take advantage of that, right? We got a great one for things like student success, so that you know a student is inside of a classroom and then if he doesn't come to class for a couple days in a row? Oh maybe he needs counseling? Maybe his car broke down? You can start to do these really interesting student success applications as an example of a vertical. So the vertical applications are starting to really proliferate, but what's available today is the platform. >> So you see verticals really booming on this? >> Yeah. >> They're going to take advantage of it? Alright, so just kind of zoom out and put your industry hat on, not your Cisco hat. When you look at wi-fi and 5G or other technologies that are out there, what's the big movement? What moves the ball down the field the most? Is it going to be wi-fi and 5G? Because it seems like, you know, inch by inch, unified communication seemed stalled, now it's got an uplift with cloud, with data, more great user experiences. SD-WAN's been around for a long time and getting a resurgence. I mean campus networking had been around for a long, long time. >> I know. (chuckling) >> People go to stadiums, want to do Instagram and do videos. What's the big technology lever here? What's the big tailwind for location based in-building stuff? >> What I start to see for this is improving standards and improving accuracy, right? Until you get to that point where it's reliable and replaceable and I can really depend on it? It's all a niche product. I think that's been happening for literally the last eight years in this industry. Lots of niche examples of things that have been successful but it hasn't exploded, until you build that platform where I can absolutely, with reliability say, this device is at this point at this time? >> Yeah. >> Then you can start to really expand but that's really-- >> The timing and the through put, to your point earlier? >> Yeah. >> Okay, thoughts on DevNet, just to wrap up. What's here? Going on in the show here? DevNet Create, Susie did a good job of bringing communities together. A lot of co-creation, they're creating new things. This is a new application environment, programmable. What's your thoughts on DevNet? >> Yeah, I love being around some of the smartest people in the world here. (laughing) It's great. Humbling just to be able to talk to some of these guys. But I do think that really creates the community that teaches everything from little things, like I learned a quick, great new little API trick that I hadn't learned and maybe I taught some people some of the stuff that we're doing about streaming APIs. What I really like about this is all these small little interactions build something really good. >> Yeah. And you build API into all the products that's only going to create more enablement. >> Yeah. >> More creativity. The creativity's flowing big time. >> Right. >> Darryl, thanks for coming on. >> Well thank you so much. >> Great to see you. Thanks, a CUBE fan. >> Right. (laughing) >> Author of the product called The Cube at Cisco back in the day. I'm John Furrier, back with more live coverage after this short break. (light digital music)

Published Date : Apr 25 2019

SUMMARY :

brought to you by Cisco. for the theCUBE's coverage of Cisco DevNet Create. I love being on theCUBE. and of course, that became the Cube, getting the data. Where do you fit into all that? and one of the problem sets that hasn't been solved yet Waze is a great example because one of the things It's going to create a navigation opportunity of some of the capabilities that have already existed Alright, so what products are you working on now that you do on the web, right? These are the things that are going to be impacted Be able to know where you are indoors. in the past. That's really the value and so I think the CIOS Is that going to be a key part of this? and how are you going to best extract value So you mentioned there's some location services as a SAS. so that you know a student is inside of a classroom Is it going to be wi-fi and 5G? I know. What's the big technology lever here? What I start to see for this Going on in the show here? and maybe I taught some people some of the stuff that's only going to create more enablement. The creativity's flowing big time. Great to see you. Right. Author of the product called The Cube at Cisco

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
CiscoORGANIZATION

0.99+

John FurrierPERSON

0.99+

Darryl SladdenPERSON

0.99+

SusiePERSON

0.99+

20 yearQUANTITY

0.99+

todayDATE

0.99+

Mountain View, CaliforniaLOCATION

0.99+

Home DepotORGANIZATION

0.99+

CIOSTITLE

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

Google MapsTITLE

0.99+

two groupsQUANTITY

0.99+

firstQUANTITY

0.99+

ciscodnaspaces.comOTHER

0.99+

theCUBEORGANIZATION

0.98+

oneQUANTITY

0.97+

an hourQUANTITY

0.97+

OneQUANTITY

0.97+

one companyQUANTITY

0.96+

DarrylPERSON

0.96+

Google MapTITLE

0.96+

2019DATE

0.94+

WazeTITLE

0.94+

CubeCOMMERCIAL_ITEM

0.93+

about three feetQUANTITY

0.89+

about one nanosecondQUANTITY

0.89+

theCUBECOMMERCIAL_ITEM

0.87+

InstagramORGANIZATION

0.87+

next two weeksDATE

0.86+

SpacesORGANIZATION

0.84+

802.11mcOTHER

0.84+

The CubeTITLE

0.82+

wavesEVENT

0.82+

last eight yearsDATE

0.8+

DevNet CreateTITLE

0.77+

DevNet Create 2019TITLE

0.73+

couple daysQUANTITY

0.73+

big wavesEVENT

0.71+

inchQUANTITY

0.68+

DevNetORGANIZATION

0.66+

PBXORGANIZATION

0.63+

SASORGANIZATION

0.62+

DevNetTITLE

0.6+

CUBEORGANIZATION

0.57+

UnifiedCOMMERCIAL_ITEM

0.51+

DNACOMMERCIAL_ITEM

0.41+

Rachel Myers, Capgemini & John Clark, Capgemini | Inforum DC 2018


 

>> Live from Washington D.C., it's theCUBE covering Inforum DC 2018. Brought to you by Infor. >> Welcome back to Washington D.C., we are live here at theCUBE at Inforum '18. I'm John Walls along with Dave Vallante and it's a pleasure now to welcome to the show from Capgemini couple of folks, Rachel Myers, who's Director of Alliances at Capgemini. (laughing) And John Clark, who's the VP of info-practice at Capgemini and Dave put your phone away, would you please. >> We're off to a good start. >> We are. (laughing) >> Who are you guys again? >> I think it was givin' him directions for dinner tonight. I think what you're doing. It's down at K Street take a right. >> Don't drive scooters without a helmet. >> That's right. Inside story. Rachel and John, thanks for being with us. We appreciate the time here. >> Thanks for having us. >> Let's talk about the partnership with Infor. Where it's coming from. What you are adding to that. How you view it and what you're gettin' out of it. And John, if you would? >> Yeah absolutely. First, hello from D.C., he said. The relationship that Capgemini has had with Infor goes back over 20 years. But we formalized it really two years ago and had a strategic partnership defined around several of the products that Infor has with a big focus on digital and cloud. So Capgemini sees that Infor is really leading the charge in a lot of native cloud products out there and we know that, that is certainly something our clients are looking for. So formalized relationship and extremely excited to be lead partners and sponsors here at Inforum. >> And so Rachel, where do you come into play here then as far as Director of Alliances goes? I think the job title probably speaks for itself, but in terms of how the Infor relationship works and where it comes in to your portfolio onto your plate, how does that work? >> So I manage the relationship with Infor as our customers are looking at cloud and all the options out there. I manage the relationship into Infor bringing the right folks to bear to our customers and joining at the hip where we need to in support of our customers. >> Okay, so you mentioned John, that its been a 20 year relationship. So that means it goes back probably to the loss and software days, right? The whole early days of ERP. Now we come into the modern era, cloud. We're hearing all about AI. We're also hearing about, sort of, micro-verticals and industry expertise. >> Yes, yes. >> So square that circle for me because you guys have deep industry expertise. How do you mesh with Infor? >> Yeah great question. We absolutely, as you said, go to market from a sector perspective, so everything we do has some tent of an industry or a sector verticalisation and it matches exactly well with how Infor goes to market with last model functionality. So what we do for example, is look at where Infor and our sector team see gaps like on food processing companies and we'll build out that solution and take that to market. So really kind of extending the last malfunctionality with Infor and having Capgemini's solutions as well. >> So does that functionality ultimately make it back into Infor code or not necessarily? >> Not necessarily. >> Okay, all right. So it's like last inch function-- >> Right exactly. That's a pretty good analogy for it. >> Okay so, well, it's always the hardest part, right? I mean you think of cable, you think of all the-- >> Telephone whatever. >> Sort of examples, right? So, you know the old story is if you're here and you want to get to the wall and you go half way, you never get there, right? >> Exactly. >> So that's kind of the process that you're in. There's always more to do, right? >> Right. >> Okay, so what's hot these days in your space? >> Well we're here at Inforum talking to customers and our partners about many things. But we actually are speaking about Industry 4.0 which is a big hot topic. Supply chain and EAM, Enterprise Asset Management. We have practices and expertise in all of those, so we can bring the best to our customers from a system integration partner capability which would be us along with Infor and the products that they bring to bear. >> So what's the 101 on 4.0? Presumably a lot of automation, more efficiency, driving business value. How would you describe Industry 4.0 Next Gen? >> It's the next evolution, I would say, to automation of processees. We're getting closer, I would think, and people are definitely piloting to get there, but building a road map and helping them really see the value is what we're trying to do with our customers these days and making it real and really producing some ROI beyond that with automation. >> So AI is a piece of that? How about, have you seen like blockchain hit yet? Or is that sort of on people's road maps? >> I think it's definitely a road map item. I think there's some experimentation, but what we're definitely seeing become real is robotics process automation, RPA. We're doin' a lot of that with our customers and taking it beyond experimentation to actual ROI. >> And the RPA is exploding. I was actually impressed and surprised to hear so much RPA talk this morning. I didn't realize that Infor had quasi out of the box capbilities there. So what are you seeing? A lot of, sort of back office functions getting automated, software robots getting trained to do mundane tasks? What's the experience there? >> I think as we are implementing ERPs like Infor's, there is a need to take processes that customers are doing today manual and automate those to see the extension and the ROI beyond just the ERP software. >> We do see a lot of it start in the back office, so a lot of finance and HR functions is kind of the first place that companies look for 'cause on thing that we do see on RPA projects is don't try to tackle everything, but get focused and get some quick wins, if you will and that's really where we built our library and where we work with Infor. >> Is it fair the automation of it is coming from the lines of business which is kind of your wheelhouse, right? >> Right. >> It's not, sort of an IT thing so much. IT is probably a little afraid of it, but is that the way you see it? >> Yes it is. >> Okay and so talk about Capgemini's strategy as the world sort of evolves. You know, you always hear small projects, small wins are the way to go and for years it was like the big SAP implementation >> Yeah. >> Or the big Oracle implementation. How are you guys changing your business to accommodate that new thinking? >> So really on several fronts. One is definitely the methodology that we have and we see on projects is shifting from a waterfall to an agile. So much quicker iterations and cycles on the projects themselves and usually the scope. It will start off with a line of business and again, if it's looking for, hey, I just need to improve the digital relationship I have with my customer. Which can a lot of times just mean start a digital relationship with my customer. So it's really, you kind of keep a tight focus on the scope and just have an agile approach which, again, is what we have changed our methodologies for. >> So digital obviously is real. I mean, every CEO that we talk to is trying to get digital right. A lot of experimentation going on. Like you said a lot of, hey we have to have a digital strategy then you throw AI into the mix. You throw things like blockchain. It's a complicated situation for a lot of firms. What are the discussions like with customers? Where are you seeing the most success or early traction? >> I think having the vision and the scope of where you want to go three years, five years down the road and being able to prioritize against that road map what's going to give you the biggest benefit first, so that it's not just haphazardly trying out these technology enablers like RPA and AI, it is a clear vision and strategy of where we're trying to go and solely hitting some of that ROI and seeing value. >> Are you seeing more of a save money, make money kind of a mix? What are you seeing there? I would say probably a mix, save money for the right reasons and spend money to get the ROI that we're planning for in that road map. >> Just to amplify on the point that you're making Dave. Just from the customer side of the fence on this, for people who aren't, you're just introducing them to the cloud, right? To begin with and they're trying to embrace or understand a concept that they don't have any experience with and now you think of all these other capabilities you have down the road or all these other opportunities whether it's artificial intelligence or whether it's RPA, whatever it is. It's got to be mind-blowing. A little bit, doesn't it? And how do you, I guess, calm 'em down if they realize we are that far behind. We're never going to get there. We're always going to be three, five, 10 years behind because we're that far behind right now. So how do you, I guess, allay their concerns and then get them up to speed at such a way that they feel like they can catch up? >> Yeah, say one of the key things that we can provide is various maturity models. So we have kind of a keepin' it simple of a two by two grid of where do you fall from digital enablement? A, do you even know what that means? Do you do it within divisions or certain lines of business? And then, is that a part of the strategy for your customer acquisition, customer retention, employee retention, et cetera. And start with kind of a fit there and then we basically have offerings that then go from okay, if you're starting out then the approach can be let's go through what cloud is. Like I said, there are absolutely still discussions that we have now on, hey what is the difference between cloud and on-prem? Is it the same software version? Is it a different software? What are the security features and the data center? Some of those questions are still out there as you said and we've got to look at the maturity model to get 'em there. >> So let's go through the simple, I like simple, the two dimensional, one of the buckets, so it's like, hey, we're not even thinkin' about it, it's kind of lower left. Upper left would be line of business focus sort of narrow. Lower right would be at strategic, but we're not acting on it yet. >> Right, in a division or a single line of business or I may have a cross functional solution with a great digital road map, but it's in one plant, you know, 'cause then you get into, okay, well that's probably because you either had a champion locally or you had some trigger such as some customer issues or production issues or something that forced the issue, so to speak, there. And then the top right is, yeah, it's part of the strategy. It's built in to where the budget is allocated as well and it's a part of all the conversations we're having with business and IT. >> Were you guys seeing particular, thinking about sticking on digital for a minute, you think particular industry uptake, I mean, obviously retail's been disrupted, publishing, you know the music industry's been disrupted. But there's certain industries that really haven't been dramatically disrupted yet, financial services, healthcare, defense, really to date, these high risk businesses. What are you guys seeing and kind of where's the greatest familiarity or affinity to digital? >> Where we're starting and where we've been focused with Infor and the market place is consumer products and distribution as well as manufacturing. That's really been a focus area for us and we didn't get into this, but John's team has capability in Infor and is skilled in Infor and there are some focus areas for us with the customers in those industry segments. >> Do you think that automation, AI, improvements in the supply chain, you know robotics even software robots will reverse the trend toward offshore manufacturing tariffs, I guess maybe help too, but I mean, are you seeing any evidence of that automation sort of making the pendulum swing back or are the cost advantages so attractive and is the supply chain so intrenched? >> I'll let John elaborate, but I would say that there is still a fit for purpose for offshoring certain things and for automating certain things and that's why I think it's important to build a plan and a strategy for which things will be solved for in which ways. >> Yeah and the one thing I want to add is as you see some plant go from, it took 200, 300 people to operate a facility to I can do it with 10. That changes the economics of now the labor cost and labor arbitrage isn't as much a function, but yes, what about the rent, facilities and transportation? So we are seeing the economic calculation change a bit from the point of just go offshore for labor. Well if labor is not a big a point, we are seeing a shift there. >> Right, so the labor component's shrinking. And then you can automate that. Is there a quality aspect or is that kind of a myth? >> We think that's a myth from what we're seeing. >> Quality can improve a little bit. >> Exactly. >> Won't go down. Won't go down. >> You're saying coming back on-shoring? Or are you saying offshoring? >> Or automating. Automating whether it's on or off. >> Oh regardless of the location, right? >> Right. >> Automation's going to drive quality up. Lower re-work, right? Okay. >> Robots do it a little bit better than us especially if it's repetitive. >> They don't get tired. (laughing) How about some of your favorite kind of joint examples with Infor, any kind of customer wins you can talk about? >> We're actually working together in a lot of spaces, but one of the biggest ones that we are actually talking about a case study here on the floor at Inforum is at Coke Industries, one of it's companies Flint Hills Resources. We're actually in the middle of an EAM implementation with Flint Hills and working together collaboratively with Infor at the client. >> And is that the or bigger picture, you said 20 year relationship formalized much more recently than that. Ultimately what does that deliver for the client? You think at the end of the day? What's the power of that partnership? >> So I think that there's several things, one is that with the experience and history of a Capgemini with 50 years of consulting experience and strategy work. We now specifically bring Infor and Infor's technology into the conversations that it was not a structure before two years ago. So now we specifically have, where does Infor fit in the road map from a software agnostic industry perspective? And then from a just a plain and simple support and keeping your customer's Infor environment running that's additional strength that we have that we didn't have before. >> So you guys are known for being technology agnostic even though you've got an affinity of going to market with a company in this case Infor. How are they doing? What's on the to do list? If you're talking to customers saying, hey this is the sweet spot," here's where some of the items we want them to improve on. What would you say? >> I'd say for, I can at least say tactically with my team we are looking to enhance our solution is around burst and analytics. So that's definitely a best debris tool in the marketplace and so where we can integrate that into more products 'cause it's, Infor acquired it year and a half ago. So we're trying to fold it in with each product and keeping that trajectory. Where again a customer only has one platform to support for-- >> So that's kind of infusing that modern BI into the platforms. Functionally you're kind of happy with it. >> Oh absolutely. >> And it's just a matter of getting the function into-- >> Right. >> The sweet. >> Have it the defacto. >> Right. >> That's where we want to get. >> Right, right. >> But honestly if you just look at the floor out there, you know from our perspective, the great showing and the excitement and just the conversations that we have around Infor. There's been some confusion, I would say, from, without naming names, other competitors of Infor's on what is our cloud and digital road map and then when we look at Infor with cloud native, you know from the ground up, it makes that back to one of the questions you had on, depending on where customers are starting, if you can go from the beginning like Infor has done with some of their products, natively built cloud up. Then those are great conversations and we're seeing more of that in the market right now. >> When we talk to customers, when you talk to the sort of, traditional vendors, they'll say it's a hybrid world, which seems to be. >> It's true. >> When you talk to other cloud guys, it's like, cloud, cloud, cloud. Now even AWS has somewhat capitulated, they've made some announcements to do stuff on-prem. But logically it makes sense that if the data is in some data center location, it's probably going to stay there for a while if it's working and it's a lot of it and you don't necessarily want to move it to the cloud, so do you buy that? Is it a hybrid world? Will it stay a hybrid world? Or do you feel like the pendulum really is swinging into the cloud or not because of IoT, it's more sort of a decentralized world. What do you guys think? >> I think it's a customer choice. Sometimes we have some federally regulated customers that are concerned about data and security and not necessarily there yet in terms of the cloud and we have some customers that are wanting to go 100% cloud so I think it is definitely customer choice and we are there to advise them whether cloud is the right answer and even to help them implement and support them on their journey. So I think we've seen all, every which flavor of cloud, hybrid. >> From your stand point, whatever you want, you're going to-- >> Yeah, I'd say in the past two or three years there's definitely more clients, I would say most now will look at some, when they're doing their TCO and software selection, they absolutely will lead with, hey at least the core part, ERP part, for example, what can I do for cloud with that? 'Cause there's just so much-- >> Considerationalities. >> Yeah the consideration versus three, five years ago no you wouldn't look at that, but I do think there absolutely will be a hybrid foot print going forward. >> Well, if there's an affinity to cloud, presumably Infor has an advantage there, 'cause they're born on the cloud, or at least for that part of the business and other entrenched ERP is not going to be so easy to move to the cloud. In fact that's what you want to do. >> And I think we share the vision with Infor and talking to customers with the cloud first approach. It makes sense to move to the cloud. There is value in the cloud and we can help build that story for them. >> Charles Philips pretty smooth spokesperson, he's a clear thinker, he laid out the strategy. The strategy of, this is my fourth Inforum, I mean, it's grown, but it's consistent, you know, he presents it in a manner that I think is pretty compelling, so that's got to make you feel good, right? You got a leader that's committed, been here for a while. >> Yeah absolutely and one other thing that I really do like about coming to Inforum to see Charles is he actually gets it. If you think of it from CEO of a large software company with hundreds of products, he knows where they actually fit and can go through kind of the road map and the story. So very credible. >> The partnership's a win-win for sure. It certainly sounds like you've painted a very good picture and we appreciate the time. >> Yeah. >> Thanks for being with us and good luck the next couple of days here at the show. Have fun. >> Thank you. >> Appreciate the time. >> Should be, right? (laughing) Back with more live in Washington D.C., you're watching theCUBE. (upbeat music)

Published Date : Sep 25 2018

SUMMARY :

Brought to you by Infor. and it's a pleasure now to welcome to the show We are. I think what you're doing. Rachel and John, thanks for being with us. the partnership with Infor. So Capgemini sees that Infor is really leading the charge So I manage the relationship with Infor Okay, so you mentioned John, How do you mesh with Infor? So really kind of extending the last malfunctionality So it's like last inch function-- That's a pretty good analogy for it. So that's kind of the process that you're in. and the products that they bring to bear. How would you describe Industry 4.0 Next Gen? and really producing some ROI beyond that with automation. We're doin' a lot of that with our customers So what are you seeing? and the ROI beyond just the ERP software. is kind of the first place that companies look for but is that the way you see it? are the way to go and for years it was like How are you guys changing your business So it's really, you kind of keep a tight focus on the scope What are the discussions like with customers? of where you want to go three years, five years down the road What are you seeing there? and now you think of all these other capabilities you have Yeah, say one of the key things that we can provide the two dimensional, one of the buckets, or something that forced the issue, so to speak, there. What are you guys seeing and kind of where's the greatest and is skilled in Infor and there are and that's why I think it's important Yeah and the one thing I want to add is And then you can automate that. Won't go down. Automating whether it's on or off. Automation's going to drive quality up. especially if it's repetitive. you can talk about? We're actually in the middle of an EAM implementation And is that the or bigger picture, one is that with the experience and history of a Capgemini What's on the to do list? and keeping that trajectory. into the platforms. back to one of the questions you had on, when you talk to the sort of, traditional vendors, Or do you feel like the pendulum really is swinging and even to help them implement Yeah the consideration versus three, five years ago or at least for that part of the business and talking to customers with the cloud first approach. is pretty compelling, so that's got to make that I really do like about coming to Inforum and we appreciate the time. the next couple of days here at the show. Back with more live in Washington D.C.,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
RachelPERSON

0.99+

Rachel MyersPERSON

0.99+

JohnPERSON

0.99+

Dave VallantePERSON

0.99+

CapgeminiORGANIZATION

0.99+

Charles PhilipsPERSON

0.99+

John ClarkPERSON

0.99+

DavePERSON

0.99+

John WallsPERSON

0.99+

CharlesPERSON

0.99+

InforORGANIZATION

0.99+

AWSORGANIZATION

0.99+

20 yearQUANTITY

0.99+

100%QUANTITY

0.99+

three yearsQUANTITY

0.99+

threeQUANTITY

0.99+

50 yearsQUANTITY

0.99+

five yearsQUANTITY

0.99+

Washington D.C.LOCATION

0.99+

fiveQUANTITY

0.99+

Coke IndustriesORGANIZATION

0.99+

one platformQUANTITY

0.99+

OracleORGANIZATION

0.99+

K StreetLOCATION

0.99+

year and a half agoDATE

0.99+

FirstQUANTITY

0.99+

firstQUANTITY

0.99+

CapgeminiPERSON

0.99+

InforumORGANIZATION

0.99+

10QUANTITY

0.99+

oneQUANTITY

0.99+

10 yearsQUANTITY

0.98+

200, 300 peopleQUANTITY

0.98+

each productQUANTITY

0.98+

Flint Hills ResourcesORGANIZATION

0.98+

Flint HillsORGANIZATION

0.98+

twoQUANTITY

0.98+

two years agoDATE

0.97+

five years agoDATE

0.97+

over 20 yearsQUANTITY

0.97+

singleQUANTITY

0.96+

DCLOCATION

0.96+

todayDATE

0.95+

one plantQUANTITY

0.95+

first approachQUANTITY

0.94+

theCUBEORGANIZATION

0.94+

tonightDATE

0.94+

Yaron Haviv, Iguazio | theCUBE NYC 2018


 

Live from New York It's theCUBE! Covering theCUBE New York City 2018 Brought to you by Silicon Angle Media and it's ecosystem partners >> Hey welcome back and we're live in theCUBE in New York city. It's our 2nd day of two days of coverage CUBE NYC. The hashtag CUBENYC Formerly Big data NYC renamed because it's about big data, it's about the server, it's about Cooper _________'s multi-cloud data. It's all about data, and that's the fundamental change in the industry. Our next guest is Yaron Haviv, who's the CTO of Iguazio, key alumni, always coming out with some good commentary smart analysis. Kind of a guest host as well as an industry participant supplier. Welcome back to theCUBE. Good to see you. >> Thank you John. >> Love having you on theCUBE because you always bring some good insight and we appreciate that. Thank you so much. First, before we get into some of the comments because I really want to delve into comments that David Richards said a few years ago, CEO of RenDisco. He said, "Cloud's going to kill Hadoop". And people were looking at him like, "Oh my God, who is this heretic? He's crazy. What is he talking about?" But you might not need Hadoop, if you can run server less Spark, Tensorflow.... You talk about this off camera. Is Hadoop going to be the open stack of the big data world? >> I don't think cloud necessary killed Hadoop, although it is working on that, you know because you go to Amazon and you know, you can consume a bunch of services and you don't really need to think about Hadoop. I think cloud native serve is starting to kill Hadoop, cause Hadoop is three layers, you know, it's a file system, it's DFS, and then you have server scheduling Yarn, then you have applications starting with map produce and then you evolve into things like Spark. Okay, so, file system I don't really need in the cloud. I use Asfree, I can use a database as a service, as you know, pretty efficient way of storing data. For scheduling, Kubernetes is a much more generic way of scheduling workloads and not confined to Spark and specific workloads. I can run with Dancerflow, I can run with data science tools, etc., just containerize. So essentially, why would I need Hadoop? If I can take the traditional tools people are now evolving in and using like Jupiter Notebooks, Spark, Dancerflow, you know, those packages with Kubernetes on top of a database as a service and some object store, I have a much easier stack to work with. And I could mobilize that whether it's in the cloud, you know on different vendors. >> Scale is important too. How do you scale it? >> Of course, you have independent scaling between data and computation, unlike Hadoop. So I can just go to Google, and use Vquery, or use, you know, DynamoDB on Amazon or Redchick, or whatever and automatically scale it down and then, you know >> That's a unique position, so essentially, Hadoop versus Kubernetes is a top-line story. And wouldn't that be ironic for Google, because Google essentially created Map Produce and Coudera ran with it and went public, but when we're talking about 2008 timeframe, 2009 timeframe, back when ventures with cloud were just emerging in the mainstream. So wouldn't it be ironic Kubernetes, which is being driven by Google, ends up taking over Hadoop? In terms of running things on Kubernetes and cloud eight on Visa Vis on premise with Hadoop. >> The poster is tend to give this comment about Google, but essentially Yahoo started Hadoop. Google started the technology  and couple of years after Hadoop started, with Google they essentially moved to a different architecture, with something called Percolator. So Google's not too associated with Hadoop. They're not really using this approach for a long time. >> Well they wrote the map-produced paper and the internal conversations we report on theCUBE about Google was, they just let that go. And Yahoo grabbed it. (cross-conversation) >> The companies that had the most experience were the first to leave. And I think it may respect what you're saying. As the marketplace realizes the outcomes of the dubious associate with, they will find other ways of achieving those outcomes. It might be more depth. >> There's also a fundamental shift in the consumption where Hadoop was about a ranking pages in a batch form. You know, just collecting logs and ranking pages, okay. The chances that people have today revolve around applying AI to business application. It needs to be a lot more concurring, transactional, real-time ish, you know? It's nothing to do with Hadoop, okay? So that's why you'll see more and more workers, mobilizing different black server functions, into service pre-canned services, etc. And Kubernetes playing a good role here is providing the trend. Transport for migrating workloads across cloud providers, because I can use GKE, the Google Kubenetes, or Amazon Kubernetes, or Azure Kubernetes, and I could write a similar application and deploy it on any cloud, or on Clam on my own private cluster. It makes the infrastructure agnostic really application focused. >> Question about Kubernetes we heard on theCUBE earlier, the VP of Project BlueData said that Kubernetes ecosystem and community needs to do a better job with Stapla, they nailed Stapflalis, Stafle application support is something that they need help on. Do you agree with that comment, and then if so, what alternatives do you have for customers who care about Stafe? >> They should use our product (laughing) >> (mumbling) Is Kubernetes struggling there? And if so, talk about your product >> So, I think that our challenge is rounded that there are many solutions in that. I think that they are attacking it from a different approach Many of them are essentially providing some block storage to different containers on really cloud 90. What you want to be able is to have multiple containers access the same data. That means either sharing through file systems, for objects or through databases because one container is generating, for example, ingestion or __________. Another container is manipulating that same data. A third container may look for something in the data, and generate a trigger or an action. So you need shared access to data from those containers. >> The rest of the data synchronizes all three of those things. >> Yes because the data is the form of state. The form of state cannot be associated with the same container, which is what most of where I am very active and sincere in those committees, and you have all the storage guys in the committees, and they think the block story just drag solution. Cause they still think like virtual machines, okay? But the general idea is that if you think about Kubernetes is like the new OS, where you have many processes, they're just scattered around. In OS, the way for us to share state between processes an OS, is whether through files, or through databases, in those form. And that's really what >> Threads and databases as a positive engagement. >> So essentially I gave maybe two years ago, a session at KubeCon in Europe about what we're doing on storing state. It's really high-performance access from those container processes to our database. Impersonate objects, files, streams or time series data, etc And then essentially, all those workloads just mount on top of and we can all share stape. We can even control the access for each >> Do you think you nailed the stape problem? >> Yes, by the way, we have a managed service. Anyone could go today to our cloud, to our website, that's in our cloud. It gets it's own Kubernetes cluster, a provision within less than 10 minutes, five to 10 minutes. With all of those services pre-integrated with Spark, Presto, ______________, real-time, these services functions. All that pre-configured on it's own time. I figured all of these- >> 100% compatible with Kubernetes, it's a good investment >> Well we're just expanding it to Kubernetes stripes, now it's working on them, Amazon Kubernetes, EKS I think, we're working on AKS and GK. We partner with Azure and Google. And we're also building an ad solution that is essentially exactly the same stock. Can run on an edge appliance in a factory. You can essentially mobilize data and functions back and forth. So you can go and develop your work loads, your application in the cloud, test it under simulation, push a single button and teleport the artifacts into the edge factory. >> So is it like a real-time Kubernetes? >> Yes, it's a real-time Kubernetes. >> If you _______like the things we're doing, it's all real-time. >> Talk about real-time in the database world because you mentioned time-series databases. You give objects store versus blog. Talk about time series. You're talking about data that is very relevant in the moment. And also understanding time series data. And then, it's important post-event, if you will, meaning How do you store it? Do you care? I mean, it's important to manage the time series. At the same time, it might not be as valuable as other data, or valuable at certain points and time, which changes it's relationship to how it's stored and how it's used. Talk about the dynamic of time series.. >> Figured it out in the last six or 12 months that since real-time is about time series. Everything you think about real-time censored data, even video is a time-series of frames, okay And what everyone wants to do is just huge amount of time series. They want to cross-correlate it, because for example, you think about stock tickers you know, the stock has an impact from news feeds or Twitter feeds, or of a company or a segment. So essentially, what they need to do is something called multi-volume analysis of multiple time series to be able to extract some meaning, and then decide if you want to sell or buy a stock, as in vacation example. And there is a huge gap in the solution in that market, because most of the time series databases were designed for operational databases, you know, things that monitor apps. Nothing that injects millions of data points per second, and cross-correlates and run real-time AI analytics. Ah, so we've essentially extended because we have a programmable database essentially under the hoop. We've extended it to support time series data with about 50 to 1 compression ratio, compared to some other solutions. You know we've break with the customer, we've done sizing, they told them us they need half a pitabyte. After a small sizing exercise, about 10 to 20 terabytes of storage for the same data they stored in Kassandra for 500 terabytes. No huge ingestion rates, and what's very important, we can do an in-flight with all those cross-correlations, so, that's something that's working very well for us. >> This could help on smart mobility. Kenex 5G comes on, certainly. Intelligent edge. >> So the customers we have, these cases that we applied right now is in financial services, two or three main applications. One is tick data and analytics, everyone wants to be smarter learning on how to buy and sell stocks or manage risk, the second one is infrastructure, monitoring, critical infrastructure, monitoring is SLA monitoring is be able to monitor network devices, latencies, applications, you now, transaction rate, or that, be able to predict potential failures or escalation We have similar applications; we have about three Telco customers using it for real-time time. Series analytics are metric data, cybersecurity attacks, congestion avoidance, SLA management, and also automotive. Fleet management, file linking, they are also essentially feeding huge data sets of time series analytics. They're running cross-correlation and AI logic, so now they can generate triggers. Now apply to Hadoop. What does Hadoop have anything to do with those kinds of applications? They cannot feed huge amounts of datasets, they cannot react in real-time, doesn't store time-series efficiently. >> Hapoop (laughing) >> You said that. >> Yeah. That's good. >> One, I know we don't have a lot of time left. We're running out of time, but I want to make sure we get this out here. How are you engaging with customers? You guys got great technical support. We can vouch for the tech chops that you guys have. We seen the solution. If it's compatible to Kubernetes, certainly this is an alternative to have really great analytical infrastructure. Cloud native, goodness of your building, You do PFC's, they go to your website, and how do you engage, how do you get deals? How do people work with you? >> So because now we have a cloud service, so also we engage through the cloud. Mainly, we're going after customers and leads, or from webinars and activities on the internet, and we sort of follow-up with those customers, we know >> Direct sales? >> Direct sales, but through lead generation mechanism. Marketplace activity, Amazon, Azure, >> Partnerships with Azure and Google now. And Azure joint selling activities. They can actually resale and get compensated. Our solution is an edge for Azure. Working on similar solution for Google. Very focused on retailers. That's the current market focus of since you think about stores that have a single supermarket will have more than a 1,000 cameras. Okay, just because they're monitoring shelves in real-time, think about Amazon go, kind of replication. Real-time inventory management. You cannot push a 1,000 camera feeds into the cloud. In order to analyze it then decide on inventory level. Proactive action, so, those are the kind of applications. >> So bigger deals, you've had some big deals. >> Yes, we're really not a raspberry pie-kind of solution. That's where the bigger customers >> Got it. Yaron, thank you so much. The CTO of Iguazio Check him out. It's actually been great commentary. The Hadoop versus Kubernetes narrative. Love to explore that further with you. Stay with us for more coverage after this short break. We're live in day 2 of CUBE NYC. Par Strata, Hadoop Strata, Hadoop World. CUBE Hadoop World, whatever you want to call it. It's all because of the data. We'll bring it to ya. Stay with us for more after this short break. (upbeat music)

Published Date : Sep 13 2018

SUMMARY :

It's all about data, and that's the fundamental change Love having you on theCUBE because you always and then you evolve into things like Spark. How do you scale it? and then, you know and cloud eight on Visa Vis on premise with Hadoop. Google started the technology and couple of years and the internal conversations we report on theCUBE The companies that had the most experience It's nothing to do with Hadoop, okay? and then if so, what alternatives do you have for So you need shared access to data from those containers. The rest of the data synchronizes is like the new OS, where you have many processes, We can even control the access for each Yes, by the way, we have a managed service. So you can go and develop your work loads, your application If you And then, it's important post-event, if you will, meaning because most of the time series databases were designed for This could help on smart mobility. So the customers we have, and how do you engage, how do you get deals? and we sort of follow-up with those customers, we know Direct sales, but through lead generation mechanism. since you think about stores that have Yes, we're really not a raspberry pie-kind of solution. It's all because of the data.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

Lisa MartinPERSON

0.99+

Ed MacoskyPERSON

0.99+

Darren AnthonyPERSON

0.99+

Yaron HavivPERSON

0.99+

Mandy DollyPERSON

0.99+

Mandy DhaliwalPERSON

0.99+

David RichardsPERSON

0.99+

Suzi JewettPERSON

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

John FurrierPERSON

0.99+

HPORGANIZATION

0.99+

twoQUANTITY

0.99+

2.9 timesQUANTITY

0.99+

DarrenPERSON

0.99+

GoogleORGANIZATION

0.99+

SuziPERSON

0.99+

Silicon Angle MediaORGANIZATION

0.99+

RenDiscoORGANIZATION

0.99+

2009DATE

0.99+

Suzie JewittPERSON

0.99+

HPEORGANIZATION

0.99+

2022DATE

0.99+

YahooORGANIZATION

0.99+

LisaPERSON

0.99+

2008DATE

0.99+

AKSORGANIZATION

0.99+

Las VegasLOCATION

0.99+

500 terabytesQUANTITY

0.99+

60%QUANTITY

0.99+

2021DATE

0.99+

HadoopTITLE

0.99+

1,000 cameraQUANTITY

0.99+

oneQUANTITY

0.99+

18,000 customersQUANTITY

0.99+

fiveQUANTITY

0.99+

AmsterdamLOCATION

0.99+

2030DATE

0.99+

OneQUANTITY

0.99+

HIPAATITLE

0.99+

tomorrowDATE

0.99+

2026DATE

0.99+

YaronPERSON

0.99+

two daysQUANTITY

0.99+

EuropeLOCATION

0.99+

FirstQUANTITY

0.99+

todayDATE

0.99+

TelcoORGANIZATION

0.99+

bothQUANTITY

0.99+

threeQUANTITY

0.99+

Sazzala Reddy & Brian Biles, Datrium | CUBEConversation, July 2018


 

(techy music) >> Hi, everybody, welcome to this special Cube conversation, my name is Dave Vellante. I'm very excited to be here in our Palo Alto studios in the heart of Silicon Valley, the innovation hub of technology. In 2015 we introduced a company to your community called Datrium and one of the co-founders, Brian Biles at the time, came on as one of our segments and shared with us a little bit about what they were doing. Well, several years on, three years on, this company Datrium is exploding and we're really excited to have Brian Biles back, who's the co-founder and chief product officer at Datrium and he's joined by Sazzala Reddy, who's the CTO and another co-founder. One of the, two of the five co-founders here, so gentlemen, great to see you again, thanks for coming on. >> Good to see you, Dave. >> Yeah, so Brian, I remember that interview and I remember, you know, trying to get out of you what that secret sauce was, exactly what you were doing. There were a lot of other start ups, you know, at that time and several have gone by the wayside. You guys are exploding, so I want to help people understand why you're being so successful. Now, I want to start with the two co-founders. Why did you and your other co-founders start the company? >> You know, we started the company... We hired our first people in 2013, and at that time, there were really two separate worlds. There was a cloud world and there was an on-prem world that was sort of dominated by VMware. So, there were these two evolving discussions about how each one was going to grow in it's own way but kind of within its sphere. We thought there was an opportunity to bridge the two, and to do that, you know, ultimately it becomes a questions of how to run sort of coordinating applications on public clouds and deal with data on public clouds but also to have a capable version of the same infrastructure on private clouds. So, our sort of job one was to build up, to sort of import cloud technology onto prem. We currently have, if you want an Amazon-like version of infrastructure on-prem, we're still the best place to go because we have a two layer model, where there's, you know, compute with fast flash, talking to a separate durability layer very much like EC to an S3. You want to do that, we're still the way to go. But the long term story is also developing. We have a footprint on cloud with a backup store on S3 that coordinates all our data services for global deduping security and so on in a very cost effective, simple SAS way, and that part is growing significantly over the next couple of years. So we're, you know, through with sort of phase one. It'll keep, you know, evolving but phase two is really just getting going. >> So Sazzala, as the chief technologist you had to think about the architecture of where the industry was going and the architecture that would fit that. And you know, that people talk about future proofing so if you think back to the original sort of founding premise, what were some of the challenges that you were trying to solve? >> Right, so there's a business use cases and then there's technology use cases. And as a CTO you have to think of both of them, not just technologies, so if you look at technology point of view, you know, in 2000, back in 2000, Google published a paper called Map Reduce that said hey, this is all we can do at large scale. It was the beginning of how to build large scale distributed systems. But it was built for one use case for surge. But if you look at, we started in a time when Google was already there and they built a system for multiple, unpredictable use cases. So you think differently how the problem is whereas Google start from, though. Some of the CI vendors, they've done good things. They kind of evolved in that direction. We have evolved in a new direction. To the technology point of view, that's kind of what we thought about. But from a business perspective, what do people want? You know, if you look at the next generation, the millennials, and look beyond that they're used to iPhone experience. They don't want, if you tell them about LUNs, they don't re-phone LUNs, they're going to just say what is this and why do you have this stuff, right? So you have to evolve away from that. So, it's the CIA wants to think about how do I make my idea the service? How do I consume, you know, how do I make it a consumption model, how do I make my IT not a cost center but a friendly way to you know, grow my business? And the developers want a platform they can develop things faster, they can adapt to newer, kind of newer technologies coming in, there's Mesos, there's Docker container, there's Kubernetes, thus things change rapidly. So that's going to build a framework in how we wanted to start the company. Basically build a cloud-like experience simple as a SAS, simple as a click and then just make that work. >> The thing that's interesting to me about Datrium is you know, the simplicity, like open. You know, I remember when Unix was considered open and then obviously the definition changes, simplicity has changed. I remember when converged infrastructure, bolting together, compute storage and networking, simplified things. Hyperconverge took that to another level. You guys are going beyond that taking it to yet another level of simplicity, so I wonder if you could talk about that-- >> Yeah, so-- Specifically in terms of the problems that you're solving today for customers. >> So if you look at the V block, I guess the VCE was the first, I guess that they made a successful convergence. So they did hardware convergence-- >> Right. which is a useful thing to do. Same thing with your head CI, the traditional vendors, they do hardware convergence, if you look at Hecht, probably stands for hardware convergence, maybe. But we took a little bigger step in the sense that what you really want us to think about is data convergence. Not, hyperconvergence is useful, but you also think operating about data convergence. What's the point of building your on-prem cloud- like experience when you still have to do backups and some other you know, some other boxes you are to buy. That's not a really good experience, but you need is this whole new hardware convergence, we also need data convergence to get that experience of like cloud-like simplicity in your on-prem. >> Right, in the cloud you don't think of backing it up, right, it's self protecting. That's just the nature of how you should be thinking about on-prem as well. So, when we imported that technology to be a two-layer approach, we built that stuff in so you don't have to think about it. It's kind of like there's no SQL or we're sort of like no backup. >> Yeah, we're going to talk some more about that but that's an important point is you get backup and data protection, you know, full capability, it's just there. I always use the example of Netflix or Spotify. I don't have to call up a salesperson or the billing department or the customer service department, it's just there and I deal with it. >> Right and it gives you, you know, this combination of, in the two layers, the ability to run multiple workloads at big scale, which is otherwise hard in some of these more historical approaches, with great performance that you know is off the charts. But it also means you don't have to move data around as much. So you restore, you restart, you don't restore. You don't copy stuff in and out. >> Yeah. >> That data mobility efficiency it turns out, is also super critical when you think about multi-cloud behavior. >> You have to be in the business to actually feel like you talk to backup admins and life is hell. It is really painful and it's also very fearful if you have a problem, you have to restore and everybody's watching you when you're restoring. So we try to eliminate all those problems, right? Make it, just, why worry about all these things? We are living in a new world, let's adapt to it. >> I think I've, tongue-in-cheek I think about the show Silicon Valley and you guys didn't start out to build a box. >> No. >> No. >> You settled this off some problems and so what you have is a set of best of breed storage services that are running the cloud, called multi-cloud, meaning on-prem or in the cloud so I want to try to juxtapose that to sort of the traditional storage model or even some of these emerging storage models of some of the very successful companies. So, how do you guys differentiate, help us understand what's different about Datrium from the classical storage model and even some of these emerging storage models? >> I'll kick it off and Sazzala can expand on it. You know, first we're bringing a cloud experience to on-prem, so it's not a storage system that you, like a SAN. We, you know, offer compute as well and a way to make that whole operation simple around you know, standard and emerging standard coordination frameworks like VMware and Red Hat and Docker. It includes these really powerful data services to make life simple so you don't have to add on a lot of different control panes and spots of data storage and so on. By getting that right, it makes multi-cloud coordination a lot easier because the hardest problem getting started in that, aside from, you know, just doing SAS applications to run it and so on, is getting data back and forth. Making it efficient and cost effective to move it. So, you want to expand? >> Yeah, so you know, I think you give examples of like maybe there are some successful companies in the market today. There is old school array market and there's the new school head CI markets. So, the old school array market, I mean, if some people are still comfortable with that model, I think they just because the flash array market has some performance characteristics but still it's, again, going back to that rotary phone landing, it doesn't map your, the lands don't map your business. It's just a very old school way of thinking about it. Those will probably vanish at some point because it makes no sense to have them around. And yes, they do provide higher performance but they're still, you know, it's still not providing you that level of ideal service. From a developer point of view, I can make my application life easier, I can do things like test and dev. Test and dev, simple thing like test and dev requires you to clone your application so they can run test and dev on them. It's a very powerful use case, it's a very common use case for most companies, including ours. So, you can't do any of that stuff with that old school style of array. And the new school style, they are making progress in terms of making that developer life a little bit more easier but they haven't thought deeply about data services. Like they built a nice packaging and like some UI frameworks but ultimately, data needs to be like stable. They didn't, you think about data in a how do you make it compressed, efficient and cost effective and make it so that it is easy to move data around. And you're think about the backup and DR. Because if we look at application, you've run it, you have to back it up and you have to do archiving for it. You have to think up the entire lifecycle of it. Which is kind of what most people are not doing, thinking of the entire lifecycle. They're solving a small piece of the puzzle but not the entire thing. >> I'll give you another example of that. In you know, to the operator of a private cloud, you're thinking about workloads, you're thinking about relationships between VMs you know, how to get them to the right place, copy them at the right rate, secure them in the right way. In a sort of old style, that kind of thinking about say protection, you might have a catalog in a backup software but you have volumes of VMs in a SAN. Those are completely different mindsets, we've merged them. So we have a completely scalable catalog you know and detailed validation, verification information about every scrap of data on the system that we can test everything four times a day for test restores. All that kind of stuff is organically in a single user interface that's VM focused, so you don't have to think about these different mindsets. >> But it's SAS really for data services. >> For data services, yeah. >> I mean is that a fair way to think about this? >> Yeah, I think so because what's better than one click? Zero clicks, so lot of people are aiming for one click. We are aiming for zero clicks. That's actually a harder problem to do. It's actually hard to actually think about how do I automate everything so they have to do nothing? That's kind of where we have really, really tried hard is that, as little clicks as possible. Aim for zero as much as possible. That's our goal, in the internal company engineers are told you must aim for zero clicks, actually a harder problem. >> Right so, when you think about how to then expand that to managing multiple sort of availability zones across multiple clouds there are additional problems. But starting from these capabilities, starting from great indexing of data, great cataloging of relationships between things, everything's workload specific and great data mobility infrastructure with data reduction and encryption and so on. As we forecast where we can go with that, it's profound. You can start to imagine some context for how to deal with information across clouds and how to both run and protect it in a way that's really just never been in the market. >> So I want to talk about that vision but before we do, before we leave sort of the differences let's take two examples. Two very successful companies, Nutanix and Pure, so how are you different from, let's start with Nutanix, for example. >> I think that there's some good things, I think they're moved the industry forward quite a bit. I think they've brought some new ideas to the market, they made it VM-centric, they said no LUNs. They've made quite some improvements, and then they're a successful company, but ultimately I think their focus tends to be mostly on how to make the UI shiny and how to kind of think about the hypervisor, which is kind of where they're going to. They don't hypervise in the world today, we don't want to go invent another hypervisor. >> Mm-hmm. >> There are so many other options and the world is changing a lot. Like you said, Kubernetes is coming, Mesos is coming, so we want to adapt to those newer ways or style of doing it, and we don't want to invest in making or building a new hypervisor, and we're good partners with VMware, so that's one angle to it. If you look at, you know, how... Because if you're going to go to large enterprises, they want to consolidate the workloads. They want large scale, they want exabyte scale, so you meet customers now who have exabyte scale data, they think they're the cloud. They're not thinking of any other cloud, they think they're the cloud, so how do you make them successful? So, you have to think about exabyte scale systems where basically you can operate it as a cloud internally, so we build those kinds of infrastructures and those kinds of tools to make that exabyte scale successful, and we probably are the fastest system on the planet. Right, so that's kind of where we come from is that we not only say that we scale, we actually prove that we scale. It's not just enough to say we have Google style and the scale, so it's actually you have to prove it, so we actually have tests where we can, we actually have run with other people that it actually works as we say it does. So, I think it's important that you have to speak, you have to not only produce a product which is useful from a UI point of view, that's useful, but also it has to actually work at scale, and we make it more resilient. We have a lot of features built in to make it more resilient and at scale, like what does a tier one mean, what is mission critical apps, how do you make sure that we don't lose data, for example. It runs at the highest performance possible at a price which is reasonable. >> Okay, and I guess the other difference is you're a pure SAS model in that you're responsible for (chuckles) the data services, right, and-- >> Yeah, that's right. >> Yeah, we've pulled a lot more into the data services in our cloud approach. >> Mm-hmm. >> And we've separated from the performance elements, so they're these two layers, so it's both self-protecting in a way that's independently provisioned if you want to expand capacity for backup retention, that's a standard thing. If you want to expand performance or workloads you do that independently on stateless hosts. >> Mm-hmm. >> An example of where this pays off is just the resilience of the system. In a standard hyperconverged model a good case is like what's the crater size when a, or the risk, you know, profile when a single component fails. So, if a motherboard fails in a sort of hyperconverged model that's standard, you know, a single layer thing, then all the data on that system has to be rebuilt. That puts enormous pressure on the network, and you know, some of these systems can have 80, 160 terabytes of data on a single node, that's like a crazy week, and if two of them go down then the whole thing stops. In our model the hosts are stateless, if any number of them go down for any reason the data's still safe, separate-- >> Mm-hmm, right. >> You know, in a hyperconverged model you can't really integrate backup well because when primary goes down back up goes down, too, then what? >> Okay, so that's, I think, clear how you differentiate from hyperconverged. Did you have another-- >> Yeah, I have one more point, it's about the data services you mentioned. We have, again, going back to zero-click, we built all our features into the system. For example, you know, there are a lot things like deduplication, compression, image recording, those are like, I mean, they're not like details, but ultimately they do bring the cost down quite a bit, like by 10 X to five X, right, that's a big difference. >> So, those are services that are inherent. >> That are inherent in the system. >> Yeah, okay. >> Either you can have check boxes, you can say one click and have to like check box, all that. I mean, you have to go and click it, but to click it then you must read a manual, you must do the manual, so then what is this right click and what happens to me, why isn't not on by default. >> Yeah. >> So, those are the problems, I think the differences between them, I think Nutanix and us, is that we kind of made it all, like, be seamless and all built in. >> Yeah, and when we, you know, if you have to, if it's an option that you ask for later that means it probably has some impact on the system that you have to decide about. In our case you can't turn it off, it's always there and we do all our benchmarking with all that stuff turned on, including software-based encryption. It's just a standard thing, and we still are like the fastest thing on the planet. >> Yeah. >> And let's talk about Pure a little bit, because they don't have-- >> Yeah. >> The networking component and then the compute component, it's, you know, flash array, so how would you position relative to Pure? >> Okay, so again, going back to that SAN array was built before the internet, it is just the same. It is just the same, it's just to deport SSDs behind those controllers in central hard drives. It is likely faster, but ultimately the bottleneck is those controllers, those two controllers they have, that's what it is. No matter how many, how awesome your... You put envy in drive, it doesn't matter. It's going to be as much as speed as your network pipe is going to be, and as much faster as your controllers are going to be. Ultimately, the latency, you cannot, like, basically it's over the wire. It will always be slower than what kind of having... >> So, the big thing here is-- >> Yeah, and it's not a private cloud. You know, that kind of model is for someone who's assembling a lot of parts to create a cloud. >> Yep. >> You know, we're integrating these parts, so it's a much simpler deployment of a cloud experience and you're not integrating all these double parts. >> I'm getting a cloud, I'm buying a cloud experience from you guys with the sets of services, let's talk about those services. So, mobility, discovery, analytics. >> Yeah. >> Governance, talked about the... >> Encryption, yeah. >> The other data reduction services, encryption... >> Right, the cataloging and indexing of the data so you can, you know, restart from old data. >> And I can run this on any cloud, including my on-prem cloud, correct? >> Well, that's the direction, we have some parts now and you know, you... (laughs) Sorry, Sazzala can talk about where we're going. >> So, architecturally it's designed to run on... >> Yeah, because I think fundamentally we chose that design philosophy that it has to be two-layer, right, that's a fundamental decision we made long ago, and it's a detail but it's a fundamental decision we made long ago that because if you go to Amazon it is two-layer. You cannot make one-layer work there. Like, you know, compute and storage has to be split to through that part, but they must work together in a nice way, and also S3's very weird. I don't know if you know about S3. S3's very weird behavior, it does not like random writes, it has to be all sequential writes, and that also happens to be how we built it. The way our system works is that we only do sequential writes to any device. It works beautifully in S3 with EC2, so just to step back a little bit, taking big picture, like so, we wanted a cloud-like experience for your on-prem, right. That's kind of what we built, we built a Datrium cloud on-prem, and then we, as of beginning of this year, we started offering services, multi-cloud services and started with Amazon first. The first service we enabled was backup and archiving, that's our first service. A lot of people like it and you have some stats from that, like from last quarter, like how people like it, because people like it because you don't have to have another on-prem infrastructure. You can just consume it as a SAS model, it's very convenient and it's as easy as an iPhone backup. I don't know if you use iPhone backup, it's like a click. >> Yeah. >> Okay, unfortunately it's a click. We have tried to avoid the clicks, but we can't really avoid it all the way, so you have to click it so that you can then start doing backups into the cloud and then can retrieve them in a very simple single pane of glass. It's very cost-effective because we do dedupe on the cloud and we dedupe over the wire, but dedupe over the wire, by the way, it's actually a very unique feature. Not many companies have it, like Nutanix and Pure you mentioned, they don't have it, so you know, so that's one of the things where I think we differentiate because data has gravity, right, so to move it somewhere you need an antigravity device. So, you need something to actually move this data faster, how to defeat speed of light. You have a pipe, you have a VAN network, so how do you defeat the speed of light, so what we have built is a feature, it's called Global Dedupe, is that you can move data in a much more efficient way across the cloud. So, now you may question, "Hey, I'm moving my data "from here to another place," obviously we have these cloud services... The question you may ask is, "Okay, how do "I know I get guaranteed security? "How do I know that it's going to be correct, "that I moved all these places," right? So, we do multiple things, one is that we have built in encryption. It's going to be globally encrypted, it's like an encryption across the whole thing, we call it blanket encryption. >> Mm-hmm. >> The other one is that we have blockchain-like features that are built into the systems so that if you move an object, like an app or whatever, you're going to move from one place to the other, it's built in kind of blockchain features where you cannot move something to another place and get it wrong. It's fundamentally going to be correct for you, so those are the kind of things we thought about, like never to worry about it again. It's going to guarantee the data's correct and it's moved in the most efficient way, so that's our first landing thing we've done is that we wanted to build an experience which is like on-prem cloud, I mean, onto also the cloud. Right, what other experience people are... People like simplicity, people want the SAS-like experience. They don't want to manage it, they don't want to think about it. They just consume the services, so the first service we have in Amazon is what we chose, is backup and DR. The next thing we are going to be shipping soon, announcing soon, and we'll have a demo in the VM World is something we call Cloud Shift. It's an app mobility orchestration framework where you can just click and move your workload to somewhere else, to Amazon, and you can run, so it's not just a backup thing, it'll also become you can run your workloads in Amazon and get a consistent experience from your on-prem and the cloud. So, one of the challenges is that if you move to another place, is it different tool sets, I have to change my whole lifestyle, no. >> Mm-hmm. >> We want to provide that seamless operational consistency that-- >> That's the key, right. >> That's the key. >> Whether it's on-prem or it's in the cloud it operates the same way. I'm accessing those sets of data services and-- >> Yeah. >> I don't really care where it is, is that-- >> That's right. >> The vision? >> Yeah, that's right. >> Exactly. >> That's right, so if it turns out that there's a cost advantage in moving from, you know, A to B, we make it super easy and the control panel from our standpoint is consistent, and it's... So, all of our control orientation moving forward will literally be SAS. It'll be running on a cloud even if you're managing on-prem stuff, because that way, assuming you're multi-cloud, you need a control plane to be dealing with the cloud stuff anyway, and it just sort of neutralizes the experience so that in a multi-cloud way it's always consistent, it's always simple, and the nice thing about sort of true SAS is you don't have to upgrade software parts. We do that for you in the background. >> Mm-hmm. >> So, it's just always up to date. >> So, I was saying before, Datrium takes care of everything. >> Yeah. >> And it's the true cloud experience. >> Just consume it. >> Right. >> Okay, I want to talk about, end on the two other areas: the operational impact and the developer impact. So, when you think of operations, we've talked about LUNs before. I've always said if you're in the business of managing LUNs you really want to think about, you know, updating your skill sets (chuckles) because that capability is not really going to be viewed as valuable. It isn't today and certainly in the future, so the operational impact, the degrees of automation that IT operations are driving is going through the roof. Cloud-like, we've talked about that, and the other is developer productivity. People are using containers, you know, Kubernetes... >> Yeah. >> And new styles of writing software-- >> Yeah. >> As everybody becomes a software company. So, can you talk about those two aspects? >> And ultimately there's going to be serverless. >> Right. >> Right. >> As we think about if you take a leap, in another 10 years I think serverless will probably be one of the important ways, because why do you even care how it runs. You just write some software and like, you know, we can run it. It should be that way, but I think we're not there completely yet, I think, so we want to adopt a methodology where we provide the framework where we don't dictate what apps, how we write your apps. That's, I think, very powerful because that's actually evolving faster as we move forward, because serverless is a new app framework. >> Mm-hmm. >> You cannot anticipate this, right, you cannot anticipate on building everything but what you can anticipate is services we can provide for the developers, which is, you know, no matter... Because it's the granularity of it. We can map their application granularity into our system, we have that fine level granularity, so that kind of was what you want to provide as a primitive. LUNs don't have that primitive, right, so we provide that level of primitive that whatever apps you have will have that level of primitives to global data services for you, and once you have the data services like that we'll guarantee that it's highest performance, which is what app developers want. Like, I get the highest performance, I can easily... And then we will also provide a way to clone those things easily, those apps, because sometimes you're at an app, you want to test it, too. Like a hundred times, you want to just... If you can copy all the data a hundred times or you can just, say, you know what, clone this thing a hundred times in a millisecond and run my tests fast and then okay, I'm done with my test, it looks good, I'll deploy it. >> Mm-hmm. >> That's kind of what developers really want is that they are able to run, write faster, develop faster, because tests on dev cycles are important. A lot of people think that hey, I can put my test on dev in some old box over there, but that's really bad because from business perspective testing does, engineering's expensive. Their test cycles have to be fast so that they can e-trade faster and kind of produce faster. The harder you make it to test your system, this is like, this is what happens in our company today. The harder it is to test your logic and your code, the longer it takes to, like, do e-trade. >> In some ways test and dev is becoming more strategic than the production system, I mean, really-- >> Well, it-- >> (chuckles) Because of speed. >> Yeah, I mean, it can take immediate advantage of some of these improvements in, you know, stacks. Like if, you know, if Kubernetes is better just, you know, go quickly to it. The things that these new stacks assume, though, is that it's, you know, a server-based data, so on-site you can accelerate mobility significantly by, you know, when people ask to copy things from here to there, clone it, you know, start another instance, we can help them do that by just, you know, faking it out with metadata-- >> Mm-hmm. >> And deduplication, and so we tried this with Jenkins just in our own development, moved to that model and you know, everything was suddenly twice as fast in development. To do a build all of a sudden you didn't have to copy data here to there. You were cloning, you know, with metadata. The way to do it across clouds is, again, kind of dedupe focused. If you have to actually move the data it takes a long time and it's expensive, especially for egress costs. If you can just, you know, validate which elements of the data are new versus old on either site you can move a lot less. >> Hmm... >> It might be, you know, six times less, and then the costs go down, the speed goes up, you defeat data gravity. >> Yeah, so-- >> Excellent, all right, we have to leave it there. >> Okay. >> Out of time, thanks so much, you guys, for helping us better understand, you know, Datrium. Congratulations on your success so far and all the great innovations that you've achieved. >> Okay, thank you. >> Okay, thanks for watching, everybody, this special CUBE conversation. This is Dave Vellante, see you next time. (techy music)

Published Date : Jul 26 2018

SUMMARY :

so gentlemen, great to see you again, thanks for coming on. and I remember, you know, trying to get out of you and to do that, you know, ultimately it becomes so if you think back to the original sort of to you know, grow my business? about Datrium is you know, the simplicity, like open. Specifically in terms of the problems So if you look at the V block, backups and some other you know, Right, in the cloud you don't think of you get backup and data protection, you know, with great performance that you know is off the charts. you think about multi-cloud behavior. and everybody's watching you when you're restoring. the show Silicon Valley and you guys what you have is a set of best of breed to make life simple so you don't have to Yeah, so you know, I think you give so you don't have to think about these different mindsets. engineers are told you must aim for Right so, when you think about how to and Pure, so how are you different from, and how to kind of think about the hypervisor, and the scale, so it's actually you have to prove it, the data services in our cloud approach. if you want to expand capacity for backup and you know, some of these systems can have 80, Did you have another-- the data services you mentioned. but to click it then you must read a manual, and us, is that we kind of made it all, on the system that you have to decide about. Ultimately, the latency, you cannot, Yeah, and it's not a private cloud. and you're not integrating all these double parts. from you guys with the sets of services, so you can, you know, restart from old data. some parts now and you know, you... (laughs) and that also happens to be how we built it. so to move it somewhere you need an antigravity device. So, one of the challenges is that if you move the cloud it operates the same way. you know, A to B, we make it super easy you know, updating your skill sets So, can you talk about those two aspects? and like, you know, we can run it. for the developers, which is, you know, no matter... The harder you make it to test your system, from here to there, clone it, you know, moved to that model and you know, It might be, you know, six times less, for helping us better understand, you know, Datrium. This is Dave Vellante, see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DatriumORGANIZATION

0.99+

Brian BilesPERSON

0.99+

BrianPERSON

0.99+

2013DATE

0.99+

80QUANTITY

0.99+

2015DATE

0.99+

Sazzala ReddyPERSON

0.99+

July 2018DATE

0.99+

AmazonORGANIZATION

0.99+

NutanixORGANIZATION

0.99+

first serviceQUANTITY

0.99+

DavePERSON

0.99+

2000DATE

0.99+

TwoQUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

one-layerQUANTITY

0.99+

one clickQUANTITY

0.99+

GoogleORGANIZATION

0.99+

twoQUANTITY

0.99+

Palo AltoLOCATION

0.99+

NetflixORGANIZATION

0.99+

bothQUANTITY

0.99+

zero clicksQUANTITY

0.99+

Zero clicksQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

CIAORGANIZATION

0.99+

two layersQUANTITY

0.99+

two aspectsQUANTITY

0.99+

two-layerQUANTITY

0.99+

S3TITLE

0.99+

last quarterDATE

0.99+

SpotifyORGANIZATION

0.99+

five XQUANTITY

0.99+

10 XQUANTITY

0.99+

zeroQUANTITY

0.99+

UnixTITLE

0.99+

oneQUANTITY

0.99+

two co-foundersQUANTITY

0.99+

first peopleQUANTITY

0.99+

two layerQUANTITY

0.98+

Map ReduceTITLE

0.98+

10 yearsQUANTITY

0.98+

EC2TITLE

0.98+

two separate worldsQUANTITY

0.98+

six timesQUANTITY

0.98+

firstQUANTITY

0.98+

PureORGANIZATION

0.98+

OneQUANTITY

0.98+

five co-foundersQUANTITY

0.98+

todayDATE

0.98+

Cloud ShiftTITLE

0.98+

two controllersQUANTITY

0.97+

one angleQUANTITY

0.97+

single paneQUANTITY

0.97+

Silicon ValleyTITLE

0.97+

twiceQUANTITY

0.97+

three yearsQUANTITY

0.96+

VMwareORGANIZATION

0.96+

two other areasQUANTITY

0.96+

Eric Seidman, Veritas | CUBEConversation, July 2018


 

(peppy music) >> Welcome back, everybody, Jeff Frick here with theCUBE. We're at our Palo Alto studios for a Cube Conversation. It's a great way to get a little closer to people when we're not at the hustle and bustle of a big show. Although this guest just came from a big show. He's Eric Seidman, director of solutions marketing for Veritas Technology, just back from Microsoft. Welcome, Eric. >> Thank you very much. >> So how was the desert? >> It was very hot. >> (laughs) It was very hot. >> It was very hot. >> So big Microsoft partner show, Inspire. What was kind of the vibe? Things are obviously going really well for Microsoft. We read about they're gaining market share on the cloud space against Amazon. So you know, Satya really seems to have done a great job moving that company. >> Indeed. So there was a lot of focus on Azure at the show. But I thought it was a great event for their partners that are attending there, not only to get more immersed in the capabilities of Microsoft, but also to meet with companies like us, like Veritas, to be able to learn more about our solutions, how they complement what Microsoft is doing, particularly in the public cloud space, and help those partners generate more revenue and help solve the customers' business problems as well. >> It's interesting. You guys are big in appliances. You've got a couple appliances, and we'll talk about specifically the Flex appliances. But more generically, some people might have a question. There's all this rise of public cloud. They're getting more and more percentage of the workloads. How does an appliance fit in a public cloud world? >> Yeah, so that's a great question. We got that a little bit at the Inspire show as well. So first off you kind of have to consider that everything that we do as a company comes out as software first, right? So we're software-defining everything, basically. But there's a lot of consideration that we look at what our customers' requirements are. And so there's many customers that prefer to consume in that agility model that software-defined allows them to do in terms of being able to very quickly scale, Add new features and capabilities on the hardware of their choice. You know, software-defined, particularly storage gives many customers that cloud agility that they're looking for. But there's other sets of companies that are also looking for that same software features and capabilities but prefer more of an appliance consumption model. Maybe they're not ready for that bifurcated type of approach to software and hardware, or they're looking for faster implementation of a fully supported solution. So we provide our customers kind of the best of both worlds. They can consume our solutions, our data protection and storage products as software or as appliances based on the requirements of the company. >> And what's kind of the special, for people that aren't as familiar with appliances, we always hear about industry standard hardware and you know, the hardware's going to zero. What's the advantages that you can accomplish with an appliance that you couldn't just use, you know, with regular kind of off-the-shelf hardware? >> Yeah, well, certainly we take care of that integration and task, and it's a fully supported configuration. So they get all of the benefits of that. But we also, I'd say our unique capability from an appliance standpoint is that it truly is software defined and remains software defined. So as an example of a customer chooses to deploy our Access appliance, which is a long-term retention appliance which complements our net backup, our data protection solutions. Even though they're getting it as an appliance, that software license isn't tied or locked to that appliance. It's still licensed separately. So as an example, if we come out with a new type of storage appliance, they're free to move that license to it. Or if they choose to even move to a third-party hardware, the newer, greener, cheaper pasture storage server, they can transfer that license to that. So while they're consuming it as an appliance for all the benefits around a fully supported solution for them, We still provide that software-defined flexibility or capability, so that's one of the unique aspects of that. >> And then really, you deliver kind of this mixed benefit to the client as well, so they've got the benefits of having it locally. You can put fast storage in there and have local storage as well as manage the pushing out of the other data that maybe is more appropriate in the public cloud or whatever. >> Yeah, so if we take kind of a look at what we were speaking to our Microsoft partners at Inspire, it was around our appliances. And like you were saying, well, why are you talking about appliances? You know, a big push to Azure and all. So we were able to show them with our Flex appliance, which is a very unique container-ized solution for multiple net backup solutions, being able to scale those out in containers versus physical storage devices or servers and also turn on or off cloud tiering capabilities as a service as well. So customers may have a requirement for multiple net backup domains, and in the future they want a tier to Azure or another public cloud, they can simply turn on that cloud tiering service in this Flex appliance. And then our Access appliance that I mentioned that complements our net backup solutions for on-premise long-term retention can also tier to Azure or public clouds as well. And those things both work together where we have very high performance retention in the Flex appliance for the best RPO RTO of the data protection services there. And that can tier to Access for additional on-prem storage at a lower cost per terabyte. And then either or through both, tier to the cloud, depending on the type of data. So a customer may have a requirement where they have to keep data on site, maybe it's for compliance or governance reasons. And then other domains may be okay to move that data longer-term into public cloud. So the appliances provide that type of flexibility that enables the customer to put the data where it meets the requirements, either for cost performance or for compliance requirements. >> So I'd like to kind of go up a notch. You know, you're out with customers all the time and listening to their needs and requirements. We hear all the time the explosion of data, unstructured data, regular data. How are you seeing that really manifest itself in customers that have specific problems today that are sitting at the table with you guys? I mean, what kind of stories are they telling you of kind of the rise of the data quantity that they're having to deal with? I don't know if you have some interesting anecdotes. >> Yeah, well, certainly it's not getting deleted. So more and more of it is being retained for various reasons. Some of it's for data protection reasons, ensuring that they're able to meet, like, litigation requirements and things like that. So there's a lot of long-term retention for those type of requirements. But more and more we're also seeing the growth of this type of data just for the use of mining it and getting more value out of it. They're not deleting it. They're finding that there's ways to monetize that data in different means. So we see that, and that's one of the reasons why our Access appliance has been very well accepted in the market, because it can retain vast amounts of data on-prem at a low price point and be utilized for either backup data protection aspects or the archival in these cases as well. >> So one of the concepts we talk a lot about on theCUBE is about data as a balance sheet asset. It never really was before, right? It was a liability, because you had to buy a bunch of gear to store it. And you couldn't keep it all, and it was too expensive, and you threw stuff away. Clearly the pendulum has swung, and now data's very valuable. Some argue it's the core asset of the business. So I'm just curious if you've seen a change in the investment profile, the ROI metrics, some of the ways that people are making purchase decisions in a world where they want to keep everything, where they recognize that data is an asset. And now it's really, it's not a cost to hold this stuff that's expensive to hold, but it's really now more of an investment to drive an asset that's hopefully going to drive cost savings to get into new businesses or opportunities for revenue. How is that manifesting itself in some of the decision processes that the buyers are going through? >> Yeah, I mean, often we hear a lot of those similar problems within our customers that we talk to. And I think the biggest challenge is, as you were talking about, the cost aspect. They're really trying to figure out, well, how do we move from a cost center or burden for storing all of this data to a value that delivers value to the company. >> Right, a business benefit. >> A business benefit from a cost nature. And we help the customers achieve that in many different ways. We have an object storage offering that has an integrated cognitive engine that can provide very, very deep search capabilities as well as integration into external ML and AI facilities to extract more value from the data. We have some cool products like info map that will allow a company to really see where all those important assets are stored and what type of data that they have and where it's located, you know, basically data center wide, company wide, and even what's in their cloud. And that's from info map. And so they can see it. Like, they may have important data that needs to be treated with GDPR compliance. How do you know where that's located, right? And how do I make sure I'm meeting those type of requirements? So those are some of the kind of tools that we're helping our customers move from that cost center to more of a value proposition where they're delivering business benefits and revenue to the company. >> Right, right. I'm just curious on the GDPR thing. We had a little thing here when it was GDPR day just a couple Fridays ago. >> I heard about that, yeah. >> How are those conversations? Was it a Y2K kind of a moment in the months leading up to it? Was it not that big of a deal? Did people get out in front of it? It seems like the regs passed a long time ago, but the due dates were delayed for quite a bit. And then oh, my goodness, it's GDPR day. >> Yeah, well, I was in the industry back in the Y2K days. I don't think it had that, it didn't have that same type of feeling of impending doom or something, like we don't know what to do. >> Right, until the first couple of clients drop it. >> Yeah, well, maybe, but I think it was more about, well, this is predictable. We've been working on GDPR, being able to provide the compliance to that for a couple years before that regulation came out, you know, working with our customers in Europe and stuff. So we've built a lot of infrastructure and software and capabilities that helped customers achieve that, you know, before the requirements hit. So I guess from our standpoint at Veritas, while it looked pretty menacing, you know, maybe from the outside, but we had been working with our customers all along so that they're already in that mode where they can comply with those new requirements. >> Right, but it just seems so counter to what computers do well. Computers write very well, and they copy very well. You know, so much effort in terms of your product and stuff is protecting that data, replicating the data, duplicating the data, making sure. And now with the GDPR requirement, I want you to take me out of your system. Like, where exactly is that record? And how many versions of that record are stored where? It's kind of that funny movie they made about the cloud. It's in the cloud; it's everywhere. It's nowhere at the same time. So was that kind of a unique challenge, Or you guys have been on top of that for a long time? >> Well, we've been on top of that, right? So that's where I think we brought this capability to our customers, so they were like, we're okay now. Take a deep breath. We're okay, because we have tools that can classify information, and we've had those for a very, very long time. So the customers can already know what their PII data is, where it's located, and then automatically tree it in different manners, like provide the right type of security associated with that PII data, store it in the right locations. All of those type of aspects, we've already automated that process through any of our various capabilities, some of them within our storage product, like I've mentioned, the cognization of our object storage and external software that we bring to the party, and of course, the visualization of it so that you can see it all through the info map. >> So I'm curious, we're halfway through 2018, which I still can't believe we're halfway through 2018. So as you look forward, what are some of the priorities for the balance of the year? What are some of the priorities going forward? >> Well, for us it's still meeting, helping those customers meet their GDPR requirements and ensuring that they're on top of those. Being able to visualize where their data is, is very, very important. And then like we were talking about just a couple of minutes ago, extracting the value from that data. So you'll see some new technologies coming from us later on this year that I'm really excited about. I'm looking forward to talking more about those with you in the future, and our customers that are going to continue that value proposition. We'll continue to help them store vast amounts of their growth of unstructured data, doing it economically, doing it in new ways, and again extracting more value from those data sets as well. >> Yeah, I love, you used "vast." You know, the rate and the amount and the quantity and the value is just going up, up, up. >> It is. >> So you guys are in a pretty good space. >> We think so, yeah, very good. >> All right, Eric, well, thanks for taking a few minutes. And welcome back from Vegas. I'm glad it's not 115 here for you. >> Yeah, so am I, thank you very much. >> All right, he's Eric Seidman and I'm Jeff Frick. You're watching theCUBE. We're at Palo Alto studios having a Cube Conversation. Thanks for watching and I'll see you next time. (upbeat music)

Published Date : Jul 19 2018

SUMMARY :

for Veritas Technology, just back from Microsoft. So you know, Satya really seems to have done and help solve the customers' business problems as well. They're getting more and more percentage of the workloads. We got that a little bit at the Inspire show as well. What's the advantages that you can accomplish or capability, so that's one of the unique aspects of that. in the public cloud or whatever. that enables the customer to put the data that are sitting at the table with you guys? ensuring that they're able to meet, like, So one of the concepts we talk a lot about on theCUBE to a value that delivers value to the company. from that cost center to more of a value proposition I'm just curious on the GDPR thing. in the months leading up to it? it didn't have that same type of feeling and software and capabilities that helped is protecting that data, replicating the data, and of course, the visualization of it for the balance of the year? and our customers that are going to continue and the quantity and the value And welcome back from Vegas. Thanks for watching and I'll see you next time.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Jeff FrickPERSON

0.99+

EuropeLOCATION

0.99+

Eric SeidmanPERSON

0.99+

EricPERSON

0.99+

MicrosoftORGANIZATION

0.99+

VeritasORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

July 2018DATE

0.99+

2018DATE

0.99+

VegasLOCATION

0.99+

Veritas TechnologyORGANIZATION

0.99+

Palo AltoLOCATION

0.99+

SatyaPERSON

0.99+

oneQUANTITY

0.99+

bothQUANTITY

0.99+

GDPRTITLE

0.99+

todayDATE

0.98+

firstQUANTITY

0.98+

FlexTITLE

0.95+

both worldsQUANTITY

0.95+

InspireORGANIZATION

0.95+

Y2KEVENT

0.89+

AzureTITLE

0.89+

zeroQUANTITY

0.89+

couple of minutes agoDATE

0.87+

GDPREVENT

0.87+

first coupleQUANTITY

0.84+

115QUANTITY

0.84+

info mapORGANIZATION

0.83+

coupleDATE

0.82+

GDPR dayEVENT

0.81+

CubeCOMMERCIAL_ITEM

0.81+

VeritasPERSON

0.78+

Fridays agoDATE

0.78+

this yearDATE

0.73+

coupleQUANTITY

0.64+

theCUBEORGANIZATION

0.64+

couple yearsQUANTITY

0.63+

Cube ConversationTITLE

0.56+

AccessTITLE

0.56+

laterDATE

0.52+

terabyteQUANTITY

0.51+

ConversationEVENT

0.48+

CUBEConversationEVENT

0.34+

Susie Wee, Cisco | DevNet Create 2018


 

>> Announcer: Live from the Computer History Museum in Mountain View, California, it's theCUBE covering DevNet Create 2018. Brought to you by Cisco. >> Hello everyone, and welcome back to theCUBE's live coverage of Cisco's DevNet Create here in Mountain View in the heart of Silicon Valley. I'm John Furrier, my co-cost, Lauren Cooney, our next guest is Susie Wee, is vice president and CTO of Cisco DevNet. This is her event, DevNet is a Cisco's developer team, conference, community, DevNet created a cloud native, much more dev ops oriented. Our second year covering it, it's only a year and a half old. The creator with her team, Susie, great to have you back. >> Great, it's great to be back. >> What a success, again. You guys are learning, we heard from the keynote that you made some changes, heard some feedback, you added more cooler elements. But this is about technology enablement tools, education, and then fun, and having people exchange information. How's it going? What's the upkeep? >> It's going great. So we're really excited to have our second DevNet Create, and what happened was last year, what we've always tried to do with DevNet overall is to make sure that we had hands-on material because people want to code, people want to learn about the newest technologies. We also made sure that the content of the first DevNet Create was from Cisco, but also from the leading players in the community. And so we got feedback from last year on how to improve it for this year, and basically they just wanted more hands on, and so we've actually expanded from having three parallel workshops to eight parallel workshops, where just folks can get hands-on and code. We continued to have both Cisco content as well as community content from leaders in the field. When we got feedback last year, what happened was we were collecting the feedback. The people who responded, we asked a few questions, and we said: Did you feel that this was useful for you? Did you feel that you were learning about modern tools and technologies that would help you in your career? Would you come back again? The strangest thing that happened is like 100% of people said that they were learning about topics that are modern and they need for their careers. And 100% of them said they would come back again. And I'm like, is it still 100%? 'Cause one person says no, it's not 100%. And so to everyone that responded, they wanted to come back, so we just gave them more of what they wanted. >> It's great, it feels great. You've got a good vibe, but I think there's some real interesting things. We talked last time about how the cloud native world connecting with the commercial liability of Cisco. Cisco is not a small company, they invented routing as we know it, they connect the Internet, and you had that kind of ops networking culture with this new programmable Internet kind of coming together, so there's some notable news you guys had here, why I was impressed by. One is these business exchange, or business ecosystem. Talk about some of the things that you guys are doing now as a result of these two worlds coming together. It's not just speeds that feed tech goodness, just like business value. Money making! >> (laughing) Just to go a little bit more into that, what happens is you kind of have your world of infrastructure, and you have developers who are writing cloud apps, it's so easy to deploy, and really get a lot of value out there. But then you have the world of real companies, real data, real existing infrastructure, enterprise data, smart cities that you want to bring online and everything there, and there's a new type of app that's come to play, and there's a new type of app that of course, needs to work in the cloud, but also needs to couple in with the real world and physical things, and enterprise data. And so that brings rise to a whole new set of applications and new ways to do business. So in terms of what we're doing with that, as someone writes this kind of an app, it's not easy, just like download it onto my phone. It's actually, how do I couple that with the location based infrastructure? How do I couple that with enterprise and hybrid cloud data? And so what we have now is a business exchange, an ecosystem exchange where we can bring those applications up, where if someone is using Cisco infrastructure, we have partners around the world who install and manage solutions that they put for their customers. And we want to show them these are the applications that work together with those products. These are the solutions that you can deliver, so we want to take the applications that our developers are writing and make it available to our partners, to let them use our go to market that we have around the world. >> We get the technical developer ecosystem, and you have the business ecosystem, so that's an indicator that there's some movement and growth. Where is it coming from? Where are you seeing the highlights here? >> Yeah, so in terms of the movement and growth, what happens is we're concentrated on technical enablement for the first few years of DevNet. But clearly, the reason to do the technical enablement is to do that business pull through. Where do we see the growth? So, what happens is everyone in the world wants to digitize, right? So people want to take their manufacturing lines, they want to digitize them. People who have cities want to offer newer experiences that are still kind of leveraging the old, but then providing a top-notch experience for that. So we have people that are in cities who want to use our infrastructure, but also have innovative applications to give to their folks. We have partners around the world who want to not only provide infrastructure, but to provide interesting solutions and experiences. So it's really interesting to see the hunger and the desire now for people to use applications in all different ways, and we're trying to really package it up for them. >> So you're actually stitching these applications together and then packaging them up for consumption for the solution? Is that what you're looking at? >> Yeah, because everybody's buying. Everybody needs a network, everybody has something that exists, but they want to go above it. That boundary between applications and infrastructure is kind of blurring, right? And what an application can do when it's really coupled in to an infrastructure with APIs is completely new, and they want to play, they want to innovate. They don't want to just do the same old thing, and they want to kind of unleash the power, get the value from all of the application development that's going on. >> I think that's great. One of the things I saw from the keynote was the numbers in terms of your exponential growth over the past four years and also the number of folks who continuously visit the site. I think that's awesome. Can you kind of give folks that are looking to build communities any tips or tricks? >> Yeah, and actually, Lauren, you were with us early on. You saw when I was begging for Cisco to have a developer community, and so we didn't have any members at that time. But yeah, we've grown to 480,000, actually 485,000 registered developers. We have 60,000 active monthly users. >> Lauren: That's great. >> So they are really doing stuff. But yeah, in terms of what it takes to grow that community, I think really the key is that my incentive, my goals, my mission, which I shared, is that we want to make developers successful. We want to make our partners in that broader ecosystem and our customers successful. It's not actually my job to sell products. Obviously any solution that's written around APIs for a product will sell products, but my job is to make the ecosystem successful. So I think the key is just constantly keeping their best interest at heart, and having a model where obviously it will pull through the right business for Cisco. >> You've got great self-awareness, and I think that's important to understand what they're trying to do, but also you bring a lot to the table. Cisco has massive presence and enterprises in businesses, whether it's service providers, down to the small medium enterprise to large enterprises. As you look across Cisco, you bring the goods to the party, so to speak. How do you balance that, and what's your approach? So you're taking more of the programmable net ops, which I love, by the way, we talked about that in Barcelona at Cisco Live. You can bring a lot to the table, but you don't want to firehose the developers with all this Cisco stuff. How are you blending that together? What's your approach? >> This is a great point. So what we have to do is we have to understand who our audience is, and we need to bring the right material and speak the language for that audience. And to give you an example, is that we've had you at DevNet Create, we've had you at the DevNet Zones at Cisco Live. When we go to Cisco Live and we have our developer conferences, that is the group in the audience that knows Cisco. They're getting certified, they know how to deploy infrastructure, it's a tremendous community. We have millions of people around the world who basically run, deploy, manage these solutions. >> John: Over years of experience, too. >> Oh, decades of experience, yes, and certification, mastery, expertise. >> They're the network nerds. >> They are the network nerds! (laughs) >> Moving packets around, but now it's changed. >> And the way that we talk to them is different, because what we present to them is how can you automate your infrastructure? How can you scale and use the newest tools? How can you get observability and insights from that infrastructure itself? And then, here's the software tools that you need to use, and here's the APIs you need to know about. Let us understand your problems, and let's work on this together. Now, the types of platforms that we expose and the APIs will be for networking, it'll be for security, it will be for compute, it'll be in many of these areas. Then we come over to DevNet Create, and what we had to do was create a separate venue to hit app developers, cloud native developers, they're not going to Cisco Live. They're actually going to developer conferences, they're in the Bay Area, they're all around the world. They don't think of Cisco or even of infrastructure in what they do, necessarily. >> It's a different culture. >> It's a different culture. And we actually had to re-jigger our vocabulary, we had to re-jigger what we present to them, because when they think of IOS, they don't think of a network operating system, Cisco's iOS operating system, they think of a mobile operating system. So we've actually had to even retrain ourselves to show this is the value that we provide to application developers, here's the platforms and the APIs that matter to you. Here's the right level of abstraction of what would be relevant to an app developer, and really speak to them. And DevNet Create is a separate venue created for that reason. >> And timing is everything, as we know. The wind's at your back because you've got Kubernetes, the container madness, the standardization of contains, which is not new, the Google guy was on earlier talking about containers. You've got micro services, you've got Istio, which is where you're partnering with Google, so this is a new, real emerging tech area that's a nice glue layer between the cultures. How are you handling that? Do you agree? >> Oh my goodness. >> What's your focus on? >> Yes, it's so amazing. So the whole world's in containers and micro services, is shifting how applications are developed. We actually used it within our own system, where we wanted to use the newest technologies, we saw the benefits of working in container and microservices based architecture, to not write monolithic apps but to really be able to compose and reuse services. So we had to go through that change, but what we saw is that when you're dealing with enterprise data, confidential data, customer data, and then public cloud data and everything there, there's a lot of thinking about how to write a cloud app that is a hybrid cloud app that uses OnPrim and public cloud and the best of both worlds. And the world of containers is interesting because suddenly it's the performance of your application, it depends even more on the network. Getting security of how your containers are built up, how they're connected, how they're spinning up in different places, you need that consistency. So having the whole tool of how do you now deploy containers on OnPrim resources as well as public cloud based resources is tricky, and you need to build in that security into the infrastructure itself, and then provide the right abstraction for the developers with tools like Istio. So we're partnered up with Google. It's been a fantastic collaboration where we start with Google's leadership in just cloud native development and what they have to do to scale, and then take together the problems and the opportunities of real enterprises, of real cities, and things there. And as Allen said this morning, it's complicated. It's not that easy. There's a whole new set of problems that we need to deal with, and this partnership is amazing at putting that together. >> Makes the network more important. >> Makes the network more important, yes. >> Awesome, so now talk about what you're doing for incentives. Obviously, you've got a great posture to the marketplace, love how you're doing it, you're bringing two worlds together, bringing a lot to the table, but now you've got to keep people motivated and keep them incentives. Couple things you announced on stage, DevNet Solutions Plus, which is much more curated set of approved rockstar developers or apps that can get on a price list. That's like a lottery ticket. It's like the golden ticket for a developer. There's real value there, right? You can't invite everybody, but you got to do some QAing, but talk about some of these incentive programs you have. >> Absolutely. So what happens is once again, a company like Cisco has an entire community and ecosystem of people and places of infrastructure around the world, and they're looking to differentiate, they're looking to have interesting offerings as well. They're very relevant, because an app developer today needs to figure out how can they make money, how can they take all of everything they've invested in software and bring it to a business value. And so what we're doing is actually coupling that app developer with the entire Cisco channel and the Cisco partners that are out there, and then letting their applications come forward. So when you get something onto... The way that it works is that Cisco has its price list, partners around the world can create solutions that they deliver with those products. But in addition to Cisco's products, what we can do is put on a software and ISV's products onto there, and we're adding it on to the Cisco price list. It's a whole new type of app store. (laughs) But it's another way to go to market to get into these places. >> You're seeing some early returns in terms of the types of ISVs that are coming into the tables, or pattern to the match, or see more network-centric? Who are some of the kinds of developers? What's the make up look like? >> Yeah, so it's really a combination. So what happens is there's the set of applications that are built on infrastructure, surprisingly. So it builds on a collaboration, or a unified communication infrastructure, things that are built on a UCS, like a compute infrastructure. Things that need the network in a mission critical way. So like trading applications, right? You need that network to work, the performance of the application needs to be coupled, so then people tend to buy a kit of here's the software, here's the hardware that makes it all work, I'm buying infrastructure, I want to buy these together. And so it's really kind of putting that bundle of value together and then letting that sell. And I talked to our partners around the world, it's an amazing ecosystem. And when they can actually connect to the world of software developers and this ecosystem in a way that it helps them differentiate their business, it helps bring the app developer money and a business opportunity. It's a whole new level of scale. It's incredible. >> You'll be pushing video apps on there, too. >> Susie: Absolutely. >> CUBE videos. >> CUBE videos, there we go. (laughing) Absolutely. >> Interesting times. Awesome. Anything you want to add? >> Yeah, definitely. One of the things I was wondering about is that with this whole app ecosystem and the partners and the things along those lines, what are the apps that you're seeing that you actually never expected to see? >> Well, some are ones that we actually did expect, or we hoped for them, but the fact that they're coming through is another case. There's a set of applications that are built, for example, around contact centers. Contact centers are customer care, it's the way that people are interacting, right? And there's a whole kind of communications infrastructure around that, it's how people are answering phones, offering services, knowing what to do, so how you build those solutions together. There's a set of healthcare applications, so when you're going into healthcare, your patient monitoring devices versus your guest Wi-Fi services are different, so the kinds of solutions that you can provide there are key. There's actually a great thing in terms of indoor location based services. So we have Meraki and CMX where your Wi-Fi infrastructure not only provides wireless connectivity but gives you indoor location proximity. There's actually a company here called Map Wise, which has built kind of a wayfinding application on top. When I was at Web Summit, then they had Cisco infrastructure for putting up the conference, then they had their application to help people navigate throughout the conference, and they came in, and I actually spoke to Matthew, who's here, and he was like yeah, I had to learn because I had to go in early. They had to set up the network, and then I'm a software guy, I had to get my app to work on that network. I hadn't really thought about how to do that before. Right, so you're starting to couple these apps into that. >> Stu: New use cases. >> These are new use cases, and so much value. >> Yeah, and it's good that you get the terminology, it's a language issue, right? So you got to get the languages nailed down. All right, final question for you. What's the bumper sticker here? What's the phrase? I heard you on stage, create, connect, secure. What's the current DevNet Create tagline? >> So it is: Connect to Create. And so in one port, it was about connecting the world, providing that connection, and that's what we've done over the last 25 years. And over the next 25, even more things will be connected, but it's really about the solutions that we can build together as a team, and there's an ecosystem now that you have APIs that are exposed. You can build machine learning, and artificial intelligence together with world leading connectivity, together with world leading cloud companies. And when you bring all those together, you can have entirely new types of experiences that we can do, so it's Connect to Create. Along with that, actually comes the need for security and protection, and so that fabric needs to not only connect to create, but also connect and protect to create. And we think that by building that into the infrastructure as well, we can help app developers to secure their customers' data and to secure their users themselves, access, and all sorts of things. >> I love the concept of co-creation, really great collaboration model, and you guys are doing a great job. Congratulations on driving this developer program, and programs now, from a handful of renegades, now to a big organization, or growing organization. >> We're still lean, but our pack is growing. (laughing) >> You don't got to be a rocket scientist to know they're going to be doubling down on this. Cisco, cracking the code on the developer forum about learning the languages, knowing how to lead into the right cultures and bring them together, and have the right technology, enablement, and Susie, the creator, a part of the team, member of Cisco team for DevNet. Thanks for coming on and sharing, appreciate it. >> Susie: Thank you so much. >> Be right back with more live coverage after this short break. (upbeat music)

Published Date : Apr 10 2018

SUMMARY :

Brought to you by Cisco. here in Mountain View in the heart of Silicon Valley. that you made some changes, heard some feedback, We also made sure that the content of the first Talk about some of the things that you guys And so that brings rise to a whole new set of applications and you have the business ecosystem, so that's an indicator and the desire now for people to use applications coupled in to an infrastructure with APIs One of the things I saw from the keynote to have a developer community, and so we didn't is to make the ecosystem successful. the goods to the party, so to speak. And to give you an example, Oh, decades of experience, yes, and certification, and here's the APIs you need to know about. and the APIs that matter to you. the container madness, the standardization of contains, So having the whole tool of how do you now deploy It's like the golden ticket for a developer. and the Cisco partners that are out there, of the application needs to be coupled, CUBE videos, there we go. Anything you want to add? and the things along those lines, are different, so the kinds of solutions Yeah, and it's good that you get the terminology, but it's really about the solutions that we can build I love the concept of co-creation, really great (laughing) about learning the languages, knowing how to lead Be right back with more live coverage

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SusiePERSON

0.99+

LaurenPERSON

0.99+

Lauren CooneyPERSON

0.99+

MatthewPERSON

0.99+

Susie WeePERSON

0.99+

CiscoORGANIZATION

0.99+

IOSTITLE

0.99+

JohnPERSON

0.99+

GoogleORGANIZATION

0.99+

John FurrierPERSON

0.99+

AllenPERSON

0.99+

480,000QUANTITY

0.99+

Silicon ValleyLOCATION

0.99+

BarcelonaLOCATION

0.99+

last yearDATE

0.99+

iOSTITLE

0.99+

100%QUANTITY

0.99+

Map WiseORGANIZATION

0.99+

eight parallel workshopsQUANTITY

0.99+

millionsQUANTITY

0.99+

second yearQUANTITY

0.99+

this yearDATE

0.99+

Mountain View, CaliforniaLOCATION

0.99+

CMXORGANIZATION

0.99+

three parallel workshopsQUANTITY

0.98+

one portQUANTITY

0.98+

bothQUANTITY

0.98+

secondQUANTITY

0.98+

firstQUANTITY

0.97+

485,000 registered developersQUANTITY

0.97+

both worldsQUANTITY

0.97+

a year and a half oldQUANTITY

0.97+

MerakiORGANIZATION

0.97+

one personQUANTITY

0.96+

Bay AreaLOCATION

0.96+

Mountain ViewLOCATION

0.96+

two worldsQUANTITY

0.96+

IstioORGANIZATION

0.96+

2018DATE

0.95+

theCUBEORGANIZATION

0.95+

OneQUANTITY

0.95+

Web SummitEVENT

0.95+

todayDATE

0.93+

DevNetORGANIZATION

0.93+

Cisco DevNetORGANIZATION

0.92+

first few yearsQUANTITY

0.9+

DevNet CreateORGANIZATION

0.89+

OnPrimTITLE

0.88+

StuPERSON

0.88+

CUBEORGANIZATION

0.87+

this morningDATE

0.85+

100% of peopleQUANTITY

0.84+

DevNetTITLE

0.81+

Cisco LiveORGANIZATION

0.81+

past four yearsDATE

0.8+

Jyothi Swaroop, Veritas | Veritas Vision 2017


 

>> Announcer: Live from Las Vegas, it's theCUBE! Covering Veritas Vision 2017. Brought to you by Veritas. >> Welcome back to the Aria in Las Vegas, everybody. This is theCUBE, the leader in live tech coverage. We go out to the events and extract the signal from the noise. We're here at Veritas Vision 2017, #VtasVision. Jyothi Swaroop is here. He's the vice president of product and solutions marketing at Veritas. Jyothi, welcome to theCUBE. Good to see you. >> Thanks, Dave. I'm an officially an alum, now? >> A CUBE alum, absolutely! >> Two times! Three more times, we'll give you a little VIP badge, you know, we give you the smoking jacket, all that kind of stuff. >> Five or six times, you'll be doing the interviews. >> I'm going to be following you guys around, then, for the next three events. >> So, good keynote this morning. >> Jyothi: Thank you. >> Meaty. There was a lot going on. Wasn't just high-level concepts, it was a lot of high-level messaging, but then, here's what we've done behind it. >> No, it's actually the opposite. It's a lot of real products that customers are using. The world forgets that Veritas has only been out of Symantec, what, 20 months? Since we got out, we were kind of quiet the first year. That was because we were figuring our strategy out, investing in innovation and engineering, 'cause that's what Carlyle, our board, wants for us to do is invest in innovation and engineering, and build real products. So we took our time, 18 to 20 months to build these products out, and we launched them. And they're catching on like wildfire in the customer base. >> Jyothi, Bill came on and talked about, he made a lot of changes in the company. Focused it on culture, innovation, something he's want. What brought you? You know, a lot of places you could've gone. Why Veritas, why now? >> Well, Bill is one of the reasons, actually. I mean, if you look at his history and what he's done with different companies over the years, and how the journey of IT, as he put it during his keynote, he wants to make that disruption happen again at Veritas. That was one. Two was just the strategy that they had. Veritas has a Switzerland approach to doing business. Look, it's granted that most Fortune 500 or even midmarket customers have some sort of a Cloud project going on. But what intrigued me the most, especially with my background, coming from other larger companies is, Veritas was not looking to tie them down or become a data hoarder, you know what I mean? It's just charge this massive dollar per terabyte and just keep holding them, lock them into a storage or lock them into a cloud technology. But, we were facilitating their journey to whichever cloud they wanted to go. It was refreshing, and I still remember the first interview with Veritas, and they were talking about, "Oh, we want to help move customers' data "into Azure and AWS and Google," and my brain from previous storage vendors is going, "Hang on a minute. "How are you going to make money "if you're just going to move all of this data "to everyone else?" But that's what is right for the customer. >> Okay, so, how are you going to make money? >> Well, it's not just about the destination, right? Cloud's a journey, it's not just a destination. Most customers are asking us, "On average, we adopt three clouds," is what they're telling us. Whether it's public, private, on-prem, on average, they have about three separate clouds. What they say is, "Jyothi, our struggle is to move "an entire virtual business service "from on-prem to the Cloud." And once we've moved it, let's say Cloud A is suddenly expensive or is not working out for them. To get out of that cloud and move it to Cloud B is just so painful. It's going to cost me tons of money, and I lost all of the agility that I was expecting from Cloud A, anyway. If you have products like VRP from Veritas, for example, where we could move an entire cloud business service from Cloud A to Cloud B, and guess what. We can move it back onto on-prem on the fly. That's brilliant for the customers. Complete portability. >> Let's see. The portfolio is large. Help us boil it down. How should we think about it at a high level? We only have 20 minutes, so how do we think about that in 15, 20 minutes? >> I'll focus on three tenets. Our 360 data management wheel, if you saw at the keynote, has six tenets. The three tenets I'll focus on today are visibility, portability, and last, but definitely not the least, storage. You want to store it efficiently and cost-effectively. Visibility, most of our customers that are getting on their cloud journey are already in the Cloud, somewhere. They have zero visibility, almost. Like, "What applications should I move into the Cloud? "If I have moved these applications, "are they giving me the right value? "Because I've invested heavily in the Cloud "to move these applications." They don't know. 52% of our customers have dark data. We've surveyed them. All that dark data has now been moved into some cloud. Look, cloud is awesome. We have partnered up with every cloud vendor out there. But if we're not making it easy for customers to identify what is the right data to move to the Cloud, then they lost half the battle even before they moved to the Cloud. That's one. We're giving complete visibility with the Info Map connectors that we just announced earlier on in the keynote. >> That's matching the workload characteristics with the right sort of platform characteristics, is that right? >> Absolutely. You could be a Vmware user, you're only interested in VM-based data that you want to move, and you want role-based access into that data, and you want to protect only that data and back it up into the Cloud. We give you that granularity. It's one thing to provide visibility. It's quite another to give them the ability to have policy-driven actions on that data. >> Jyothi, just take us inside the customers for that. Who owns this kind of initiative? The problem in IT, it's very heterogeneous, very siloed. You take that multi-cloud environment, most customers we talk to, if they've got a cloud strategy, the ink's still drying. It's usually because, well, that group needed this, and somebody needed this, and it's very tactical. So, how do I focus on the information? Who drives that kind of need for visibility and manages across all of these environments? >> That's a great question, Stu. I mean, we pondered around the same question for about a year, because we were going both top-down and bottoms-up in the customer's organization, and trying to find where's our sweet spot. What we figured is, it's not a one-strategy thing, especially with the portfolio that we have. 80% of the time, we are talking to the CIOs, we are talking to the CXOs, and we're coming down with their digital transformation strategy or their cloud transformation strategy, they may call it whatever they want. We're coming top-down with our products, because when you talk visibility, a backup admin, he may not jump out of his seat the first thing. "Visibility's not what I care about, "the ease of use of this backup job "is what I care about, day one." But if you talk to the CIO, and I tell him, "I'll give you end-to-end visibility "of your entire infrastructure. "I don't care which cloud you're in." He'll be like, "I'm interested in that, "'cause I may not want to move 40% of this data "that I'm moving to Cloud A today. "I want to keep it back, or just delete it." 'Cause GDPR in Europe gives the citizens the right to delete their data. Doesn't matter which company the data's present in. The citizen can go to that company and say, "You have to delete my data." How will you delete the data if you just don't know where the data is? >> It's in 20 places in 15 different databases. Okay, so that's one. You had said there were three areas that you wanted to explore. >> The second one is, again, all about workload data and application portability. Over the years, we had storage lock-ins. I'm not going to name names, but historically, there are lots of storage vendors that tend to lock customers into a particular type of storage, or to the company, and they just get caught up in that stacked refresh every three years, and you just keep doing that over and over again. We're seeing more and more of cloud lock-in start to happen. You start migrating all of this into one cloud service provider, and you get familiar with the tools and widgets that they give you around that data, and then all of a sudden you realize this is not the right fit, or I'm moving too much data into this place and it's costing me a lot more. I want to not do this anymore, I want to move it to another local service provider, for example. It's going to cost you twice as much as it did just to move the data into the Cloud in the first place. With VRP, Veritas Resiliency Platform, we give our customers literally a few mouse clicks, if you watched the demo onstage. Literally, with a few mouse clicks, you identify the data that you want to move, including your virtual machines and your applications, and you move them as a business service, not just as random data. You move it as an entire business service from Cloud A to Cloud B. >> Jyothi, there's still physics involved in this. There's many reasons why with lock-in, you mentioned, kind of familiarity. But if I have a lot of data, moving it takes a lot of time as well as the money. How do we handle that? >> It goes back to the original talk track here about visibility. If you give the customer the right amount of visibility, they know exactly what to move. If the customer has 80 petabytes of data in their infrastructure, they don't have to move all 80 petabytes of it, if we are able to tell them, "These are the 10 petabytes that you need to move, "based on what Information Map is telling you." They'll only move those 10 petabytes, so the workload comes down drastically, because they're able to visualize what they need to move. >> Stu: Third piece of storage? >> Third piece of storage. A lot of people don't know this, but Veritas was the first vendor that launched the software to find storage solution. Back in the VOS days, Veritas, Oracle, and Sun Microsystems, we had the first file system that would be this paper over rocks, if you will, that was just a software layer. It would work with literally SAN/DAS, anything that's out there in the market, it would just be that file system that would work. And we've kept that DNA in our engineering team. Like, for example, Abhijit, who leads up our engineering, he wrote the first cluster file system. We are extending that beyond just a file system. We're going file, block, and object, just as any other storage vendor would. We are certifying on various commodity hardware, so the customers can choose the hardware of their choice. And not just that. The one thing we're doing very differently, though, is embedding intelligence close to the metadata. The reason we can do that is, unlike some of the classic storage vendors, we wrote the storage ground-up. We wrote the code ground-up. We could extract, if you look at an object, it has object data and metadata. So, metadata standard, it's about this long, right? It's got all these characters in it. It's hard to make sense of it unless you buy another tool to read that object and digest it for the customer. But what if you embed intelligence next to the metadata, so storage is not dumb anymore? It's intelligent, so you avoid the number of layers before you actually get to a BI product. I'll just give you a quick example in healthcare. We're all wearing Apple Watches and FitBits. The data is getting streamed into some object store, whether it's in the Cloud or on-prem. Billions of objects are getting stored even right now, with all the Apple Watches and FitBits out there. What if the storage could predictively, using machine learning and intelligence, tell you predictively you might be experiencing a stroke right on your watch, because your heartbeats are X and your pulse is Y? Combining all of the data and your history, based on the last month or last three months, I can tell you, "Jyothi, you should probably go see the doctor "or do something about it." So that's predictive, and it can happen at the storage layer. It doesn't have to be this other superficial intelligence layer that you paid millions of dollars for. >> So that analytic capability is really a feature of your platform, right? I mean, others, Stu, have tried it, and they tried to make it the product, and it really isn't a product, it's a byproduct. And so, is that something I could buy today? Is that something that's sort of roadmap, or, what's the reaction been from customers? >> The reaction has been great, both customers and analysts have just loved where we're going with this. Obviously, we have two products that are on the truck today, which are InfoScale and Access. InfoScale is a block-based product and Access is a file-based product. We also have HyperScale, which was designed specifically for modern workloads, containers, and OpenStack. That has its own roadmap. You know how OpenStack and containers work. We have to think like a developer for those products. Those are the products that are on the truck today. What you'll see announced tomorrow, I hope I'm not giving away too much, because Mike already announced it, is Veritas Cloud Storage. That's going to be announced tomorrow, and we're going to go deep into that. Veritas Cloud Storage will be this on-prem, object-based storage which will eventually become a platform that will also support file and block. It's just one single, software-defined, highly-intelligent storage system for all use cases. Throw whatever data you want at it. >> And the line on Veritas, the billboards, no hardware agenda. Ironic where that came from. Sometimes you'll announce appliances. What is that all about, and when do you decide to do that? >> Great question. You know, it's all about choice. It's the cliched thing to say, I know, but Veritas, most people don't know this, has a heavy channel revenue element to what we do. We love our partners and channel. Now, if you go to the channel that's catering to midmarket customers, or SMBs, they just want the easy button to storage. Their agility, I don't have five people sitting around trying to piece all of this together with your software and Seagate's hardware and whatever else, and piece this together. I just want a box, a pizza box that I can put in my infrastructure, turn it on, and it just works, and I call Veritas if something goes wrong. I don't call three different people. This is for those people. Those customers that just want the easy button to storage or easy button to back up. >> To follow up on the flip side, when you're only selling software, the knock on software of course is, I want it to be fast, I want it to be simple, I need to be agile. How come Veritas can deliver these kinds of solutions and not be behind all the people that have all the hardware and it's all fully baked-in to start with? >> Well, that's because we've written these from the ground up. When you write software code from the ground up, I mean, I'm an engineer, and I know how hard it is to take a piece of legacy code that's baked in for 10, 20 years. It's almost like adding lipstick, right? It just doesn't work, especially in today's cloud-first world, where people are in the DevOps situation, where apps are being delivered in five, 10, 15 minutes. Every day, my app almost gets updated on the phone every day? That just doesn't work. We wrote these systems from the ground up to be able to easily be placed onto any hardware possible. Now, again, I won't mention the vendor, but in my previous lives, there were a lot of hardware boxes and the software was written specifically for those hardware configurations. When they tried to software-define it forcefully, it became a huge challenge, 'cause it was never designed to do that. Whereas at Veritas, we write the software layer first. We test it on multiple hardware systems, and we keep fine-tuning it. Our ideal situation is to sell the software, and if the customer wants the hardware, we'll ship them the box. >> One of the things that struck me in the keynote this morning was what I'll call your compatibility matrix. Whether it was cloud, somebody's data store, that really is your focus, and that is a differentiator, I think. Knocking those down so you can, basically, it's a TAM expansion strategy. >> Oh, yeah, absolutely. I mean, TAM expansion strategy, as well as helping the customer choose what's best for them. We're not limiting their choices. We're literally saying, we go from the box and dropboxes of the world all the way to Dell EMC, even, with Info Map, for example. We'll cover end-to-end spectrum because we don't have a dollar-per-terabyte or dollar-per-petabyte agenda to store this data within our own cloud situation. >> All right, Jyothi, we got to leave it there. Thanks very much for coming back on theCUBE. It's good to see you again. >> Jyothi: No, it's great to be here. >> All right, keep it right there, everybody. We'll be back with our next guest. We're live from Veritas Vision 2017. This is theCUBE. (fast electronic music)

Published Date : Sep 19 2017

SUMMARY :

Brought to you by Veritas. and extract the signal from the noise. I'm an officially an alum, now? Three more times, we'll give you a little VIP badge, I'm going to be following you guys around, then, it was a lot of high-level messaging, and we launched them. You know, a lot of places you could've gone. and I still remember the first interview with Veritas, and I lost all of the agility so how do we think about that in 15, 20 minutes? and last, but definitely not the least, storage. and you want to protect only that data So, how do I focus on the information? the right to delete their data. that you wanted to explore. It's going to cost you twice as much as it did you mentioned, kind of familiarity. "These are the 10 petabytes that you need to move, that launched the software to find storage solution. and they tried to make it the product, We have to think like a developer for those products. and when do you decide to do that? It's the cliched thing to say, I know, and not be behind all the people that have all the hardware and the software was written specifically in the keynote this morning was all the way to Dell EMC, even, It's good to see you again. We'll be back with our next guest.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
SeagateORGANIZATION

0.99+

JyothiPERSON

0.99+

FiveQUANTITY

0.99+

80 petabytesQUANTITY

0.99+

18QUANTITY

0.99+

20 placesQUANTITY

0.99+

10QUANTITY

0.99+

VeritasORGANIZATION

0.99+

DavePERSON

0.99+

BillPERSON

0.99+

SymantecORGANIZATION

0.99+

AbhijitPERSON

0.99+

40%QUANTITY

0.99+

Jyothi SwaroopPERSON

0.99+

15QUANTITY

0.99+

10 petabytesQUANTITY

0.99+

Sun MicrosystemsORGANIZATION

0.99+

MikePERSON

0.99+

Two timesQUANTITY

0.99+

20 minutesQUANTITY

0.99+

fiveQUANTITY

0.99+

OracleORGANIZATION

0.99+

tomorrowDATE

0.99+

two productsQUANTITY

0.99+

oneQUANTITY

0.99+

20 monthsQUANTITY

0.99+

TwoQUANTITY

0.99+

15 different databasesQUANTITY

0.99+

TAMORGANIZATION

0.99+

six tenetsQUANTITY

0.99+

Las VegasLOCATION

0.99+

EuropeLOCATION

0.99+

GoogleORGANIZATION

0.99+

Third pieceQUANTITY

0.99+

five peopleQUANTITY

0.99+

three tenetsQUANTITY

0.99+

52%QUANTITY

0.99+

AWSORGANIZATION

0.99+

twiceQUANTITY

0.99+

three areasQUANTITY

0.99+

InfoScaleTITLE

0.98+

six timesQUANTITY

0.98+

todayDATE

0.98+

Dell EMCORGANIZATION

0.98+

first vendorQUANTITY

0.98+

80%QUANTITY

0.98+

StuPERSON

0.98+

Veritas Cloud StorageORGANIZATION

0.98+

three eventsQUANTITY

0.97+

OneQUANTITY

0.97+

OpenStackTITLE

0.97+

second oneQUANTITY

0.97+

Three more timesQUANTITY

0.97+

AccessTITLE

0.97+

first yearQUANTITY

0.96+

first file systemQUANTITY

0.96+

Billions of objectsQUANTITY

0.96+

both customersQUANTITY

0.96+

bothQUANTITY

0.96+

15 minutesQUANTITY

0.95+

HyperScaleTITLE

0.95+

Derek Kerton, Autotech Council | Autotech Council - Innovation in Motion


 

hey welcome back everybody Jeff Rick here with the cube we're at the mill pedis at an interesting event is called the auto tech council innovation in motion mapping and navigation event so a lot of talk about autonomous vehicles so it's a lot of elements to autonomous vehicles this is just one small piece of it it's about mapping and navigation and we're excited to have with us our first guest again and give us a background of this whole situation just Derick Curtin and he's the founder and chairman of the auto tech council so first up there welcome thank you very much good to be here absolutely so for the folks that aren't familiar what is the auto tech council autofit council is a sort of a club based in Silicon Valley where we have gathered together some of the industry's largest OMS om is mean car makers you know of like Rio de Gono from France and a variety of other ones they have offices here in Silicon Valley right and their job is to find innovation you find that Silicon Valley spark and take it back and get it into cars eventually and so what we are able to do is gather them up put them in a club and route a whole bunch of Silicon Valley startups and startups from other places to in front of them in a sort of parade and say these are some of the interesting technologies of the month so did they reach out for you did you see an opportunity because obviously they've all got the the Innovation Centers here we were at the Ford launch of their innovation center you see that the tagline is all around is there too now Palo Alto and up and down the peninsula so you know they're all here so was this something that they really needed an assist with that something opportunity saw or was it did it come from more the technology side to say we needed I have a new one to go talk to Raja Ford's well it's certainly true that they came on their own so they spotted Silicon Valley said this is now relevant to us where historically we were able to do our own R&D build our stuff in Detroit or in Japan or whatever the cases all of a sudden these Silicon Valley technologies are increasingly relevant to us and in fact disruptive to us we better get our finger on that pulse and they came here of their own at the time we were already running something called the telecom Council Silicon Valley where we're doing a similar thing for phone companies here so we had a structure in place that we needed to translate that into beyond modem industry and meet all those guys and say listen we can help you we're going to be a great tool in your toolkit to work the valley ok and then specifically what types of activities do you do with them to execute division you know it's interesting when we launched this about five years ago we're thinking well we have telecommunication back when we don't have the automotive skills but we have the organizational skills what turned out to be the cases they're not coming here the car bakers and the tier 1 vendors that sell to them they're not coming here to study break pad material science and things like that they're coming to Silicon Valley to find the same stuff the phone company two years ago it's lookin at least of you know how does Facebook work in a car out of all these sensors that we have in phones relate to automotive industry accelerometers are now much cheaper because of reaching economies of scale and phones so how do we use those more effectively hey GPS is you know reach scale economies how do we put more GPS in cars how do we provide mapping solutions all these things you'll set you'll see and sound very familiar right from that smartphone industry in fact the thing that disrupts them the thing that they're here for that brought them here and out of out of defensive need to be here is the fact that the smartphone itself was that disruptive factor inside the car right right so you have events like today so gives little story what's it today a today's event is called the mapping and navigation event what are people who are not here what's what's happening well so every now and then we pick a theme that's really relevant or interesting so today is mapping and navigation actually specifically today is high definition mapping and sensors and so there's been a battle in the automotive industry for the autonomous driving space hey what will control an autonomous car will it be using a map that's stored in memory onboard the car it knows what the world looked like when they mapped it six months ago say and it follows along a pre-programmed route inside of that world a 3d model world or is it a car more likely with the Tesla's current they're doing where it has a range of sensors on it and the sensors don't know anything about the world around the corner they only know what they're sensing right around them and they drive within that environment so there's two competing ways of modeling a 3d world around autonomous car and I think you know there was a battle looking backwards which one is going to win and I think the industry has come to terms with the fact the answer is both more everyday and so today we're talking about both and how to infuse those two and make better self-driving vehicles so for the outsider looking in right I'm sure they get wait the mapping wars are over you know Google Maps what else is there right but then I see we've got TomTom and meet a bunch of names that we've seen you know kind of pre pre Google Maps and you know shame on me I said the same thing when Google came out with a cert I'm like certain doors are over who's good with so so do well so Eddie's interesting there's a lot of different angles to this beyond just the Google map that you get on your phone well anything MapQuest what do you hear you moved on from MapQuest you print it out you're good together right well that's my little friends okay yeah some people written about some we're burning through paper listen the the upshot is that you've MapQuest is an interesting starting board probably first it's these maps folding maps we have in our car there's a best thing we have then we move to MapQuest era and $5,000 Sat Navs in some cars and then you might jump forward to where Google had kind of dominate they offered it for free kicked you know that was the disruptive factor one of the things where people use their smartphones in the car instead of paying $5,000 like car sat-nav and that was a long-running error that we have in very recent memory but the fact of the matter is when you talk about self-driving cars or autonomous vehicles now you need a much higher level of detail than TURN RIGHT in 400 feet right that's that's great for a human who's driving the car but for a computer driving the car you need to know turn right in 400.000 five feet and adjust one quarter inch to the left please so the level of detail requires much higher and so companies like TomTom like a variety of them that are making more high-level Maps Nokia's form a company called here is doing a good job and now a class of car makers lots of startups and there's crowdsource mapping out there as well and the idea is how do we get incredibly granular high detail maps that we can push into a car so that it has that reference of a 3d world that is extremely accurate and then the next problem is oh how do we keep those things up to date because when we Matt when when a car from this a Nokia here here's the company house drives down the street does a very high-level resolution map with all the equipment you see on some of these cars except for there was a construction zone when they mapped it and the construction zone is now gone right update these things so these are very important questions if you want to have to get the answers correct and in the car stored well for that credit self drive and once again we get back to something to mention just two minutes ago the answer is sensor fusion it's a map as a mix of high-level maps you've got in the car and what the sensors are telling you in real time so the sensors are now being used for what's going on right now and the maps are give me a high level of detail from six months ago and when this road was driven it's interesting back of the day right when we had to have the CD for your own board mapping Houston we had to keep that thing updated and you could actually get to the edge of the sea didn't work we were in the islands are they covering here too which feeds into this is kind of of the optical sensors because there's kind of the light our school of thought and then there's the the biopic cameras tripod and again the answers probably both yeah well good that's a you know that's there's all these beat little battles shaping up in the industry and that's one of them for sure which is lidar versus everything else lidar is the gold standard for building I keep saying a 3d model and that's basically you know a computer sees the world differently than your eye your eye look out a window we build a 3d model of what we're looking at how does computer do it so there's a variety of ways you can do it one is using lidar sensors which spin around biggest company in this space is called Bella died and been doing it for years for defense and aviation it's been around pointing laser lasers and waiting for the signal to come back so you basically use a reflected signal back and the time difference it takes to be billows back it builds a 3d model of the objects around that particular sensor that is the gold standard for precision the problem is it's also bloody expensive so the karmak is said that's really nice but I can't put for $8,000 sensors on each corner of a car and get it to market at some price that a consumers willing to pay so until every car has one and then you get the mobile phone aside yeah but economies of scale at eight thousand dollars we're looking at going that's a little stuff so there's a lot of startups now saying this we've got a new version of lighter that's solid-state it's not a spinning thing point it's actually a silicon chip with our MEMS and stuff on it they're doing this without the moving parts and we can drop the price down to two hundred dollars maybe a hundred dollars in the future and scale that starts being interesting that's four hundred dollars if you put it off all four corners of the car but there's also also other people saying listen cameras are cheap and readily available so you look at a company like Nvidia that has very fast GPUs saying listen our GPUs are able to suck in data from up to 12 cameras at a time and with those different stereoscopic views with different angle views we can build a 3d model from cheap cameras so there's competing ideas on how you build a model of the world and then those come to like Bosh saying well we're strong in car and written radar and we can actually refine our radar more and more and get 3d models from radar it's not the good resolution that lidar has which is a laser sense right so there's all these different sensors and I think there the answer is not all of them because cost comes into play below so a car maker has to choose well we're going to use cameras and radar we're gonna use lidar and high heaven so they're going to pick from all these different things that are used to build a high-definition 3d model of the world around the car cost effective and successful and robust can handle a few of the sensors being covered by snow hopefully and still provide a good idea of the world around them and safety and so they're going to fuse these together and then let their their autonomous driving intelligence right on top of that 3d model and drive the car right so it's interesting you brought Nvidia in what's really fun I think about the autonomous vehicle until driving cars and the advances is it really plays off the kind of Moore's laws impact on the three tillers of its compute right massive compute power to take the data from these sensors massive amounts of data whether it's in the pre-programmed map whether you're pulling it off the sensors you're pulling off a GPS lord knows where by for Wi-Fi waypoints I'm sure they're pulling all kinds of stuff and then of course you know storage you got to put that stuff the networking you gotta worry about latency is it on the edge is it not on the edge so this is really an interesting combination of technologies all bring to bear on how successful your car navigates that exit ramp you're spot-on and that's you're absolutely right and that's one of the reasons I'm really bullish on self-driving cars a lot more than in the general industry analyst is and you mentioned Moore's law and in videos taking advantage of that with a GPUs so let's wrap other than you should be into kind of big answer Big Data and more and more data yes that's a huge factor in cars not only are cars going to take advantage of more and more data high definition maps are way more data than the MapQuest Maps we printed out so that's a massive amount of data the car needs to use but then in the flipside the cars producing massive amounts of data I just talked about a whole range of sensors I talked lidar radar cameras etc that's producing data and then there's all the telemetric data how's the car running how's the engine performing all those things car makers want that data so there's massive amounts of data needing to flow both ways now you can do that at night over Wi-Fi cheaply you can do it over an LTE and we're looking at 5g regular standards being able to enable more transfer of data between the cars and the cloud so that's pretty important cloud data and then cloud analytics on top of that ok now that we've got all this data from the car what do we do with it we know for example that Tesla uses that data sucked out of cars to do their fleet driving their fleet learning so instead of teaching the cars how to drive I'm a programmer saying if you see this that they're they're taking the information out of the cars and saying what are the situation these cars are seen how did our autonomous circuitry suggest the car responds and how did the user override or control the car in that point and then they can compare human driving with their algorithms and tweak their algorithms based on all that fleet to driving so it's a master advantage in sucking data out of cars massive advantage of pushing data to cars and you know we're here at Kingston SanDisk right now today so storage is interesting as well storage in the car increasingly important through these big amount of data right and fast storage as well High Definition maps are beefy beefy maps so what do you do do you have that in the cloud and constantly stream it down to the car what if you drive through a tunnel or you go out of cellular signal so it makes sense to have that map data at least for the region you're in stored locally on the car in easily retrievable flash memory that's dropping in price as well alright so loop in the last thing about that was a loaded question by the way and I love it and this is the thing I love this is why I'm bullish and more crazier than anybody else about the self-driving car space you mentioned Moore's law I find Moore's law exciting used to not be relevant to the automotive industry they used to build except we talked about I talked briefly about brake pad technology material science like what kind of asbestos do we use and how do we I would dissipate the heat more quickly that's science physics important Rd does not take advantage of Moore's law so cars been moving along with laws of thermodynamics getting more miles per gallon great stuff out of Detroit out of Tokyo out of Europe out of Munich but Moore's law not entirely relevant all of a sudden since very recently Moore's law starting to apply to cars so they've always had ECU computers but they're getting more compute put in the car Tesla has the Nvidia processors built into the car many cars having stronger central compute systems put in okay so all of a sudden now Moore's law is making cars more able to do things that they we need them to do we're talking about autonomous vehicles couldn't happen without a huge central processing inside of cars so Moore's law applying now what it did before so cars will move quicker than we thought next important point is that there's other there's other expansion laws in technology if people look up these are the cool things kryder's law so kryder's law is a law about storage in the rapidly expanding performance of storage so for $8.00 and how many megabytes or gigabytes of storage you get well guess what turns out that's also exponential and your question talked about isn't dat important sure it is that's why we could put so much into the cloud and so much locally into the car huge kryder's law next one is Metcalfe's law Metcalfe's law has a lot of networking in it states basically in this roughest form the value of network is valued to the square of the number of nodes in the network so if I connect my car great that's that's awesome but who does it talk to nobody you connect your car now we can have two cars you can talk together and provide some amount of element of car to car communications and some some safety elements tell me the network is now connected I have a smart city all of a sudden the value keeps shooting up and up and up so all of these things are exponential factors and there all of a sudden at play in the automotive industry so anybody who looks back in the past and says well you know the pace of innovation here has been pretty steep it's been like this I expect in the future we'll carry on and in ten years we'll have self-driving cars you can't look back at the slope of the curve right and think that's a slope going forward especially with these exponential laws at play so the slope ahead is distinctly steeper in this deeper and you left out my favorite law which is a Mars law which is you know we underestimate in the short term or overestimate in the short term and underestimate in the long term that's all about it's all about the slope so there we could go on for probably like an hour and I know I could but you got a kill you got to go into your event so thanks for taking min out of your busy day really enjoyed the conversation and look forward to our next one my pleasure thanks all right Jeff Rick here with the Q we're at the Western Digital headquarters in Milpitas at the Auto Tech Council innovation in motion mapping and navigation event thanks for watching

Published Date : Jun 15 2017

SUMMARY :

for the signal to come back so you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
$8,000QUANTITY

0.99+

$8.00QUANTITY

0.99+

$5,000QUANTITY

0.99+

JapanLOCATION

0.99+

Jeff RickPERSON

0.99+

Derick CurtinPERSON

0.99+

DetroitLOCATION

0.99+

Silicon ValleyLOCATION

0.99+

Autotech CouncilORGANIZATION

0.99+

Derek KertonPERSON

0.99+

TokyoLOCATION

0.99+

Silicon ValleyLOCATION

0.99+

FranceLOCATION

0.99+

Jeff RickPERSON

0.99+

MunichLOCATION

0.99+

TeslaORGANIZATION

0.99+

MilpitasLOCATION

0.99+

NokiaORGANIZATION

0.99+

todayDATE

0.99+

Palo AltoLOCATION

0.99+

400 feetQUANTITY

0.99+

eight thousand dollarsQUANTITY

0.99+

HoustonLOCATION

0.99+

two hundred dollarsQUANTITY

0.99+

two carsQUANTITY

0.99+

four hundred dollarsQUANTITY

0.99+

NvidiaORGANIZATION

0.99+

MoorePERSON

0.99+

FordORGANIZATION

0.99+

first guestQUANTITY

0.99+

FacebookORGANIZATION

0.99+

EuropeLOCATION

0.99+

ten yearsQUANTITY

0.99+

six months agoDATE

0.99+

EddiePERSON

0.99+

two years agoDATE

0.98+

Google MapsTITLE

0.98+

twoQUANTITY

0.98+

MapQuestORGANIZATION

0.98+

two minutes agoDATE

0.98+

one quarter inchQUANTITY

0.98+

GoogleORGANIZATION

0.98+

TomTomORGANIZATION

0.97+

six months agoDATE

0.97+

Western DigitalLOCATION

0.97+

bothQUANTITY

0.97+

Silicon ValleyLOCATION

0.97+

both waysQUANTITY

0.97+

each cornerQUANTITY

0.96+

firstQUANTITY

0.96+

one of the reasonsQUANTITY

0.96+

two competing waysQUANTITY

0.96+

MarsLOCATION

0.95+

telecom Council Silicon ValleyORGANIZATION

0.95+

an hourQUANTITY

0.95+

Google mapTITLE

0.95+

MetcalfePERSON

0.93+

oneQUANTITY

0.92+

every carQUANTITY

0.89+

up to 12 camerasQUANTITY

0.89+

Kingston SanDiskORGANIZATION

0.88+

OMSORGANIZATION

0.88+

one small pieceQUANTITY

0.88+

about five years agoDATE

0.87+

innovation in motion mapping and navigationEVENT

0.86+

auto tech councilORGANIZATION

0.86+

tier 1QUANTITY

0.82+

George Chow, Simba Technologies - DataWorks Summit 2017


 

>> (Announcer) Live from San Jose, in the heart of Silicon Valley, it's theCUBE covering DataWorks Summit 2017, brought to you by Hortonworks. >> Hi everybody, this is George Gilbert, Big Data and Analytics Analyst with Wikibon. We are wrapping up our show on theCUBE today at DataWorks 2017 in San Jose. It has been a very interesting day, and we have a special guest to help us do a survey of the wrap-up, George Chow from Simba. We used to call him Chief Technology Officer, now he's Technology Fellow, but when we was explaining the different in titles to me, I thought he said Technology Felon. (George Chow laughs) But he's since corrected me. >> Yes, very much so >> So George and I have been, we've been looking at both Spark Summit last week and DataWorks this week. What are some of the big advances that really caught your attention? >> What's caught my attention actually is how much manufacturing has really, I think, caught into the streaming data. I think last week was very notable that both Volkswagon and Audi actually had case studies for how they're using streaming data. And I think just before the break now, there was also a similar session from Ford, showcasing what they are doing around streaming data. >> And are they using the streaming analytics capabilities for autonomous driving, or is it other telemetry that they're analyzing? >> The, what is it, I think the Volkswagon study was production, because I still have to review the notes, but the one for Audi was actually quite interesting because it was for managing paint defect. >> (George Gilbert) For paint-- >> Paint defect. >> (George Gilbert) Oh. >> So what they were doing, they were essentially recording the environmental condition that they were painting the cars in, basically the entire pipeline-- >> To predict when there would be imperfections. >> (George Chow) Yes. >> Because paint is an extremely high-value sort of step in the assembly process. >> Yes, what they are trying to do is to essentially make a connection between downstream defect, like future defect, and somewhat trying to pinpoint the causes upstream. So the idea is that if they record all the environmental conditions early on, they could turn around and hopefully figure it out later on. >> Okay, this sounds really, really concrete. So what are some of the surprising environmental variables that they're tracking, and then what's the technology that they're using to build model and then anticipate if there's a problem? >> I think the surprising finding they said were actually, I think it was a humidity or fan speed, if I recall, at the time when the paint was being applied, because essentially, paint has to be... Paint is very sensitive to the condition that is being applied to the body. So my recollection is that one of the finding was that it was a narrow window during which the paint were, like, ideal, in terms of having the least amount of defect. >> So, had they built a digital twin style model, where it's like a digital replica of some aspects of the car, or was it more of a predictive model that had telemetry coming at it, and when it's an outside a certain bounds they know they're going to have defects downstream? >> I think they're still working on the predictive model, or actually the model is still being built, because they are essentially trying to build that model to figure out how they should be tuning the production pipeline. >> Got it, so this is sort of still in the development phase? >> (George Chow) Yeah, yeah >> And can you tell us, did they talk about the technologies that they're using? >> I remember the... It's a little hazy now because after a couple weeks of conference, so I don't remember the specifics because I was counting on the recordings to come out in a couples weeks' time. So I'll definitely share that. It's a case study to keep an eye on. >> So tell us, were there other ones where this use of real-time or near real-time data had some applications that we couldn't do before because we now can do things with very low latency? >> I think that's the one that I was looking forward to with Ford. That was the session just earlier, I think about an hour ago. The session actually consisted of a demo that was being done live, you know. It was being streamed to us where they were showcasing the data that was coming off a car that's been rigged up. >> So what data were they tracking and what were they trying to anticipate here? >> They didn't give enough detail, but it was basically data coming off of the CAN bus of the car, so if anybody is familiar with the-- >> Oh that's right, you're a car guru, and you and I compare, well our latest favorite is the Porche Macan >> Yes, yes. >> SUV, okay. >> But yeah, they were looking at streaming the performance data of the car as well as the location data. >> Okay, and... Oh, this sounds more like a test case, like can we get telemetry data that might be good for insurance or for... >> Well they've built out the system enough using the Lambda Architecture with Kafka, so they were actually consuming the data in real-time, and the demo was actually exactly seeing the data being ingested and being acted on. So in the case they were doing a simplistic visualization of just placing the car on the Google Map so you can basically follow the car around. >> Okay so, what was the technical components in the car, and then, how much data were they sending to some, or where was the data being sent to, or how much of the data? >> The data was actually sent, streamed, all the way into Ford's own data centers. So they were using NiFi with all the right proxy-- >> (George Gilbert) NiFi being from Hortonworks there. >> Yeah, yeah >> The Hortonworks data flow, okay >> Yeah, with all the appropriate proxys and firewall to bring it all the way into a secure environment. >> Wow >> So it was quite impressive from the point of view of, it was life data coming off of the 4G modem, well actually being uploaded through the 4G modem in the car. >> Wow, okay, did they say how much compute and storage they needed in the device, in this case the car? >> I think they were using a very lightweight platform. They were streaming apparently from the Raspberry Pi. >> (George Gilbert) Oh, interesting. >> But they were very guarded about what was inside the data center because, you know, for competitive reasons, they couldn't share much about how big or how large a scale they could operate at. >> Okay, so Simba has been doing ODBC and JDBC drivers to standard APIs, to databases for a long time. That was all about, that was an era where either it was interactive or batch. So, how is streaming, sort of big picture, going to change the way applications are built? >> Well, one way to think about streaming is that if you look at many of these APIs, into these systems, like Spark is a good example, where they're trying to harmonize streaming and batch, or rather, to take away the need to deal with it as a streaming system as opposed to a batch system, because it's obviously much easier to think about and reason about your system when it is traditional, like in the traditional batch model. So, the way that I see it also happening is that streaming systems will, you could say will adapt, will actually become easier to build, and everyone is trying to make it easier to build, so that you don't have to think about and reason about it as a streaming system. >> Okay, so this is really important. But they have to make a trade-off if they do it that way. So there's the desire for leveraging skill sets, which were all batch-oriented, and then, presumably SQL, which is a data manipulation everyone's comfortable with, but then, if you're doing it batch-oriented, you have a portion of time where you're not sure you have the final answer. And I assume if you were in a streaming-first solution, you would explicitly know whether you have all the data or don't, as opposed to late arriving stuff, that might come later. >> Yes, but what I'm referring to is actually the programming model. All I'm saying is that more and more people will want streaming applications, but more and more people need to develop it quickly, without having to build it in a very specialized fashion. So when you look at, let's say the example of Spark, when they focus on structured streaming, the whole idea is to make it possible for you to develop the app without having to write it from scratch. And the comment about SQL is actually exactly on point, because the idea is that you want to work with the data, you can say, not mindful, not with a lot of work to account for the fact that it is actually streaming data that could arrive out of order even, so the whole idea is that if you can build applications in a more consistent way, irrespective whether it's batch or streaming, you're better off. >> So, last week even though we didn't have a major release of Spark, we had like a point release, or a discussion about the 2.2 release, and that's of course very relevant for our big data ecosystem since Spark has become the compute engine for it. Explain the significance where the reaction time, the latency for Spark, went down from several hundred milliseconds to one millisecond or below. What are the implications for the programming model and for the applications you can build with it. >> Actually, hitting that new threshold, the millisecond, is actually a very important milestone because when you look at a typical scenario, let's say with AdTech where you're serving ads, you really only have, maybe, on the order about 100 or maybe 200 millisecond max to actually turn around. >> And that max includes a bunch of things, not just the calculation. >> Yeah, and that, let's say 100 milliseconds, includes transfer time, which means that in your real budget, you only have allowances for maybe, under 10 to 20 milliseconds to compute and do any work. So being able to actually have a system that delivers millisecond-level performance actually gives you ability to use Spark right now in that scenario. >> Okay, so in other words, now they can claim, even if it's not per event processing, they can claim that they can react so fast that it's as good as per event processing, is that fair to say? >> Yes, yes that's very fair. >> Okay, that's significant. So, what type... How would you see applications changing? We've only got another minute or two, but how do you see applications changing now that, Spark has been designed for people that have traditional, batch-oriented skills, but who can now learn how to do streaming, real-time applications without learning anything really new. How will that change what we see next year? >> Well I think we should be careful to not pigeonhole Spark as something built for batch, because I think the idea is that, you could say, the originators, of Spark know that it's all about the ease of development, and it's the ease of reasoning about your system. It's not the fact that the technology is built for batch, so the fact that you could use your knowledge and experience and an API that actually is familiar, should leverage it for something that you can build for streaming. That's the power, you could say. That's the strength of what the Spark project has taken on. >> Okay, we're going to have to end it on that note. There's so much more to go through. George, you will be back as a favorite guest on the show. There will be many more interviews to come. >> Thank you. >> With that, this is George Gilbert. We are DataWorks 2017 in San Jose. We had a great day today. We learned a lot from Rob Bearden and Rob Thomas up front about the IBM deal. We had Scott Gnau, CTO of Hortonworks on several times, and we've come away with an appreciation for a partnership now between IBM and Hortonworks that can take the two of them into a set of use cases that neither one on its own could really handle before. So today was a significant day. Tune in tomorrow, we have another great set of guests. Keynotes start at nine, and our guests will be on starting at 11. So with that, this is George Gilbert, signing out. Have a good night. (energetic, echoing chord and drum beat)

Published Date : Jun 13 2017

SUMMARY :

in the heart of Silicon Valley, do a survey of the wrap-up, What are some of the big advances caught into the streaming data. but the one for Audi was actually quite interesting in the assembly process. So the idea is that if they record So what are some of the surprising environmental So my recollection is that one of the finding or actually the model is still being built, of conference, so I don't remember the specifics the data that was coming off a car the performance data of the car for insurance or for... So in the case they were doing a simplistic visualization So they were using NiFi with all the right proxy-- to bring it all the way into a secure environment. So it was quite impressive from the point of view of, I think they were using a very lightweight platform. the data center because, you know, for competitive reasons, going to change the way applications are built? so that you don't have to think about and reason about it But they have to make a trade-off if they do it that way. so the whole idea is that if you can build and for the applications you can build with it. because when you look at a typical scenario, not just the calculation. So being able to actually have a system that delivers but how do you see applications changing now that, so the fact that you could use your knowledge There's so much more to go through. that can take the two of them

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
IBMORGANIZATION

0.99+

GeorgePERSON

0.99+

HortonworksORGANIZATION

0.99+

George GilbertPERSON

0.99+

Scott GnauPERSON

0.99+

Rob BeardenPERSON

0.99+

AudiORGANIZATION

0.99+

Rob ThomasPERSON

0.99+

San JoseLOCATION

0.99+

George ChowPERSON

0.99+

FordORGANIZATION

0.99+

last weekDATE

0.99+

Silicon ValleyLOCATION

0.99+

one millisecondQUANTITY

0.99+

twoQUANTITY

0.99+

next yearDATE

0.99+

100 millisecondsQUANTITY

0.99+

200 millisecondQUANTITY

0.99+

todayDATE

0.99+

tomorrowDATE

0.99+

VolkswagonORGANIZATION

0.99+

this weekDATE

0.99+

Google MapTITLE

0.99+

AdTechORGANIZATION

0.99+

DataWorks 2017EVENT

0.98+

DataWorks Summit 2017EVENT

0.98+

bothQUANTITY

0.98+

11DATE

0.98+

SparkTITLE

0.98+

WikibonORGANIZATION

0.96+

under 10QUANTITY

0.96+

oneQUANTITY

0.96+

20 millisecondsQUANTITY

0.95+

Spark SummitEVENT

0.94+

first solutionQUANTITY

0.94+

SQLTITLE

0.93+

hundred millisecondsQUANTITY

0.93+

2.2QUANTITY

0.92+

one wayQUANTITY

0.89+

SparkORGANIZATION

0.88+

Lambda ArchitectureTITLE

0.87+

KafkaTITLE

0.86+

minuteQUANTITY

0.86+

Porche MacanORGANIZATION

0.86+

about 100QUANTITY

0.85+

ODBCTITLE

0.84+

DataWorksEVENT

0.84+

NiFiTITLE

0.84+

about an hour agoDATE

0.8+

JDBCTITLE

0.79+

Raspberry PiCOMMERCIAL_ITEM

0.76+

SimbaORGANIZATION

0.75+

Simba TechnologiesORGANIZATION

0.74+

couples weeks'QUANTITY

0.7+

CTOPERSON

0.68+

theCUBEORGANIZATION

0.67+

twinQUANTITY

0.67+

couple weeksQUANTITY

0.64+

Wikibon Research Meeting


 

>> Dave: The cloud. There you go. I presume that worked. >> David: Hi there. >> Dave: Hi David. We had agreed, Peter and I had talked and we said let's just pick three topics, allocate enough time. Maybe a half hour each, and then maybe a little bit longer if we have the time. Then try and structure it so we can gather some opinions on what it all means. Ultimately the goal is to have an outcome with some research that hits the network. The three topics today, Jim Kobeielus is going to present on agile and data science, David Floyer on NVMe over fabric and of course keying off of the Micron news announcement. I think Nick is, is that Nick who just joined? He can contribute to that as well. Then George Gilbert has this concept of digital twin. We'll start with Jim. I guess what I'd suggest is maybe present this in the context of, present a premise or some kind of thesis that you have and maybe the key issues that you see and then kind of guide the conversation and we'll all chime in. >> Jim: Sure, sure. >> Dave: Take it away, Jim. >> Agile development and team data science. Agile methodology obviously is well-established as a paradigm and as a set of practices in various schools in software development in general. Agile is practiced in data science in terms of development, the pipelines. The overall premise for my piece, first of all starting off with a core definition of what agile is as a methodology. Self-organizing, cross-functional teams. They sprint toward results in steps that are fast, iterative, incremental, adaptive and so forth. Specifically the premise here is that agile has already come to data science and is coming even more deeply into the core practice of data science where data science is done in team environment. It's not just unicorns that are producing really work on their own, but more to the point, it's teams of specialists that come together in co-location, increasingly in co-located environments or in co-located settings to produce (banging) weekly check points and so forth. That's the basic premise that I've laid out for the piece. The themes. First of all, the themes, let me break it out. In terms of the overall how I design or how I'm approaching agile in this context is I'm looking at the basic principles of agile. It's really practices that are minimal, modular, incremental, iterative, adaptive, and co-locational. I've laid out how all that maps in to how data science is done in the real world right now in terms of tight teams working in an iterative fashion. A couple of issues that I see as regards to the adoption and sort of the ramifications of agile in a data science context. One of which is a co-location. What we have increasingly are data science teams that are virtual and distributed where a lot of the functions are handled by statistical modelers and data engineers and subject matter experts and visualization specialists that are working remotely from each other and are using collaborative tools like the tools from the company that I just left. How can agile, the co-location work primer for agile stand up in a world with more of the development team learning deeper and so forth is being done on a scrutiny basis and needs to be by teams of specialists that may be in different cities or different time zones, operating around the clock, produce brilliant results? Another one of which is that agile seems to be predicated on the notion that you improvise the process as you go, trial and error which seems to fly in the face of documentation or tidy documentation. Without tidy documentation about how you actually arrived at your results, how come those results can not be easily reproduced by independent researchers, independent data scientists? If you don't have well defined processes for achieving results in a certain data science initiative, it can't be reproduced which means they're not terribly scientific. By definition it's not science if you can't reproduce it by independent teams. To the extent that it's all loosey-goosey and improvised and undocumented, it's not reproducible. If it's not reproducible, to what extent should you put credence in the results of a given data science initiative if it's not been documented? Agile seems to fly in the face of reproducibility of data science results. Those are sort of my core themes or core issues that I'm pondering with or will be. >> Dave: Jim, just a couple questions. You had mentioned, you rattled off a bunch of parameters. You went really fast. One of them was co-location. Can you just review those again? What were they? >> Sure. They are minimal. The minimum viable product is the basis for agile, meaning a team puts together data a complete monolithic sect, but an initial deliverable that can stand alone, provide some value to your stakeholders or users and then you iteratively build upon that in what I call minimum viable product going forward to pull out more complex applications as needed. There's sort of a minimum viable product is at the heart of agile the way it's often looked at. The big question is, what is the minimum viable product in a data science initiative? One way you might approach that is saying that what you're doing, say you're building a predictive model. You're predicting a single scenario, for example such as whether one specific class of customers might accept one specific class of offers under the constraining circumstances. That's an example of minimum outcome to be achieved from a data science deliverable. A minimum product that addresses that requirement might be pulling the data from a single source. We'll need a very simplified feature set of predictive variables like maybe two or three at the most, to predict customer behavior, and use one very well understood algorithm like linear regressions and do it. With just a few lines of programming code in Python or Aura or whatever and build us some very crisp, simple rules. That's the notion in a data science context of a minimum viable product. That's the foundation of agile. Then there's the notion of modular which I've implied with minimal viable product. The initial product is the foundation upon which you build modular add ons. The add ons might be building out more complex algorithms based on more data sets, using more predictive variables, throwing other algorithms in to the initiative like logistic regression or decision trees to do more fine-grained customer segmentation. What I'm giving you is a sense for the modular add ons and builds on to the initial product that generally weaken incrementally in the course of a data science initiative. Then there's this, and I've already used the word incremental where each new module that gets built up or each new feature or tweak on the core model gets added on to the initial deliverable in a way that's incremental. Ideally it should all compose ultimately the sum of the useful set of capabilities that deliver a wider range of value. For example, in a data science initiative where it's customer data, you're doing predictive analysis to identify whether customers are likely to accept a given offer. One way to add on incrementally to that core functionality is to embed that capability, for example, in a target marketing application like an outbound marketing application that uses those predictive variables to drive responses in line to, say an e-commerce front end. Then there's the notion of iterative and iterative really comes down to check points. Regular reviews of the standards and check points where the team comes together to review the work in a context of data science. Data science by its very nature is exploratory. It's visualization, it's model building and testing and training. It's iterative scoring and testing and refinement of the underlying model. Maybe on a daily basis, maybe on a weekly basis, maybe adhoc, but iteration goes on all the time in data science initiatives. Adaptive. Adaptive is all about responding to circumstances. Trial and error. What works, what doesn't work at the level of the clinical approach. It's also in terms of, do we have the right people on this team to deliver on the end results? A data science team might determine mid-way through that, well we're trying to build a marketing application, but we don't have the right marketing expertise in our team, maybe we need to tap Joe over there who seems to know a little bit about this particular application we're trying to build and this particular scenario, this particular customers, we're trying to get a good profile of how to reach them. You might adapt by adding, like I said, new data sources, adding on new algorithms, totally changing your approach for future engineering as you go along. In addition to supervised learning from ground troops, you might add some unsupervised learning algorithms to being able to find patterns in say unstructured data sets as you bring those into the picture. What I'm getting at is there's a lot, 10 zillion variables that, for a data science team that you have to add in to your overall research plan going forward based on, what you're trying to derive from data science is its insights. They're actionable and ideally repeatable. That you can embed them in applications. It's just a matter of figuring out what actually helps you, what set of variables and team members and data and sort of what helps you to achieve the goals of your project. Finally, co-locational. It's all about the core team needs to be, usually in the same physical location according to the book how people normally think of agile. The company that I just left is basically doing a massive social engineering exercise, ongoing about making their marketing and R&D teams a little more agile by co-locating them in different cities like San Francisco and Austin and so forth. The whole notion that people will collaborate far better if they're not virtual. That's highly controversial, but none-the-less, that's the foundation of agile as it's normally considered. One of my questions, really an open question is what hard core, you might have a sprawling team that's doing data science, doing various aspects, but what solid core of that team needs to be physically co-located all or most of the time? Is it the statistical modeler and a data engineer alone? The one who stands up how to do cluster and the person who actually does the building and testing of the model? Do the visualization specialists need to be co-located as well? Are other specialties like subject matter experts who have the knowledge in marketing, whatever it is, do they also need to be in the physical location day in, day out, week in and week out to achieve results on these projects? Anyway, so there you go. That's how I sort of appealed the argument of (mumbling). >> Dave: Okay. I got a minimal modular, incremental, iterative, adaptive, co-locational. What was six again? I'm sorry. >> Jim: Co-locational. >> Dave: What was the one before that? >> Jim: I'm sorry. >> Dave: Adaptive. >> Minimal, modular, incremental, iterative, adaptive, and co-locational. >> Dave: Okay, there were only six. Sorry, I thought it was seven. Good. A couple of questions then we can get the discussion going here. Of course, you're talking specifically in the context of data science, but some of the questions that I've seen around agile generally are, it's not for everybody, when and where should it be used? Waterfalls still make sense sometimes. Some of the criticisms I've read, heard, seen, and sometimes experienced with agile are sort of quality issues, I'll call it lack of accountability. I don't know if that's the right terminology. We're going for speed so as long as we're fast, we checked that box, quality can sacrifice. Thoughts on that. Where does it fit and again understanding specifically you're talking about data science. Does it always fit in data science or because it's so new and hip and cool or like traditional programming environments, is it horses for courses? >> David: Can I add to that, Dave? It's a great, fundamental question. It seems to me there's two really important aspects of artificial intelligence. The first is the research part of it which is developing the algorithms, developing the potential data sources that might or might not matter. Then the second is taking that and putting it into production. That is that somewhere along the line, it's saving money, time, etc., and it's integrated with the rest of the organization. That second piece is, the first piece it seems to be like most research projects, the ROI is difficult to predict in a new sort of way. The second piece of actually implementing it is where you're going to make money. Is agile, if you can integrate that with your systems of record, for example and get automation of many of the aspects that you've researched, is agile the right way of doing it at that stage? How would you bridge the gap between the initial development and then the final instantiation? >> That's an important concern, David. Dev Ops, that's a closely related issue but it's not exactly the same scope. As data science and machine learning, let's just net it out. As machine learning and deep learning get embedded in applications, in operations I should say, like in your e-commerce site or whatever it might be, then data science itself becomes an operational function. The people who continue to iterate those models in line the operational applications. Really, where it comes down to an operational function, everything that these people do needs to be documented and version controlled and so forth. These people meaning data science professionals. You need documentation. You need accountability. The development of these assets, machine learning and so forth, needs to be, is compliance. When you look at compliance, algorithmic accountability comes into it where lawyers will, like e-discovery. They'll subpoena, theoretically all your algorithms and data and say explain how you arrived at this particular recommendation that you made to grant somebody or not grant somebody a loan or whatever it might be. The transparency of the entire development process is absolutely essential to the data science process downstream and when it's a production application. In many ways, agile by saying, speed's the most important thing. Screw documentation, you can sort of figure that out and that's not as important, that whole pathos, it goes by the wayside. Agile can not, should not skip on documentation. Documentation is even more important as data science becomes an operational function. That's one of my concerns. >> David: I think it seems to me that the whole rapid idea development is difficult to get a combination of that and operational, boring testing, regression testing, etc. The two worlds are very different. The interface between the two is difficult. >> Everybody does their e-commerce tweaks through AB testing of different layouts and so forth. AB testing is fundamentally data science and so it's an ongoing thing. (static) ... On AB testing in terms of tweaking. All these channels and all the service flow, systems of engagement and so forth. All this stuff has to be documented so agile sort of, in many ways flies in the face of that or potentially compromises the visibility of (garbled) access. >> David: Right. If you're thinking about IOT for example, you've got very expensive machines out there in the field which you're trying to optimize true put through and trying to minimize machine's breaking, etc. At the Micron event, it was interesting that Micron's use of different methodologies of putting systems together, they were focusing on the data analysis, etc., to drive greater efficiency through their manufacturing process. Having said that, they need really, really tested algorithms, etc. to make sure there isn't a major (mumbling) or loss of huge amounts of potential revenue if something goes wrong. I'm just interested in how you would create the final product that has to go into production in a very high value chain like an IOT. >> When you're running, say AI from learning algorithms all the way down to the end points, it gets even trickier than simply documenting the data and feature sets and the algorithms and so forth that were used to build up these models. It also comes down to having to document the entire life cycle in terms of how these algorithms were trained to make the predictors of whatever it is you're trying to do at the edge with a particular algorithm. The whole notion of how are all of these edge points applications being trained, with what data, at what interval? Are they being retrained on a daily basis, hourly basis, moment by moment basis? All of those are critical concerns to know whether they're making the best automated decisions or actions possible in all scenarios. That's like a black box in terms of the sheer complexity of what needs to be logged to figure out whether the application is doing its job as best a possible. You need a massive log, you need a massive event log from end to end of the IOT to do that right and to provide that visibility ongoing into the performance of these AI driven edge devices. I don't know anybody who's providing the tool to do it. >> David: If I think about how it's done at the moment, it's obviously far too slow at the moment. At the same time, you've got to have some testing and things like that. It seems to me that you've got a research model on one side and then you need to create a working model from that which is your production model. That's the one that goes through the testing and everything of that sort. It seems to me that the interface would be that transition from the research model to the working model that would be critical here and the working model is obviously a subset and it's going to be optimized for performance, etc. in real time, as opposed to the development model which can be a lot to do and take half a week to manage it necessary. It seems to me that you've got a different set of business pressures on the working model and a different set of skills as well. I think having one team here doesn't sound right to me. You've got to have a Dev Ops team who are going to take the working model from the developers and then make sure that it's sound and save. Especially in a high value IOT area that the level of iteration is not going to be nearly as high as in a lower cost marketing type application. Does that sound sensible? >> That sounds sensible. In fact in Dev Ops, the Dev Ops team would definitely be the ones that handle the continuous training and retraining of the working models on an ongoing basis. That's a core observation. >> David: Is that the right way of doing it, Jim? It seems to me that the research people would be continuing to adapt from data from a lot of different places whereas the operational model would be at a specific location with a specific IOT and they wouldn't have necessarily all the data there to do that. I'm not quite sure whether - >> Dave: Hey guys? Hey guys, hey guys? Can I jump in here? Interesting discussion, but highly nuanced and I'm struggling to figure out how this turns into a piece or sort of debating some certain specifics that are very kind of weedy. I wonder if we could just reset for a second and come back to sort of what I was trying to get to before which is really the business impact. Should this be applied broadly? Should this be applied specifically? What does it mean if I'm a practitioner? What should I take away from, Jim your premise and your sort of fixed parameters? Should I be implementing this? Why? Where? What's the value to my organization - the value I guess is obvious, but does it fit everywhere? Should it be across the board? Can you address that? >> Neil: Can I jump in here for a second? >> Dave: Please, that would be great. Is that Neil? >> Neil: Neil. I've never been a data scientist, but I was an actuary a long time ago. When the truth actuary came to me and said we need to develop a liability insurance coverage for floating oil rigs in the North Sea, I'm serious, it took a couple of months of research and modeling and so forth. If I had to go to all of those meetings and stand ups in an agile development environment, I probably would have gone postal on the place. I think that there's some confusion about what data science is. It's not a vector. It's not like a Dev Op situation where you start with something and you go (mumbling). When a data scientist or whatever you want to call them comes up with a model, that model has to be constantly revisited until it's put out of business. It's refined, it's evaluated. It doesn't have an end point like that. The other thing is that data scientist is typically going to be running multiple projects simultaneously so how in the world are you going to agilize that? I think if you look at the data science group, they're probably, I think Nick said this, there are probably groups in there that are doing fewer Dev Ops, software engineering and so forth and you can apply agile techniques to them. The whole data science thing is too squishy for that, in my opinion. >> Jim: Squishy? What do you mean by squishy, Neil? >> Neil: It's not one thing. I think if you try to represent data science as here's a project, we gather data, we work on a model, we test it, and then we put it into production, it doesn't end there. It never ends. It's constantly being revised. >> Yeah, of course. It's akin to application maintenance. The application meaning the model, the algorithm to be fit for purpose has to continually be evaluated, possibly tweaked, always retrained to determine its predictive fit for whatever task it's been assigned. You don't build it once and assume its strong predictive fit forever and ever. You can never assume that. >> Neil: James and I called that adaptive control mechanisms. You put a model out there and you monitor the return you're getting. You talk about AB testing, that's one method of doing it. I think that a data scientist, somebody who really is keyed into the machine learning and all that jazz. I just don't see them as being project oriented. I'll tell you one other thing, I have a son who's a software engineer and he said something to me the other day. He said, "Agile? Agile's dead." I haven't had a chance to find out what he meant by that. I'll get back to you. >> Oh, okay. If you look at - Go ahead. >> Dave: I'm sorry, Neil. Just to clarify, he said agile's dead? Was that what he said? >> Neil: I didn't say it, my son said it. >> Dave: Yeah, yeah, yeah right. >> Neil: No idea what he was talking about. >> Dave: Go ahead, Jim. Sorry. >> If you look at waterfall development in general, for larger projects it's absolutely essential to get requirements nailed down and the functional specifications and all that. Where you have some very extensive projects and many moving parts, obviously you need a master plan that it all fits into and waterfall, those checkpoints and so forth, those controls that are built into that methodology are critically important. Within the context of a broad project, some of the assets being build up might be machine loading models and analytics models and so forth so in the context of our broader waterfall oriented software development initiative, you might need to have multiple data science projects spun off within the sub-projects. Each of those would fit into, by itself might be indicated sort of like an exploration task where you have a team doing data visualization, exploration in more of an open-ended fashion because while they're trying to figure out the right set of predictors and the right set of data to be able to build out the right model to deliver the right result. What I'm getting at is that agile approaches might be embedded into broader waterfall oriented development initiatives, agile data science approaches. Fundamentally, data science began and still is predominantly very smart people, PhDs in statistics and math, doing open-ended exploration of complex data looking for non-obvious patterns that you wouldn't be able to find otherwise. Sort of a fishing expedition, a high priced fishing expedition. Kind of a mode of operation as how data science often is conducted in the real world. Looking for that eureka moment when the correlations just jump out at you. There's a lot of that that goes on. A lot of that is very important data science, it's more akin to pure science. What I'm getting at is there might be some role for more structure in waterfall development approaches in projects that have a data science, core data science capability to them. Those are my thoughts. >> Dave: Okay, we probably should move on to the next topic here, but just in closing can we get people to chime in on sort of the bottom line here? If you're writing to an audience of data scientists or data scientist want to be's, what's the one piece of advice or a couple of pieces of advice that you would give them? >> First of all, data science is a developer competency. The modern developers are, many of them need to be data scientists or have a strong grounding and understanding of data science, because much of that machine learning and all that is increasingly the core of what software developers are building so you can't not understand data science if you're a modern software developer. You can't understand data science as it (garbled) if you don't understand the need for agile iterative steps within the, because they're looking for the needle in the haystack quite often. The right combination of predictive variables and the right combination of algorithms and the right training regimen in order to get it all fit. It's a new world competency that need be mastered if you're a software development professional. >> Dave: Okay, anybody else want to chime in on the bottom line there? >> David: Just my two penny worth is that the key aspect of all the data scientists is to come up with the algorithm and then implement them in a way that is robust and it part of the system as a whole. The return on investment on the data science piece as an insight isn't worth anything until it's actually implemented and put into production of some sort. It seems that second stage of creating the working model is what is the output of your data scientists. >> Yeah, it's the repeatable deployable asset that incorporates the crux of data science which is algorithms that are data driven, statistical algorithms that are data driven. >> Dave: Okay. If there's nothing else, let's close this agenda item out. Is Nick on? Did Nick join us today? Nick, you there? >> Nick: Yeah. >> Dave: Sounds like you're on. Tough to hear you. >> Nick: How's that? >> Dave: Better, but still not great. Okay, we can at least hear you now. David, you wanted to present on NVMe over fabric pivoting off the Micron news. What is NVMe over fabric and who gives a fuck? (laughing) >> David: This is Micron, we talked about it last week. This is Micron announcement. What they announced is NVMe over fabric which, last time we talked about is the ability to create a whole number of nodes. They've tested 250, the architecture will take them to 1,000. 1,000 processor or 1,000 nodes, and be able to access the data on any single node at roughly the same speed. They are quoting 200 microseconds. It's 195 if it's local and it's 200 if it's remote. That is a very, very interesting architecture which is like nothing else that's been announced. >> Participant: David, can I ask a quick question? >> David: Sure. >> Participant: This latency and the node count sounds astonishing. Is Intel not replicating this or challenging in scope with their 3D Crosspoint? >> David: 3D Crosspoint, Intel would love to sell that as a key component of this. The 3D Crosspoint as a storage device is very, very, very expensive. You can replicate most of the function of 3D Crosspoint at a much lower price point by using a combination of D-RAM and protective D-RAM and Flash. At the moment, 3D Crosspoint is a nice to have and there'll be circumstances where they will use it, but at the meeting yesterday, I don't think they, they might have brought it up once. They didn't emphasize it (mumbles) at all as being part of it. >> Participant: To be clear, this means rather than buying Intel servers rounded out with lots of 3D Crosspoint, you buy Intel servers just with the CPU and then all the Micron niceness for their NVMe and their Interconnect? >> David: Correct. They are still Intel servers. The ones they were displaying yesterday were HP1's, they also used SuperMicro. They want certain characteristics of the chip set that are used, but those are just standard pieces. The other parts of the architecture are the Mellanox, the 100 gigabit converged ethernet and using Rocky which is IDMA over converged ethernet. That is the secret sauce which allows you and Mellanox themselves, their cards have a lot of offload of a lot of functionality. That's the secret sauce which allows you to go from any point to any point in 5 microseconds. Then create a transfer and other things. Files are on top of that. >> Participant: David, Another quick question. The latency is incredibly short. >> David: Yep. >> Participant: What happens if, as say an MPP SQL database with 1,000 nodes, what if they have to shuffle a lot of data? What's the throughput? Is it limited by that 100 gig or is that so insanely large that it doesn't matter? >> David: They key is this, that it allows you to move the processing to wherever the data is very, very easily. In the principle that will evolve from this architecture, is that you know where the data is so don't move the data around, that'll block things up. Move the processing to that particular node or some adjacent node and do the processing as close as possible. That is as an architecture is a long term goal. Obviously in the short term, you've got to take things as they are. Clearly, a different type of architecture for databases will need to eventually evolve out of this. At the moment, what they're focusing on is big problems which need low latency solutions and using databases as they are and the whole end to end use stack which is a much faster way of doing it. Then over time, they'll adapt new databases, new architectures to really take advantage of it. What they're offering is a POC at the moment. It's in Beta. They had their customers talking about it and they were very complimentary in general about it. They hope to get it into full production this year. There's going to be a host of other people that are doing this. I was trying to bottom line this in terms of really what the link is with digital enablement. For me, true digital enablement is enabling any relevant data to be available for processing at the point of business engagement in real time or near real time. The definition that this architecture enables. It's a, in my view a potential game changer in that this is an architecture which will allow any data to be available for processing. You don't have to move the data around, you move the processing to that data. >> Is Micron the first market with this capability, David? NV over Me? NVMe. >> David: Over fabric? Yes. >> Jim: Okay. >> David: Having said that, there are a lot of start ups which have got a significant amount of money and who are coming to market with their own versions. You would expect Dell, HP to be following suit. >> Dave: David? Sorry. Finish your thought and then I have another quick question. >> David: No, no. >> Dave: The principle, and you've helped me understand this many times, going all the way back to Hadoop, bring the application to the data, but when you're using conventional relational databases and you've had it all normalized, you've got to join stuff that might not be co-located. >> David: Yep. That's the whole point about the five microseconds. Now that the impact of non co-location if you have to join stuff or whatever it is, is much, much lower. It's so you can do the logical draw in, whatever it is, very quickly and very easily across that whole fabric. In terms of processing against that data, then you would choose to move the application to that node because it's much less data to move, that's an optimization of the architecture as opposed to a fundamental design point. You can then optimize about where you run the thing. This is ideal architecture for where I personally see things going which is traditional systems of record which need to be exactly as they've ever been and then alongside it, the artificial intelligence, the systems of understanding, data warehouses, etc. Having that data available in the same space so that you can combine those two elements in real time or in near real time. The advantage of that in terms of business value, digital enablement, and business value is the biggest thing of all. That's a 50% improvement in overall productivity of a company, that's the thing that will drive, in my view, 99% of the business value. >> Dave: Going back just to the joint thing, 100 gigs with five microseconds, that's really, really fast, but if you've got petabytes of data on these thousand nodes and you have to do a join, you still got to go through that 100 gig pipe of stuff that's not co-located. >> David: Absolutely. The way you would design that is as you would design any query. You've got a process you would need, a process in front of that which is query optimization to be able to farm all of the independent jobs needed to do in each of the nodes and take the output of that and bring that together. Both the concepts are already there. >> Dave: Like a map. >> David: Yes. That's right. All of the data science is there. You're starting from an architecture which is fundamentally different from the traditional let's get it out architectures that have existed, by removing that huge overhead of going from one to another. >> Dave: Oh, because this goes, it's like a mesh not a ring? >> David: Yes, yes. >> Dave: It's like the high performance compute of this MPI type architecture? >> David: Absolutely. NVMe, by definition is a point to point architecture. Rocky, underneath it is a point to point architecture. Everything is point to point. Yes. >> Dave: Oh, got it. That really does call for a redesign. >> David: Yes, you can take it in steps. It'll work as it is and then over time you'll optimize it to take advantage of it more. Does that definition of (mumbling) make sense to you guys? The one I quoted to you? Enabling any relevant data to be available for processing at the point of business engagement, in real time or near real time? That's where you're trying to get to and this is a very powerful enabler of that design. >> Nick: You're emphasizing the network topology, while I kind of thought the heart of the argument was performance. >> David: Could you repeat that? It's very - >> Dave: Let me repeat. Nick's a little light, but I could hear him fine. You're emphasizing the network topology, but Nick's saying his takeaway was the whole idea was the thrust was performance. >> Nick: Correct. >> David: Absolutely. Absolutely. The result of that network topology is a many times improvement in performance of the systems as a whole that you couldn't achieve in any previous architecture. I totally agree. That's what it's about is enabling low latency applications with much, much more data available by being able to break things up in parallel and delivering multiple streams to an end result. Yes. >> Participant: David, let me just ask, if I can play out how databases are designed now, how they can take advantage of it unmodified, but how things could be very, very different once they do take advantage of it which is that today, if you're doing transaction processing, you're pretty much bottle necked on a single node that sort of maintains the fresh cache of shared data and that cache, even if it's in memory, it's associated with shared storage. What you're talking about means because you've got memory speed access to that cache from anywhere, it no longer is tied to a node. That's what allows you to scale out to 1,000 nodes even for transaction processing. That's something we've never really been able to do. Then the fact that you have a large memory space means that you no longer optimize for mapping back and forth from disk and disk structures, but you have everything in a memory native structure and you don't go through this thing straw for IO to storage, you go through memory speed IO. That's a big, big - >> David: That's the end point. I agree. That's not here quite yet. It's still IO, so the IO has been improved dramatically, the protocol within the Me and the over fabric part of it. The elapsed time has been improved, but it's not yet the same as, for example, the HPV initiative. That's saying you change your architecture, you change your way of processing just in the memory. Everything is assumed to be memory. We're not there yet. 200 microseconds is still a lot, lot slower than the process that - one impact of this architecture is that the amount of data that you can pass through it is enormously higher and therefore, the memory sizes themselves within each node will need to be much, much bigger. There is a real opportunity for architectures which minimize the impact, which hold data coherently across multiple nodes and where there's minimal impact of, no tapping on the shoulder for every byte transferred so you can move large amounts of data into memory and then tell people that it's there and allow it to be shared, for example between the different calls and the GPUs and FPGAs that will be in these processes. There's more to come in terms of the architecture in the future. This is a step along the way, it's not the whole journey. >> Participant: Dave, another question. You just referenced 200 milliseconds or microseconds? >> David: Did I say milliseconds? I meant microseconds. >> Participant: You might have, I might have misheard. Relate that to the five microsecond thing again. >> David: If you have data directly attached to your processor, the access time is 195 microseconds. If you need to go to a remote, anywhere else in the thousand nodes, your access time is 200 microseconds. In other words, the additional overhead of that data is five microseconds. >> Participant: That's incredible. >> David: Yes, yes. That is absolutely incredible. That's something that data scientists have been working on for years and years. Okay. That's the reason why you can now do what I talked about which was you can have access from any node to any data within that large amount of nodes. You can have petabytes of data there and you can have access from any single node to any of that data. That, in terms of data enablement, digital enablement, is absolutely amazing. In other words, you don't have to pre put the data that's local in one application in one place. You're allowing an enormous flexibility in how you design systems. That coming back to artificial intelligence, etc. allows you a much, much larger amount of data that you can call on for improving applications. >> Participant: You can explore and train models, huge models, really quickly? >> David: Yes, yes. >> Participant: Apparently that process works better when you have an MPI like mesh than a ring. >> David: If you compare this architecture to the DSST architecture which was the first entrance into this that MP bought for a billion dollars, then that one stopped at 40 nodes. It's architecture was very, very proprietary all the way through. This one takes you to 1,000 nodes with much, much lower cost. They believe that the cost of the equivalent DSSD system will be between 10 and 20% of that cost. >> Dave: Can I ask a question about, you mentioned query optimizer. Who develops the query optimizer for the system? >> David: Nobody does yet. >> Jim: The DBMS vendor would have to re-write theirs with a whole different pensive cost. >> Dave: So we would have an optimizer database system? >> David: Who's asking a question, I'm sorry. I don't recognize the voice. >> Dave: That was Neil. Hold on one second, David. Hold on one second. Go ahead Nick. You talk about translation. >> Nick: ... On a network. It's SAN. It happens to be very low latency and very high throughput, but it's just a storage sub-system. >> David: Yep. Yep. It's a storage sub-system. It's called a server SAN. That's what we've been talking about for a long time is you need the same characteristics which is that you can get at all the data, but you need to be able to get at it in compute time as opposed to taking a stroll down the road time. >> Dave: Architecturally it's a SAN without an array controller? >> David: Exactly. Yeah, the array controller is software from a company called Xcellate, what was the name of it? I can't remember now. Say it again. >> Nick: Xcelero or Xceleron? >> David: Xcelero. That's the company that has produced the software for the data services, etc. >> Dave: Let's, as we sort of wind down this segment, let's talk about the business impact again. We're talking about different ways potentially to develop applications. There's an ecosystem requirement here it sounds like, from the ISDs to support this and other developers. It's the final, portends the elimination of the last electromechanical device in computing which has implications for a lot of things. Performance value, application development, application capability. Maybe you could talk about that a little bit again thinking in terms of how practitioners should look at this. What are the actions that they should be taking and what kinds of plans should they be making in their strategies? >> David: I thought Neil's comment last week was very perceptive which is, you wouldn't start with people like me who have been imbued with the 100 database call limits for umpteen years. You'd start with people, millennials, or sub-millenials or whatever you want to call them, who can take a completely fresh view of how you would exploit this type of architecture. Fundamentally you will be able to get through 10 or 100 times more data in real time than you can with today's systems. There's two parts of that data as I said before. The traditional systems of record that need to be updated, and then a whole host of applications that will allow you to do processes which are either not possible, or very slow today. To give one simple example, if you want to do real time changing of pricing based on availability of your supply chain, based on what you've got in stock, based on the delivery capabilities, that's a very, very complex problem. The optimization of all these different things and there are many others that you could include in that. This will give you the ability to automate that process and optimize that process in real time as part of the systems of record and update everything together. That, in terms of business value is extracting a huge number of people who previously would be involved in that chain, reducing their involvement significantly and making the company itself far more agile, far more responsive to change in the marketplace. That's just one example, you can think of hundreds for every marketplace where the application now becomes the systems of record, augmented by AI and huge amounts more data can improve the productivity of an organization and the agility of an organization in the marketplace. >> This is a godsend for AI. AI, the draw of AI is all this training data. If you could just move that in memory speed to the application in real time, it makes the applications much sharper and more (mumbling). >> David: Absolutely. >> Participant: How long David, would it take for the cloud vendors to not just offer some instances of this, but essentially to retool their infrastructure. (laughing) >> David: This is, to me a disruption and a half. The people who can be first to market in this are the SaaS vendors who can take their applications or new SaaS vendors. ISV. Sorry, say that again, sorry. >> Participant: The SaaS vendors who have their own infrastructure? >> David: Yes, but it's not going to be long before the AWS' and Microsofts put this in their tool bag. The SaaS vendors have the greatest capability of making this change in the shortest possible time. To me, that's one area where we're going to see results. Make no mistake about it, this is a big change and at the Micron conference, I can't remember what the guys name was, he said it takes two Olympics for people to start adopting things for real. I think that's going to be shorter than two Olympics, but it's going to be quite a slow process for pushing this out. It's radically different and a lot of the traditional ways of doing things are going to be affected. My view is that SaaS is going to be the first and then there are going to be individual companies that solve the problems themselves. Large companies, even small companies that put in systems of this sort and then use it to outperform the marketplace in a significant way. Particularly in the finance area and particularly in other data intent areas. That's my two pennies worth. Anybody want to add anything else? Any other thoughts? >> Dave: Let's wrap some final thoughts on this one. >> Participant: Big deal for big data. >> David: Like it, like it. >> Participant: It's actually more than that because there used to be a major trade off between big data and fast data. Latency and throughput and this starts to push some of those boundaries out so that you sort of can have both at once. >> Dave: Okay, good. Big deal for big data and fast data. >> David: Yeah, I like it. >> Dave: George, you want to talk about digital twins? I remember when you first sort of introduced this, I was like, "Huh? What's a digital twin? "That's an interesting name." I guess, I'm not sure you coined it, but why don't you tell us what digital twin is and why it's relevant. >> George: All right. GE coined it. I'm going to, at a high level talk about what it is, why it's important, and a little bit about as much as we can tell, how it's likely to start playing out and a little bit on the differences of the different vendors who are going after it. As far as sort of defining it, I'm cribbing a little bit from a report that's just in the edit process. It's data representation, this is important, or a model of a product, process, service, customer, supplier. It's not just an industrial device. It can be any entity involved in the business. This is a refinement sort of Peter helped with. The reason it's any entity is because there is, it can represent the structure and behavior, not just of a machine tool or a jet engine, but a business process like sales order process when you see it on a screen and its workflow. That's a digital twin of what used to be a physical process. It applied to both the devices and assets and processes because when you can model them, you can integrate them within a business process and improve that process. Going back to something that's more physical so I can do a more concrete definition, you might take a device like a robotic machine tool and the idea is that the twin captures the structure and the behavior across its lifecycle. As it's designed, as it's built, tested, deployed, operated, and serviced. I don't know if you all know the myth of, in the Greek Gods, one of the Goddesses sprang fully formed from the forehead of Zeus. I forgot who it was. The point of that is digital twin is not going to spring fully formed from any developers head. Getting to the level of fidelity I just described is a journey and a long one. Maybe a decade or more because it's difficult. You have to integrate a lot of data from different systems and you have to add structure and behavior for stuff that's not captured anywhere and may not be captured anywhere. Just for example, CAD data might have design information, manufacturing information might come from there or another system. CRM data might have support information. Maintenance repair and overhaul applications might have information on how it's serviced. Then you also connect the physical version with the digital version with essentially telemetry data that says how its been operating over time. That sort of helps define its behavior so you can manipulate that and predict things or simulate things that you couldn't do with just the physical version. >> You have to think about combined with say 3D printers, you could create a hot physical back up of some malfunctioning thing in the field because you have the entire design, you have the entire history of its behavior and its current state before it went kablooey. Conceivably, it can be fabricated on the fly and reconstituted as a physicologic from the digital twin that was maintained. >> George: Yes, you know what actually that raises a good point which is that the behavior that was represented in the telemetry helps the designer simulate a better version for the next version. Just what you're saying. Then with 3D printing, you can either make a prototype or another instance. Some of the printers are getting sophisticated enough to punch out better versions or parts for better versions. That's a really good point. There's one thing that has to hold all this stuff together which is really kind of difficult, which is challenging technology. IBM calls it a knowledge graph. It's pretty much in anyone's version. They might not call it a knowledge graph. It's a graph is, instead of a tree where you have a parent and then children and then the children have more children, a graph, many things can relate to many things. The reason I point that out is that puts a holistic structure over all these desperate sources of data behavior. You essentially talk to the graph, sort of like with Arnold, talk to the hand. That didn't, I got crickets. (laughing) Let me give you guys the, I put a definitions table in this dock. I had a couple things. Beta models. These are some important terms. Beta model represents the structure but not the behavior of the digital twin. The API represents the behavior of the digital twin and it should conform to the data model for maximum developer usability. Jim, jump in anywhere where you feel like you want to correct or refine. The object model is a combination of the data model and API. You were going to say something? >> Jim: No, I wasn't. >> George: Okay. The object model ultimately is the digital twin. Another way of looking at it, defining the structure and behavior. This sounds like one of these, say "T" words, the canonical model. It's a generic version of the digital twin or really the one where you're going to have a representation that doesn't have customer specific extensions. This is important because the way these things are getting built today is mostly custom spoke and so if you want to be able to reuse work. If someone's building this for you like a system integrator, you want to be able to, or they want to be able to reuse this on the next engagement and you want to be able to take the benefit of what they've learned on the next engagement back to you. There has to be this canonical model that doesn't break every time you essentially add new capabilities. It doesn't break your existing stuff. Knowledge graph again is this thing that holds together all the pieces and makes them look like one coherent hole. I'll get to, I talked briefly about network compatibility and I'll get to level of detail. Let me go back to, I'm sort of doing this from crib notes. We talked about telemetry which is sort of combining the physical and the twin. Again, telemetry's really important because this is like the time series database. It says, this is all the stuff that was going on over time. Then you can look at telemetry data that tells you, we got a dirty power spike and after three of those, this machine sort of started vibrating. That's part of how you're looking to learn about its behavior over time. In that process, models get better and better about predicting and enabling you to optimize their behavior and the business process with which it integrates. I'll give some examples of that. Twins, these digital twins can themselves be composed in levels of detail. I think I used the example of a robotic machine tool. Then you might have a bunch of machine tools on an assembly line and then you might have a bunch of assembly lines in a factory. As you start modeling, not just the single instance, but the collections that higher up and higher levels of extractions, or levels of detail, you get a richer and richer way to model the behavior of your business. More and more of your business. Again, it's not just the assets, but it's some of the processes. Let me now talk a little bit about how the continual improvement works. As Jim was talking about, we have data feedback loops in our machine learning models. Once you have a good quality digital twin in place, you get the benefit of increasing returns from the data feedback loops. In other words, if you can get to a better starting point than your competitor and then you get on the increasing returns of the data feedback loops, that is improving the fidelity of the digital twins now faster than your competitor. For one twin, I'll talk about how you want to make the whole ecosystem of twins sort of self-reinforcing. I'll get to that in a sec. There's another point to make about these data feedback loops which is traditional apps, and this came up with Jim and Neil, traditional apps are static. You want upgrades, you get stuff from the vendor. With digital twins, they're always learning from the customer's data and that has implications when the partner or vendor who helped build it for a customer takes learnings from the customer and goes to a similar customer for another engagement. I'll talk about the implications from that. This is important because it's half packaged application and half bespoke. The fact that you don't have to take the customer's data, but your model learns from the data. Think of it as, I'm not going to take your coffee beans, your data, but I'm going to run or make coffee from your beans and I'm going to take that to the next engagement with another customer who could be your competitor. In other words, you're extracting all the value from the data and that helps modify the behavior of the model and the next guy gets the benefit of it. Dave, this is the stuff where IBM keeps saying, we don't take your data. You're right, but you're taking the juice you squeezed out of it. That's one of my next reports. >> Dave: It's interesting, George. Their contention is, they uniquely, unlike Amazon and Google, don't swap spit, your spit with their competitors. >> George: That's misleading. To say Amazon and Google, those guys aren't building digital twins. Parametric technology is. I've got this definitely from a parametric technical fellow at an AWS event last week, which is they, not only don't use the data, they don't use the structure of the twin either from engagement to engagement. That's a big difference from IBM. I have a quote, Chris O'Connor from IBM Munich saying, "We'll take the data model, "but we won't take the data." I'm like, so you take the coffee from the beans even if you don't take the beans? I'm going to be very specific about saying that saying you don't do what Google and FaceBook do, what they do, it's misleading. >> Dave: My only caution there is do some more vetting and checking. A lot of times what some guy says on a Cube interview, he or she doesn't even know, in my experience. Make sure you validate that. >> George: I'll send it to them for feedback, but it wasn't just him. I got it from the CTO of the IOT division as well. >> Dave: When you were in Munich? >> George: This wasn't on the Cube either. This was by the side of, at the coffee table during our break. >> Dave: I understand and CTO's in theory should know. I can't tell you how many times I've gotten a definitive answer from a pretty senior level person and it turns out it was, either they weren't listening to me or they didn't know or they were just yessing me or whatever. Just be really careful and make sure you do your background checks. >> George: I will. I think the key is leave them room to provide a nuanced answer. It's more of a really, really, really concrete about really specific edge conditions and say do you or don't you. >> Dave: This is a pretty big one. If I'm a CIO, a chief digital officer, a chief data officer, COO, head of IT, head of data science, what should I be doing in this regard? What's the advice? >> George: Okay, can I go through a few more or are we out of time? >> Dave: No, we have time. >> George: Let me do a couple more points. I talked about training a single twin or an instance of a twin and I talked about the acceleration of the learning curve. There's edge analytics, David has educated us with the help of looking at GE Predicts. David, you have been talking about this fpr a long time. You want edge analytics to inform or automate a low latency decision and so this is where you're going to have to run some amount of analytics. Right near the device. Although I got to mention, hopefully this will elicit a chuckle. When you get some vendors telling you what their edge and cloud strategies are. Map R said, we'll have a hadoop cluster that only needs four or five nodes as our edge device. And we'll need five admins to care and feed it. He didn't say the last part, but that obviously isn't going to work. The edge analytics could be things like recalibrating the machine for different tolerance. If it's seeing that it's getting out of the tolerance window or something like that. The cloud, and this is old news for anyone who's been around David, but you're going to have a lot of data, not all of it, but going back to the cloud to train both the instances of each robotic machine tool and the master of that machine tool. The reason is, an instance would be oh I'm operating in a high humidity environment, something like that. Another one would be operating where there's a lot of sand or something that screws up the behavior. Then the master might be something that has behavior that's sort of common to all of them. It's when the training, the training will take place on the instances and the master and will in all likelihood push down versions of each. Next to the physical device process, whatever, you'll have the instance one and a class one and between the two of them, they should give you the optimal view of behavior and the ability to simulate to improve things. It's worth mentioning, again as David found out, not by talking to GE, but by accidentally looking at their documentation, their whole positioning of edge versus cloud is a little bit hand waving and in talking to the guys from ThingWorks which is a division of what used to be called Parametric Technology which is just PTC, it appears that they're negotiating with GE to give them the orchestration and distributed database technology that GE can't build itself. I've heard also from two ISV's, one a major one and one a minor one who are both in the IOT ecosystem one who's part of the GE ecosystem that predicts as a mess. It's analysis paralysis. It's not that they don't have talent, it's just that they're not getting shit done. Anyway, the key thing now is when you get all this - >> David: Just from what I learned when I went to the GE event recently, they're aware of their requirement. They've actually already got some sub parts of the predix which they can put in the cloud, but there needs to be more of it and they're aware of that. >> George: As usual, just another reason I need a red phone hotline to David for any and all questions I have. >> David: Flattery will get you everywhere. >> George: All right. One of the key takeaways, not the action item, but the takeaway for a customer is when you get these data feedback loops reinforcing each other, the instances of say the robotic machine tools to the master, then the instance to the assembly line to the factory, when all that is being orchestrated and all the data is continually enhancing the models as well as the manual process of adding contextual information or new levels of structure, this is when you're on increasing returns sort of curve that really contributes to sustaining competitive advantage. Remember, think of how when Google started off on search, it wasn't just their algorithm, but it was collecting data about which links you picked, in which order and how long you were there that helped them reinforce the search rankings. They got so far ahead of everyone else that even if others had those algorithms, they didn't have that data to help refine the rankings. You get this same process going when you essentially have your ecosystem of learning models across the enterprise sort of all orchestrating. This sounds like motherhood and apple pie and there's going to be a lot of challenges to getting there and I haven't gotten all the warts of having gone through, talked to a lot of customers who've gotten the arrows in the back, but that's the theoretical, really cool end point or position where the entire company becomes a learning organization from these feedback loops. I want to, now that we're in the edit process on the overall digital twin, I do want to do a follow up on IBM's approach. Hopefully we can do it both as a report and then as a version that's for Silicon Angle because that thing I wrote on Cloudera got the immediate attention of Cloudera and Amazon and hopefully we can both provide client proprietary value add, but also the public impact stuff. That's my high level. >> This is fascinating. If you're the Chief of Data Science for example, in a large industrial company, having the ability to compile digital twins of all your edge devices can be extraordinarily valuable because then you can use that data to do more fine-grained segmentation of the different types of edges based on their behavior and their state under various scenarios. Basically then your team of data scientists can then begin to identify the extent to which they need to write different machine learning models that are tuned to the specific requirements or status or behavior of different end points. What I'm getting at is ultimately, you're going to have 10 zillion different categories of edge devices performing in various scenarios. They're going to be driven by an equal variety of machine learning, deep learning AI and all that. All that has to be built up by your data science team in some coherent architecture where there might be a common canonical template that all devices will, all the algorithms and so forth on those devices are being built from. Each of those algorithms will then be tweaked to the specific digital twins profile of each device is what I'm getting at. >> George: That's a great point that I didn't bring up which is folks who remember object oriented programming, not that I ever was able to write a single line of code, but the idea, go into this robotic machine tool, you can inherit a couple of essentially component objects that can also be used in slightly different models, but let's say in this machine tool, there's a model for a spinning device, I forget what it's called. Like a drive shaft. That drive shaft can be in other things as well. Eventually you can compose these twins, even instances of a twin with essentially component models themselves. Thing Works does this. I don't know if GE does this. I don't think IBM does. The interesting thing about IBM is, their go to market really influences their approach to this which is they have this huge industry solutions group and then obviously the global business services group. These guys are all custom development and domain experts so they'll go into, they're literally working with Airbus and with the goal of building a model of a particular airliner. Right now I think they're doing the de-icing subsystem, I don't even remember on which model. In other words they're helping to create this bespoke thing and so that's what actually gets them into trouble with potentially channel conflict or maybe it's more competitor conflict because Airbus is not going to be happy if they take their learnings and go work with Boeing next. Whereas with PTC and Thing Works, at least their professional services arm, they treat this much more like the implementation of a packaged software product and all the learnings stay with the customer. >> Very good. >> Dave: I got a question, George. In terms of the industrial design and engineering aspect of building products, you mentioned PTC which has been in the CAD business and the engineering business for software for 50 years, and Ansis and folks like that who do the simulation of industrial products or any kind of a product that gets built. Is there a natural starting point for digital twin coming out of that area? That would be the vice president of engineering would be the guy that would be a key target for this kind of thinking. >> George: Great point. This is, I think PTC is closely aligned with Terradata and they're attitude is, hey if it's not captured in the CAD tool, then you're just hand waving because you won't have a high fidelity twin. >> Dave: Yeah, it's a logical starting point for any mechanical kind of device. What's a thing built to do and what's it built like? >> George: Yeah, but if it's something that was designed in a CAD tool, yes, but if it's something that was not, then you start having to build it up in a different way. I think, I'm trying to remember, but IBM did not look like they had something that was definitely oriented around CAD. Theirs looked like it was more where the knowledge graph was the core glue that pulled all the structure and behavior together. Again, that was a reflection of their product line which doesn't have a CAD tool and the fact that they're doing these really, really, really bespoke twins. >> Dave: I'm thinking that it strikes me that from the industrial design in engineering area, it's really the individual product is really the focus. That's one part of the map. The dynamic you're pointing at, there's lots of other elements of the map in terms of an operational, a business process. That might be the fleet of wind turbines or the fleet of trucks. How they behave collectively. There's lots of different entry points. I'm just trying to grapple with, isn't the CAD area, the engineering area at least for hard products, have an obvious starting point for users to begin to look at this. The BP of Engineering needs to be on top of this stuff. >> George: That's a great point that I didn't bring up which is, a guy at Microsoft who was their CTO in their IT organization gave me an example which was, you have a pipeline that's 1,000 miles long. It's got 10,000 valves in it, but you're not capturing the CAD design of the valve, you just put a really simple model that measures pressure, temperature, and leakage or something. You string 10,000 of those together into an overall model of the pipeline. That is a low fidelity thing, but that's all they need to start with. Then they can see when they're doing maintenance or when the flow through is higher or what the impact is on each of the different valves or flanges or whatever. It doesn't always have to start with super high fidelity. It depends on which optimizing for. >> Dave: It's funny. I had a conversation years ago with a guy, the engineering McNeil Schwendler if you remember those folks. He was telling us about 30 to 40 years ago when they were doing computational fluid dynamics, they were doing one dimensional computational fluid dynamics if you can imagine that. Then they were able, because of the compute power or whatever, to get the two dimensional computational fluid dynamics and finally they got to three dimensional and they're looking also at four and five dimensional as well. It's serviceable, I guess what I'm saying in that pipeline example, the way that they build that thing or the way that they manage that pipeline is that they did the one dimensional model of a valve is good enough, but over time, maybe a two or three dimensional is going to be better. >> George: That's why I say that this is a journey that's got to take a decade or more. >> Dave: Yeah, definitely. >> Take the example of airplane. The old joke is it's six million parts flying in close formation. It's going to be a while before you fit that in one model. >> Dave: Got it. Yes. Right on. When you have that model, that's pretty cool. All right guys, we're about out of time. I need a little time to prep for my next meeting which is in 15 minutes, but final thoughts. Do you guys feel like this was useful in terms of guiding things that you might be able to write about? >> George: Hugely. This is hugely more valuable than anything we've done as a team. >> Jim: This is great, I learned a lot. >> Dave: Good. Thanks you guys. This has been recorded. It's up on the cloud and I'll figure out how to get it to Peter and we'll go from there. Thanks everybody. (closing thank you's)

Published Date : May 9 2017

SUMMARY :

There you go. and maybe the key issues that you see and is coming even more deeply into the core practice You had mentioned, you rattled off a bunch of parameters. It's all about the core team needs to be, I got a minimal modular, incremental, iterative, iterative, adaptive, and co-locational. in the context of data science, and get automation of many of the aspects everything that these people do needs to be documented that the whole rapid idea development flies in the face of that create the final product that has to go into production and the algorithms and so forth that were used and the working model is obviously a subset that handle the continuous training and retraining David: Is that the right way of doing it, Jim? and come back to sort of what I was trying to get to before Dave: Please, that would be great. so how in the world are you going to agilize that? I think if you try to represent data science the algorithm to be fit for purpose and he said something to me the other day. If you look at - Just to clarify, he said agile's dead? Dave: Go ahead, Jim. and the functional specifications and all that. and all that is increasingly the core that the key aspect of all the data scientists that incorporates the crux of data science Nick, you there? Tough to hear you. pivoting off the Micron news. the ability to create a whole number of nodes. Participant: This latency and the node count At the moment, 3D Crosspoint is a nice to have That is the secret sauce which allows you The latency is incredibly short. Move the processing to that particular node Is Micron the first market with this capability, David? David: Over fabric? and who are coming to market with their own versions. Dave: David? bring the application to the data, Now that the impact of non co-location and you have to do a join, and take the output of that and bring that together. All of the data science is there. NVMe, by definition is a point to point architecture. Dave: Oh, got it. Does that definition of (mumbling) make sense to you guys? Nick: You're emphasizing the network topology, the whole idea was the thrust was performance. of the systems as a whole Then the fact that you have a large memory space is that the amount of data that you can pass through it You just referenced 200 milliseconds or microseconds? David: Did I say milliseconds? Relate that to the five microsecond thing again. anywhere else in the thousand nodes, That's the reason why you can now do what I talked about when you have an MPI like mesh than a ring. They believe that the cost of the equivalent DSSD system Who develops the query optimizer for the system? Jim: The DBMS vendor would have to re-write theirs I don't recognize the voice. Dave: That was Neil. It happens to be very low latency which is that you can get at all the data, Yeah, the array controller is software from a company called That's the company that has produced the software from the ISDs to support this and other developers. and the agility of an organization in the marketplace. AI, the draw of AI is all this training data. for the cloud vendors to not just offer are the SaaS vendors who can take their applications and then there are going to be individual companies Latency and throughput and this starts to push Dave: Okay, good. I guess, I'm not sure you coined it, and the idea is that the twin captures the structure Conceivably, it can be fabricated on the fly and it should conform to the data model and that helps modify the behavior Dave: It's interesting, George. saying, "We'll take the data model, Make sure you validate that. I got it from the CTO of the IOT division as well. This was by the side of, at the coffee table I can't tell you how many times and say do you or don't you. What's the advice? of behavior and the ability to simulate to improve things. of the predix which they can put in the cloud, I need a red phone hotline to David and all the data is continually enhancing the models having the ability to compile digital twins and all the learnings stay with the customer. and the engineering business for software hey if it's not captured in the CAD tool, What's a thing built to do and what's it built like? and the fact that they're doing these that from the industrial design in engineering area, but that's all they need to start with. and finally they got to three dimensional that this is a journey that's got to take It's going to be a while before you fit that I need a little time to prep for my next meeting This is hugely more valuable than anything we've done how to get it to Peter and we'll go from there.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

JimPERSON

0.99+

Chris O'ConnorPERSON

0.99+

GeorgePERSON

0.99+

DavePERSON

0.99+

AirbusORGANIZATION

0.99+

BoeingORGANIZATION

0.99+

Jim KobeielusPERSON

0.99+

JamesPERSON

0.99+

AmazonORGANIZATION

0.99+

IBMORGANIZATION

0.99+

GoogleORGANIZATION

0.99+

NeilPERSON

0.99+

JoePERSON

0.99+

NickPERSON

0.99+

David FloyerPERSON

0.99+

George GilbertPERSON

0.99+

1,000 milesQUANTITY

0.99+

10QUANTITY

0.99+

PeterPERSON

0.99+

195 microsecondsQUANTITY

0.99+

Nadeem Gulzar | DataWorks Summit Europe 2017


 

>> Announcer: Live from Munich, Germany, it's the CUBE, covering DataWorks Summit Europe 2017. Brought to you by Hortonworks. >> Hey welcome back everyone. We're here live in Munich Germany for DataWorks 2017 Summit, formerly know as Hadoop Summit, now called DataWorks. I'm John Furrier with the CUBE, my co-host Dave Vellante, here for two days of wall-to-wall coverage. Our next guest is Nadeem Gulzar, head of advanced Analytics at Danske Bank. Welcome to the CUBE. >> Thank you. >> You're a customer but also talking here at the event, bringing all your folks here. Your observation, I mean, Hadoop is not going away, certainly we see that. But now, as John Kreisa, who was MC'ing, was on earlier said, open up the aperture to analytics, is really where the action is. >> Nadeem: Absolutely. >> Your reaction to that. >> I completely agree, because again, Hadoop is basically just the basic infrastructure, right. Components build on components, and things like that. But, when you really utilize it, is when you add the advanced analytics frameworks. There are many out there. I'm not going to favor one over another. But the main thing is, you need that to really leverage Hadoop. And, at the same time, I think it's very important to realize how much power there actually is in this. For us at, in Danske Bank, getting Hadoop, getting the advanced analytics framework, has really proven quite a lot. It allowed us actually to dig into our core data, transaction data for instance, which we haven't been able to for decades. >> So take me through, because you guys are an interesting use case because you're advanced. You're gettin' at the data, which is cutting edge. But you're going through this transformation, and you have to because you're on the front lines. Take us inside the company, without giving away any trade secrets, and describe the environment. What's the current situation, and how is it evolving from an IT standpoint, and also from the relationship with the stakekholders in the business side. >> So again, we are a bank with 20,000 employees, so of course in a large organization you have silos, People feeling okay, this is my domain, this is my kingdom, don't touch it. Don't approach me, or you can approach me, talk to me, you have to convince me, otherwise don't talk to me at all. So we get that quite a lot, and to be honest, from my point of view, if we do not lift as a bank, we're not going to succeed. If I have success, if my organization of almost 60 people have success, that's good in itself, but we are not going to succeed as a bank. So for me, it's quite important that I go down and break down these barriers, and allow us to come in, tell the business units, tell them what sort of capabilities do we bring, and include them. That is actually the main key. I don't want to replace them or anything like that. >> So an organizational challenge is to get the mindset shifted. How 'about process gaps and product gaps? 'Cause I mean I almost see the sequence, kind of a group hug if you will, organizational mindset, kind of a reset or calibration. And then identify processes and then product gaps, seem to be the next transition. >> Absolutely, absolutely, and there are some gaps. Still, even though we have been on this journey for a considerable amount of time, there are still gaps, both in terms of processes and products. Because again, even though we have top management buy in, it doesn't go through all the way down to the middle layer. So we still struggle with this from time to time. >> How do you break down those barriers? What do you do, what's your strategy? >> I'm humble, to be honest. I go in, I tell them, listen you guys I have some capabilities that I can add to your capabilities. I want you to leverage me to make your life easier. I want to lift you as an organization. I don't care about myself, I want you to be better at what you're doing. >> So Nadeem, the money business and the technology business have always had a close relationship. It was like in 2010 after we came out of the downturn, it was like this other massive collision. You had begun experimenting with Cloud, the shift, CapEx to OpEx. The data thing hit in a big way, obviously mobile became real. So talk about the confluence of those technologies, specifically in the context of your big data journey. Where did you get started, and how did it evolve? >> So actually it fit in quite nicely because we were coming out of this down period, right, so there was extreme amount of focus on cost. So, of course at the time where we wanted to go into this journey, a lot of people were asking, okay how much does this cost, what's the big strategy, and so on. And how's the road map going to look like, and what's the cost of the road map? The thing is, if you buy some off the shelf commercial product, it's quite expensive. We can easily talk like half a billion, something like that, for a full end to end system. So with this, you were allowed, or we were allowed, to start up with relatively small funding, and I'm actually talking about just like a million dollars, roughly. And that actually allowed us a substantial boost in the capability department, in allowing us to show what kind of use cases we could build, and what kind of value we could bring to Danske Bank. >> So you started with understanding Hadoop? Is that right, was that the starting point? >> Yes, in a fairly small, very researched team set up. We did the initial research, we looked at, okay what could this bring? We did some initial, what we call, proof of value. So small, small, pilot projects, looking at, okay this is the data. We can leverage it in this way, this is the value we can bring. How much can we actually boost the business? So everything is directly linked to business value. So, for instance, one of the use cases was within customers, understanding customer behavior, directly linking it to marketing, do more targeted marketing, and at the end get more results in terms of increased sales. >> We just started a journey 2009, 2010, is that right? Or was it later? >> No, we started somewhat later. The initial research was in '14. >> In '14? Okay, alright, so '14 you sort of became familiar with Hadoop, and then I imagine, like many customers, you said okay, wow this stuff is complicated, but you were takin' it in small chunks, low risk. Let's get some value. Marketing is an obvious use case. I would imagine fraud is another obvious use case. So then, how did that evolve? I mean it's only a few years now, but I imagine you've evolved very quickly. >> Extremely quickly. Actually, within two months of the research, we actually saw a huge benefit in this area, and directly we went with the material to the senior members of the different boards we wanted to affect, and actually, you could call it luck. But, maybe we were just well prepared and convincing, so we actually directly got funding at that point in time. They said, listen, this is very promising. Here you go, start off with the initial, slightly larger projects, prove some value, and then come back to us. Initially they wanted us to do two things, look into the customer journey, or doing deeper customer behavior analytics, and the second was within risk. Doing things like, text mining, financial statements, getting some deeper into that, doing some web crawling on financial data such as Bloomberg, etcetera, and then pull it into the system. >> To inform your investments as a financial institution. From an architecture and infrastructure standpoint, we talked about starting at Hadoop. Has it evolved, how has it evolved? Where do you see it going? >> It has evolved quite a lot in the past couple of years. And again, to be honest, it's like every quarter something new is happening and we need to do some adjustments even to the core architecture. And with the introduction of HDB 3 hence later this year, I think we're going to see a massive change once again. Hortonworks already calls it a major change, or a major release. But actually, the things they are doing is extremely promising, so we want to take that step with them. But again, it's going to affect us. >> What's exciting about that to you? >> The thing that's very exciting is, we are now at like a balance point, where we have played quite a lot, we have released a couple of production grade solutions, but we have really not reached the full enterprise potential. So getting like into the real deep stuff with living under heavy SLA's, regulation stuff. All these kind of things is not in place yet, from my point of view. >> We talk a lot about, in the CUBE, and in our company, about these emergent work loads; you had batch, interactive, and the world went back to batch with Hadoop, and now you have this continuous workload, this streaming real-time workloads. How is that affecting your organization, generally, and specifically, you're thinking about architecture. How real is that and where do you see that in the future? >> It's the core, to be honest. Again, one of the main things we are trying to do is look into, so, gone are the days with heavy, heavy batches of data coming in. Because if you look at Weblocks for instance, so when customers interacts with our web, or our tablet solution, or mobile solution, the amount of data generated is humongous. So, no way on earth you can think about batches anymore. So it's more about streaming the data all the way in, doing real time analytics and then produce results. >> What would you say are your biggest, big data challenges, problems that you really want to attack and solve. >> So, what I really want to attack is, getting all sorts of data into the system. So, you can imagine, as a bank we have 2,000 plus systems. We have approximately 4,000 different points that delivers data. So getting all that mass into our data link, it's a huge task. We actually underestimated it. But now, we have seen we have to attack it and get it in because that is the gold. Data is the future gold. So we need to mine it in, we need to do analytics on top of it and produce value. >> And then once you get it in there, I'm sure you're anticipating that you want to make sure this doesn't go stale, doesn't become a swamp, doesn't get frozen. It's your job to talk about data oceans, which is really the long term vision I presume, right? >> And that is a key as well because with the GDPR for instance, we need to have full mapping and full control of all the data coming in. We need to be able to generate metadata, we need to have full data lineage. We need to know what, all the data where it came from, how it's interconnected, relations, all that. >> And that's what, two years away from implementation? Is that about right? >> It's going to take a while, of course. But again, the key thing is we make the framework so all the data coming in step by step, has that. >> Yeah, but so GDPR though, it goes into effect in '19, is that correct? >> It's actually May '18. >> May '18, oh, so it's much tighter time frame then I realized. >> John: You're under the gun. >> Nadeem: Yes. >> Okay, observation here at this event, obviously a lot of IOT, for you that's people. People and things are kind of the edge of the network. The intelligent edge is a big, big topic. Very dynamic. >> Nadeem: Extremely dynamic. >> A lot of things happening. Lot of opportunities for you to be this humble service provider to your constituents, but also your customers. How do you guys view that? What's the current landscape look like as you look outside the company and look at what's happening around you, the world. >> A lot of cool things are going on, to be honest. Especially in IOT, right? I mean, even though we are a core bank, still, there are a lot of sensors we can use. I talked a bit about, under the keynote, about ATM's, right? So, we're also looking at how can we utilize this technology? How can we enable our customers? If you look at our apps, they also generate extreme amounts of data, right? The mobile solution that we have, it gives away GPS location and things like that. And we want to include all that data in. At the end of the day, it's not for our gain, we are not always looking at making the next buck, right? It's also about being there for the customer, providing the services they need, making their banking life easier. >> And your ecosystem is evolving and rapidly adding new constituents to your network because, then you have the consumer with the phone, the mobile app alone, never mind the point of sale opportunity at the ATM. Now a digital, augmented reality experience could be enabled where you now have fintech suppliers, and potentially other suppliers in this now digital network that could be relational with you. >> Yes, and our job is to make sure that we leverage that. Acquiring a banking license is extremely difficult. But we have it, and what we need to do is to engage these fintechs, partners, even other banks, and say listen guys we invite you in. Utilize our services, utilize our framework, utilize our foundation and let's build something upon that. >> If you had to explain, Nadeem, this fintech start up trend because it is super hot, what is it? I mean how would you describe to someone who's not in the banking world. 'Cause most people would be scratching their head and say, isn't that banking? But, now this ecosystem is developing of new entrepreneurial activity and they're skyrocketing with success 'cause they have either a specialty focus, they do something extremely well. It may or may not be in a direct big space with a bank, but a white space. Use cases. So, is it good? Is it bad? Is it hype? What's the current state of the fintech situation? >> From my point of view, it's awesome. And the reason is, these guys are pushing us. Remember, we are a hundred fifty plus year old bank. And sometimes we do tend to just pat on our back and say, okay, this is going good, right? But, these guys are coming in, giving some competition, and we love it. >> Give me an example of a fintech capabilities. Randomly bring up some examples to highlight what fintech is. >> So what we've seen in, for instance the German market, is the fintechs coming in, utilizing some of the customer data, and then producing awesome new applications. Whether it is a new net bank, where a customer can interact with it, in a much, much more smoother way. Some of the banks tend to over clutter things, not make it simple. So things like, where you can put in, you can look at your transactions in a Google Map, for instance. You can see how much do you spend at this location. You can move around. >> You could literally follow the money, on a map. (laughing) >> So this is your home base, you go out here, you spend this amount of money, and maybe even add more on it. So, let's say you do your grocery shopping over here, but if I moved all my business from this company to this company, how much could I save? Imagine if you could just drag and drop it and see, okay, I could actually save a couple of thousand bucks, awesome. >> And machine learning is going to totally change the game with Augmented Intelligence. AI is called Artificial Intelligence, or Augmented Intelligence, depending upon your definition. This is a good thing for consumers. >> It is, it is. >> And thinking about disruption, what do you guys, what are your thoughts on blockchain? What is your research showing? You playing around with Hyperledger at all? >> Yes we are. And blockchain, it's also quite interesting. We're doing lots of research on that. What's it's shown actually is that this is a technology that we can also use. And we can also really utilize, even the security aspects of it. If you just take that, you could really implement that. >> The identity aspect, it's federating identity around fraud, another area you can innovate on. I'm bullish on blockchain, a lot of people are skeptical, but Dave knows I really, I love blockchain. Because it's not about Bitcoin per se, it's sort of the underlying opportunity. It just seems fascinating. Dave you know, I got to get on my soapbox, blockchain soapbox. >> We've never really looked at Bitcoin as just a currency, it's move of a technology platform, and I have always been fascinated with the security angle. Virtually unhackable, put that in quotes. No need for a third party to intermediate. So many positive fundamentals, now it's guys like you figuring out, okay the practitioner saying, here's how we're going to implement it and commercialize it. >> And actually it fits in quite well with things like GDPR. This is also about opening up, the same with PSD 2. Exposing the customer data, making it available for the general public. And ultimately the goal is, so you as a consumer, me as a consumer, we own our data. >> Nadeem, thank you so much for coming on the CUBE and sharing your practitioner situation, and your advice, as well as commentary. I'll give you the last word. As you and your team embark from DataWorks 2017 and head back to the ranch, so to speak, and bring back some stuff. What are you going to work on? What's the to do item? What are you going to sharpen the saw on and cut when you get back? >> So for us on the very, very short term, it's about taking our platform and our capabilities and move it into the real enterprise world. That is our first key milestone that we are going to go for. And, I'll tell you, we're going to go all in for that. Because, unless we do that, we're not able to really attack the core of banking, which requires this, right? Please remember that a consumer doing a transaction somewhere in the world, he cannot stand and wait for ages for something to be processed. It needs to be instantaneous. So, this is what we need to do. >> You think this event, you're armed up with product. >> Absolutely, absolutely. Lots of good insight we've gotten from this. Lots of potential, lots of networking guys and other companies that we can talk to about this. >> Also great recruiting, get some developers out there too, lot of great people. Congratulations on your success and thanks for sharing this great insight here on the CUBE, exposing the data to you live on the CUBE. Silicon Angle dot TV, I'm John Furrier, with Dave Vellante my co-host, more great coverage stay with us here live in Munich, Germany for DataWorks 2017 Summit. We'll be right back.

Published Date : Apr 6 2017

SUMMARY :

Brought to you by Hortonworks. Welcome to the CUBE. You're a customer but also talking here at the event, is when you add the advanced analytics frameworks. and you have to because you're on the front lines. So again, we are a bank with 20,000 employees, kind of a group hug if you will, So we still struggle with this from time to time. I want you to leverage me to make your life easier. the shift, CapEx to OpEx. And how's the road map going to look like, We did the initial research, we looked at, No, we started somewhat later. so '14 you sort of became familiar with Hadoop, and directly we went with the material Where do you see it going? and we need to do some adjustments So getting like into the real deep stuff and now you have this continuous workload, Again, one of the main things we are trying to do What would you say are your biggest, and get it in because that is the gold. And then once you get it in there, of all the data coming in. But again, the key thing is we make the framework so it's much tighter time frame then I realized. obviously a lot of IOT, for you that's people. Lot of opportunities for you A lot of cool things are going on, to be honest. then you have the consumer with the phone, and say listen guys we invite you in. I mean how would you describe to someone and we love it. Give me an example of a fintech capabilities. Some of the banks tend to over clutter things, You could literally follow the money, on a map. So, let's say you do your grocery shopping over here, And machine learning is going to totally change the game that we can also use. Dave you know, I got to get on my soapbox, and I have always been fascinated with the security angle. so you as a consumer, me as a consumer, we own our data. and cut when you get back? That is our first key milestone that we are going to go for. that we can talk to about this. exposing the data to you live on the CUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

JohnPERSON

0.99+

Danske BankORGANIZATION

0.99+

NadeemPERSON

0.99+

John KreisaPERSON

0.99+

Nadeem GulzarPERSON

0.99+

DavePERSON

0.99+

May '18DATE

0.99+

2009DATE

0.99+

2010DATE

0.99+

John FurrierPERSON

0.99+

BloombergORGANIZATION

0.99+

20,000 employeesQUANTITY

0.99+

two daysQUANTITY

0.99+

half a billionQUANTITY

0.99+

two yearsQUANTITY

0.99+

two monthsQUANTITY

0.99+

HortonworksORGANIZATION

0.99+

'19DATE

0.99+

'14DATE

0.99+

CUBEORGANIZATION

0.99+

two thingsQUANTITY

0.99+

Google MapTITLE

0.99+

Munich, GermanyLOCATION

0.99+

bothQUANTITY

0.98+

DataWorks 2017 SummitEVENT

0.98+

GDPRTITLE

0.98+

HadoopTITLE

0.98+

DataWorksEVENT

0.98+

Munich GermanyLOCATION

0.98+

PSD 2TITLE

0.98+

first key milestoneQUANTITY

0.98+

secondQUANTITY

0.97+

DataWorks SummitEVENT

0.97+

Hadoop SummitEVENT

0.97+

oneQUANTITY

0.97+

almost 60 peopleQUANTITY

0.95+

HadoopORGANIZATION

0.95+

later this yearDATE

0.94+

approximately 4,000 different pointsQUANTITY

0.94+

2,000 plus systemsQUANTITY

0.93+

a hundred fifty plus year oldQUANTITY

0.93+

Silicon Angle dot TVORGANIZATION

0.93+

2017EVENT

0.93+

a couple of thousand bucksQUANTITY

0.87+

DataWorks Summit Europe 2017EVENT

0.85+

decadesQUANTITY

0.85+

GermanLOCATION

0.84+

OpExORGANIZATION

0.82+

past couple of yearsDATE

0.76+

a million dollarsQUANTITY

0.76+

earthLOCATION

0.75+

CapExORGANIZATION

0.74+

HyperledgerORGANIZATION

0.71+

2017DATE

0.7+

3COMMERCIAL_ITEM

0.68+

BitcoinOTHER

0.64+

SLATITLE

0.64+

EuropeLOCATION

0.59+

WeblocksORGANIZATION

0.59+

HDBTITLE

0.58+

yearsQUANTITY

0.57+

DataWorksTITLE

0.49+

CloudTITLE

0.44+

CUBETITLE

0.41+

Lynn A Comp, Intel Coporation - Mobile World Congress 2017 - #MWC17 - #theCUBE


 

(upbeat electronic music) >> Everyone, welcome to our special Mobile World Congress 2017 coverage. I'm John Furrier here in theCUBE for two days of wall-to-wall coverage. Monday and Tuesday, February 27th and 28th, and we have on the phone right now, Lynn Comp, who's the Senior Director of the Network Platforms Group within Intel, part of the team doing the whole network transformation. The big announcements that went out prior to Mobile World Congress and hitting the ground on Monday and Tuesday of all next week in Barcelona. Lynn, great to have you on the phone. Thanks for taking the time to walk through some of the big announcements. >> Lynn: Thanks, John, for having us. It's a really exciting Mobile World Congress. We're seeing more and more of the promise of the next generation networks starting to take solution form from ingredient form a couple years ago, so it's a great, great time to be in this business. >> So 5G is happening now. You're seeing it in the network and the cloud and at the client, that you guys use the word "client" but essentially, it's the people with their smartphones and devices, wearables, AIs, and now the client is now cars, and flying drones and potentially, whatever else is connected to the Internet as an Internet of things. This has been a really big moment and I think I want to take some time to kind of unpack with you some of the complexities and kind of what's going on under the hood because 4G to 5G is a huge step up in the announcement and capabilities, and it's not just another device. There's really unique intellectual property involved, there's more power, there's a market leadership in the ecosystem, and really is a new way for service providers to achieve profitability, and get those products that are trying to connect, that need more power, more bandwidth, more capabilities. Can you take a minute just to talk about the key announcements impacting Mobile World Congress from Intel's perspective this week in your area? >> Lynn: Yeah, so we had a group of announcements that came out. Everything from solutions labs where operators are invited in to work with Nokia and Intel starting out to start working through what does it mean to try and manage a network that includes unlicensed and licensed spectrum and all these different usage models, very different model for them, to Ericsson, an initiative with GE and Honeywell and Intel, that is in Innovator's Initiative, where companies are invited to come in in the ecosystem. An early start working through what does it mean to have this kind of network capability? If you think what happened, 2G, 3G, to 4G, you start looking at the iPhone, been around for 10 years, and you've seen how the uses have changed, and how application developers have come up with completely new ways of doing things, like, who would have thought about crowdsourcing traffic patterns for driving directions? We all wanted it years ago, but it was just recently that we were able to have that on a smartphone. They're trying to unleash that with pretty unique companies. I mean, GE and Honeywell, UC Berkeley, you wouldn't necessarily think of them as being first on innovating new usage models for a wireless network, but with something like 5G, with all of these diverse use cases, you end up with a completely different ecosystem, really wanting to come in early and take advantage of the potential that's there. >> Lynn, talk about this end-to-end store because one of the things that got hidden in all the news, and certainly SiliconANGLE covered it, as well as, there was a great article in Fortune about it, but kind of talk about more of the 5G versus Qualcomm, that was kind of the big story that, the battle of the chips, if you will, and the big 5G angle there, but there's more to it and one thing that caught my attention was this end-to-end architecture, and it wasn't just Intel. You guys are a big part of that as an ingredient, but it's not just Intel, and what does that mean, end-to-end, 'cause I can see the wireless pieces and overlaying connecting devices, but where's the end-to-end fit in? Can you give some color on that? >> Lynn: Absolutely. You know what's really fascinating is you've got Intel and we've been in the cloud and heard of the genesis of what would become the consumer and the enterprise cloud from the very start, and so what we've been doing in working in that end-to-end arena is taking things like virtualization, which has allowed these service providers and enterprises to slice up compute resources and instead of having something that's completely locked and dedicated on one workload, they can create slices of different applications that all sit on the same hardware and share it, and so if you look, years ago, many of the service providers, cloud and enterprise, they were looking at utilization rights as maybe 15% of the compute power of a server, and now, a lot of them are aiming for 75 to 85% utilization, and that's just a crazy amount of (mumbles) so bringing that to this market that in traditional, we had single purpose boxes, there's various detections for one thing, but that creates a business challenge if you need to do more than one thing, so really what we're showing, for example, at Mobile World Congress, it's something that we call FlexRAN, and it's an example of how to run a radio area network on a standard server on the technology, and it does implement that network slicing. Its's very similar to the virtualization and the compute slicing, but taking advantage of it to use different bandwidths and different rates for different scenarios, whether IoT or smartphones, or even connected cars. >> So I got to ask you about, the big question I get is, first of all, thanks for that, but the big question I get is, this isn't turning into an app show, we're Mobile World Congress, and apps are everything from cars to just phone apps to network apps, et cetera, and the question that everyone's asking is, we need more bandwidth, and certainly, 5G addresses that, but the service providers are saying, "Do we really need all that power? And "When is it coming?" "What's the timing of all this?" So, specific question to you is, Lynn, is what is Intel doing to accelerate the network transformation for the service providers to get 5G ready, 'cause that seems to be the main theme as the orientation of where the progress bar is relative to is it ready for primetime, is it here and now, is it out in the future, is this kind of a pre-announcement, so there's kind of some confusion. Clarify that up. Where's the progress bar and how is Intel accelerating network transformation for folks in the service provider vis-a-vis 5G-ready? >> Lynn: So there's a couple things. So let me start with the accelerating piece because it also relates to the end-to-end piece. When you look at the way that networks have been constructed all the way, end-to-end, it has traditionally been a very, very limited set of solution providers, and they tend to survive pretty granular, pretty high-granular functions, so the appliance, the full appliance, software, hardware, everything, and I would look at some of the smartphones up until you could put new applications on it, as appliances, it did voice, and so, we have this service provider begging us for many years, "Give us an ecosystem that looks like server and PC. "I want a building block ecosystem. "I want to be able to take advantage of fast and free wires "in software and hardware. "I need people to come innovate, "like they go innovate on Amazon," and so building an ecosystem, so Intel Network Builders is something that was started about three years ago, and we had, oh, half dozen to maybe 12 different vendors who were part of it, mostly software vendors. Since then, we have 250-plus number and they range from service providers like GT and Telefonica all the way to the hardware vendors like Cisco and Ericsson, and then the software vendors that you would expect. So that's one thing that we've been really working, for a few years now, on giving these operators building block approaches, supporting them in open source. We had a big announcement from AT&T, talking about how they're putting about seven millions lines of code into the Linux Foundation, and its code has been deployed in their network already, so pretty big departure from normal practice, and then today, we had an announcement that came out, where not only did AT&T and Bell Canada and Orange in that community. Now we've got China Mobile, China Telecom, and a project called Open-O, also joining forces. If you were to map out the topics for these operators, we've got almost all of the top ten. They are joining this project to completely change the way that they run their networks, and that translates into the kind of innovation, the kind of applications that consumers love, that they're already getting out of the cloud, now they can begin to get that piece of innovation and creativity in the network as well. So the building block approach seems to be your strategy for the ecosystem. What's the challenge to keep that rolling and cohesive? How are you guys going to foster that growth on the ecosystem? You guys going to be doing a lot of joint marketing, funding, projects, and (chuckles) how are you going to foster that continuing growth? >> Lynn: Well there's a couple, it's such an opportunity-rich environment right now. Even things that you would assume would be normal and kind of standard practice, like standardized benchmarking, because you want apples-to-apples performance comparison. Well that's something that this industry really hasn't had. We've done very conceptualized testing, so we're working with the operators in a project called OPNSG to make sure that the operators have a uniform way, even if it's synthetic benchmark, but they at least understand this synthetic benchmark has this kind of performance, so they start really being able to translate and have the vendors do comparisons on paper, and they can actually do better comparisons without having to do six months of testing, so that's a really big deal. The other thing that I do want to also say about 5G is we're in a pre-standards world right now. ITU and 3GPP will have standards dropped in 2018 and 2020 is when it will be final, but every time that you're looking at a new wireless standard, there's a lot of pretrials that are happening, and that's because you want to test before you state everything has to work a specific way, so there was a trial just announced in December, with Erisson, AT&T in Austin, Texas in the Intel offices, and so if you happen to be in that office, you're starting to be able to experiment with what you could possibly get out of 5G. You'll see more of that with the Olympics in 2018 and 2020, where you've got, Japan and Korea have said we're going to have 5G at those Olympics. >> So I got to ask you some of the questions that we are going to have some guests on here in theCUBE in the Palo Alto coverage around NFV, network function virtualization, plays right into the software-defined networking virtualization world, so why is NFV and SDN so vital to the network transformation? Why now and what's happening in those two areas, and what's the enabler? >> Lynn: The enabler really started about 10 years ago, the real inspiration for it, when we were all in a world of packet processing engines and network processors, and we had some people in our research labs that realized that a lot of the efficiency in doing packet processing quickly came from parallelism, and we knew there were about two or three years to wait, but that was when multi-core came out, and so this thing called data plane development kit was born. We've referred to it as DPDK. It's now an industry organization, not an Intel invention anymore. The industry's starting to foster it. Now is really when the operators realized, "I can run a network on a general purpose processor." (coughs) Excuse me, so they can use cores for running operating systems and applications, of course, they always do that for compute cores, but they can also use the compute cores for passing packets back and forth. The line rates that we're getting are astonishing. 160 gigabits per second, which at the time, we were getting six million packets per second. Very unimpressive 10 years ago, but now, for many of those applications, we're at line rate, so that allows you to then separate the hardware and the software, which is where virtualization comes in, and when you do that, you aren't actually embedding software and hardware together in creating an appliance that, if you needed to do a software update, you might as well update the hardware, too, 'cause there's absolutely no new software load that can happen unless you're in an environment with virtualization or something like containers. So that's why NFV, network function virtualization is important. Gives the operator the ability to use general purpose processors for more than one thing, and have the ability to have future proofing of workloads where a new application or a new use becomes really popular, you don't have to issue new hardware, they just need to spin up the new virtual machine and be able to put function in it. >> So that, I got-- >> Lynn: If you went back and, we were talking about 5G and all of this new way of managing the network, now management in orchestration, it's really important but SDN is also really critical, both for cloud and for comm, because it gives you one map of the connections on the network, so you know what is connected where, and it gives you the ability to remotely change how the servers or how the hardware is connected together. If you were going to ask the CIO, "What's your biggest problem today?" they would tell you that it's almost impossible for them to be able to spin up a fully functional, new application that meets all the security protocols because they don't have a network map of everything that's connected to everything. They don't really have an easy way to be able to issue a command and then have all of the reconfigurations happen. A lot of the information's embedded in router tables. >> Yeah. >> Lynn: So it makes it very, very hard to take advantage of a really complicated network connection map, and be agile. That's where SDN comes in. It just kind of like a command control center, whereas NFV gives them the ability to have agility and spin up new functions very quickly. >> Yeah, and certainly that's where the good security part of the action is. Lynn, I want to get your final thoughts on the final question is this Mobile World Congress, it really encapsulates years and years in the industry of kind of a tipping point, and this is kind of my observation, and I want to get your thoughts on this and reaction to it, is the telcos and the service providers are finally at a moment where there's been so much pressure on the business model. We heard this, you can go on back many, many years ago, "Oh, over the top, " and you're starting to see more and more pressure. This seems to be the year that people have a focus on seeing a straight and narrow set of solutions, building blocks and a ecosystem that poised to go to the next level, where there can be a business model that actually can scale, whether it's scaling the edge, or having the core of the network work well, and up and down the stack. Can you talk about the key challenges that these service providers have to do to address that key profitability equation that being a sustainable entity rather than being the pipes? >> Lynn: Well it comes down to being able to respond to the needs of the user. I will refer to a couple demos that we have in the data center section of our booth, and one of them is so impressive to China Telecom that have put together on complete commercial off-the-shelf hardware that a cloud vendor might use. A demo that shows 4K video running from a virtualized, fixed wireline connection, so one of the cable kind of usage. Now 4K video goes over a virtualized environment from a cable-like environment, to what we call virtual INF, and that's the way that you get different messages passed between different kinds of systems. So INF is wireless, so they've got 4K video from cable out to a wireless capability, running in a virtualized environment at performance in hardware that can be used in the cloud, it could be used in communication service providers 'cause it's general purpose. That kind of capability gives a company like China Telecom the flexibility they need, so with 5G, it's the usage model for 5G that's most important. Turns out to be fixed wireless, because it's so expensive for them to deploy in fiber, well, they have the ability to do it and they can spin it up, maybe not in real time, but certainly, it's not going to take a three-month rollout. >> Yes, and-- >> Lynn: So hopefully, that gives you one example. >> Well that's great enablement 'cause in a lot of execution, well, I thought it gave me one more idea for a question, so since I have my final, final question for you is, what are you most excited about 'cause you sounded super excited with that demo. What other exciting things are happening in the Intel demo area from Intel that's exciting for you, that you could share with the folks listening and watching? >> Lynn: So, I used to never be a believer in augmented reality. (John chuckling) I thought, who's going to walk around with goggles, it's just silly, (coughs) it seemed to me like a toy and maybe I shouldn't admit that on a radio show but I became a believer, and I started to really understand how powerful it could be when Pokemon Go took over all the world in over the summer, to this, an immersive experience, and it's sort of reality, but you're interacting with a brand, or in the booth, we have a really cool virtual reality demo and it was with Nokia next and it's showing 5G network transformation. The thing about virtual reality, we have to really have low latency for it to feel real, quote-unquote, and so, it harnesses the power that we can see just emerging with 5G, and then we get this really great immersive experience, so that, I think, is one that innovate how popular brands like Disney or Disney World or Disneyland, that immersive experience, so I think we're just starting to scratch the surface on the opportunities there. >> Lynn, thanks so much for spending the time. Know you got to go and run. Thanks so much for the commentary. We are low latency here inside theCUBE, bringing you all the action. It's a good title for a show, low latency. Really fast, bringing all the action. Lynn, thanks so much for sharing the color and congratulations on your success at Mobile World Congress and looking forward to getting more post-show, post-mortem after the event's over. Thanks for taking the time. We'll be back with more coverage of Mobile World Congress for a special CUBE live in studio in Palo Alto, covering all the action in Barcelona on Monday and Tuesday, 27th and 28th. I'm John Furrier. Wrap it with more after this short break, thanks for watching. (upbeat electronic music) (bright electronic music)

Published Date : Feb 27 2017

SUMMARY :

Thanks for taking the time to walk through of the next generation networks and at the client, that you and take advantage of the and the big 5G angle there, and heard of the genesis and the question that and they tend to survive pretty granular, and have the vendors do and have the ability on the network, so you know and spin up new functions very quickly. of the action is. INF, and that's the way that gives you one example. in the Intel demo area from and so, it harnesses the and looking forward to

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
EricssonORGANIZATION

0.99+

LynnPERSON

0.99+

GEORGANIZATION

0.99+

JohnPERSON

0.99+

AT&TORGANIZATION

0.99+

CiscoORGANIZATION

0.99+

NokiaORGANIZATION

0.99+

John FurrierPERSON

0.99+

TelefonicaORGANIZATION

0.99+

15%QUANTITY

0.99+

OrangeORGANIZATION

0.99+

DecemberDATE

0.99+

DisneyORGANIZATION

0.99+

2018DATE

0.99+

IntelORGANIZATION

0.99+

75QUANTITY

0.99+

China TelecomORGANIZATION

0.99+

Bell CanadaORGANIZATION

0.99+

BarcelonaLOCATION

0.99+

HoneywellORGANIZATION

0.99+

UC BerkeleyORGANIZATION

0.99+

China MobileORGANIZATION

0.99+

two daysQUANTITY

0.99+

Palo AltoLOCATION

0.99+

2020DATE

0.99+

MondayDATE

0.99+

three-monthQUANTITY

0.99+

OlympicsEVENT

0.99+

AmazonORGANIZATION

0.99+

six monthsQUANTITY

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

TuesdayDATE

0.99+

Mobile World CongressEVENT

0.99+

QualcommORGANIZATION

0.99+

Lynn CompPERSON

0.99+

GTORGANIZATION

0.99+

Network Platforms GroupORGANIZATION

0.99+

Pokemon GoTITLE

0.99+

one thingQUANTITY

0.99+

12 different vendorsQUANTITY

0.99+

Disney WorldORGANIZATION

0.99+

todayDATE

0.98+

next weekDATE

0.98+

two areasQUANTITY

0.98+

85%QUANTITY

0.98+

250-plusQUANTITY

0.98+

28thDATE

0.98+

Intel CoporationORGANIZATION

0.97+

half dozenQUANTITY

0.97+

10 years agoDATE

0.97+

Austin, TexasLOCATION

0.97+

Mobile World Congress 2017EVENT

0.97+

Linux FoundationORGANIZATION

0.97+

more than one thingQUANTITY

0.97+

DisneylandORGANIZATION

0.97+

John Landry, HP - Spark Summit East 2017 - Spark Summit East 2017 - #SparkSummit - #theCUBE


 

>> Live from Boston, Massachusetts, this is the CUBE, covering Spark Summit East 2017 brought to you by databricks. Now, here are your hosts Dave Valante and George Gilbert. >> Welcome back to Boston everyone. It's snowing like crazy outside, it's a cold mid-winter day here in Boston but we're here with the CUBE, the world-wide leader in tech coverage. We are live covering Spark Summit. This is wall to wall coverage, this is our second day here. John Landry with us, he's the distinguished technologist for HP's personal systems data science group within Hewlett Packard. John, welcome. >> Thank you very much for having me here. >> So I was saying, I was joking, we do a lot of shows with HPE, it's nice to have HP back on the CUBE, it's been awhile. But I want to start there. The company split up just over a year ago and it's seemingly been successful for both sides but you were describing to us that you've gone through an IT transformation of sorts within HP. Can you describe that? >> In the past, we were basically a data warehousing type of approach with reporting and what have you coming out of data warehouses, using Vertica, but recently, we made an investment into more of a programming platform for analytics and so where transformation to the cloud is about that where we're basically instead of investing into our own data centers because really, with the split, our data centers went with Hewlett Packard Enterprise, is that we're building our software platform in the cloud and that software platform includes analytics and in this case, we're building big data on top of Spark and so that transformation is huge for us, but it's also enabled us to move a lot faster, the velocity of our business and to be able to match up to that better. Like I said, it's mainly around the software development really more than anything else. >> Describe your role in a little bit more detail inside of HP. >> My role is I'm the leader in our big data investments and so I've been leading teams internally and also collaborating across HP with our print group and what we've done is we've managed to put together a strategy around our cloud-based solution to that. One of the things that was important was we had a common platform because when you put a program platform in place, if it's not common, then we can't collaborate. Our investment could be fractured, we could have a lot of side little efforts going on and what have you so my role is to pry the leadership in the direction for that and also one of the reasons I'm here today is to get involved in the Spark community because our investment is in Spark so that's another part of my role is to get involved with the industry and to be able to connect with the experts in the industry so we can leverage off of that because we don't have that expertise internally. >> What are the strategic and tactical objectives of your analytics initiatives? Is it to get better predictive maintenance on your devices? Is it to create new services for customers? Can you describe that? >> It's two-fold, internal and external so internally, we got millions of dollars of opportunity to better our products with cost, also to optimize our business models and the way we can do that is by using the data that comes back from our products, our services, our customers, combining that together and creating models around that that are then automated and can be turned into apps that can be used internally by our organizations. The second part is to take the same approach, same data, but apply that back towards our customers and so with the split, our enterprise services group also went with Hewlett Packard Enterprise and so now, we have a dedicated effort towards creating manage services for the commercial environment. And that's both on the print size and on the personal system side so to basically fuel that, analytics is a big part of the story. So we've had different things that you'll see out there like touch point manager is one of our services we're delivering in personal systems. >> Dave: What is that? >> Touch point manager is aimed at providing management services for SMB and for commercial environments. So for instance, in touch point manager, we can provide predictive type of capabilities for support. A number of different services that companies are looking for when they buy our products. Another thing we're going after too is device as a service. So there's another thing that we've announced recently that basically we're invested into there and so this is obviously if you're delivering devices as a service, you want to do that as optimal as possible. Well, being able to understand the devices, what's happening with them, been able to predictive support on them, been able to optimize the usage of those devices, that's all important. >> Dave: A lot of data. >> The data really helps us out, right? So the data that we can collect back from our devices and to be able to take that and turn that around into applications that are delivering information inside or outside is huge for us, a huge opportunity. >> It's interesting where you talk about internal initiatives and manage services, which sound like they're most external, but on the internal ones, you were talking about taking customer data and internal data and turning those into live models. Can you elaborate on that? >> Sure, I can give you a great example is on our mobile products, they all have batteries. All of our batteries are instrumented as smart batteries and that's an industry standard but HP actually goes a step further on that, it's the information that we put into our batteries. So by monitoring those batteries and the usage in the field is we can tell how optimally they're performing, but also how they're being used and how we can better design batteries going forward. So in addition, we can actually provide information back into our supply chain. For instance, there's a cell supplier for the battery, there's a pack supplier, there's our unit manufacturer for the product, and so a lot of things that we've been able to uncover is that we can go and improve process. And so improving process alone helps to improve the quality of what we deliver and the quality of the experience to our customers. So that's one example of just using the data, turning that around into a model. >> Is there an advantage to having such high volume, such market share in getting not just more data, but sort of more of the bell curve, so you get the edge conditions? >> Absolutely, it's really interesting because when we started out on this, everybody's used to doing reporting which is absolute numbers and how much did you shift and all that kind of stuff. But, we're doing big data, right? So in big data, you just need a good sample population. Turn the data scientist into that and they've got their statistical algorithms against that. They give you the confidence factor based upon the data that you have so it's absolutely a good factor for us because we don't have to see all the platforms out there. Then, the other thing is, when you look at populations, we see variances in different customers so we're looking at, like one of our populations that's very valuable to us is our own, so we take the 60 thousand units that we have internally at HP and that's one of our sample populations. What a better way to get information on your own products? But, you take that and you take it to one of our other customers and their population's going to look slight different. Why? Because they use the products differently. So one of the things is just usage of the products, the environment they're used in, how they use them. Our sample populations are great in that respect. Of course, the other thing is, very important to point out, we only collect data under the rules and regulations that are out there, so we absolutely follow that and we absolutely keep our data secure and we absolutely keep everything and that's important. Sometimes, today they get a little bit spooked sometimes around that, but the case is that our services are provided based on customers signing up for them. >> I'm guessing you don't collect more data than Google. >> No, we're nowhere near Google. >> So, if you're not spooked at Google - >> That's what I tell people. I say if you got a smartphone, you're giving up a lot more data than we're collecting. >> Buy something from Amazon. Spark, where does Spark fit into all of this? >> Spark is great because we needed a programming platform that could scale in our data centers and in our previous approaches, we didn't have a programming platform. We started with a Hadoop, the Hadoop was very complex though. It really gets down to the hardware and you're programming and trying to distribute that load and getting clusters and you pick up Spark and immediately abstraction. The other thing is it allows me to hire people that can actually program on top of it. I don't have to get someone that knows Map Reduce. I can sit there and it's like what do you know? You know R, Scala, you know Python, it doesn't matter. I can run all of that on top of it. So that's huge for us. The other thing is flat out the speed because as you start getting going with this, we get this pull all of a sudden. It's like well I only need the data like once a month, it's like I need it once a week, I need it once a day, I need the output of this by the hour now. So, the scale and the speed of that is huge and then when you put that on the cloud platform, you know, Spark on a cloud platform like Amazon, now I've got access to all the compute instances. I can scale that, I can optimize it because I don't always need all the power. The flexibility of Spark and being able to deliver that is huge for our success. >> So, I've got to ask some columbo questions and George, maybe you can help me sort of frame it. So you mentioned you were using Hadoop. Like a lot of Hadoop practitioners, you found it very complex. Now, Hewlett Packard has resources. Many companies don't but so you mentioned people out doing Python and R and Scale and Map Reduce, are you basically saying okay, we're going to unify portions of our Hadoop complexity with Spark and that's going to simplify our efforts? >> No, what we actually did was we started on the Hadoop side of it. The first thing we did was try to move from a data warehouse to more of a data lake approach or repository and that was internal, right? >> Dave: And that was a cost reduction? >> That was a cost reduction but also, data accessibility. >> Dave: Yeah, okay. >> The other thing we did was ingesting the data. When you're starting to bring data in from millions of devices, we had a problem coming through the firewall type approach and you got to have something in front of that like a Kafka or something in front of it that can handle it. So when we moved to the cloud, we didn't even try to put up our own, we just used Kinesis and that we didn't have to spend any resources to go solve that problem. Well, the next thing was, when we got the data, you need to ingest the data in and our data's coming in, we want to split it out, we needed to clean it and what you, we actually started out running Java and then we ran Java on top of Hadoop, but then we came across Spark and we said that's it. For us to go to the next step of actually really get into Hadoop, we were going to have to get some more skills and to find the skills to actually program in Hadoop was going to be complex. And to train them organically was going to be complex. We got a lot of smart people, but- >> Dave: You got a lot of stuff to do, too. >> That's the thing, we wanted to spend more time getting information out of the data as opposed to the framework of getting it to run and everything. >> Dave: Okay, so there's a lot of questions coming out. You mentioned Kinesis, so you've replaced that? >> Yeah, when we went to the cloud, we used as many Amazon services as we can as opposed to growing something for ourselves so when we get onto Amazon, you know, getting data into an S3 bucket through Kineses was a no-brainer. When we transferred over to the cloud, it took us less than 30 days to point our devices at Kinesis and we had all our data flowing into S3. So that was like wow, let's go do something else. >> So I got to ask you something else. Again, I love when practitioners come on. So, one of the complaints that I hear sometimes from AWS users and I wonder if you see this is the data pipeline is getting more and more complex. I got an API for Kinesis, one for S3, one for DynamoDB, one for Elastic Plus. There must be 15 proprietary APIs that are primitive, and again, it gets complicated and sometimes it's hard to even figure out what's the right cost model to use. Is that increasingly becoming more complex or is it just so much simpler than what you had before and you're in nirvana right now? >> When you mentioned costs, just the cost of moving to the cloud was a major cost reduction for us. >> Reduction? >> So now it's - >> You had that HP corporate tax on you before - >> Yeah, now we're going from data centers and software licenses. >> So that was a big win for you? >> Yeah, huge, and that released us up to go spend dollars on resources to focus on the data science aspect. So when we start looking at it, we continually optimized, don't get me wrong. But, the point is, if we can bring it up real quickly, that's going to save us a lot of money even if you don't have to maintain it. So we want to focus on creating the code inside of Spark that's actually doing the real work as opposed to the infrastructure. So that cost savings was huge. Now, when you look at it over time, we could've over analyzed that and everything else, but what we did was we used a rapid prototyping approach and then from there, we continued to optimize. So what's really good about the cloud is you can predict the cost and with internal data centers and software licenses and everything else, you can't predict the cost because everybody's trying to figure out who's paying for what. But in the case of the cloud, it's all pretty much you get your bill and you understand what you're paying. So anyway - >> And then you can adjust accordingly? >> We continue to optimize so we use the services but if we have for some reason, it's going to deliver us an advantage, we'll go develop it. But right now, our advantage is we got umteen opportunities to create AI type code and applications to basically automate these services, we don't even have enough resources to do it right now. But, the common programming platform's going to help us. >> Can you drill into those umpteen examples? Just some of them because - >> I mentioned the battery one for instance. So take that across the whole system so now you've got your storage devices, you've got your software that's running on there, we've got built into our system security monitoring at the firmware level just basically connecting into that and adding AI around that is huge because now we can see a tax that may be happening upon your fleet and we can create services out of that. Anything that you can automate around that is money in our pocket or money in our customers' pocket so if we can save them money with these new services, they're going to be more willing to come to HP for products. >> It's actually more than just automation because it's the stuff you couldn't do with 1,000 monkeys trying to write Shakespeare. You have data that you could not get before. >> You're right, what we're doing, the automation is helping us uncover things that we would've never seen and you're right, the whole gorilla walking through the room, I could sit there and I could show you tons of examples of where we're missing the boat. Even when we brought up our first data sets, we started looking at them and some of the stuff we looked at, we thought this is just bad data and actually it wasn't, it was bad product. >> People talk about dark data - >> We had no data models, we had no data model to say is it good or bad? And now we have data models and we're continuing to create those data models around, you create the data model and then you can continue to teach it and that's where we create the apps around it. Our primitives are the data models that we're creating from the device data that we have. >> Are there some of these apps where some of the intelligence lives on the device and it can, like in a security attack, it's a big surface area, you want to lock it down right away. >> We do. The good example on the security is we built something into our products called Sure Start. What essentially it is is we have ability to monitor the firmware layer and so there's a local process that's running independent of everything else that's running that's monitoring what's happening at that firmware level. Well, if there's an attack, it's going to immediately prevent the attack or recover from the attack. Well, that's built into the product. >> But it has to have a model of what this anomalous behavior is. >> Well in our case, we're monitoring what the firmware should look like and if we see that the firmware, you know you take check sums from the firmware or the pattern - >> So the firmware does not change? >> Well basically we can take the characteristics of the firmware and monitor it. If we see that changing, then we know something's wrong. Now it can get corrupt through hardware failure maybe because glitches can happen maybe. I mean solar flares can cause problems sometimes. So, the point is we've found that customers had problems sometimes where basically their firmware would get corrupted and they couldn't start their system. So we're like are we getting attacked? Is this a hardware issue? Could it be bad Flash devices? There's always all kinds of things that could cause that. Well now we monitor it and we know what's going on. Now, the other cool thing is we create logs from that so when those events occur, we can collect those logs and we're monitoring those events so now we can have something monitor the logs that are monitoring all the units. So, if you've got millions of units out there, how are you going to do that manually? You can't and that's where the automation comes in. >> So the logs give you the ability up in the cloud or at HP to look at the ecosystem of devices, but there is intelligence down on the - >> There's intelligence to protect the device in an auto recover which is really cool. So in the past, you had to get your repair. Imagine if someone attacked your fleet of notebooks. Say you got 10 thousand of them and basically it brought every single one of them down one day. What would you do? >> Dave: Freak. >> And everything you got to replace. It was just an attack and it could happen so we basically protect against that with our products and at the same time, we can see that may be a current and then from the footprints of it, we can then do analysis on it and determine was that malicious, is this happening because of a hardware issue, is this happening because maybe we tried to update the firmware and something happened there? What caused that to happen? And so that's where collecting the data from the population then helps us do that and then mix that with other things like service events. Are we seeing service events being driven by this? Thermal, we can look at the thermal data. Maybe there's some kind of heat issue that's causing this to happen. So we starting mixing that. >> Did Samsung come calling to buy this? >> Well, actually what's funny is Samsung is actually a supplier of ours, is a battery supplier of ours. So, by monitoring the batteries, what's interesting is we're helping them out because we go back to them. One of the things I'm working on, is we want to create apps that can go back to them and they can see the performance of their product that they're delivering to us. So instead of us having to call a meeting and saying hey guys let's talk about this, we've got some problems here. Imagine how much time that takes. But if they can self-monitor, then they're going to want to keep supplying to us, then they're going to better their product. >> That's huge. What a productivity boost because you're like hey, we got a problem, let's meet and talk about it and then you take an action to go and figure out what it is. Now if you need a meeting, it's like let's look at the data. >> Yeah, you don't have enough people. >> But there's also potentially a shift in pricing power. I would imagine it shifts a little more in your favor if you have all the data that indicates the quality of their product. >> That's an interesting thing. I don't know that we've reached that point. I think that in the future, it would be something that could be included in the contracts. The fact that the world is the way it is today and data is a big part of that to where going forward, absolutely, the fact that you have that data helps you to better have a relationship with your suppliers. >> And your customers, I mean it used to be that the brand used to have all the information. The internet obviously changed all that, but this whole digital transformation and IOT and all those log data, that sort of levels the playing field back to the brand. >> John: It actually changes it. >> You can now add value for the consumer that you couldn't before. >> And that's what HP's trying to do. We're invested to exactly do that is to really improve or increase the value of our brand. We have a strong brand today but - >> What do you guys do with - we got to wrap - but what do you do with databricks? What's the relationship there? >> Databricks, again we decided that we didn't want to be the experts on managing the whole Spark thing. The other part was that we're going to be involved with Spark and help them drive the direction as far as our use cases and what have you. Databricks and Spark go hand in hand. They got the experts there and it's been huge, our relationship, being able to work with these guys. But I recognize the fact that, and going back to software development and everything else, we don't want to spare resources on that. We got too many other things to do and the less that I have to worry about my Spark code running and scaling and the cost of it and being able to put code in production, the better and so, having that layer there is saving us a ton of money and resources and a ton of time. Just imagine time to market, it's just huge. >> Alright, John, sorry we got to wrap. Awesome having you on, thanks for sharing your story. >> It's great to talk to you guys. >> Alright, keep it right there everybody. We'll be back with our next guest. This is the CUBE live from Spark Summit East, we'll be right back.

Published Date : Feb 9 2017

SUMMARY :

brought to you by databricks. the world-wide leader in tech coverage. we do a lot of shows with HPE, In the past, we were basically a data warehousing bit more detail inside of HP. One of the things that was important was we had a common the way we can do that is by using the data we can provide predictive type of capabilities for support. So the data that we can collect back from our devices It's interesting where you talk about internal and the quality of the experience to our customers. Then, the other thing is, when you look at populations, I say if you got a smartphone, you're giving up Spark, where does Spark fit into all of this? and then when you put that on the cloud platform, and that's going to simplify our efforts? and that was internal, right? and to find the skills to actually program That's the thing, we wanted to spend more time Dave: Okay, so there's a lot of questions coming out. so when we get onto Amazon, you know, getting data into So I got to ask you something else. of moving to the cloud was a major cost reduction for us. Yeah, now we're going from But, the point is, if we can bring it up real quickly, We continue to optimize so we use the services So take that across the whole system because it's the stuff you couldn't do with that we would've never seen and you're right, And now we have data models and we're continuing intelligence lives on the device and it can, The good example on the security is we built But it has to have a model of what Now, the other cool thing is we create logs from that So in the past, you had to get your repair. and at the same time, we can see that may be a current of their product that they're delivering to us. and then you take an action to go if you have all the data that indicates and data is a big part of that to where the playing field back to the brand. that you couldn't before. is to really improve or increase the value of our brand. and the less that I have to worry about Alright, John, sorry we got to wrap. This is the CUBE live from Spark Summit East,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Dave ValantePERSON

0.99+

George GilbertPERSON

0.99+

JohnPERSON

0.99+

GeorgePERSON

0.99+

HPORGANIZATION

0.99+

BostonLOCATION

0.99+

John LandryPERSON

0.99+

Hewlett PackardORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

10 thousandQUANTITY

0.99+

JavaTITLE

0.99+

GoogleORGANIZATION

0.99+

SamsungORGANIZATION

0.99+

SparkORGANIZATION

0.99+

second dayQUANTITY

0.99+

AWSORGANIZATION

0.99+

second partQUANTITY

0.99+

60 thousand unitsQUANTITY

0.99+

PythonTITLE

0.99+

HadoopTITLE

0.99+

less than 30 daysQUANTITY

0.99+

millions of dollarsQUANTITY

0.99+

todayDATE

0.99+

Hewlett Packard EnterpriseORGANIZATION

0.99+

once a monthQUANTITY

0.99+

HPEORGANIZATION

0.99+

both sidesQUANTITY

0.99+

SparkTITLE

0.99+

1,000 monkeysQUANTITY

0.99+

oneQUANTITY

0.99+

Boston, MassachusettsLOCATION

0.99+

once a weekQUANTITY

0.98+

once a dayQUANTITY

0.98+

15 proprietary APIsQUANTITY

0.98+

OneQUANTITY

0.98+

bothQUANTITY

0.98+

one dayQUANTITY

0.98+

Map ReduceTITLE

0.97+

Spark Summit East 2017EVENT

0.97+

first data setsQUANTITY

0.97+

two-foldQUANTITY

0.97+

Spark SummitEVENT

0.96+

RTITLE

0.96+

a tonQUANTITY

0.95+

millions of unitsQUANTITY

0.95+

ScaleTITLE

0.95+

KafkaTITLE

0.94+

ShakespearePERSON

0.94+

S3TITLE

0.94+

Tendu Yogurtcu, Syncsort - #BigDataSV 2016 - #theCUBE


 

from San Jose in the heart of Silicon Valley it's the kue covering big data sv 2016 now your host John furrier and George Gilbert okay welcome back on we are here live in Silicon Valley for the cubes looking angles flagship program we go out to the events and extract the signal from the noise i'm john furrier mykos george gilbert big data analyst at Wikibon calm our next guest is 10 do yoga coo to yogurt coo coo I you see your last name yo Joe okay I gots clothes GM with big David sinks or welcome back to the cube sink starts been a long time guess one of those companies we love to cover because your value publishes is right in the center of all the action around mainframes and you know Dave and I always love to talk about mainframe not mean frame guys we know that we remember those days and still powering a lot of the big enterprises so I got to ask you you know what's your take on the show here one of the themes that came up last night on crowd chatters why is enterprise data warehousing failing so you know got some conversation but you're seeing a transformation what do you guys see thank you for having me it's great to be here yes we are seeing the transformation of the next generation data warehouse and evolution of the data warehouse architecture and as part of that mainframes are a big part of this data warehouse architecture because still seventy percent of data is on the mainframes world's data seventy percent of world's data this is a large amount of data so when we talk about big data architecture and making big data and enterprise data useful for the business and having advanced analytics not just gaining operational efficiencies with the new architecture and also having new products new services available to the customers of those organizations this data is intact and making that part of this next-generation data warehouse architecture is a big part of the initiatives and we play a very strong core role in this bridging the gap between mainframes and the big data platforms because we have product offerings spanning across platforms and we are very focused on accessing and integrating data accessing and integrating in a secure way from mainframes to the big data plan one is one of the things that's the mainframe highlights kind of a dynamic in the marketplace and wrong hall customers whether they have many firms are not your customers who have mainframes they already have a ton of data their data full as we say in the cube they have a ton of data do it but they spend a lot of times you mentioned cleaning the data how do you guys specifically solve that because that's a big hurdle that they want to just put behind they want to clean fast and get on to other things yes we see a few different trends and challenges first of all from the Big Data initiatives everybody is really trying to either gain operational efficiency business agility and make use of some of the data they weren't able to make use of before and enrich this data with some of the new data sources they might be actually adding to the data pipeline or they are trying to provide new products and services to their customers so when we talk about the mainframe data it's a it's really a how you access this mainframe data in a secure way and how you make that data preparation very easy for the data scientists the data scientists are still spending close to eighty percent of their time in data preparation and if you can't think of it when we talk about the compute frameworks like spark MapReduce flink versus the technology stack technologies these should not be relevant to the data scientist they should be just worried about how do i create my data pipeline what are the new insights that I'm trying to get from this data the simplification we bring in that data cleansing and data preparation is one well we are bringing simple way to access and integrate all of the enterprise data not just the legacy mainframe and the relational data sources and also the emerging data sources with streaming data sources the messaging frameworks new data sources we also make this in a cross-platform secure way and some of the new features for example we announced where we were simply the best in terms of accessing all of the mainframe data and having this available on Hadoop and spark we now also makes park and Hadoop understand this data in its original format you do not have to change the original record format which is very important for highly regulated industries like financial services banking and insurance and health care because you want to be able to do the data sanitization and data cleansing and yet bring that mainframe data in its original format for audit and compliance reasons okay so this is this is the product i think where you were telling us earlier that you can move the processing you can move the data from the mainframe do processing at scale and at cost that's not possible or even ii is is easy on the mainframe do it on a distributed platform like a dupe it preserves its original sort of way of being encoded send it back but then there's also this new way of creating a data fabric that we were talking about earlier where it used to be sort of point-to-point from the transactional systems to the data warehouse and now we've basically got this richer fabric and your tools sitting on some technologies perhaps like spark and Kafka tell us what that world looks like and how it was different from we see a greater interest in terms of the concept of a database because some organizations call it data as a service some organizations call it a Hadoop is a service but ultimately an easy way of publishing data and making data available for both the internal clients of the organization's and external clients of the organization's so Kafka is in the center of this and we see a lot of other partners of us including hot dog vendors like Cloudera map r & Horton works as well as data bricks and confluent are really focused on creating that data bus and servicing so we play a very strong there because phase one project for these organizations how do I create this enterprise data lake or enterprise data hub that is usually the phase one project because for advanced analytics or predictive analytics or when you make an engine your mortgage application you want to be able to see that change on your mobile phone under five minutes likewise when you make a change in your healthcare coverage or telecom services you want to be able to see that under five minutes on your phone these things really require easy access to that enterprise data hub what we have we have a tool called data funnel this basically simplifies in a one click and reduces the time for creating the enterprise data hub significantly and our customers are using this to migrate and make I would not say my great access data from the database tables like db2 for example thousands of tables populating an automatically mapping metadata whether that metadata is higher tables or parquet files or whatever the format is going to be in the distributed platform so this really simplifies the time to create the enterprise data hub it sounds actually really interesting when I'm hearing what you're saying the first sort of step was create this this data lake lets you know put data in there and start getting our feet wet and learning new analysis patterns but what if I'm hearing you correctly you're saying now radiating out of that is a new sort of data backbone that's much lower latency that gets data out of the analytic systems perhaps back into the operational systems or into new systems at a speed that we didn't do before so that we can now make decisions or or do an analysis and make decisions very quickly yes that's true basically operational intelligence and mathematics are converging okay and in that convergence what we are basically seeing is that I'm analyzing security data I'm analyzing telemetry data that's a streamed and I want to be able to react as fast as possible and some of the interest in the emerging computer platforms is really driven by this they eat the use case right many of our customers are basic saying that today operating under five minutes is enough for me however I want to be prepared I want to future-proof my applications because in a year it might be that I have to respond under a minute even in sub seconds when they talk about being future proofed and you mentioned to time you know time sort of brackets on either end our customers saying they're looking at a speed that current technologies don't support in other words are they evaluating some things that are you know essentially research projects right now you know very experimental or do they see a set of technologies that they can pick and choose from to serve those different latency needs we published a Hadoop survey earlier this year in january according to the results from that Hadoop survey seventy percent of the respondents were actually evaluating spark and this is very confused consistent with our customer base as well and the promise of spark is driven by multiple use cases and multiple workload including predictive analytics and streaming analytics and bat analytics all of these use cases being able to run on the same platform and all of the Hadoop vendors are also supporting this so we see as our customer base are heavy enterprise customers they are in production already in Hadoop so running spark on top of their Hadoop cluster is one way they are looking for future proofing their applications and this is where we also bring value because we really abstract that insulate the user while we are liberating all of the data from the enterprise whether it's on the relational legis data warehouse or it's on the mainframe side or it's coming from new web clients we are also helping them insulate their applications because they don't really need to worry about what's the next compute framework that's going to be the fastest most reliable and low latency they need to focus on the application layer they need to focus on creating that data pipeline today I want to ask you about the state of syncsort you guys have been great success with the mainframe this concept of data funneling or you can bring stuff in very fast new management new ownership what's the update on the market dynamics because now ingestion zev rethink data sources how do you guys view what's the plan for syncsort going forward share with the folks out there sure our new investors clearlake capital is very supportive of both organic and inorganic growth so acquisitions are one of the areas for us we plan to actually make one or two acquisitions this year and companies with the products in the near adjacent markets are real value add for us so that's one area in addition to organic growth in terms of the organic growth our investments are really we have been very successful with a lot of organizations insurance financial services banking and healthcare many many of the verticals very successful with helping our customers create the enterprise data hub integrate access all of the data integrated and now carrying them to the next generating generation frameworks those are the areas that we have been partnering with them the next is for us is really having streaming data sources as well as batch data sources through the single data pipeline and this includes bringing telemetry data and security data to the advanced analytics as well okay so it sounds like you're providing a platform that can handle the today's needs which were mostly batch but the emerging ones which are streaming and so you've got that sort of future proofing that customers are looking for once they've got that those types of data coming together including stuff from the mainframe that they want might want to enrich from public sources what new things do you see them doing predictive analytics and machine learning is a big part of this because ultimately once there are different phases right operational efficiency phase was the low-hanging fruit for many organizations I want to understand what I can do faster and serve my clients faster and create that operational efficiency in a cost-effective scalable way second was what our new for go to market opportunities with transformative applications what can I do by recognizing how my telco customers are interacting with the SAS services to help and how like under a couple of minutes I react to their responses or cell service is the second one and then the next phase is that how do I use this historical data in addition to the streaming of data rapidly I'm collecting to actually predict and prevent some of the things and this is already happening with a guy with banking for example it's really with the fraud detection a lot of predictive analysis happens so advanced analytics using AI advanced analytics using machine learning will be a very critical component of this moving forward this is really interesting because now you're honing in on a specific industry use case and something that you know every vendor is trying to sort of solve the fraud detection fraud prevention how repeatable is it across your customers is this something they have to build from scratch because there's no templates that get them fifty percent of the way there seventy percent of the way there actually there's an opportunity here because if you look at the health care or telco or financial services or insurance verticals there are repeating patterns and that one is fraud for fraud or some of the new use cases in terms of customer churn analytics or cosmetics estate so these patterns and the compliance requirements in these verticals creates an opportunity actually to come up with application applications for new companies start for new startups okay then do final question share with the folks out there to view the show right now this is ten years of Hadoop seven years of this event Big Data NYC we had a great event there New York City Silicon Valley what's the vibe here in Silicon Valley here this is one of the best events I really enjoy strata San Jose and I'm looking forward two days of keynotes and hearing from colleagues and networking with colleagues this is really the heartbeat happens because with the hadoop world and strata combined actually we started seeing more business use cases and more discussions around how to enable the business users which means the technology stack is maturing and the focus is really on the business and creating more insights and value for the businesses ten do you go to welcome to the cube thanks for coming by really appreciate it go check out our Dublin event on fourteenth of April hadoop summit will be in europe for that event of course go to SiliconANGLE TV check out our women in check every week we feature women in tech on wednesday thanks for joining us thanks for sharing the inside would sink so i really appreciate it thanks for coming by this turkey will be right back with more coverage live and Silicon Valley into the short break you

Published Date : Mar 29 2016

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
George GilbertPERSON

0.99+

fifty percentQUANTITY

0.99+

john furrierPERSON

0.99+

Silicon ValleyLOCATION

0.99+

seventy percentQUANTITY

0.99+

two daysQUANTITY

0.99+

San JoseLOCATION

0.99+

oneQUANTITY

0.99+

DavePERSON

0.99+

John furrierPERSON

0.99+

Silicon ValleyLOCATION

0.99+

ten yearsQUANTITY

0.99+

telcoORGANIZATION

0.99+

george gilbertPERSON

0.99+

seven yearsQUANTITY

0.99+

WikibonORGANIZATION

0.99+

NYCLOCATION

0.98+

HadoopTITLE

0.98+

thousands of tablesQUANTITY

0.98+

todayDATE

0.98+

under five minutesQUANTITY

0.98+

europeLOCATION

0.98+

JoePERSON

0.98+

januaryDATE

0.97+

wednesdayDATE

0.97+

one clickQUANTITY

0.97+

under five minutesQUANTITY

0.97+

under a minuteQUANTITY

0.97+

one areaQUANTITY

0.97+

phase oneQUANTITY

0.96+

earlier this yearDATE

0.96+

bothQUANTITY

0.96+

a yearQUANTITY

0.96+

second oneQUANTITY

0.96+

DublinLOCATION

0.95+

this yearDATE

0.95+

New York City Silicon ValleyLOCATION

0.92+

last nightDATE

0.91+

one wayQUANTITY

0.9+

a ton of dataQUANTITY

0.9+

Cloudera map r & HortonORGANIZATION

0.9+

a ton of dataQUANTITY

0.89+

two acquisitionsQUANTITY

0.89+

turkeyLOCATION

0.88+

one of the best eventsQUANTITY

0.87+

first sortQUANTITY

0.86+

eighty percentQUANTITY

0.85+

mykosPERSON

0.84+

SiliconANGLE TVORGANIZATION

0.83+

2016DATE

0.82+

secondQUANTITY

0.81+

Tendu YogurtcuPERSON

0.81+

single dataQUANTITY

0.8+

DavidPERSON

0.8+

parkTITLE

0.8+

a lot of timesQUANTITY

0.78+

10QUANTITY

0.77+

db2TITLE

0.76+

under a couple of minutesQUANTITY

0.75+

areasQUANTITY

0.71+

thingsQUANTITY

0.71+

SyncsortORGANIZATION

0.71+

every weekQUANTITY

0.7+

one ofQUANTITY

0.7+

svEVENT

0.69+

of AprilDATE

0.69+

firstQUANTITY

0.68+

#BigDataSVEVENT

0.66+

themesQUANTITY

0.66+

sparkORGANIZATION

0.65+

summitEVENT

0.62+

KafkaTITLE

0.62+

every vendorQUANTITY

0.61+

useQUANTITY

0.6+

Big DataEVENT

0.6+

MapReduceTITLE

0.55+