Bart Hickenlooper, Zettabytes & Rishi Yadav, Zettabytes | AWS re:Invent
>> Narrator: Live from Las Vegas, it's theCUBE covering AWS re:Invent 2017 presented by AWS, Intel, and our ecosystem of partners. >> Welcome back, I'm Stu Miniman here with my cohost, Justin Warren, and you are watching theCUBE, SiliconANGLE Media's live production of AWS re:Invent 2017. Happy to welcome back to the program Rishi Yadav, who is the CEO of Zettabytes, and Bart Hickenlooper, who is the SVP of client services. Thank you so much for joining us. >> Thank you, Stu. >> So Rishi, yesterday you were on the other set with John Furrier and with Justin, and we were really excited to help launch Zettabytes, help you bring the company out. We've known InfoObjects and your company for a while, so of course we want people to go check out the other interview, but in today's hybrid multi-cloud world, we've seen Amazon slowly moderating a little bit the way they discuss it, but why don't you bring us inside a little bit? What are you hearing from customers, and what led to the creation of Zettabytes? >> What we are hearing from customers is that there is a lot of talk about the cloud and AWS. The challenge, which I discussed yesterday as well, is how do I take those baby steps toward adoption of the cloud? That is where the challenge comes in. On one side they say everything on-prem is bad, everything on cloud is good. Those kinds of statements are okay, but for somebody who has got billions of dollars of business running, they don't make any sense. They want to have logical steps, and they also want to know, with every step, what value they are adding. >> We always hear, right, that it's all in on any one thing. Oh come on, you are using various SaaS providers. You probably have multiple cloud providers. Yes, you've still got something sitting in the back end of your data center, and that migration takes time. What kind of strategy and tactics do you hear customers using that gets Zettabytes engaged?
>> What we see in most enterprises is an effort to really modernize their applications. They want to make them cloud native, leveraging the innovation that's taking place in the cloud. That application modernization is really what's driving an enterprise to move quickly to the cloud. It's no longer the economics of moving to the cloud, but that innovation engine that can really be ignited with those technologies. Getting there from their legacy platforms is a little tricky. They need a development cycle that works in a hybrid fashion to really go cloud native with those applications. >> When they're starting off on that journey, where do you find customers starting? What are the applications that they do first, and what are the functions that they use from AWS? Are they going with just EC2-type things? Are they using S3 for storage? What do they start with? >> That's a good point. For the first phase of cloud adoption, most of the work was on IaaS. Whatever you have on-prem, you just put that on the cloud, then obviously you go through your storage and things like that. That is where there was a lot of talk. If you remember a few years back, everybody was saying, "The cloud is not cheap, the cloud is costly," and as Bart said, it's not about economics. It's always about convenience. It's always about the value added, and especially when enterprises started adopting the cloud a few years back, that's where those things became very important, because that is what they wanted. You just cannot save someone 20% in infrastructure cost, which may not even materialize, and say that's worth the adoption of the cloud.
>> We were talking yesterday about the 100 services, the number of services that you as an organization have to wrap your brain around, and one of the things that Zettabytes helps with is to give you some focus. So again, what are the things that Zettabytes is focusing on that you find customers actually really, really want from cloud? Amazon is so huge. Making sense of the whole thing is quite tricky. >> When you talk about application modernization, if you have a monolithic application, EC2 and S3 are great. If you are going to migrate it, you can do that. What we are seeing is really a switch to DevOps for application development, microservices development that leverages certain platform services from Amazon that are specific to enabling an application, and those are things like Lambda, Kinesis, Elasticsearch. You can write microservices that consume those services in addition to your traditional storage and compute, and really get cloud native. We've selected those services on our platform to help with that application modernization and really enable a customer to build applications with microservices enablement. >> Wikibon has been looking at this for many years, really, at what has happened with this transformation of the data center. The research that we put out a few years ago is what we call true private cloud, because like you said, what was happening was virtualization-plus. I love virtualization. I spent 15 years on that wave, but it wasn't enough, and even when we started simplifying with some automation on there, there was still that application journey, and how do we get ready for modern applications? I worked a lot on the HCI wave. It was, "Let's modernize our infrastructure, and then we will worry about that application stuff later, and maybe it's not a fit." What is different now? What tooling is available? And the cloud native stuff, shouldn't that all live in a public cloud? Or can it now live in many places?
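As a rough sketch of the pattern Bart describes above, a microservice that consumes a platform service like Kinesis can be as small as a single Lambda handler. The event shape below is the standard Kinesis-to-Lambda format; the payload fields are invented for illustration:

```python
import base64
import json

def handler(event, context=None):
    """Minimal Lambda handler for a Kinesis-triggered microservice.

    Kinesis hands Lambda base64-encoded records; decode each one,
    parse the JSON payload, and return a small summary. The payload
    fields themselves are hypothetical.
    """
    payloads = []
    for record in event.get("Records", []):
        raw = base64.b64decode(record["kinesis"]["data"])
        payloads.append(json.loads(raw))
    return {"batch_size": len(payloads), "payloads": payloads}
```

Invoked with a simulated event, the handler behaves the same locally as it would behind a real Kinesis trigger, which is part of what makes this style of development portable between environments.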
>> I think that's a great point. There are three pieces of the puzzle: infrastructure, data, and the applications. There has been a lot of talk about infrastructure and not having infrastructure. Data: every other company in the Bay Area is a backup and restore company. Nobody is talking about applications. Yes, there are SaaS players, right? They say, okay, we will just host an application for you and you don't have to worry about anything else, but what about a lot of these legacy applications which have been built over the last 20 or 30 years? Nobody is talking about that. Everybody talks about greenfield applications. What about we start new? Everything is going to be the cloud. It's going to be cloud native. Everything is awesome. And then the clients say, "Yes, but I already invested $5 million in the applications in the last 10 years." What's going to happen to them? >> They say that the first 80% of writing software is putting the bugs in, and then the second 80% of writing software is taking them out again. So if you have to completely start over with these core business applications that are generating revenue, there's a lot of risk in going to something brand-new. I know we've had Andy Jassy talking about, if I was starting the company again today, I would go completely serverless. And I think, really? Right now in 2017? Is it really that established and that great? What is your take on that? For an enterprise that has this investment already, should they be going completely all in on serverless, or should they be picking off some of these other more mature services, do you think? >> I would say it would be really application-specific. If it's traditional transactional, you may or may not want to go serverless, because you've got that relational database really structured around it.
If it's a modern application and you are a company that has, for example, a brand-new mobile application, then you are going to want to leverage things like Lambda in that application development, so you can trigger the correct service to spin up in that application. I think modernization is really specific to the use case. What we are seeing is a digital transformation in most companies, where they are really requiring some newer applications to leverage PaaS services like Lambda, Elasticsearch, Kinesis, and other things. >> Rishi, one of the things that we've heard from customers for many years is they say that they'd like to have the same thing in their data center as in the public cloud. We did a survey years ago, and it was like 80% who said that, and then we asked, "What do you have?" I've got VMware here and I've got AWS there. Is it about having the whole stack? Are APIs enough? How much commonality do you need? How does Zettabytes look at this, and how do you help customers bridge this model? >> Yes, absolutely. So number one is that the VMware type of solution is still pretty much, I like VMware, but it's pretty much infrastructure-as-a-service based. The second thing is the reason we have come up with our platform: a few core services for a few key workloads, IoT being one of them, latency-sensitive workloads being another, the workloads in which you need a special type of security and governance which clients are used to from the last 20 years. Yes, AWS has amazing security and governance, no doubt about that, but still, part of the workload they may want to run locally, and they should at least have that freedom. Only those parts of the workloads are going to run on the Zettabytes appliance, but the API compatibility we provide is complete, so whether you want to run it on Zettabytes or on AWS, most of the workloads are going to work on AWS, and you can run that from our platform. >> I want to key off a word you mentioned there. You mentioned appliances.
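One concrete way that kind of API compatibility is typically consumed is the `endpoint_url` override that the AWS SDKs already support: the application code stays identical and only the endpoint changes. A minimal sketch, assuming a hypothetical appliance hostname (this is our illustration of the general mechanism, not Zettabytes' documented interface):

```python
def s3_endpoint(target):
    """Return the S3 endpoint URL for a deployment target.

    None means "use the AWS default endpoint"; any other value points
    the SDK at an S3-API-compatible local service. The appliance URL
    below is a made-up placeholder.
    """
    endpoints = {
        "aws": None,
        "on_prem": "https://appliance.example.local:9000",
    }
    return endpoints[target]

# With boto3 the same application code then reads:
#   s3 = boto3.client("s3", endpoint_url=s3_endpoint(target))
# and every get_object/put_object call is unchanged across targets.
```

Because the override lives in one place, "run it on Zettabytes or on AWS" becomes a configuration choice rather than a code change.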
We have seen lots of solutions over the last decade that have done quite well as appliances. Talk about HCI, talk about backup and recovery, lots of things there, but it's a software world now. Heck, serverless: Andy Jassy says we will build serverless. Why an appliance? Talk a little bit about the go-to-market. What do people get today? How do they buy it? Why does that make sense for your customers? >> Yes, absolutely. The value-add is more from the sales perspective, but an appliance, from the optimization perspective, is perfect. In our case, yes, we have figured out a spec which is perfectly optimized for tuning our software platform, and we can provide that to clients. If they want to have a similar stack built themselves, that is perfectly okay, but the idea is that hardware has an equally important role to play as software does. >> When you think about the platform services that we have, like S3, RDS, and others, you definitely need hardware to support a real workload, and for us to really standardize on something that someone can do true development with, that depth on the platform is really critical. You can go to GitHub and get an open source S3 and work around it, but it's a mock. Really what you need is a platform to develop applications on. The other thing is, I worked for Cisco for 10 years, and the channel there is extremely powerful, with companies like CDW and WWT. The routes to market there are really compelling for a combined solution, and I think part of the reason you are seeing success with those combined solutions is that customers are used to a service model where it's one throat to choke on those types of platforms, and the channel is a trusted advisor. It's a great way for us to go to market. >> Yeah, just to give my two cents on that, people conflate that we've had some commoditization of what's happening in infrastructure. It doesn't mean you just grab stuff off of the shelf. I read an article four years ago: AWS infrastructure is hyper-optimized.
If you went to the Tuesday night keynote, oh my gosh, they spend way more on hardware than anybody else in this ecosystem, I'm sure. So it's not that you have IP in hardware. From my understanding, you are making sure your software is optimized for the package. If I can do it, somebody else can do it. >> Bart: Yeah, go ahead. >> Absolutely, I think that part is very, very important: one throat to choke, one company which supports everything. In our case, yes, if you want to have your own hardware, then you can do that, but if you take the whole appliance from us, then we are providing you complete support: hardware, operating system, as well as the software. >> And in a certain sense, we are really trying to, like you said, Stu, match the performance of those optimized environments on AWS for our clients, so they get a similar experience from our platform that they would get on AWS. If they build something on Zettabytes and then deploy on AWS, they should get the same experience. >> I want to give you the last word. I'm sure you have lots of customers coming by your booth. It's not far from where you're sitting right now. What are some of the key things they are hearing? What is getting them excited, interested, that they want to follow up on? >> With most of the customers we've talked to, we say, "Okay, are you using AWS?" They say, "Yes," because they are at re:Invent. I say, "Okay, how far are you in the AWS adoption?" That's where the devil comes in the details. These applications we have been able to migrate, these we have not been able to migrate, we are building our expertise around it, and things like that. And then the question comes: "Do you really want to go too deep into figuring out the problems which the vendors have solved, or would you rather focus on your business problems?" That's what I would say.
We would say: one package, one platform, where you get to focus on your business problems and we take care of the rest. >> Yeah, and I think in the keynote yesterday, Andy Jassy said it's all about the analytics, and what we are hearing is, we've given a lot of thought to putting together a platform that supports big data analytics in addition to the AWS abstraction that we've done. So those analytics workloads were really intriguing to the people talking with us: our support of machine learning, converting what may be a traditional Spark job into a Lambda function, is really something people are raising their eyebrows about. >> Bart Hickenlooper and Rishi Yadav, asking how deep you are into AWS. Well, here at theCUBE we are about 60 interviews in, which means we have a few more hours left of great interviews here. So for Justin Warren, I'm Stu Miniman. Thank you so much for watching theCUBE.
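The Spark-to-Lambda conversion Bart mentions can be illustrated with the classic word count. This is a toy sketch: the event shape is invented, and a real conversion would also need to handle state and aggregation across invocations:

```python
from collections import Counter

def wordcount_handler(event, context=None):
    """Lambda-style re-expression of the canonical Spark word count.

    Where a Spark job counts words across a whole dataset with
    flatMap/map/reduceByKey, a Lambda function processes one small
    batch of lines per invocation and returns that batch's counts.
    """
    counts = Counter()
    for line in event.get("lines", []):
        counts.update(line.split())
    return dict(counts)
```

A per-batch result like this would typically be merged in a downstream store to recover the global aggregate that a Spark job computes in a single pass.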
Sudhir Jangir, Zettabytes & Rishi Yadav, Zettabytes | AWS re:Invent
>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2017 presented by AWS, Intel, and our ecosystem of partners. >> Hey, welcome back everyone. Live here in Las Vegas, theCUBE is covering exclusively AWS re:Invent. We've got two sets. This is set one, set two behind me. We're here with a startup called Zettabytes: Rishi Yadav, Cube alumni and CEO, and Sudhir Jangir, CTO. Hot new startup, Zettabytes. You're a serial entrepreneur, your other company's still going, InfoObjects. Welcome back. >> Thanks for having us here. I don't know, is it the seventh time, eighth time? I mean, we love you Cube guys. Yes, so InfoObjects is the mothership, and it's doing really, really great, and today we are launching Zettabytes, which is our hybrid cloud, cloud integration platform. We are starting with AWS, and then it's going to have integration for other clouds. >> So startups are impacted, and we were talking yesterday about a demarcation line, a point in time. I say 2012, maybe you can say 2014. If you were born before 2012 or 2014, you probably didn't factor in the cloud at the scale it is now. But after that date, you're a newborn startup, you look at the cloud as a resource, an opportunity. So what's your perspective as an entrepreneur, a serial entrepreneur? You start a company, you look at the big beast in Amazon: opportunity, challenge? What's your view? >> So actually 2014 was an inflection point for two things. Number one is that big data started with the hyperscale companies, and at that time you're talking about Facebook and Yahoo and other places, but it was not enterprise-ready. And we suddenly saw the adoption. John, you have been following big data directly from, I think, the early Cloudera days, right? So in 2014 it got better adoption. And things like security and governance, which were not of much concern earlier, became front and center.
Another thing which happened around the 2014, 2015 timeframe: the public cloud, which for eight, nine years was essentially AWS, was mostly about startups, about saving money for them. That also started getting enterprise adoption, and when you're talking about enterprise, you cannot tell them that if you deploy 10 servers on AWS, it's going to save you $200,000. They would say, we have already spent $500 million; we have these huge data centers. So they needed more value than that. >> How about your company Zettabytes? You're launching a new company: what is it, what does it do, why are you starting it? Take a minute to explain what you're doing. >> Yes, absolutely. So the Zettabytes idea came from this convergence of big data, public cloud, and IoT, and the market is ripe for it. The challenge was, we talked to a lot of customers; a lot of them have already started working in the cloud, and some of them were planning to start the journey to the cloud, and at the same time they also wanted to build a big data lake. Andy talked about it a lot today, right, about building the largest data lakes. So now the question was, do you really want to go the old-school route, in which you are using Hadoop and other services around it, and then you do lift and shift to AWS, and then you transform to PaaS? So you spend one and a half, two years doing Hadoop, and then you spend another one and a half, two years doing the PaaS, cloud-native transformation. A better way: whether the clients are on AWS today or they are going to be in one year, they need the same experience, the same cloud experience, the same AWS experience which they would have on AWS, they want on-prem. Now that includes the cloud-native APIs, but also the agility and everything else. >> So let me ask Sudhir a question. So you're the CTO. I know you're technical too, so I have both of you.
>> So the old days: I'm a developer, I have my localhost, I'm banging away at code, and then I go, okay, I'm done, and I ship it to the server for QA or whatever. And even with the cloud. Businesses want that same kind of functionality on premises. They want to go to the cloud, so all the developers are changing; they want that localhost-like feel. They don't want to have to write code, ship it to a server, put it through the cloud. They just want instant integration with Amazon. Is that what you're doing? >> Yeah. >> Did I get it right? 'Cause that seems to be what I think you're doing. >> Yes, we deliver that seamless experience. So you have the same set of APIs which you normally would use on AWS, so you still use the same AWS CLI, use all the AWS APIs. We have accepted those APIs on this platform, and based on those APIs, now using Kubernetes, you decide where the workload will go. >> So one of the challenges of AWS, though, is that they release services constantly. I think we heard at the keynote today it was another hundred or so services that they were releasing. So how do you choose which ones? Do you support all of them, or do you focus on specific ones? >> No, first we are focusing on a few specific ones, which are mostly being used. We are starting with, let's say, S3, Lambda, Kinesis, Kafka, and Spark on HDFS; those are there from day one. And, let's see, today there was an official announcement: they have launched the Kubernetes container management service. We have that flexibility from day one. So we have that in our appliance, and using that, if for example your workload says some of the pieces should run on, let's say, the on-prem appliance, and some of the pieces should go to the cloud, that is also possible. >> So you're selling an appliance. >> Yeah, yeah.
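The placement decision Sudhir describes, some pieces on the appliance and some in the cloud, amounts to a policy function over workload attributes. A toy sketch, with attribute names we've invented from the criteria mentioned in the conversation (IoT, latency, governance):

```python
def place_workload(workload):
    """Decide whether a workload piece runs on-prem or on AWS.

    A deliberately simple policy: IoT and latency-sensitive pieces
    stay local, as does anything with restricted data governance;
    everything else is scheduled to the cloud.
    """
    if workload.get("iot") or workload.get("latency_sensitive"):
        return "on_prem"
    if workload.get("governance") == "restricted":
        return "on_prem"
    return "aws"
```

In a Kubernetes setting, a policy like this would typically translate into node selectors or taints that steer pods to appliance nodes or to cloud nodes.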
One minion, let's say, or a million Kubernetes minions might run on AWS, a few of the minions might run on your appliance, and you can easily let Kubernetes do all the container management. >> What's the business model? Do they pay for the box, or is it a service? Or do they get the box as part of a service? >> So we do both. It's a software platform as well as an appliance, and the beauty of the appliance is that everything is already optimized for you, so that makes it very easy. But if a customer has a chosen hardware platform, we can definitely deploy it on that also. And adding to the hundred-services thing, I think that's a great point: AWS has so many services now, can you really go and figure out which services are most optimized for your needs? That's where you need a partner on the prem side, and that's what we are going to be. And another thing, as Sudhir mentioned, is the EKS which they announced today, Kubernetes: so you have Kubernetes on-prem, AWS is supporting Kubernetes, and we are also supporting Kubernetes, so if you work at that level, it's completely seamless. >> And you were saying before, your target is enterprise, so the appliance delivery model and the simplicity of being able to manage a lot of different services matter. Clearly being able to manage things at scale is something that enterprises are crying out for, because AWS is great if you want to hand-build everything yourself; it has all of those components that you can assemble like Lego. But if I'm an enterprise, I want to be able to do that at scale. Humans don't scale very well, so I need some technology to help with that. So it sounds like you are actually providing the leverage for enterprise humans to be able to manage AWS. Is that a fair characterization? >> Absolutely, that is definitely a very important aspect of it, and another aspect of it is that you may not want to have some workloads on AWS for one reason or another.
IoT workloads by definition cannot be on AWS. Latency-sensitive workloads cannot be on AWS. In the same way, there are workloads in which you need a special level of security. Within your data center, as much as people beat down on the data center, you have your own security and governance. And coming back to your question of whether we are going to support all hundred services: yes, but local execution we are only going to provide for some services, which by their very nature make more sense to run on-prem. >> Yeah, keep the core services. >> Rishi: Core services. >> All right, so how are you guys going to sell this product? Take us through the startup situation. You're here; are you talking to customers? Why do they buy from you? What are the conversations like? When do they need you? Take us through your conversations here at re:Invent. >> Yeah, so before that: AWS has been super successful for the greenfield applications, the new applications, the applications which are born in the cloud. But when it comes to transforming existing applications, it becomes a big, big challenge. So a lot of customers are coming to us; they are interested in how they can seamlessly transform their-- >> John: What's an example workload? >> So the example workloads for us are going to be the big data workloads, which we have specialized in for many years. One of them can be IoT. Sudhir, probably you can explain what that is. >> An example could be, from today's keynote, the Expedia case, or, let's say, the Goldman Sachs case: they spent a lot of time converting their code to the AWS-specific world, right? Millions of lines, or billions of lines of code. With what we are doing, if you build the application today, tomorrow it is future-ready for AWS. It's more convenient; we are actually modeling your experience on AWS.
>> So it's making it easier for enterprises to make that transition from what they're doing today across to the cloud, because that's a big deal for them. >> Tomorrow, when you are, let's say, ready to go to AWS, you can decide whether you want to run your workload on our appliance or on AWS. >> Okay, so your market is hybrid cloud, basically. People doing hybrid cloud should talk to you guys. >> Yeah, and the code would be future-proof. What you are developing today-- >> John: All right, so is the product shipping? >> Yes, we are in the early beta stage; we already have five beta customers, and the product is going to be ready in a week's time. >> So beta now. >> Yeah, yes. >> Yeah, these guys are ready already. >> Open beta, restricted beta? >> It is going to be restricted beta for now, then it's going to be open beta. So yes, we are going to add five more customers in the next two months for the beta. >> Take a minute to explain the type of customer you're looking for. Are the spots all filled? Any more? You have five more spots, you said? >> Yeah, we have five more spots for the beta. >> John: Who are you looking for out there? >> Any large enterprise which is planning to move to AWS but is struggling with all the nitty-gritty: looking at the hundred services, and how do you integrate your existing applications there? So how you could take baby steps; we are going to not just take those baby steps, but sprint through it. That's what the Zettabytes appliance is for. >> Rishi, congratulations on the new startup, launching here: Zettabytes, open beta, five more spots left. Check 'em out, Zettabytes, if you're doing hybrid cloud or true private cloud; they have five spots available. It's theCUBE, bringing all the action, the startup action, here, and also the conversations at re:Invent. I'm John Furrier, with Justin Warren. We're back with more after this short break. (electronic jingle)
Subbu Iyer
>> Hey everyone, welcome to theCUBE's coverage of AWS re:Invent 2022. Lisa Martin here with you with Subbu Iyer, one of our alumni, who's now the CEO of Aerospike. Subbu, great to have you on the program. Thank you for joining us. >> Great as always to be on theCUBE, Lisa. Good to meet you. >> So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, an automotive company. But for a lot of companies, data is underutilized, yet it's a huge asset that could add value. Why do you think companies are struggling so much to make data a value-added asset? >> Well, you know, we see this across the board. When I talk to customers and prospects, there is a desire from the business and from IT to leverage data to really fuel newer applications, newer services, newer business lines, if you will, for companies. I think the struggle is, one, the plethora of data that is created. Surveys say that by 2025, around 175 zettabytes of data is going to be created, and that's really growth of north of 30% year over year. But the more important and interesting thing is that the real-time component of that data is actually growing at a 35% CAGR. And what enterprises desire is decisions that are made in real time or near real time. And a lot of the challenge that exists today is that the infrastructure enterprises have in place was never built to actually manipulate data in real time.
The second is really the ability to actually put something in place which can handle spikes yet be cost efficient too. You can build for peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for you in both cases, so to speak? And the last point that we see out there is, even if you're able to, you know, bring all that data in, you don't have the processing capability to run through that data. So as a result, most enterprises struggle with, one, capturing the data, making decisions from it in real time, and really operating it at the cost point that they need to operate it at. >> You know, you bring up a great point with respect to real-time data access. And I think one of the things that we've learned the last couple of years is that access to real-time data, it's not a nice-to-have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >> Yeah, when we started Aerospike, right? When the company started, it started with the premise that data is going to grow, number one, exponentially. Two, when applications open up to the internet, there's going to be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data, both on the supply side and the demand side, from an inventory of ads that were available. And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and I so that we would click or engage with that particular ad. But over the last three to five years, what we've seen is, as digitization has actually permeated every industry out there, the need to harness data in real time is pretty much present in every industry.
Whether that's retail, whether that's financial services, telecommunications, e-commerce, gaming and entertainment, every industry has a desire. One, the innovative companies, the small companies rather, are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals. And the larger companies don't want to be left behind, so they're standing up their own competing services or getting into new lines of business that really harness and are driven by real-time data. So there are compelling pressures: one, you know, customer experience is paramount, and we as customers expect answers in, you know, an instant, in real time. And on the other hand, the way they make decisions is based on a large data set, because, you know, larger data sets actually propel better decisions. So there's competing pressures here which essentially drive the need, one from a business perspective, two from a customer perspective, to harness all of this data in real time. So that's what's driving an incessant need to actually make decisions in real or near real time. >> You know, I think one of the things that's been in short supply over the last couple of years is patience. We do expect as consumers, whether we're in our business lives or our personal lives, that we're going to be given information and data that's relevant, that's personal, to help us make those real-time decisions. So having access to real-time data is really business critical for organizations across any industry. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >> So, you know, going back to your initial question, Lisa, around why is data really a high-value but underutilized or under-leveraged asset?
One of the reasons we see is a lot of the data platforms that, you know, some of these applications were built on have been around for a decade plus, and they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see that are essential ingredients of any modern data platform. One is really the ability to, you know, operate at unlimited scale. So what we mean by that is really the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is really, you know, predictable performance. So can you actually deliver predictable performance as your data size grows, or your throughput grows, or your concurrent users on that application or service grow? It's really easy to build an application that operates at low scale or low throughput or low concurrency, but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate an always-on, globally resilient application. And that requires a really robust data platform that can be up on a five-nines basis globally, and can support global distribution, because a lot of these applications have global users. And the last point goes back to my first answer, which is: can you operate all of this at a cost point which is not prohibitive, but makes sense from a TCO perspective? 'Cause a lot of times what we see is people make choices of data platforms, and as their service or applications become more successful and more users join their journey, the revenue starts going up, the user base starts going up, but the cost basis starts crossing over the revenue, and they're losing money on the service, ironically, as the service becomes more popular. So really: unlimited scale, predictable performance, always on a globally resilient basis, and low TCO.
These are the four essential capabilities of any modern data platform. >> So then talk to me, with those as the four main core functionalities of a modern data platform: how does Aerospike deliver that? >> So we were built, as I said, from day one to operate at unlimited scale and deliver predictable performance. And then over the years, as we worked with customers, we built this incredible high-availability capability, which helps us deliver the always-on, you know, operations. So we have customers who have been on the platform 10 years with no downtime, for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these, you know, globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, you know, going a little bit technically deep here, essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs, or solid-state devices, as essentially extended memory. So you're getting memory performance, but you're accessing these SSDs. You're not paying memory prices, but you're getting memory performance. As a result of that, you can attach a lot more data to each node or each server in a distributed cluster. And when you kind of scale that across basically a distributed cluster, you can do with Aerospike the same things at 60 to 80% lower server count, and as a result, 60 to 80% lower TCO compared to some of the other options that are available in the market. Then basically, as I said, that's the key kind of starting point to the innovation. We layer on capabilities like, you know, replication, change data notification, you know, synchronous and asynchronous replication, the ability to actually stretch a single cluster across multiple regions.
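Subbu's 60-to-80% server-count claim is, at bottom, capacity arithmetic. The sketch below uses entirely hypothetical per-node figures (these are not Aerospike sizing guidance) just to show how attaching more data per node shrinks the cluster and, with it, the TCO:

```python
import math

# Hypothetical capacity figures, purely illustrative: a node that can
# only hold data in DRAM vs. a node that treats SSDs as extended memory
# (the hybrid memory architecture idea) and so holds far more data.
DATASET_TB = 100.0        # assumed working set
RAM_PER_NODE_TB = 0.5     # assumed usable DRAM per node
HMA_PER_NODE_TB = 2.0     # assumed usable SSD-as-memory per node

def servers_needed(dataset_tb: float, usable_tb_per_node: float) -> int:
    """Number of nodes required to hold the dataset, rounding up."""
    return math.ceil(dataset_tb / usable_tb_per_node)

in_memory_nodes = servers_needed(DATASET_TB, RAM_PER_NODE_TB)  # 200 nodes
hma_nodes = servers_needed(DATASET_TB, HMA_PER_NODE_TB)        # 50 nodes
reduction = 1 - hma_nodes / in_memory_nodes

print(f"in-memory cluster: {in_memory_nodes} nodes")
print(f"HMA-style cluster: {hma_nodes} nodes")
print(f"server-count reduction: {reduction:.0%}")
```

With these made-up densities the reduction lands at 75%, inside the 60-to-80% band quoted; real savings depend on replication factor, overhead, and actual node hardware.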
So for example, if you're operating a global service, you can have a single Aerospike cluster with one node in San Francisco, one node in New York, another one in London, and this would be operating basically seamlessly. And, you know, this is strongly consistent. Very few NoSQL data platforms are strongly consistent, or if they are strongly consistent, they will actually suffer performance degradation. And what strongly consistent means is, you know, all your data is always available, it's guaranteed to be available, and no data is lost at any time. So in this configuration that I talked about, if the node in London goes down, your application still continues to operate, right? Your users see no kind of downtime, and, you know, when London comes up, it rejoins the cluster, and everything is back to kind of the way it was before, you know, London left the cluster, so to speak. So the ability to do this globally resilient, highly available kind of model is really, really powerful. A lot of our customers actually use that kind of a scenario, and we offer other deployment scenarios from a high-availability perspective. So everything starts with HMA, or Hybrid Memory Architecture, and then we start building a lot of these other capabilities around the platform. And then over the years, what our customers have guided us to do is, as they're putting together a modern kind of data infrastructure, we don't live in a silo. So Aerospike gets deployed with other technologies like streaming technologies or analytics technologies. So we built connectors into Kafka, Pulsar, so that as you're ingesting data from a variety of data sources, you can ingest it at very high ingest speeds and store it persistently into Aerospike. Once the data is in Aerospike, you can actually run Spark jobs across that data in a multi-threaded, parallel fashion to get real insight from that data at really high throughput and high speed.
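The London-node failure scenario above can be made concrete with a toy model. This is a deliberately simplified sketch, not Aerospike's actual replication or consistency protocol: every record is held on three replicas, reads survive a node loss, and a rejoining node resyncs from a live peer:

```python
# Toy three-replica store illustrating the failover behavior described
# above. Purely illustrative; real systems handle quorums, conflicts,
# and partial resync far more carefully.
class ToyCluster:
    def __init__(self, nodes):
        self.replicas = {n: {} for n in nodes}   # node -> key/value store
        self.down = set()

    def put(self, key, value):
        for node, store in self.replicas.items():
            if node not in self.down:            # down nodes miss the write
                store[key] = value

    def get(self, key):
        for node, store in self.replicas.items():
            if node not in self.down and key in store:
                return store[key]                # any live replica serves reads
        raise KeyError(key)

    def fail(self, node):
        self.down.add(node)

    def rejoin(self, node):
        self.down.discard(node)
        # Resync from any live peer so the rejoining node catches up.
        for other, store in self.replicas.items():
            if other != node and other not in self.down:
                self.replicas[node].update(store)
                break

cluster = ToyCluster(["san-francisco", "new-york", "london"])
cluster.put("user:42", {"cart": 3})
cluster.fail("london")                               # London goes down...
assert cluster.get("user:42") == {"cart": 3}         # ...reads still succeed
cluster.put("user:43", {"cart": 1})                  # writes continue on survivors
cluster.rejoin("london")                             # London rejoins and resyncs
assert cluster.replicas["london"]["user:43"] == {"cart": 1}
```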
>> High throughput, high speed, incredibly important, especially as today's landscape is increasingly distributed: data centers, multiple public clouds, Edge, IoT devices, the workforce embracing more and more hybrid these days. How are you helping customers to extract more value from data while also lowering costs? Go into some customer examples, 'cause I know you have some great ones. >> Yeah, you know, I think we have built an amazing set of customers, and customers actually use us for some really mission-critical applications. So, you know, before I get into specific customer examples, let me talk to you about some of kind of the use cases which we see out there. We see a lot of Aerospike being used in fraud detection. We see us being used in recommendation engines. We get used in customer data profiles, or customer profiles, Customer 360 stores, you know, multiplayer gaming and entertainment. These are kind of the repeated use cases, and digital payments: we power most of the digital payment systems across the globe. From a specific example perspective, the first one I would love to talk about is PayPal. So if you use PayPal today, then you know when you're actually paying somebody, your transaction is, you know, being sent through Aerospike to really decide whether this is a fraudulent transaction or not. And when you do that, you know, you and I as customers are not going to wait around for 10 seconds for PayPal to say yay or nay. We expect, you know, the decision to be made in an instant. So we are powering that fraud detection engine at PayPal for every transaction that goes through PayPal. Before us, you know, PayPal was missing out on about 2% of their SLAs, which was essentially millions of dollars which they were losing, because, you know, they were letting transactions go through and taking the risk that it's not a fraudulent transaction.
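The SLA accounting behind a fraud-check story like this is simple to sketch. The latencies and the 100-millisecond budget below are synthetic numbers invented for illustration, not PayPal's actual figures:

```python
# Given per-transaction fraud-decision latencies, what fraction met the
# latency budget? All numbers here are synthetic, for illustration only.
def sla_attainment(latencies_ms, budget_ms):
    """Fraction of transactions whose fraud decision met the budget."""
    met = sum(1 for t in latencies_ms if t <= budget_ms)
    return met / len(latencies_ms)

latencies = [12, 40, 95, 101, 30, 88, 55, 99, 97, 61]   # synthetic sample
print(f"SLA attainment: {sla_attainment(latencies, 100):.1%}")  # 90.0%
```

In this made-up sample, one transaction out of ten misses the budget, so attainment is 90%; a production system would compute this over millions of transactions against its contractual SLA target.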
With Aerospike they can now actually get a much better SLA, and the data set on which they compute the fraud score has gone up by, you know, several factors. So by 30X, if you will. So not only has the data size that is powering the fraud engine actually gone up 30X with Aerospike, but they're actually making decisions in an instant for, you know, 99.95% of their transactions. So that's- >> And that's what we expect as consumers, right? We want to know that there's fraud detection on the swipe, regardless of who we're interacting with. >> Yes, and so that's a really powerful use case, and, you know, it's a great customer success story. The other one I would talk about is really Wayfair, right, from retail and, you know, from e-commerce. So everybody knows Wayfair, a global leader in online home furnishings, and they use us to power their recommendations engine. And, you know, it's basically: if you're purchasing this, people who bought this also bought these five other things, so on and so forth. They have actually seen their cart size at checkout go up by up to 30% as a result of actually powering their recommendations engine through Aerospike. And they were able to do this by reducing the server count by 9X. So on one ninth of the servers that were there before Aerospike, they're now powering their recommendations engine and seeing cart size at checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to, you know, drive at Wayfair. >> Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized, relevant experience that's going to show me, if I bought this, show me something else that's related to that. We have this expectation that needs to be really fueled by technology. >> Exactly, and you know, another great example, you asked about, you know, customer stories: Adobe. Who doesn't know Adobe, you know.
They're on a mission to deliver the best customer experience that they can, and they're talking about, you know, great Customer 360 experience at scale, and they're modernizing their entire edge compute infrastructure to support this with Aerospike. Going to Aerospike, basically what they have seen is their throughput go up by 70%, and their cost has been reduced by 3X. So essentially doing it at one third of the cost while their annual data growth continues at, you know, about north of 30%. So not only is their data growing, they're able to actually reduce their cost to deliver this great customer experience to one third, and continue to deliver great Customer 360 experience at scale. A really, really powerful example of how you deliver Customer 360 in a world which is dynamic, and, you know, on a data set which is constantly growing at north of 30% in this case. >> Those are three great examples: PayPal, Wayfair, Adobe. Talking about, especially with Wayfair, when you talk about increasing their cart checkout sizes, but also with Adobe increasing throughput by over 70%. I'm looking at my notes here. While data is growing at 32%, that's something that every organization has to contend with: data growth is continuing to scale and scale and scale. >> Yep, I'll give you a fun one here. So, you know, you may not have heard about this company. It's called Dream11, and it's a company based out of India, but it's a fun story, because it's the world's largest fantasy sports platform. And, you know, India is a nation which is cricket crazy. So, you know, when they have their premier league going on and there's millions of users logged onto the Dream11 platform, building their fantasy league teams and, you know, playing on that particular platform, it has a hundred million plus users on the platform, 5.5 million concurrent users, and they have been growing at 30%.
So they are considered an amazing success story in terms of what they have accomplished and the way they have architected their platform to operate at scale. And all of that is really powered by Aerospike. Think about that: they're able to deliver all of this and support a hundred million users, 5.5 million concurrent users, all with, you know, 99-plus percent of their transactions completing in less than one millisecond. Just an incredible success story. Not a brand that is, you know, world renowned, but at least, you know, from what we see out there, it's an amazing success story of operating at scale. >> Amazing success story, huge business outcomes. Last question for you, as we're almost out of time: talk a little bit about the Aerospike-AWS partnership, Graviton2, better together. What are you guys doing together there? >> Great partnership. AWS has multiple layers in terms of partnerships. So, you know, we engage with AWS at the executive level. They plan out the rollout of new instances in partnership with us, making sure that, you know, those instance types work well for us. And then we just released support for Aerospike on the Graviton platform, and we just announced a benchmark of Aerospike running on Graviton on AWS. And what we see out there with the benchmark is a 1.6X improvement in price performance, and, you know, about an 18% increase in throughput while maintaining a 27% reduction in cost on Graviton. So this is an amazing story from a price performance perspective, and performance per watt for greater energy efficiency, which, basically, a lot of our customers are starting to kind of talk to us about leveraging to further meet their sustainability targets. So a great story from Aerospike and AWS, not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >> And it sounds like a great sustainability story.
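As a quick sanity check, the benchmark figures quoted here hang together: price performance is throughput per unit cost, so an 18% throughput gain at 27% lower cost compounds multiplicatively to roughly the 1.6X improvement mentioned:

```python
# Cross-checking the quoted benchmark numbers: +18% throughput and
# -27% cost imply a price-performance ratio of 1.18 / 0.73.
throughput_gain = 0.18   # ~18% more throughput on Graviton (as quoted)
cost_reduction = 0.27    # ~27% lower cost on Graviton (as quoted)

price_performance = (1 + throughput_gain) / (1 - cost_reduction)
print(f"price-performance improvement: {price_performance:.2f}x")  # 1.62x
```

That comes out to about 1.62x, consistent with the quoted 1.6X figure.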
I wish we had more time so we could talk more about this, but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >> Thank you very much. I mean, if folks are at re:Invent next week or this week, come on and see us at our booth. We are in the data analytics pavilion, and you can find us pretty easily. Would love to talk to you. >> Perfect, we'll send them there. Subbu Iyer, thank you so much for joining me on the program today. We appreciate your insights. >> Thank you, Lisa. >> I'm Lisa Martin. You're watching theCUBE's coverage of AWS re:Invent 2022. Thanks for watching.
Edward Naim, AWS | AWS Storage Day 2022
>> Welcome back to AWS Storage Day 2022. I'm Dave Vellante, and we're pleased to have back on theCUBE Ed Naim, the GM of AWS File Storage. Ed, how you doing? Good to see you. >> I'm good, Dave. Good, good to see you as well. >> You know, we've been tracking AWS storage for a lot of years, 16 years actually. We've seen the evolution of services. Of course, we started with S3 and object, and saw that expand to block and file, and now the pace is actually accelerating, and we're seeing AWS make more moves again today in block and object. But what about file? It's one format in the world, and the day wouldn't really be complete without talking about file storage. So, what are you seeing from customers? In terms of, let's start with data growth: how are they dealing with the challenges? What are those challenges? If you could address, you know, specifically some of the issues that they're having, that would be great, and then later we're going to get into the role that cloud file storage plays. Take it away. >> Well, Dave, I'm definitely increasingly hearing customers talk about the challenges in managing ever-growing data sets, and they're especially challenged in doing that on-premises. When we look at the data that's stored on premises, zettabytes of data, the fastest-growing data sets consist of unstructured data that are stored as files, and many companies have tens of petabytes or hundreds of petabytes or even exabytes of file data, and this data is typically growing 20 to 30 percent a year. And in reality, on-premises models really weren't designed to handle this amount of data and this type of growth. And I'm not just talking about keeping up with hardware purchases and hardware floor space: a big part of the challenge is labor and talent to keep up with the growth they're seeing. Companies managing storage on-prem really need an unprecedented number of skilled resources to manage the storage, and these skill sets are in really high demand and in short supply. And then another big part of the challenge
that customers tell me all the time is that operating at scale, dealing with these ever-growing data sets at scale, is really hard. And it's not just hard in terms of the people you need and the skill sets that you need; operating at scale presents net new challenges. So, for example, it becomes increasingly hard to know what data you have and what storage media your data is stored on when you have a massive amount of data that's spanning hundreds of thousands of applications and users and growing super fast each year. And at scale, you start seeing edge technical issues get triggered more commonly, impacting your availability or your resiliency or your security, and you start seeing processes that used to work when you were at a much smaller scale no longer work. Scale is hard; it's really hard. And then finally, companies are wanting to do more with their fast-growing data sets, to get insights from them, and they look at the machine learning and the analytics and the processing services and the compute power that they have at their fingertips on the cloud, and having that data be in silos on-prem can really limit how they get the most out of their data. >> You know, I'm glad you brought up the skills gap. I've been covering that quite extensively with my colleagues at ETR, you know, our survey partner, so that's a really important topic, and we're seeing it across the board. I mean, it's really acute in cybersecurity, but for sure just generally in IT. And frankly, CEOs don't want to invest in training people to manage storage. I mean, it wasn't that long ago that managing LUNs was a talent, and of course nobody does that anymore. Executives would much rather apply skills to get value from data. So my specific question is: what can be done? What is AWS doing to address this problem? >> Well, with the growth of data that we're seeing, it's just really hard for a lot of IT teams to keep up with just the infrastructure
management part that's needed. So things like deploying capacity and provisioning resources and patching and conducting compliance reviews, that stuff is just table stakes. The asks on these teams, to your point, are growing to be much bigger than those pieces. So we're really seeing fast uptake of our Amazon FSx service, because it's such an easy path for helping customers with these scaling challenges. FSx enables customers to launch and to run and to scale feature-rich and highly performant network-attached file systems on AWS, and it provides fully managed file storage, which means that we handle all of the infrastructure: all of that provisioning and that patching and ensuring high availability. And customers simply make API calls to do things like scale up their storage, or change their performance level at any point, or change a backup policy. And a big part of why FSx has been so appealing to customers is it really enables them to choose the file system technology that powers their storage. So we provide four of the most popular file system technologies: Windows File Server, NetApp ONTAP, OpenZFS, and Lustre, so that storage and application admins can use what they're familiar with. So they essentially get the full capabilities and even the management CLIs that they're used to, and that they've built workflows and applications around on-premises, but they get along with that, of course, the benefits of fully managed, elastic cloud storage that can be spun up and spun down and scaled on demand, and performance changed on demand, etc. And what storage and application admins are seeing is that FSx not only helps them keep up with their scale and growth, but it gives them the bandwidth to do more of what they want to do: supporting strategic decision making, helping their end customers figure out how they can get more value from their data, identifying opportunities to reduce cost. And what we realize is that for a number of storage and application admins
the cloud is a different environment from what they're used to, and we're making it a priority to help educate and train folks on cloud storage. Earlier today we talked about AWS Storage digital badges, and we announced a dedicated file badge that helps storage admins and professionals learn and demonstrate their AWS skills. Our AWS Storage badges, you can think of them as credentials that represent cloud computing learning that customers can add to their repertoire, add to their resume, as they're embarking on this cloud journey. And we'll be talking more in depth on this later today, especially around the file badge, which I'm very excited about. >> So, a couple things there that I wanted to comment on. I mean, I was there for the NetApp announcement, you know, we've covered that quite extensively. This just shows that it's not a zero-sum game necessarily, right? It's a win-win-win for customers. You've got your, you know, specific AWS services, you've got partner services, and, you know, customers want choice. And then the managed service model, you know, to me is a no-brainer for most customers. We learned this in the Hadoop years. I mean, it just got so complicated. Then you saw what happened with the managed services around, you know, data lakes and lakehouses. It just really simplified things for customers. I mean, there are still some customers that want to do it themselves, but a managed service for file storage sounds like a really easy decision, especially for those IT teams that are overburdened, as we were talking about before. And I also like, you know, the education component, a nice touch too. You get the badge thing, so that's kind of cool. So I'm hearing that the fully managed file storage service is a catalyst for cloud adoption. So the question is: which workloads should people choose to move into the cloud? Where's the low-friction, low-risk sweet spot, Ed? >> Well, that's one of the first questions that customers ask when they're about to embark on their cloud journey, and I wish
I could give a simple or a single answer, but the answer is really: it varies, and it varies per customer. And I'll give you an example. For some customers, the cloud journey begins with what we call extending on-premises workloads into the cloud. So an example of that is compute bursting workloads, where customers have data on premises and they have some compute on premises, but they want to burst their processing of that data to the cloud, because they really want to take advantage of the massive amount of compute that they get on AWS. And that's common with workloads like visual effects rendering, chip design simulation, genomics analysis. So that's an example of extending to the cloud, really leveraging the cloud first for your workloads. Another example is disaster recovery, and that's a really common example. Customers will use the cloud for their secondary or their failover site rather than maintaining their second on-prem location. And so a lot of customers start with some of those workloads by extending to the cloud. And then there are a lot of other customers where they've made the decision to migrate most or all of their workloads, and they're skipping the whole extending step. They aren't starting there. They're instead focused on going all in as fast as possible, because they really want to get to the full benefits of the cloud as fast as possible. And for them, the migration journey is really a matter of sequencing: sequencing which specific workloads to move, and when. And what's interesting is we're increasingly seeing customers prioritizing their most important and their most mission-critical applications ahead of their other workloads in terms of timing, and they're doing that to get their workloads to benefit from the added resilience they get from running on the cloud. So it really does depend, Dave. >> Yeah, thank you. I mean, that's a pretty good description of the options there, and I just want to come back to something. You know, bursting,
obviously I love those examples you gave around genomics, chip design, visual effects rendering. The DR piece is, again, a very common sort of, you know, historical sweet spot for cloud. But then the point about mission critical is interesting, because I hear a lot of customers, especially with the digital transformation push, wanting to change their operating model. I mean, on the one hand, not changing things: put it in the cloud, the lift and shift, you don't have to change things, low friction. But then once they get there, they're like, wow, we can do a lot more with the cloud. So that was really helpful, those examples. Now, last year at Storage Day you released a new file service, and then you followed that up at re:Invent with another file service introduction. Sometimes, I can admit, I get lost in the array of services. So help us understand: when a customer comes to AWS with, like, an NFS or an SMB workload, how do you steer them to the right managed service? You know, the right horse for the right course. >> Yeah, well, I'll start by saying, you know, a big part of our focus has been on providing choice to customers, and what customers tell us is that the spectrum of options that we provide to them really helps them in their cloud journey, because there really isn't a one-size-fits-all file system for all workloads. And so having these options actually really helps them to be able to move pretty easily to the cloud. And so my answer to your question about where we steer a customer when they have a file workload is: it really depends on what the customer is trying to do, and in many cases, where they're coming from. So I'll walk you through a little bit of how we think about this with customers. So for storage and application admins who are extending existing workloads to the cloud or migrating workloads to AWS, the easiest path generally is to move to an FSx file system that provides the same or a really similar underlying file system engine to the one they use on premises. So,
for example if you're running a netapp appliance on premises or a windows file server on premises choosing that option within fsx provides the least effort for a customer to lift their application and their data set and they'll get the full safe set of capabilities that they're used to they'll get the performance profiles that they're used to but of course they'll get all the benefits of the cloud that i was talking about earlier like spin up and spin down and fully managed and elastic capacity then we also provide open source file systems within the fsx family so if you're a customer and you're used to those or if you aren't really wedded to a particular file system technology these are really good options and they're built on top of aws's latest infrastructure innovations which really allows them to provide pretty significant price and performance benefits to customers so for example the file system file servers for these offerings are powered by aws's graviton family of processors and under the hood we use storage technology that's built on top of aws's scalable reliable datagram transport protocol which really optimizes for for speed on the cloud and so for those two open source file systems we have open zfs and that provides a really powerful highly performant nfs v3 and v4 and 4.1 and 4.2 file system built on a fast and resilient open source linux file system it has a pretty rich set of capabilities it has things like point-to-time snapshots and in-place data cloning and our customers are really using it because of these capabilities and because of its performance for a pretty broad set of enterprise i.t workloads and vertically focused workloads like within the financial services space and the healthcare life sciences space and then luster is a scale-out file system that's built on the world's most popular high-performance file system which is the luster open source file system and customers are using it for compute intensive workloads where they're throwing 
tons of compute at massive data sets and they need to drive tens or hundreds of gigabytes per second of throughput it's really popular for things like machine learning training and high performance computing big data analytics video rendering and transcoding so really those scale out compute intensive workloads and then we have a very different type of customer very different persona and this is the individual that we call the aws builder and these are folks who are running cloud native workloads they leverage a broad spectrum of aws's compute and analytic services and they have really no history of on-prem examples are data scientists who require a file share for training sets research scientists who are performing analysis on lab data developers who are building containerized or serverless workloads and cloud practitioners who need a simple solution for storing assets for their cloud workflows and and these these folks are building and running a wide range of data focused workloads and they've grown up using services like lambda and building containerized workloads so most of these individuals generally are not storage experts and they look for storage that just works s3 and consumer file shares uh like dropbox are their reference point for how cloud storage works and they're indifferent to or unaware of bio protocols like smb or nfs and performing typical nas administrative tasks is just not it's not a natural experience for them it's not something they they do and we built amazon efs to meet the needs of that group it's fully elastic it's fully serverless spreads data across multiple availability zones by default it scales infinitely it works very much like s3 so for example you get the same durability and availability profile of s3 you get intelligent tiering of colder data just like you do on s3 so that service just clicks with cloud native practitioners it's it's intuitive and it just works there's mind-boggling the number of use cases you just went through 
and this is where it's so you know it's you know a lot of times people roll their eyes oh here's amazon talking about you know customer obsession again but if you don't stay close to your customers there's no way you could have predicted when you're building these services how they were going to be put to use the only way you can understand it is watch what customers do with it i loved the conversation about graviton we've written about that a lot i mean nitro we've written about that how it's you've completely rethought virtualization the security components in there the hpc luster piece and and the efs for data scientists so really helpful there thank you i'm going to change uh topics a little bit because there's been this theme that you've been banging on at storage day putting data to work and i tell you it's a bit of a passion of mine ed because frankly customers have been frustrated with the return on data initiatives it's been historically complicated very time consuming and expensive to really get value from data and often the business lines end up frustrated so let's talk more about that concept and i understand you have an announcement that fits with this scene can you tell us more about that absolutely today we're announcing a new service called amazon file cache and it's a service on aws that accelerates and simplifies hybrid workflows and specifically amazon file cache provides a high speed cache on aws that makes it easier to process file data regardless of where the data is stored and amazon file cache serves as a temporary high performance storage location and it's for data that's stored in on-premise file servers or in file systems or object stores in aws and what it does is it enables enterprises to make these dispersed data sets available to file based applications on aws with a unified view and at high speeds so think of sub millisecond latencies and and tens or hundreds of gigabytes per second of throughput and so a really common use case it 
supports is if you have data stored on premises and you want to burst the processing workload to the cloud you can set up this cache on aws and it allows you to have the working set for your compute workload be cached near your aws compute so what you would do as a customer when you want to use this is you spin up this cache you link it to one or more on-prem nfs file servers and then you mount this cache to your compute instances on aws and when you do this all of your on-prem data will appear up automatically as folders and files on the cache and when your aws compute instances access a file for the first time the cache downloads the data that makes up that file in real time and that data then would reside on the cache as you work with it and when it's in the cache your application has access to that data at those sub millisecond latencies and at up to hundreds of gigabytes per second of throughput and all of this data movement is done automatically and in the background completely transparent to your application that's running on the compute instances and then when you're done with your workload with your data processing job you can export the changes and all the new data back to your on-premises file servers and then tear down the cache another common use case is if you have a compute intensive file-based application and you want to process a data set that's in one or more s3 buckets you can have this cache serve as a really high speed layer that your compute instances mount as a network file system you can also place this cache in front of a mix of on-prem file servers and s3 buckets and even fsx file systems that are on aws all of the data from these locations will appear within a single name space that clients that mount the cache have access to and those clients get all the performance benefits of the cache and also get a unified view of their data sets and and to your point about listening to customers and really paying attention to customers dave we built 
this service because customers asked us to a lot of customers asked us to actually it's a really helpful enable enabler for a pretty wide variety of cloud bursting workloads and hybrid workflows ranging from media rendering and transcoding to engineering design simulation to big data analytics and it really aligns with that theme of extend that we were talking about earlier you know i often joke that uh aws has the best people working on solving the speed of light problem so okay but so this idea of bursting as i said has been a great cloud use case from the early days and and bringing it to file storage is very sound and approach with file cache looks really practical um when is the service available how can i get started you know bursting to aws give us the details there yeah well stay tuned we we announced it today at storage day and it will be generally available later this year and once it becomes available you can create a cache via the the aws management console or through the sdks or the cli and then within minutes of creating the cache it'll be available to your linux instances and your instances will be able to access it using standard file system mount commands and the pricing model is going to be a pretty familiar one to cloud customers customers will only pay for the cash storage and the performance they need and they can spin a cash up and use it for the duration of their compute burst workload and then tear it down so i'm really excited that amazon file cache will make it easier for customers to leverage the agility and the performance and the cost efficiency of aws for processing data no matter where the data is stored yeah cool really interested to see how that gets adopted ed always great to catch up with you as i said the pace is mind-boggling it's accelerating in the cloud overall but storage specifically so by asking us can we take a little breather here can we just relax for a bit and chill out uh not as long as customers are asking us for 
more things so there's there's more to come for sure all right ed thanks again great to see you i really appreciate your time thanks dave great catching up okay and thanks for watching our coverage of aws storage day 2022 keep it right there for more in-depth conversations on thecube your leader in enterprise and emerging tech coverage [Music] you
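The workflow Ed walks through — link an on-prem NFS origin, mount the cache, lazy-load a file into the cache on first access, then export changes back and tear the cache down — can be sketched as a toy model. This is purely illustrative Python; the class and method names are invented for the sketch and are not the actual Amazon File Cache API:

```python
# Toy model of the cache workflow described above: link an origin,
# lazy-load files on first access, and export changes back to the
# origin before tearing the cache down. Names are invented for
# illustration only; this is not the AWS API.

class FileCache:
    def __init__(self):
        self.origins = {}   # linked data repositories (e.g. on-prem NFS)
        self.cache = {}     # files currently resident in the cache
        self.dirty = set()  # files modified since they were loaded

    def link(self, name, origin_files):
        # Linked data appears as folders/files immediately; no bytes move yet.
        self.origins[name] = dict(origin_files)

    def read(self, name, path):
        # First access pulls the file into the cache ("lazy load");
        # later reads are served from the cache at cache speeds.
        if path not in self.cache:
            self.cache[path] = self.origins[name][path]
        return self.cache[path]

    def write(self, name, path, data):
        self.cache[path] = data
        self.dirty.add((name, path))

    def export(self):
        # Push changed and new files back to their origins, then drop the cache.
        for name, path in self.dirty:
            self.origins[name][path] = self.cache[path]
        self.cache.clear()
        self.dirty.clear()

cache = FileCache()
cache.link("nfs1", {"/data/frame001.exr": b"raw-frame"})
cache.read("nfs1", "/data/frame001.exr")   # lazy load on first access
cache.write("nfs1", "/data/frame001.exr", b"rendered-frame")
cache.export()                             # changes land back on the origin
print(cache.origins["nfs1"]["/data/frame001.exr"])
```

The point of the model is the access pattern, not the implementation: data moves only when first touched, and only modified data flows back at export time.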
David Friend, Wasabi | Secure Storage Hot Takes
>> The rapid rise of ransomware attacks has added yet another challenge that business technology executives have to worry about these days. Cloud storage, immutability, and air gaps have become must-have arrows in the quiver of organizations' data protection strategies. But the important reality that practitioners have embraced is data protection, it can't be an afterthought or a bolt on, it has to be designed into the operational workflow of technology systems. The problem is oftentimes data protection is complicated with a variety of different products, services, software components, and storage formats. This is why object storage is moving to the forefront of data protection use cases, because it's simpler and less expensive. The put-data, get-data syntax has always been alluring, but object storage historically was seen as this low cost niche solution that couldn't offer the performance required for demanding workloads, forcing customers to make hard trade offs between cost and performance. That has changed. The ascendancy of cloud storage generally, and the S3 format specifically, has catapulted object storage to become a first class citizen and a mainstream technology. Moreover, innovative companies have invested to bring object storage performance to parity with other storage formats. But cloud costs are often a barrier for many companies, as the monthly cloud bill and egress fees in particular steadily climb. Welcome to Secure Storage Hot Takes. My name is Dave Vellante and I'll be your host of the program today, where we introduce our community to Wasabi, a company that is purpose built to solve this specific problem with what it claims to be the most cost effective and secure solution on the market. We have three segments today to dig into these issues. First up is David Friend, the well known entrepreneur, who co-founded Carbonite and now Wasabi. We'll then dig into the product with Drew Schlussel of Wasabi.
And then we'll bring in the customer perspective with Kevin Warenda of the Hotchkiss School. Let's get right into it. We're here with David Friend, the President and CEO, and co-founder of Wasabi, the hot storage company. David, welcome to theCUBE. >> Thanks, Dave. Nice to be here. >> Great to have you. So look, you hit a home run with Carbonite back when building a unicorn was a lot more rare than it has been in the last few years. Why did you start Wasabi? >> Well, when I was still CEO of Carbonite, my genius co-founder, Jeff Flowers, and our chief architect came to me and said, you know, when we started this company, a state of the art disc drive was probably 500 gigabytes. And now we're looking at eight terabyte, 16 terabyte, 20 terabyte, even hundred terabyte drives coming down the road. And, you know, sooner or later the old architectures that were designed around these much smaller disc drives are going to run out of steam, because even though the capacities are getting bigger and bigger, the speed with which you can get data on and off of a hard drive isn't really changing all that much. And Jeff foresaw a day when the architectures of sort of legacy storage like Amazon S3 and so forth, were going to become very inefficient and slow. And so he came up with a new highly parallelized architecture, and he said, I want to go off and see if I can make this work. So I said, you know, good luck, go to it. And they went off and spent about a year and a half in the lab designing and testing this new storage architecture. And when they got it working, I looked at the economics of this and I said, holy cow, we could sell cloud storage for a fraction of the price of Amazon, still make very good gross margins, and it will be faster. >> So this is a whole new generation of object storage that you guys have invented. >> So I recruited a new CEO for Carbonite and left to found Wasabi, because the market for cloud storage is almost infinite, you know?
When you look at all the world's data, you know, IDC has these crazy numbers, 120 zettabytes or something like that. And if you look at the potential market size around storing that data, we're talking trillions of dollars, not billions. And so I said, look, this is a great opportunity. If you look back 10 years, all the world's data was on prem. If you look forward 10 years, most people agree that most of the world's data is going to live in the cloud. We're at the beginning of this migration, we've got an opportunity here to build an enormous company. >> That's very exciting. I mean, you've always been a trend spotter, and I want to get your perspectives on data protection and how it's changed. It's obviously on people's minds with all the ransomware attacks and security breaches, but thinking about your experiences and past observations, what's changed in data protection, and what's driving the current very high interest in the topic? >> Well, I think, you know, from a data protection standpoint, immutability, the equivalent of the old WORM tapes but applied to cloud storage, has, you know, become core to the backup strategies and disaster recovery strategies for most companies. And if you look at our partners who make backup software, like Veeam, Commvault, Veritas, Arcserve, and so forth, most of them are really taking advantage of immutable cloud storage as a way to protect customer data, customers' backups, from ransomware. So the ransomware guys are pretty clever, and they, you know, they discovered early on that if someone could do a full restore from their backups, they're never going to pay a ransom. So once they penetrate your system, they get pretty good at sort of watching how you do your backups, and before they encrypt your primary data, they figure out some way to destroy or encrypt your backups as well, so that you can't do a full restore from your backups, and that's where immutability comes in.
>> You know, in the old days you wrote what was called a WORM tape, you know? Write once, read many. And those could not be overwritten or modified once they were written. And so we said, let's come up with an equivalent of that for the cloud. And it's very tricky software, you know, it involves all kinds of encryption algorithms and blockchain and this kind of stuff. But, you know, the net result is, if you store your backups in immutable buckets in a product like Wasabi, you can't alter it or delete it for some period of time. So you could put a timer on it, say a year or six months or something like that. Once that data is written, you know, there's no way you can go in and change it, modify it, or anything like that, including even Wasabi's engineers. >> So, David, I want to ask you about data sovereignty. It's obviously a big deal, I mean, especially for companies with a presence overseas, but really, what isn't a digital business these days? How should companies think about approaching data sovereignty? Is it just large firms that should be worried about this? Or should everybody be concerned? What's your point of view? >> Well, all around the world countries are imposing data sovereignty laws. And if you're in the storage business, like we are, if you don't have physical data storage in country, you're probably not going to get most of the business. You know, since Christmas we've built data centers in Toronto, London, Frankfurt, Paris, Sydney, Singapore, and I've probably forgotten one or two. But the reason we do that is twofold. One is, you know, if you're closer to the customer, you're going to get better response time, lower latency, and that's just a speed of light issue. But the bigger issue is, if you've got financial data, if you have healthcare data, if you have data relating to security, like surveillance videos and things of that sort, most countries are saying that data has to be stored in country, so you can't send it across borders to some other place.
And if your business operates in multiple countries, you know, dealing with data sovereignty is going to become an increasingly important problem. >> So in May of 2018, that's when the fines associated with violating GDPR went into effect, and GDPR was like the mainspring of privacy and data protection laws. And we've seen it spawn other public policy things like the CCPA, and it continues to evolve. We see judgements in Europe against big tech, and this techlash that's in the news in the US, and the elimination of third party cookies. What does this all mean for data protection in the 2020s? >> Well, you know, every region and every country, you know, has their own idea about privacy, about security, about the use of, even the use of metadata surrounding, you know, customer data and things of this sort. So, you know, it's getting to be increasingly complicated, because GDPR, for example, imposes different standards from the kind of privacy standards that we have here in the US. Canada has a somewhat different set of data sovereignty issues and privacy issues. So it's getting to be an increasingly complex, you know, mosaic of rules and regulations around the world. And this makes it even more difficult for enterprises to run their own, you know, infrastructure, whereas companies like Wasabi, we have physical data centers in all kinds of different markets around the world. And we've already dealt with the business of how to meet the requirements of GDPR, and how to meet the requirements of some of the countries in Asia, and so forth. You know, rather than an enterprise doing that just for themselves, if you're running your applications or keeping your data in the cloud, you know, now a company like Wasabi with, you know, 34,000 customers, we can go to all the trouble of meeting these local requirements on behalf of our entire customer base.
And that's a lot more efficient, and a lot more cost effective, than if each individual company has to go deal with the local regulatory authorities. >> Yeah. It's compliance by design, not by chance. Okay, let's zoom out for the final question, David. Thinking about the discussion that we've had around ransomware and data protection and regulations, what does it mean for a business's operational strategy, and how do you think organizations will need to adapt in the coming years? >> Well, you know, I think there are a lot of forces driving companies to the cloud, and, you know, and I do believe that if you come back five or 10 years from now, you're going to see the majority of the world's data is going to be living in the cloud. And I think storage, data storage, is going to be a commodity, much like electricity or bandwidth. And it's going to be done right, it will comply with the local regulations, it'll be fast, it'll be local. And there will be no strategic advantage that I can think of for somebody to stand up and run their own storage, especially considering the cost differential. You know, most analysts think that the full all-in costs of running your own storage is in the $20 to $40 per terabyte per month range. Whereas, you know, if you migrate your data to a cloud like Wasabi, you're talking probably $6 a terabyte per month. And so I think people are learning how to deal with the idea of an architecture that involves storing your data in the cloud, as opposed to, you know, storing your data locally. >> Wow, that's like six X more expensive, maybe more than six X. >> Yeah. >> All right, thank you, David. Go ahead, please. >> In addition to which, you know, just finding the people to babysit this kind of equipment has become nearly impossible today. >> Well, and with a focus on digital business, you don't want to be wasting your time with that kind of heavy lifting. David, thanks so much for coming on theCUBE.
Great Boston entrepreneur, we've followed your career for a long time and looking forward to the future. >> Thank you. >> Okay, in a moment, Drew Schlussel will join me and we're going to dig more into product. You're watching theCUBE, the leader in enterprise and emerging tech coverage. Keep it right there. (upbeat music)
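The immutable buckets David describes correspond to object lock in the S3 API, which Wasabi exposes in S3-compatible form. Below is a minimal sketch of how a backup writer might request a compliance-mode retention window. The bucket name and key are placeholders, and the endpoint and boto3 usage in the comments are assumptions to verify against Wasabi's documentation:

```python
# Sketch: building an S3-compatible put_object request that asks for
# compliance-mode object lock, so the object cannot be altered or
# deleted (even by the provider's engineers) until the retention date
# passes. Bucket/key are placeholders; check object-lock support and
# the endpoint against your provider's docs.
from datetime import datetime, timedelta, timezone

def immutable_put_kwargs(bucket, key, body, retain_days):
    """Request kwargs for an immutable (compliance-mode) object write."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",            # retention can't be shortened
        "ObjectLockRetainUntilDate": retain_until,  # the "timer" on the object
    }

kwargs = immutable_put_kwargs("backups", "veeam/full-2022-06.vbk", b"...", 180)

# With boto3 this would be passed straight through, e.g.:
#   s3 = boto3.client("s3", endpoint_url="https://s3.wasabisys.com", ...)
#   s3.put_object(**kwargs)
print(kwargs["ObjectLockMode"])
```

`ObjectLockMode` and `ObjectLockRetainUntilDate` are standard S3 `PutObject` parameters; the six-month window here mirrors the "say a year or six months" timer David mentions.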
Kimberly Leyenaar, Broadcom
(upbeat music) >> Hello everyone, and welcome to this CUBE conversation where we're going to go deep into system performance. We're here with an expert. Kim Leyenaar is the Principal Performance Architect at Broadcom. Kim, great to see you. Thanks so much for coming on. >> Thanks so much too. >> So you have a deep background in performance, performance assessment, benchmarking, modeling. Tell us a little bit about your background, your role. >> Thanks. So I've been a storage performance engineer and architect for about 22 years. And I've been with Broadcom for, I think next month is going to be my 14-year mark. So what I do there is, initially I built and I managed their international performance team, but about six years ago I moved back into architecture, and what my roles right now are is I generate performance projections for all of our next generation products. And then I also work on marketing material, and I interface with a lot of the customers, debugging customer issues, and looking at how our customers are actually using our storage. >> Great. Now we have a graphic that we want to share. It talks to how storage has evolved over the past decade. So my question is, what changes have you seen in storage, and how has that impacted the way you approach benchmarking? In this graphic we've got sort of the big four items that impact performance, memory, processor, IO pathways, and the storage media itself, but walk us through this data if you would. >> Sure. So what I put together is a little bit of what we've seen over the past 15 to 20 years. So I've been doing this for about 22 years, and kind of going back and focusing a little bit on the storage, we looked back at hard disks. They ruled for nearly 50 years. And our first hard drive that came out back in the 1950s was only capable of five megabytes in capacity, and one and a half I/Os per second. It had almost a full second in terms of seek time.
So we've come a long way since then. But when I first came on, we were looking at Ultra320 SCSI. And one of the biggest memories that I have of that was my office is located close to our tech support, and I could hear the first question was always, what's your termination like? And so we had some challenges with SCSI, and then we moved on into the SAS and SATA protocols. And we continued to move on. But right now, back in the early 2000s when I came on board, the best drives really could do maybe 400 I/Os per second, maybe 200, 250 megabytes per second, with millisecond response times. And so when I was benchmarking way back when, it was always like, well, IOPS are IOPS. We were always faster than what the drives could do. And that was just how it was. The drives were always the bottleneck in the system. And so things started changing though by the early 2000s, mid 2000s. We started seeing different technologies come out. We started seeing that virtualization and multi-tenant infrastructures becoming really popular. And then we had cloud computing that was well on the horizon. And so at this point, we're like, well, wait a minute, we really can't make processors that much faster. And so everybody got excited when (indistinct) came out, but they had two cores per processor and four cores per processor. And so we saw a little time period where actually the processing capability kind of pulled ahead of everybody else. And memory was falling behind. We had good old DDR2-667. It was new at the time, but we only had maybe one or two memory channels per processor. And then in 2007 we saw disk capacity hit one terabyte. And we started seeing a little bit of an imbalance, because we were seeing these drives are getting massive, but their performance per drive was not really kind of keeping up. So now we see a revolution around 2010. And my co-worker and I at the time, we had these little USB disks, if you recall, we would put them in. They were so fast.
We were joking at the time. "Hey, you know what, wonder if we could make a RAID array out of these little USB disks?" They were just so fast. The idea was actually kind of crazy, until we started seeing it actually happen. So in 2010 SSDs started revolutionizing storage. And the first SSDs that we really worked with were these Pliant LS-300s, and they were amazing, because they were so over-provisioned that they had almost the same read, write performance. But to go from a drive that could do maybe 400 I/Os per second to a drive like 40,000 plus I/Os per second really changed our thought process about how our storage controller could actually try and keep up with the rest of the system. So we started falling behind. That was a big challenge for us. And then in 2014, NVMe came around as well. So now we've got these drives, they're 30 terabytes. They can do one and a half million I/Os per second, and over 6,000 megabytes per second. But they were expensive. So people started relegating SSDs more towards tiered storage or cache. And as the prices of these drives kind of came down, they became a lot more mainstream. And then the memory channels started picking up, and they started doubling every few years. And we're looking now at DDR5-4800. And now we're looking at cores that used to go from two to four cores per processor, up to 48 with some of the latest processors that are out there. So our ability to consume the computing and the storage resources, it's astounding, you know, it's like that whole saying, 'build it and they will come.' Because I'm always amazed, I'm like, how are we going to possibly utilize all this memory bandwidth? How are we going to utilize all these cores? But we do. And the trick to this is having just a balanced infrastructure. It's really critical. Because if you have a performance mismatch between your server and your storage, you really lose a lot of productivity, and it does impact your revenue. >> So that's such a key point.
Pardon me, bring that slide up again with the four points. And that last point that you made, Kim, about balance. And so here you have these electronic speeds with memory and IO, and then you've got the spinning disk, this mechanical disk. You mentioned that SSD kind of changed the game, but it used to be, when I looked at benchmarks, it was always the destage bandwidth of the cache out to the spinning disk that was the bottleneck. And you go back to the days of the Symmetrix, right? The huge backend disk bandwidth was how they dealt with that. And then you had the oxymoron of the day, the "high-performance disk," the high spin-speed disks, compared to memory. And so the next chart that we have shows some really amazing performance increases over the years. And so you see these bars on the left-hand side, it looks at historical performance for 4k random IOPS. And on the right-hand side, it's the storage controller performance for sequential bandwidth from 2008 to 2022. That's 2022, that yellow line. It's astounding, the increases. I wonder if you could tell us what we're looking at here, when did SSD come in and how did that affect your thinking? (laughs) >> So I remember back in 2007, we were kind of on the precipice of SSDs. We saw it, the writing was on the wall. We had our first three-gig SAS- and SATA-capable HBAs that had come out. And it was a shock, because we were like, wow, we're going to really quickly become the bottleneck once this becomes more mainstream. And you're so right, though, about people building these massive hard-drive-based back ends in order to handle that tiered architecture that we were seeing back in the early 2010s, when the pricing was just so sky high. And I remember looking at our SAS controllers, our very first one, and that was when I first came in, in 2007. We had just launched our first SAS controller. We were so proud of ourselves.
And I started going, how many IOPS can this thing even handle? We couldn't even attach enough drives to figure it out. So what we would do is we'd do these little tricks where we would do a 512-byte read, and we would do it on a 4k boundary, so that it was actually reading sequentially from the disk, but we were handling these discrete IOPS. So we were like, oh, we can do around 35,000. Well, that's just not going to cut it anymore. Bandwidth-wise we were doing great. Really our limitation and our bottleneck on bandwidth was always either the host or the backend. So there were basically three bottlenecks for our storage controllers. The first one is the bottleneck from the host to the controller. So that is typically a PCIe connection. And then there's another bottleneck on the controller to the disk, and that's really the number of ports that we have. And then the third one is the disks themselves. So in typical storage, that's what we look at. And we say, well, how do we improve this? So some of these are just kind of evolutionary, such as PCIe generations, and we're going to talk a little bit about that, but some of them are really revolutionary, and those are some of the things that we've been doing over the last five or six years to try and make sure that we are no longer the bottleneck, and we can enable these really, really fast drives. >> So can I ask a question? I'm sorry to interrupt, but on these blue bars here. So these are all spinning disks, I presume, and in the out years they're not. Like, when did flash come in to these blue bars? You said 2007 you started looking at it, but on these benchmarks, is it all spinning disk? Is it all flash? How should we interpret that? >> No, no. Initially they were actually all hard drives. And the way that we would identify the max IOPS would be by doing very small sequential reads to these hard drives. We just didn't have SSDs at that point. And then somewhere around 2010 is where we..
it was very early in that chart, we were able to start incorporating SSD technology into our benchmarking. And so what you're looking at here is really the max that our controller is capable of. So we would throw as many drives as we could and do what we needed to do in order to just make sure our controller was the bottleneck, and see what we could expose. >> So the drive, then, when SSD came in, was no longer the bottleneck. So you guys had to sort of invent and rethink your innovation and your technology, because, I mean, these are astounding increases in performance. I mean, on the left-hand side, by my math, you've got a 170x increase for the 4k random IOPS, and you've got a 20x increase for the sequential bandwidth. How were you able to achieve that level of performance over time? >> Well, in terms of the sequential bandwidth, really those come naturally by increases in the PCIe or the SAS generation. So we just make sure we stay out of the way, and we enable that bandwidth. But the IOPS, that's where it got really, really tricky. So we had to start thinking about different things. So, first of all, we started optimizing all of our pathways, all of our IO management. We increased the processing capabilities on our IO controllers. We added more on-chip memory. We started putting in IO accelerators, these hardware accelerators. We put in SAS port enhancements. We even went and improved our driver to make sure that our driver was as thin as possible, so we can make sure that we can enable all the IOPS on systems. But a big thing that happened a couple of generations ago was we started introducing something called tri-mode capable controllers, which means that you could attach NVMe, you could attach SAS, or you could attach SATA. So you could have this really amazing deployment of storage infrastructure based around your customized needs and your cost requirements by using one controller. >> Yeah.
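As an editorial aside, the measurement trick Kim describes, tiny 512-byte reads stepped along 4 KiB boundaries so the disk streams near-sequentially while the controller still services discrete I/Os, comes down to simple arithmetic. The sketch below is hypothetical, not Broadcom's actual benchmark harness:

```python
# Hypothetical sketch of the controller-IOPS benchmarking trick described
# above: issue 512-byte reads, but step the offset by 4 KiB so the drive
# reads near-sequentially while the controller handles discrete I/Os.
def benchmark_offsets(num_ios, io_size=512, stride=4096):
    """Yield (offset, length) pairs: 512-byte reads on 4 KiB boundaries."""
    return [(i * stride, io_size) for i in range(num_ios)]

def iops(num_ios, elapsed_seconds):
    """IOPS is just completed I/Os divided by elapsed wall-clock time."""
    return num_ios / elapsed_seconds

print(benchmark_offsets(4))
# [(0, 512), (4096, 512), (8192, 512), (12288, 512)]

# e.g. 35,000 completions in one second gives the ~35,000 IOPS ceiling
# Kim mentions for that first SAS controller.
print(iops(35_000, 1.0))  # 35000.0
```

The point of the stride is that the drive's mechanical seek time drops out of the measurement, so the number you read off is the controller's ceiling, not the disk's.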
So anybody who's ever been to a trade show where they were displaying a glass case with a Winchester disk drive, for example, you see it spinning and its actuator is moving, wow, that's so fast. Well, no. That's like a tortoise. It's like a snail compared to the system's speed. So in a way, life was easy back in those days, because when you did a write to a disk, you had plenty of time to do stuff, right? And now it's changed. And so I want to talk about Gen3 versus Gen4, and how all this relates to what's new in Gen4 and the impacts of PCIe. You have a chart here that you've shared with us that talks to that, and I wonder if you could elaborate on that, Kim. >> Sure. But first, you said something that kind of hit my funny bone there. And I remember I made a visit once, about 15 or 20 years ago, to IBM. And this gentleman actually had one of those old ones in his office, and he referred to them as disk files. And he never, until the day he retired, he never stopped calling them disk files. And it's kind of funny to be a part of that history. >> Yeah. DASD. They used to call it. (both laughing) >> DASD, yes. I used to get all kinds of, you know, you don't know what it was like back then, but yeah. But nowadays we've got it quite easily enabled, because back then we had DASD and all that, and then ATA, and then SCSI. Well, now we've got PCIe. And what's fabulous about PCIe is that the generations are already planned out. It's incredible. You know, we're looking right now at Gen3 moving to Gen4, and that's a lot of what we're going to be talking about, and that's what we're trying to test out. What is Gen4 PCIe going to buy us? And it really is, it's fantastic. PCIe came around about 18 years ago, and at Broadcom we do participate in and contribute to the PCI-SIG, which develops the standards for PCIe, and both our host interface and our NVMe disks utilize the standards.
So this is really, really a big deal, really critical for us. But if you take a look here, you can see that in terms of the capabilities, it really is buying us a lot. Most of our NVMe drives right now tend to be x4, and what that means is four lanes of PCIe. A lot of people will connect them at x1 or x2, kind of depending on what their storage infrastructure will allow, but the majority of them are x4. So as you can see, right now we've gone from eight gigatransfers per second to 16 gigatransfers per second. What that means is, for a x4 drive, we're going from one drive being able to do 4,000 to almost 8,000 megabytes per second. And in terms of those 4k IOPS that really elude us, they were really, really tough sometimes to squeeze out of these drives, but now we've gone from 1 million all the way to 2 million. It's just, it's insane, you know, just the increase in performance. And there are a lot of other standards that are going to be sitting on top of PCIe, so it's not going away anytime soon. We've got open standards like CXL and things like that, but we also have graphics cards, you've got all of your host connections, they're also sitting on PCIe. So it's fantastic. It's backwards compatible, and it really is going to be our future. >> So this is all well and good. And I really believe that a lot of times in our industry, the challenges in the plumbing are underappreciated. But let's make it real for the audience, because we have all these new workloads coming out, AI, heavily data oriented. So I want to get your thoughts on what types of workloads are going to benefit from Gen4 performance increases. In other words, what does it mean for application performance? You shared a chart that lists some of the key workloads, and I wonder if we could go through those. >> Yeah, yeah.
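As a quick sanity check on those Gen3-versus-Gen4 numbers, here's a back-of-the-envelope calculation. The 128b/130b encoding factor is the standard line coding for PCIe Gen3 and Gen4, and the results land right on the roughly 4,000 and almost 8,000 megabytes per second cited above for a x4 drive:

```python
# Back-of-the-envelope check of the Gen3 vs Gen4 figures in the discussion.
# PCIe Gen3 runs at 8 GT/s and Gen4 at 16 GT/s, both with 128b/130b encoding.
def lane_mb_per_s(gigatransfers, encoding=128 / 130):
    """Approximate usable megabytes per second for one PCIe lane."""
    bits_per_s = gigatransfers * 1e9 * encoding
    return bits_per_s / 8 / 1e6  # bits -> bytes -> megabytes

gen3_x4 = 4 * lane_mb_per_s(8)   # ~3938 MB/s -- the "4,000" figure
gen4_x4 = 4 * lane_mb_per_s(16)  # ~7877 MB/s -- the "almost 8,000" figure
print(round(gen3_x4), round(gen4_x4))  # 3938 7877
```

Protocol and packet overhead shave a little more off in practice, which is why real drives quote "almost" 8,000 rather than the raw doubled number.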
I could have a large list of different workloads that are able to consume large amounts of data, whether it's in small or large bites of data. But as you know, and I said earlier, our ability to consume these compute and storage resources is amazing. So you build it and we'll use it. And the world's data is expected to grow 61%, to 175 zettabytes by the year 2025, according to IDC. So that's just a lot of data to manage. It's a lot of data to have, and it's something that's sitting around, but to be useful, you have to actually be able to access it. And that's kind of where we come in. So who is accessing it? What kind of applications? I spend a lot of time trying to understand that. And recently I attended a virtual conference, SDC, and what I like to do when I attend these conferences is try to figure out what the buzzwords are. What's everybody talking about? Because every year it's a little bit different, but this year it was edge, edge everything. And so I put edge on there first. You can ask anybody what edge computing is, and it's going to mean a lot of different things, but basically it's all the computing outside of the cloud that's happening, typically, at the edge of the network. So it tends to encompass a lot of real-time processing on this incoming data. The data is usually coming from either users or different sensors. It's that last mile. It's where we put a lot of our content caching. And I uncovered some interesting stuff when I was attending this virtual conference: they say only about 25% of all the usable data actually even reaches the data center. The rest is ephemeral, and it's processed locally and in real time. So the goal of edge computing is to try and reduce the bandwidth costs for these kinds of IoT devices that go over a long distance.
But the reality is, the growth of real-time applications that require this kind of local processing is going to drive this technology forward over the coming years. So Dave, your toaster and your dishwasher, they're IoT edge devices probably in the next year, if they're not already. So edge is a really big one, and it consumes a lot of the data. >> The buzzword du jour now is the metaverse, it's almost like the movie The Matrix is going to come in real time. But the fact is it's all this data, a lot of video. Some of the ones that I would call out here, you mentioned facial recognition, real-time analytics. A lot of the edge is going to be real-time inferencing, applying AI. And these are just massive, massive data sets that you, and of course your customers, are enabling. >> When we first came out with our very first Gen3 product, our marketing team actually asked me, "Hey, how can we show users how they can consume this?" So I actually set up a Hadoop environment. I decided I'm going to learn how to do this. I set up this massive environment with Hadoop, and at the time they called big data the three Vs, I don't know if you remember these big three Vs: the volume, velocity, and variety. Well, Dave, did you know there are now 10 Vs? So besides those three, we've got veracity, we've got value, we've got variability, validity, vulnerability, volatility, visualization. So I'm thinking we just need to add another V to that. >> Yeah. (both laughing) Well, that's interesting. You mentioned that, and that sort of came out of the big data world, the Hadoop world, which was very centralized. You're seeing the cloud is expanding, the world's getting, you know, data is by its very nature decentralized. And so you've got to have the ability to do analysis in place. A lot of the edge analytics are going to be done in real time. Yes, sure.
Some of it's going to go back to the cloud for detailed modeling, but the next decade, Kim, ain't going to be like the last, I often say. (laughing) I'll give you the last word. I mean, how do you see this sort of evolving? Who's going to be adopting this stuff? Give us a sort of a timeframe for this kind of rollout in your world. >> In terms of the timeframe, I mean, really nobody knows, but we feel like Gen5 is coming out next year. It may not be a full rollout, but we're going to start seeing Gen5 devices and Gen5 infrastructure being built out over the next year, and then followed very, very quickly by Gen6. And what we're seeing, though, is we're starting to see these graphics processors, these GPUs that are coming out as well, that are going to be connecting using PCIe interfaces. So being able to access lots and lots and lots of data locally is going to be a really, really big deal, because worldwide, all of our companies are using business analytics. Data is money. And the companies that can improve their operational efficiency, bolster their sales, and increase their customer satisfaction, those are the companies that are going to win. And those are the companies that are going to be able to effectively store, retrieve, and analyze all the data that they're collecting over the years. And that requires an abundance of data. >> Data is money, and it's interesting. It kind of all goes back to when Steve Jobs decided to put flash inside of an iPhone and the industry exploded. Consumer economics kicked in, 5G, now edge AI, a lot of the things you talked about, GPUs, the neural processing unit. It's all going to be coming together in this decade. Very exciting. Kim, thanks so much for sharing this data and your perspectives. I'd love to have you back when you've got some new perspectives, new benchmark data. Let's do that. Okay. >> I look forward to it. Thanks so much. >> You're very welcome.
And thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)
Show Introduction | Commvault Connections 2021
(gentle upbeat music) >> Hello, everyone, and welcome to theCUBE's coverage of Commvault Connections 21. My name is Dave Vellante and I'll be hosting the program today. I want to start with a bit of an assessment of the keynotes that we heard this morning, but before I get into that, I want to set the framework for thinking about Commvault as a company. This company has been around for a long time, since the late 1980s, but really came into prominence in the client-server era, and it has ridden numerous waves, including network backup and recovery, data management, and now cloud data services. It's a company with more than $700 million in revenue and a market value of nearly $3 billion. Since coming on as CEO, Sanjay Mirchandani has embarked on moving the company towards a subscription model, focusing on optionality for on-premises, hybrid, and cloud workloads. Its launch of Metallic and data management as a service are two components that underpin the strategy. At his keynote earlier today, Mirchandani drew on his experience as both a former CIO and a current CEO to connect with his audience. His major themes hit on data, the value of data, and the imperative to get control of your data. Of course, data protection has become a fundamental component of digital transformations. For years, data protection was an afterthought or a bolt-on, but today, organizations are forced to think about their digital stacks in their entirety, which means they have to build resilience into their platforms from the start. Mirchandani said that if we embrace, manage, and properly protect data, it will become the defining disruptive difference for an organization. But he talked about the gap between what the business wants to do and what the technology teams are actually equipped to do, and when it comes to data, I couldn't agree more. He called this the business integrity gap, and I'll come back to that. He also put out some fun facts and I'll share those here.
According to IDC, 64 zettabytes of data was created and replicated in 2020. That's the equivalent of 2 trillion 4K movies. That's a lot of data. Gartner says by 2025, 85% of business will be delivered through SaaS applications. Sophos, the security firm, estimates that the average cost of a ransomware attack is approaching nearly $2 million. The security company Proofpoint did a survey, and 64% of surveyed CSOs felt that they were at risk of a material cyber attack in the next 12 months. I'm surprised that number was so, so low. I think the other 36% are busy responding to a cyber attack. Coming back to Sanjay's business integrity gap, here's how I see it. Data by its very nature is distributed, decentralized, and it's becoming more so with hybrid connections, multicloud installations, and edge use cases. This is only going to accelerate in the future. As such, organizations need to rethink their approaches to getting value from data. Instead of building monolithic data architectures and hyper-specialized technical data teams, organizations are beginning to empower lines of business and domain owners to take end-to-end responsibility for data ownership. The underlying technology platform is becoming an operational detail that serves the data owners, where data protection and governance is computationally automated in a federated model. So the policy is centralized, but the implementation of that policy is done by software. This means that data governance, security, privacy, access, and policy are all adjudicated wherever possible by software and are automated, irrespective of physical location. Data silos are not just a technology problem. They're a symptom of flawed organizational constructs, steeped in the notion that highly technical data specialists and centralized teams should be the stewards of the data and serve multiple lines of business simultaneously, without proper business context. Now, this is changing.
Data is being used to create a new class of products and services that can be directly or indirectly monetized, or drive other value, for instance, like saving lives. It's about the organizational mission. Now in this sense, data is undergoing a renaissance, where the responsibility for end-to-end data ownership is being distributed and decentralized, where highly specialized technical teams are becoming enablers for generalists that reside within the lines of business, i.e., those who are building data products and services. This is not shadow IT. It's decentralized management with federated governance. Now, by rethinking the data management paradigm, the responsibility for good data protection policy transcends technical teams and becomes a priority for the entire organization. To that end, Commvault laid out its strategy to deliver a comprehensive set of intelligent data services, spanning data protection, security, compliance, governance, data transformation, and data insights. In my view, a huge part of Commvault's strategy lies in automation. That's a key ingredient of cloud and any cloud strategy. In other words, supporting cloud-native and cloud-like data management capabilities that can be programmatically deployed, secured, managed, and governed, and applied across an organization's sprawling data empire. The world of enterprise technology is complex, and the winning technology companies are going to be those that can abstract the underlying complexity and assist organizations to implement sound data management practices, irrespective of data location, in the most efficient way. So as you hear the stories and examples here at Commvault Connections, you can decide for yourself if the company is on the right track and if what you hear aligns with your digital business goals. So let's now get a practitioner's perspective and hear how the CISO is thinking about data protection. Up next is Dave Martin, Chief Information Security Officer at ADP.
You're watching theCUBE. (gentle upbeat music)
Nick Halsey, Okera | CUBE Conversation
(soft electronic music) >> Welcome to this special CUBE Conversation. I'm John Furrier here, in theCUBE's Palo Alto studio. We're here, remotely, with Nick Halsey, who's the CEO of Okera, a hot startup doing amazing work in cloud, cloud data, cloud security, and policy governance, as the intersection of cloud and data comes into real stable operations. That's the number one problem people are figuring out right now: how to make sure that data's addressable and also secure and can be highly governed. So Nick, great to see you. Thanks for coming on theCUBE. >> It's great to be here, John, thank you. >> So you guys have a really hot company going on here, and you guys are in an intersection, an interesting spot, as the market kind of connects together, as cloud is going full, kind of, whatever, 3.0, 4.0. You've got the edge of the network developing with 5G, you've got space, you've got more connection points, you have more data flowing around. And the enterprises and the customers are trying to figure out, like, okay, how do I architect this thing. And oh, by the way, I've got, like, all these compliance issues, too. So this is kind of what you do. Take a minute to explain what your company's doing. >> Yeah, I'm happy to do that, John. So we've introduced a new category of software that we call universal data authorization, or UDA, which is really starting to gain some momentum in the market. And there are really two critical reasons why that's happening. People are really struggling with, how do I enable my digital transformation, my cloud migration, while at the same time making sure that my data is secure and that I'm respecting the privacy of my customers, and complying with all of these emerging regulations around data privacy, like GDPR, CCPA, and that alphabet soup of regulations that we're all starting to become aware of.
I want to ask about the market opportunity because, you know, one of the things we see in our cloud coverage is normal conversations like, "Hey, modern applications are developing." We're starting to see cloud-native. You're starting to see these new use cases, so you're starting to see new expectations from users and companies, which creates new experiences. And this is throwing off all kinds of new data approaches. And a lot of people are scratching their heads, and they feel like, do they slow it down, do they speed it up? Do I get a hold of the compliance side first? Do I innovate? So it's like a real kind of conflict between the two. >> Yeah, there's a real tension in most organizations. They're trying to transform, be agile, and use data to drive that transformation. But there's this explosion of the volume, velocity, and variety of data, we've all heard about the three Vs, or we'll say there are five Vs. You know, it's really complicated. So you've got the people on the business side of the house and the Chief Data Officer who want to enable many more uses of all of these great data assets. But of course, you've got your security teams and your regulatory and compliance teams that want to make sure they're doing that in the right way. And so you've got to build a zero-trust infrastructure that allows you to be agile and be secure at the same time. And that's why you need universal data authorization, because the old manual ways of trying to securely deliver data to people just don't scale in today's demanding environments. >> Well, I think that's a really awesome approach, having horizontally scalable data. Like, infrastructure would be a great benefit. Take me through what this means. I'd like to get you to define, if you don't mind, what is universal data authorization. What is the definition? What does that mean? >> Exactly, and people are like, "I don't understand security. I do data over here and privacy, well, I do that over here."
But the reality is you really need to have the right security platform in order to express your privacy policies, right? And so in the old days, we used to just build it into the database, or we'd build it into the analytic tools. But now, we have too much data in too many platforms in too many locations, being accessed by too many, you know, BI applications and AI/ML data apps, and so you need to centralize the policy definition and policy enforcement so that it can be applied everywhere in the organization. And the example I like to give, John, is we are just like identity access management. Why do I need Okta or SailPoint, or one of those tools? Can't I just log in individually to, you know, Salesforce or to GitHub? Sure you can, but once you have 30 or 40 systems and thousands of users, it's impossible to manage your employee onboarding and offboarding policy in a safe and secure way. So you abstract it and then you centralize it, and then you can manage and scale it. And that's the same thing you do with OKERA. We do all of the security policy enforcement for all of your data platforms via all of your analytic tools, anything from Tableau to Databricks to Snowflake, you name it, we support those environments. And then as we're applying the security, which says, "Oh, John is allowed access to this data in this format at this time," we can also make sure that the privacy is governed, so that we only show the last four digits of your social security number, or we obfuscate your home address. And we certainly don't show them your bank balance, right? So you need to enable the use of the data without violating the security and privacy rights that you need to enforce. But you can do both, which our customers are doing at incredible scale, and then you have sort of digital transformation nirvana resulting from that. >> Yeah, I mean, I love what you're saying with the scale piece, that's huge.
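As a rough illustration of the masking Nick describes, showing only the last four digits of a social security number and obfuscating an address or balance, here is a minimal hypothetical sketch; the function names and rules are illustrative assumptions, not OKERA's actual API:

```python
# A minimal, hypothetical illustration of policy-driven masking as
# described in the interview. These names and rules are illustrative
# only -- they are not OKERA's real interface.
def mask_ssn(ssn):
    """Show only the last four digits of a social security number."""
    digits = ssn.replace("-", "")
    return "***-**-" + digits[-4:]

def redact(_value):
    """Obfuscate a field entirely, e.g. a home address or bank balance."""
    return "[REDACTED]"

record = {"name": "John", "ssn": "123-45-6789", "address": "1 Main St"}
masked = {"name": record["name"],
          "ssn": mask_ssn(record["ssn"]),
          "address": redact(record["address"])}
print(masked["ssn"])      # ***-**-6789
print(masked["address"])  # [REDACTED]
```

The key design point is that the analyst still gets a row back, so dashboards and models keep working, but the sensitive fields have been transformed at access time according to centralized policy.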
At AWS's re:Inforce conference, which they had to run virtually because the in-person event was canceled due to the Delta COVID surge, Stephen Schmidt gave a great keynote, I called it a master class, but he mainly focused on cyber security threats. But you're kind of bringing that same architectural thinking to the data privacy, data security piece. 'Cause it's not so much that you're vulnerable to hacking, it's still a zero-trust infrastructure for access and management, but-- >> Well, you need security for many reasons. You do want to be able to protect against external hacks. I mean, every week there's another T-Mobile, you know, you name it, so that's... But 30% of data breaches are by internal trusted users who have rights. So what you need to make sure is that you're managing those rights and that you're not creating any long tails of data access privilege that can be abused, right? And you also need, well, one of the great benefits of using a platform like OKERA is we have a centralized log of what everybody is doing and when. So I could see that you, John, tried to get into the salary database 37 times in the last hour, and maybe we don't want to let you do that. So we have really strong stakeholder constituencies on the security and regulatory side of the house because, you know, they can integrate us with Splunk and have a single pane of glass on weird things happening in the network, people trying to hit these secure databases. I can really do event correlation and analysis, I can see who's touching what PII, when, and whether it's authorized. So people start out by using us to do the enforcement, but then they get great value after they've been using us for a while, using that usage data to be able to better manage their environments.
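The centralized-audit-log example here, spotting a user who hit the salary database 37 times in an hour, amounts to simple counting over access events. The sketch below is a hypothetical illustration of that idea, not OKERA's implementation:

```python
# Hypothetical sketch of audit-log anomaly spotting: count access
# attempts per (user, resource) pair within a window and flag pairs
# that exceed a threshold. Illustrative only, not a real product API.
from collections import Counter

def flag_suspicious(access_log, threshold=30):
    """Return (user, resource) pairs with more attempts than threshold."""
    counts = Counter((e["user"], e["resource"]) for e in access_log)
    return {pair: n for pair, n in counts.items() if n > threshold}

# 37 attempts on the salary database in the window, like the example above.
log = ([{"user": "john", "resource": "salary_db"}] * 37
       + [{"user": "ana", "resource": "orders"}] * 5)
print(flag_suspicious(log))  # {('john', 'salary_db'): 37}
```

In practice this kind of correlation would run over timestamped events in a SIEM such as Splunk, but the core logic is exactly this aggregation.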
>> It's interesting, you know, you bring up the compliance piece as a real added value, and I wasn't trying to overlook it, but it brings up a good point, which is you have multiple benefits when you have a platform like this. So take me through who's using the product. You must have a lot of customers kicking the tires and adopting it, because architecturally, it makes a lot of sense. Take me through a deployment of what it's like in the customer environment. How are they using it? What are some of the first-mover types using this approach? And what are some of the benefits they might be realizing? >> Yeah, as you would imagine, our early adopters have been primarily very large organizations that have massive amounts of data. And they tend also to be in more regulated industries, like financial services, biomedical research and pharmaceuticals, retail with tons of, you know, consumer information, those are very important. So let me give you an example. We work with one of the very largest global sports retailers in the world, I can't use their name publicly, and we're managing all of their privacy rights management, GDPR, CCPA, worldwide. It's a massive undertaking. Their warehouse is over 65 petabytes in AWS. They have many thousands of users and applications. On a typical, average day, OKERA is processing and governing six trillion rows of data. On Black Friday, it peaked at over 10 trillion rows of data a day, so this is scale that most people really will never get to. But one of the benefits of our architecture is that we are designed to be elastically scalable; we actually have a capability we call N scale, because we can scale to the Nth degree. We really can go as far as you need to in terms of that. And it lets them do extraordinary things in terms of merchandising and profitability and market basket analysis, because their teams can work with that data.
And even though it's governed and redacted and obfuscated to maintain the individuals' privacy rights, we still let them see the totality of the data and do the kind of analytics that drive the business. >> So large scale, big, big customer base that wants scale, and, I'll say, data's huge. What are some of the largest data lakes that you guys have been working with? 'Cause sometimes you hear people saying our data lake's got zettabytes and petabytes of content. Give us a taste of the order of magnitude of some of the data lakes and environments that your customers have been able to accomplish. >> I want to emphasize that this is really important no matter what size, because some of our customers are smaller tech-savvy businesses that aren't necessarily processing huge volumes of data, but it's the way that they are using the data that drives the need for us. But having said that, we're working with one major financial regulator who has a data warehouse with over 200 petabytes of data that we are responsible for providing the governance for. And one thing about that kind of scale that's really important, you know, when you want to have everybody in your organization using data at that scale, which people think of as democratizing your data: you can't just democratize the data, you also have to democratize the governance of the data, right? You can't centralize policy management in IT, because then everybody who wants access to the data still has to go back to IT. So you have to make it really easy to write policy, and you have to make it very easy to delegate policy management down to the departments. So I need to be able to say this person in HR is going to manage these 50 datasets for those 200 people. And I'm going to delegate the responsibility to them, but I'm going to have centralized reporting and auditing so I can trust but verify, right? I can see everything they're doing and I can see how they are applying policy.
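The "delegate policy management, but centralize reporting and auditing" pattern described here can be sketched as follows. The class and field names are hypothetical, invented purely to illustrate the trust-but-verify idea, not drawn from OKERA's data model.

```python
AUDIT_LOG = []  # centralized: every delegated decision lands here

class Steward:
    """A department-level policy manager scoped to a bounded set of datasets."""

    def __init__(self, name, datasets):
        self.name = name
        self.datasets = set(datasets)
        self.grants = {}  # dataset -> set of users allowed to read it

    def grant(self, user, dataset):
        # Delegation boundary: a steward cannot touch datasets outside its scope.
        if dataset not in self.datasets:
            raise PermissionError(f"{self.name} does not manage {dataset!r}")
        self.grants.setdefault(dataset, set()).add(user)
        # Central reporting: the grant is auditable no matter who issued it.
        AUDIT_LOG.append((self.name, "grant", user, dataset))

hr = Steward("hr_steward", ["salaries", "benefits"])
hr.grant("alice", "salaries")               # within HR's delegated scope
try:
    hr.grant("alice", "trading_positions")  # outside the delegation: refused
except PermissionError as err:
    print(err)
print(AUDIT_LOG)  # only the in-scope grant was recorded centrally
```

The point of the design is that departments move fast inside their own scope, while the central log lets a security team verify every decision after the fact.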
And I also need to be able to set policy at the macro level, at the corporate level, that they inherit, so I might just say I don't care who you are, nobody gets to see anything but the last four digits of your social security number. And they can do further rules beyond that, but they can't change some of the master rules that you're creating. So you need to be able to do this at scale, but you need to be able to do it easily, with a graphical policy builder that lets you see policy in plain English. >> Okay, so you're saying scale, and then are the smaller use cases more refined? Or is it more sensitive data? Regulated data? Or more just levels of granularity? Is that the use case? >> You know, I think there's two things that are really moving the market right now. The move to remote work with COVID really changed everybody's ideas about how you do security, because you're no longer in a data center, you no longer have a firewall. The Maginot Line of security has gone away, and so in a zero-trust world, you know, you have to secure four endpoints: the data, the device, the user, and the application. And so this pretty radical rethinking of security is causing everybody to think about this, big, small, or indifferent. Like, Gartner just came out with a study that said by 2025, 75% of all user data in the world is going to be governed by privacy policy. So literally, everybody has to do this. And so we're seeing a lot more tech companies that manage data on behalf of other users, companies that use data as a commodity, they're transacting data, really, really understand the need for this, and when you're doing data exchange between companies, that is a really delicate process that has to be highly governed. >> Yeah, I love the security redo. We asked Pat Gelsinger many, many years ago, when he was the CEO of VMware, what he thought about security, and Dave Vellante, my co-host at theCUBE, said is it a do-over? He said absolutely it's a do-over. I think it was 2013.
He mused around that time frame. It's kind of a do-over and you guys are hitting it. This is a key thing. Now he's actually the CEO of Intel and he's still driving forward. Love Pat's vision on this early, but this brings up the question okay, if it's a do-over and these new paradigms are existing and you guys are building a category, okay, it's a new thing. So I have to ask you, I'm sure your customers would say, "Hey, I already got that in another platform." So how do you address that because when you're new you have to convince the customer that this is a new thing. Like, I don't-- >> So, so look, if somebody is still running on Teradata and they have all their security in place and they have a single source of the truth and that's working for them, that's great. We see a lot of our adoption happening as people go on their cloud transformation journey. Because I'm lifting and shifting a lot of data up into the cloud and I'm usually also starting to acquire data from other sources as I'm doing that, and I may be now streaming it in. So when I lift and shift the data, unfortunately, all of the security infrastructure you've built gets left behind. And so a lot of times, that's the forcing function that gets people to realize that they have to make a change here, as well. And we also find other characteristics like, people who are getting proactive in their data transformation initiatives, they'll often hire a CDO, they'll start to use modern data cataloging tools and identity access management tools. And when we see people adopting those things, we understand that they are on a journey that we can help them with. And so we partner very closely with the catalog vendors, with the identity access vendors, you know, with many other parts of the data lake infrastructure because we're just part of the stack, right? But we are the last mile because we're the part of the stack that lets the user connect. 
>> Well I think you guys are on a wave that's massive, and I think it's going to be bigger coming forward. Again, when you see categories being created, it's usually at the beginning of a bigger wave. And I've got to ask you, because one thing I've been really kind of harping on on theCUBE and pounding my fist on the table about is these siloed approaches. And you're seeing 'em everywhere, I mean, even in the consumer world. LinkedIn's a silo. Facebook's a silo. So you have this siloed mentality. Certainly in the enterprise they're no stranger to silos. So if you want to be horizontally scalable with data, you've got to have it free, you've got to break the silos. Are we going to get there? Is this the beginning? Are we breaking down the silos, Nick, or what's your reaction to that? >> I'll tell you something, John. I have spent 30 years in the data and analytics business, and I've been fortunate enough to help launch many great BI companies, like Tableau and Brio Software, and Jaspersoft and Alphablox, which we were talking about before the show. Every one of those companies would have been much more successful if they had OKERA, because everybody wanted to spread those tools across the organization for better, more agile business analytics, but they were always held back by the security problem. And this was before privacy rights were even a thing. So now with UDA, and I think hand-in-hand with identity access management, you truly have the ability to deliver analytic value at scale. And that's key, you need simplicity at scale, and that is what lets all parts of your organization be agile with data and use it to transform the business. I think we can do that now. Because if you run in the cloud, it's so easy, I can stand up things like Hadoop, you know, like Databricks, like Snowflake.
I could never do that in my on-prem data center, but I can literally press a button and have a very sophisticated data platform, press a button, have OKERA, have enforcement. Really, almost any organization can now take advantage of what only the biggest and most sophisticated organizations used to be able to do. >> I think Snowflake's an example for all companies that you could essentially build in the shadows of the big clouds and build your own franchise, if you nail the security and privacy and that value proposition of scale and good product. So I love this idea of security and privacy managed in a single platform. I'd love to get your final thought while I've got you here, on programmability, because I'm seeing a lot of regulators and people in the privacy world puttin' down all these rules. You got GDPR and the right to be forgotten and all these things... There's a trend toward programmability around extraction of data and managing data, where a simple query could be, okay, I want to know what's goin' on with my privacy. And we're a media company, so we record a lot of data too, and we've got to comply with all these, like, weird requests, like hey, can you, on June 10th, take out my data? And so that's programmatic, that's not a policy thing. It's not like a lawyer with some privacy policy. That's got to be operationalized. So what's your reaction to that, as this world starts to be programmable? >> Right, well that's key to our design. So we're an API-first approach. We are designed to be a part of a very sophisticated mesh of technology and data, so it's extremely simple to just call us to get the information that you need, or to express a policy on the fly that might be created because of the current state of things that are going on. And that's very, very important when you start to do real-time applications that require geo-fencing, when you're doing 5G edge computing.
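Operationalizing a request like "on June 10th, take out my data" can be sketched programmatically. Everything below (the request format, the in-memory "datasets") is invented for illustration; a real policy engine such as the one discussed here would expose this through its own API, which is not shown.

```python
import json

def build_erasure_request(subject_id, requested_on):
    """Serialize a right-to-be-forgotten request so systems can act on it."""
    return json.dumps({"action": "erase",
                       "subject_id": subject_id,
                       "requested_on": requested_on})

def apply_erasure(datasets, request_json):
    """Remove every row belonging to the subject, across all datasets,
    and return how many rows were erased (useful for compliance reporting)."""
    request = json.loads(request_json)
    removed = 0
    for rows in datasets.values():
        kept = [r for r in rows if r["subject_id"] != request["subject_id"]]
        removed += len(rows) - len(kept)
        rows[:] = kept  # mutate in place so all references see the erasure
    return removed

datasets = {
    "orders":  [{"subject_id": "u1", "item": "book"},
                {"subject_id": "u2", "item": "lamp"}],
    "mailing": [{"subject_id": "u1", "email": "u1@example.com"}],
}
req = build_erasure_request("u1", "2021-06-10")
print(apply_erasure(datasets, req))  # 2 rows erased across both datasets
```

The point is the shape of the workflow: a machine-readable request, applied uniformly across every dataset, with a countable result you can report back to the regulator, rather than a lawyer-driven manual process.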
It's a very dynamic environment, and the policies need to change to reflect the conditions on the ground, so to speak. And so to be callable, programmable, and embeddable, that is an absolutely critical approach to implementing UDA in the enterprise. >> Well, this is super exciting. I feel you guys are on, again, a bigger wave than it appears. I mean, a security and privacy operating system, that's what you guys are. >> It is. >> It is what it is. Nick, great to chat with you. >> Couldn't have said it better. >> I love the category creation, love the mojo, and I think you guys are on the right track. I love this vision of merging data security and policy together into one, to get some enablement and some value creation for your customers and partners. Thanks for coming on to theCUBE. I really appreciate it. >> Now, it's my pleasure, and I would just give one piece of advice to our listeners. You can use this everywhere in your organization, but don't start with that. Don't boil the ocean; pick one use case, like the right to be forgotten, and let us help you implement that quickly so you can see the ROI, and then we can go from there. >> Well, I think you're going to have a customer in theCUBE. We will be calling you. We need this. We've done a lot of digital events now with the pandemic, so there's lots of data that we didn't have to deal with before. But thanks for coming on and sharing, appreciate it. OKERA, hot startup. >> My pleasure, John, and thank you so much. >> So, OKERA conversation. I'm John Furrier here, in Palo Alto. Thanks for watching. (soft electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nick Halsey | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Jaspersoft | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Stephen Schmidt | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
June 10th | DATE | 0.99+ |
Nick | PERSON | 0.99+ |
Tableau | ORGANIZATION | 0.99+ |
OKERA | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
37 times | QUANTITY | 0.99+ |
Alphablox | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
30 years | QUANTITY | 0.99+ |
30 | QUANTITY | 0.99+ |
50 datasets | QUANTITY | 0.99+ |
30% | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
40 systems | QUANTITY | 0.99+ |
T-Mobile | ORGANIZATION | 0.99+ |
Pat | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
200 people | QUANTITY | 0.99+ |
over 200 petabytes | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
GDPR | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.98+ |
English | OTHER | 0.98+ |
Databricks | ORGANIZATION | 0.98+ |
Teradata | ORGANIZATION | 0.98+ |
single platform | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
Brio Software | ORGANIZATION | 0.98+ |
over 65 petabytes | QUANTITY | 0.98+ |
over 10 trillion rows of data a day | QUANTITY | 0.98+ |
Black Friday | EVENT | 0.98+ |
first approach | QUANTITY | 0.97+ |
thousands of users | QUANTITY | 0.97+ |
one piece | QUANTITY | 0.97+ |
75% | QUANTITY | 0.96+ |
Snowflake | ORGANIZATION | 0.96+ |
GitHub | ORGANIZATION | 0.96+ |
theCUBE | ORGANIZATION | 0.96+ |
Delta COVID surge | EVENT | 0.95+ |
Reinforce Virtual Conference | EVENT | 0.95+ |
single source | QUANTITY | 0.95+ |
first mover | QUANTITY | 0.94+ |
pandemic | EVENT | 0.93+ |
every single day | QUANTITY | 0.92+ |
six trillion rows of data | QUANTITY | 0.92+ |
Okta | ORGANIZATION | 0.91+ |
one thing | QUANTITY | 0.9+ |
single pane | QUANTITY | 0.9+ |
four endpoints | QUANTITY | 0.9+ |
CCPA | TITLE | 0.89+ |
UDA | ORGANIZATION | 0.89+ |
first | QUANTITY | 0.88+ |
two critical reasons | QUANTITY | 0.86+ |
zero | QUANTITY | 0.85+ |
SailPoint | ORGANIZATION | 0.85+ |
many years ago | DATE | 0.85+ |
Tableau | TITLE | 0.84+ |
IUDA | TITLE | 0.84+ |
petabytes | QUANTITY | 0.81+ |
thousands of users | QUANTITY | 0.81+ |
today | DATE | 0.8+ |
Sizzle Reel | VMWorld 2019
I'd say for me it's really the power of the better together. You know, to me, nobody's great apart; it takes really an ecosystem of players to work together for the customer benefit, and the one that we've demonstrated, VMware with NetApp, has been a powerful one for well over 17 years, in terms of the joint customers that have a ton of loyalty to both of us, and they want us just to work it out. So, you know, whether your allegiance is on one side of the Kubernetes community's battle or another, or you're on one side of anyone's storage choice or another, I think customers want NetApp and VMware to work this out in common solutions, and we've done that, and the second act of that will start tomorrow.

I mean, to me it starts from what the customer would like to do, right, and what we're seeing from customers is that it's increasingly a multi-cloud world that spans private cloud, public cloud, and edge. And you're smiling; now there's an opportunity, but it's a chance for customers, right? And so if you look at how VMware is trying to help them sort of square the circle, I think the first piece is this idea of consistent operations: we have these management tools that you can use to consistently operate those environments, whether they're based on a VMware-based infrastructure or whether they're based on a native cloud infrastructure. So if you look at our CloudHealth platform, for example, it's a great example where that service can help you get visibility into your cloud spend across different cloud platforms, also VMware-based platforms, and can help you reduce that spend over time. So that's what we refer to as consistent operations, which can span any cloud. You know, my team is responsible for more of the consistent infrastructure space, and that's really all about how do we deliver
consistent compute, network, and storage services that span on-prem, multiple public clouds, and the edge. So that's really where we're bringing that same VMware Cloud Foundation stack to all those different environments.

You know, for the networking folks, networking was always relegated to being the underlay, or the plumbing. Now what's becoming important is that the applications are making their intent aware to the network, and as the intent becomes aware, we networking people know what to do in the SD-WAN layer, which then shields all the intricacies of what needs to get done in the underlay. So to put it in very simple terms, the container is what really drives the need, and what we're doing is building the outcome to satisfy that need. Now containers are critical because, as Pat was saying, you know, all of the new digital applications are going to be built with containers in mind. The reason we call it client to cloud to container is because the containers can literally be anywhere: we're talking about them in the private cloud and in the public cloud, they could be right next to where the client is because of the edge cloud, and they could be in the telco network, which is the telco cloud. So between these four clouds, you literally have a network of these containers, and the underlying infrastructure that we are building provides that SD-WAN layer that will get the containers to talk to one another, as well as to the clients that are getting access to those applications.

Yeah, I mean, more than McAfee, I think, you know, the analog to cloud security is data center security, where you think of this sort of Amazon cloud living in an Amazon data center, and, you know, how can we protect the data and the egress access into those clouds, and, you know, the same technology sort of applies. But to your point, as you sort of just touched upon, that cloud is not living in isolation, right? First of all,
that Amazon cloud is connected to a whole bunch of, you know, applications that are still sitting in a data center, right? So they're potentially not moving the Oracle database today; they're moving some workloads to the cloud, right, and that's where most companies are. And hey, guess what, there are all these endpoints that are connecting, and they're connecting both the data center and the cloud; you're not going to proxy through the cloud to get to the data center, so there are gateways. So to me, cloud security can't be an isolated, you know, sort of technology that companies have to think about. Now, is there opportunity to leverage the cloud to manage security better, and get visibility into their security environments to do security analytics? Absolutely. So I think to me that's where it's going, because security, I think, has been proven to no longer be, you know, sort of one single thing; you have to do multiple things. Every time I go talk to CISOs, they tell me they've got all this technology, and I say, wait a minute, you have 20, did you cut down any? Yeah, we've cut down a few, but, you know, they're just nervous about cutting down too much, because what if that one piece of software gets pulled.

So look, I mean, I think we're really evolving our strategic aims. You know, historically we've looked at how do we really virtualize an entire data center, right, this concept of the software-defined data center, really automating all that and driving great speed and efficiency increases. And now, as we've been talking about, we're in this world where you kind of have SDDCs everywhere, right, on-prem and in the cloud, different public clouds, and so how do you really manage across all those? And these are the things we've been talking about, so the cloud marketplace fits into that whole concept, in the sense that now we can give people one place to go to get easy access to both software and solutions from our partners, as well as open source solutions, and these are things that
come from the Bitnami acquisition that we recently did. So the idea here is that we can make it super simple for customers to become aware of the different solutions, to draw on those consistent operations that exist on top of our platform with our partners, and then make it really easy for them to consume those as well.

I think we've really broadened and expanded our reach over the last ten years. It used to be we were known primarily for our sports programming; now we have inclusive education and health programs, and we're able to bring together people with and without intellectual disabilities through those mediums. So we've provided resources to schools and education, and they run Special Olympics programming during the school day. Educators want to have us because we're improving school campuses, reducing bullying, and enhancing social-emotional learning, and so the work that we're doing is so, so critical with that community. Then in the area of health, we have inclusive health, so now we've got health and medical professionals that are providing health screenings for our athletes. So some of the younger volunteers that we get, that are wanting to make a career in the medical field, are exposed to our population, right, and so they learn more about their specific health needs. So it's really about changing people's attitudes, and so with this community of supporters, volunteers, health professionals, and educators, really our goal is to change people's attitudes fundamentally, worldwide, about people with intellectual disabilities, and really produce inclusive mindsets; we call it really promoting understanding.

And so now the roadmap that was shared, in terms of what VMware looks to do to integrate containers into the ESXi platform itself, right, managing VMs and containers next to each other, that's perfect in terms of not having customers have to pick or choose between which platform and where you're going to deploy something. Allow them to say you can deploy in
whichever format you want; it runs in the same ecosystem and management, and then that trickles down to, again, your storage layer. So we do a lot of object storage within the container ecosystems today, a lot of high-performance object, because, you know, the file sizes of instances or applications are much larger than a document file that you or I might create online. So there's a big need around performance in that space, along with, again, management at scale.

The whole multi-cloud, hybrid cloud movement, what's going on out at the enterprise? Your perspective on kind of where we are in that shift, if you will, or that transformation, and what's driving it, what's creating all the bang? You get that question a lot, right; people ask me what inning are we in. You know, a couple years ago, as people asked, I'd say I think the national anthem is still being played, kind of thing, and I think the game has probably started now, but I still think we're in very early innings. And, you know, I'd actually bring it up to an even higher level and talk about what's happening in terms of how companies are thinking about digital transformation. What I think is happening is it's becoming a board-level priority for companies; they can't afford to ignore it. You know, digital is changing the basis of competitive advantage in most industries around the globe, and so they're investing in digital transformation, and I think they're going to do that frankly independent of whatever macroeconomic climate we operate in. And I think, you know, the big driving force in digital transformation today is the cloud, and what we're seeing is there's, you know, a particular architecture of choice that's emerging for customers.

Yep, and I think, you know, you hit the nail on the head: networking has changed. It's no longer about speeds and feeds, it's about
availability and simplicity. And so, you know, Dell and VMware, I think, are uniquely positioned to deliver a level of automation where this stuff just works, right? I don't need to go and configure these magic boxes individually; I want to just write, you know, a line of code where my infrastructure is built into the CI/CD pipeline, and then when I deploy a workload, it just works. I don't need an army of people to go figure that out, right, and I think that's the power of what we're working together to unleash.

So that was a pretty dramatic moment of truth when we deployed Datrium: we started the imaging process, and it was finished, and to be honest, I thought it was broken, but it actually was that fast. So it gave us a tremendous ability to deploy and manage and do the work during the workday, instead of working after hours. And what were we doing for data protection before Datrium? We used a variety of different solutions, backups just to tape, and a variety of services that actually backed up. Do they still, or no? We've given a lot of that up; we got rid of all the legacy stuff. Did you have to change your processes, or what was that like? Wow, we had to get rid of a lot of processes that were focused on backup, focused on the time that it took to manage backup. With Datrium, Datrium didn't have the backup from day one; this is something that they designed, I think, in the second year, and it was very different to see a company that deals with storage creating such an innovative vision, developing a roadmap that was actually coming true with every iteration of the software deployment. So the second tier that we provisioned was the snapshots, and the snapshots were incredibly fast, didn't take a lot of space, and gave us the ability to restore almost instantly; it gave us, you know, the freedom to not focus on storage anymore.

Well, since we're here at VMworld, right, you know, VMware has about 70
million workloads; I think it's actually bigger than the public cloud, and you can correct me if I'm wrong, right. Yeah, I mean, look, on-premise is way bigger than the public cloud, no question, exactly. And what's happening, of course, is the line is blurring between, you know, what's a public cloud, what's a hybrid cloud, multi-cloud, edge, and so look, our opportunity is to really make all that go away for customers and allow them to choose, and express our unique value-add in whatever form the customer wants to use it. So you've seen us align with all the public clouds, you know, you're seeing us take steps in the edge, we're continuing to improve the on-premise systems, you know, with Project Dimension; now it's VMware Cloud on Dell EMC that we're managing for you, and it's on demand, it's consumption, and it's consumed just like a public cloud.

I spend about 50 percent of my time talking to these customers, so we learn a lot, and here are the four big challenges they're facing. First is the explosion of data; data is just growing so fast. Gartner estimates there will be 175 zettabytes of data in 2025. If you cram that into iPhones, you'd take 2.6 trillion iPhones and go to the sun and back, right; it's an enormous amount of data. Second, they're worried about ransomware. It's not a question of if you'll be attacked, it's when you'll be attacked; look at what's happening in Texas right now with the 22 municipalities dealing with that. What you want in that case is a resilient infrastructure; you want to be able to restore from a really good backup copy of data. Third, they want the hybrid multi-cloud world, just like Pat Gelsinger has been talking about. That's what customers want, but they want to be able to protect their data wherever it is, make it highly available, and get insights into their data wherever it's located. And then finally, they're dealing with this massive growth in government regulations around the world because of
this concern about privacy. I was in Australia a few weeks ago, and one of our customers was telling me that she deals with 27 different regulatory environments; another customer was saying that the California Privacy Act will be the death of him, and he's based in St. Louis, right. So our strategy is focused on taking away the complexity and helping the largest companies in the world deal with these challenges, and that's why we introduced the Enterprise Data Services Platform, and that's why we're here at VMworld talking Kubernetes.

Kubernetes, the technology enabler: I mean, TCP/IP was that in the old networking days; it enabled a lot of shifts in the industry. Is Kubernetes that disruptive an enabler? Yeah, I really see it as one of those key transition points in the industry, and as I sort of joked, if my name was Scott and we were 20 years ago, I'd be banging the table calling it Java, and Java defined enterprise software development for two decades. By the way, Scott's my neighbor; he's down the hill, so I look down on Mr.
McNealy, who I always liked. But you know, it changed how people did enterprise software development for the last two decades, and Kubernetes has that same kind of transformative effect, but maybe even more importantly, it's not just development but also operations, and I think that's what we're uniquely bringing together with Project Pacific, really being able to bridge those two worlds together. So, you know, if we deliver on this, I think the next decade or two will be the center of innovation for us, how we bridge those two roles together and really give developers what they need and make it operator-friendly out of the box, across the history to the future. This is pretty powerful.

Yes, so this conference is, I think, a refreshing return to form. This is an operators' conference, and VMware is for operators, it's not for devs. There was a period there where cloud was scary, and it was all this cloud-native stuff, and VMware tried to appeal to this new market and, I guess, tried to dress up as something that it really wasn't, and it didn't pull it off, and it didn't feel right. And now VMware has decided that, well, no, actually, this is what VMware is about, and no one can be more VMware than VMware, so it's returning to being its best self. And they know software. So the addition of putting Project Tanzu in and having Kubernetes in there, it's to operate the software, so it's going to be in there, and apps will run on it, and they want to have Kubernetes baked into vSphere, so now we'll have new apps, and yeah, there might be SaaS apps for the people who are consuming them, but they've got to run somewhere, and now we can run them on VMware, whether it's on-site, at the edge, or in the cloud with VMware on AWS.
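As an aside, the data-growth figure quoted earlier in this reel (Gartner's 175 zettabytes by 2025, "2.6 trillion iPhones") can be sanity-checked with simple arithmetic. The 64 GB per-phone capacity is my assumption; the talk doesn't say which model, but that size lands near the trillions figure the speaker cites.

```python
ZB = 10**21  # one zettabyte in bytes (decimal, as Gartner counts)
GB = 10**9   # one gigabyte in bytes

total_bytes = 175 * ZB   # Gartner's 2025 estimate
per_iphone = 64 * GB     # assumed per-phone capacity

iphones = total_bytes / per_iphone
print(f"{iphones / 1e12:.2f} trillion iPhones")  # ~2.73 trillion
```

So under that assumption the quoted order of magnitude holds; a larger assumed capacity (say 256 GB) would shrink the count to under a trillion, which is why the per-phone size matters for this kind of comparison.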
SUMMARY :
the big driving force probably you know
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Australia | LOCATION | 0.99+ |
Texas | LOCATION | 0.99+ |
Pat Gelson | PERSON | 0.99+ |
Scott | PERSON | 0.99+ |
2025 | DATE | 0.99+ |
California Privacy Act | TITLE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
20 | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Java | TITLE | 0.99+ |
Pat | PERSON | 0.99+ |
st. Louis | LOCATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
ESXi | TITLE | 0.99+ |
six trillion | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
vSphere | TITLE | 0.99+ |
CIA | ORGANIZATION | 0.99+ |
first piece | QUANTITY | 0.99+ |
two decades | QUANTITY | 0.99+ |
about 50 percent | QUANTITY | 0.99+ |
22 municipalities | QUANTITY | 0.99+ |
second tier | QUANTITY | 0.98+ |
VMworld | ORGANIZATION | 0.98+ |
iphone | COMMERCIAL_ITEM | 0.98+ |
one piece | QUANTITY | 0.98+ |
20 years ago | DATE | 0.98+ |
two roles | QUANTITY | 0.98+ |
McAfee | ORGANIZATION | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
27 different regulatory environments | QUANTITY | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
two worlds | QUANTITY | 0.97+ |
Vettel | PERSON | 0.97+ |
today | DATE | 0.97+ |
McNeely | PERSON | 0.96+ |
tomorrow | DATE | 0.95+ |
second year | QUANTITY | 0.95+ |
bitNami | ORGANIZATION | 0.95+ |
vmware | TITLE | 0.93+ |
Special Olympics | EVENT | 0.92+ |
four big challenges | QUANTITY | 0.92+ |
single | QUANTITY | 0.92+ |
first | QUANTITY | 0.91+ |
iPhones | COMMERCIAL_ITEM | 0.91+ |
a hundred and seventy-five zettabytes of | QUANTITY | 0.9+ |
Paul | PERSON | 0.9+ |
a couple years ago | DATE | 0.87+ |
decade | QUANTITY | 0.85+ |
about 70 million work | QUANTITY | 0.85+ |
over 17 years | QUANTITY | 0.82+ |
last two decades | DATE | 0.82+ |
a few weeks ago | DATE | 0.82+ |
one of | QUANTITY | 0.81+ |
vmworld | ORGANIZATION | 0.81+ |
VMWorld | EVENT | 0.8+ |
one side | QUANTITY | 0.79+ |
Dell EMC | ORGANIZATION | 0.78+ |
second activist | QUANTITY | 0.77+ |
four clouds | QUANTITY | 0.77+ |
Estevan | PERSON | 0.77+ |
telco | ORGANIZATION | 0.76+ |
Nextiva | TITLE | 0.75+ |
mr. | PERSON | 0.75+ |
NetApp | TITLE | 0.74+ |
2019 | DATE | 0.73+ |
VMware | TITLE | 0.72+ |
a minute | QUANTITY | 0.72+ |
two point | QUANTITY | 0.7+ |
customers | QUANTITY | 0.68+ |
one place | QUANTITY | 0.63+ |
NetApp | ORGANIZATION | 0.62+ |
X | QUANTITY | 0.62+ |
Sun | LOCATION | 0.6+ |
Ramin Sayar, Sumo Logic | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel along with its ecosystem partners. >> Welcome back to the eighth year of AWS re:Invent. It's 2019. There's over 60,000 in attendance. Seventh year of theCUBE. Wall-to-wall coverage, covering all the angles of this broad and massively-growing ecosystem. I am Stu Miniman. My co-host is Justin Warren, and one of our Cube alumni is back on the program. Ramin Sayar, who is the president and CEO of Sumo Logic. >> Stu: Booth always at the front of the expo hall. I think anybody that's come to this show has one of the Sumo-- >> Squishies. >> Stu: Squish dolls there. I remember a number of years ago you actually had live sumos-- >> Again this year. >> At the event, so you know, bring us, the sixth year you've been at the show, give us a little bit of the vibe and your experience so far. >> Yeah, I mean, naturally when you've been here so many times, it's interesting to be back, not only as a practitioner who attended this many years ago, but now as a partner of AWS, and seeing not only our own community growth in terms of Sumo Logic, but also the community in general that we're here to see. You know, it's a good mix of practitioners and business folks from DevOps to security and much, much more, and as we were talking right before the show, the vendors here are so different now than they were three years ago, let alone six years ago. So, it's nice to see. >> All right, a lot of news from Amazon. Anything specific jump out at you from their side? I know Sumo Logic has had some announcements this week. >> Yeah, I mean, like, true to Amazon, there's always a lot of announcements, and, you know, what we see is customers need time to understand and digest that. There's a lot of confusion, but, you know, selfishly speaking from the Sumo side, you know, we continue to be a strong AWS partner.
We announced another set of services along with AWS at this event. We've got some new competencies for containers, because that's a big aspect of what customers are doing today with microservices, and obviously we announced some new capabilities around our security intelligence capabilities, specifically for CloudTrail, because that's becoming a really important aspect of a lot of customers' maturation of cloud and also operating in the cloud in this new world. >> Justin: So walk us through what customers are using CloudTrail to do, and how the Sumo Logic connection to CloudTrail actually helps them with what they're trying to do. >> Well, first and foremost, it's important to understand what Sumo does and then the context of CloudTrail and other services. You know, we started roughly a decade ago with AWS, and we built an intelligence platform on top of AWS that allows us to deal with the vast amount of unstructured data in specific use cases. So one very common use case, very applicable to the users here, is around the DevOps teams. And so, the DevOps teams are having a much more complicated and difficult time today understanding, ascertaining, where trouble, where problems reside, and how to go troubleshoot those. It's not just about a siloed monitoring tool. That's just not enough. It doesn't have the analytics or intelligence. It's about understanding all the data, from CloudTrail, from EC2, and non-AWS services, so you can appropriately understand these new modern apps that are dependent on these microservices and architectures, and what's really causing the performance issue, the availability issue, and, God forbid, a security or breach issue, and that's a unique thing that Sumo provides unlike others here. >> Justin: Yeah, now I believe you've actually extended the Sumo support beyond CloudTrail and into some of the Kubernetes services that Amazon offers, like EKS, and you also, I believe, added ECS FireLens support?
>> Ramin: Yeah, so, and that's just a continuation of a lot of stuff we've done with respect to our analytics platform, and, you know, we introduced some things earlier this year at re:Inforce with AWS as well, around VPC Flow Logs and the like, and this is a continuation now for CloudTrail. And really what it helps our customers and end users do is better and more proactively detect potential issues, respond to those security issues, and more importantly, automate the resolution process, and that's what's really key for our users, because they're inundated with false positives all the time, whether it's on the ops side, let alone the security side. So Sumo Logic is very unique, back to our value prop, in providing a horizontal platform across all these different use cases. One being ops, two being cybersecurity and threat, and three being line-of-business users who are trying to understand what their own users on their digital apps are doing with their services and how to better deliver value. >> Justin: Now, automation is so important when you've got this scope and scale of cloud and the pace of innovation that's happening with all the technology that's around us here at the show, so the automation side of things I think is a little bit underappreciated this year. We're talking about transformation and we're talking about AI and ML. I think the automation piece is a little bit underestimated from this year's show. What do you think about that? >> Yeah, I mean, our philosophy all along has been, you can't automate without AI and ML, and it's a proven fact that, you know, by next year the machine data growth is going to be 16 zettabytes. By 2025, it's going to be 75 zettabytes of data.
Okay, while that's really impressive in terms of volume of data, the challenge is the tsunami of data that's being generated: how do you decipher what's an important aspect and what's not an important aspect? So you first have to understand, from the streaming data services, how to dynamically, schema-on-read, analyze that data, then be able to put it in the context of those use cases I talked about, and then drive automation and remediation. So it's a multifaceted problem that we've been solving for nearly a decade. In a given day, we're analyzing several hundred petabytes of data, right? And we're trying to distill it down to the most important aspects for you, for your particular role and your responsibility. >> Stu: Yeah, um, we've talked a lot about transformation at this show, and one of the big challenges for customers is, they're going through that application modernization journey. I wonder if you could bring us inside some of your customers, you know, where are they having success, and where are some of the bottlenecks slowing them down from moving along on this transformation journey? >> Yeah, so, it's interesting because, whether you're a cloud-native company like Sumo Logic, or you're aspiring to be a cloud-native company, or a cloud-first project going through migration, you have similar problems. It's now become a machine-scale problem, not a human-scale problem, back to the data growth, right? And so, some of our customers, regardless of their maturation, are really trying to understand, as they embark on these digital transformations, how do they solve what we call the intelligence gap? And that is, because there are so many silos across enterprise organizations today, across development, operations, IT, security, and lines of business, getting data in its context and in its completeness is creating more complexity for our customers.
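Ramin's point above about schema-on-read, applying structure to unstructured machine data at query time rather than at ingest, can be sketched in a few lines. This is an illustrative toy, not Sumo Logic's actual engine; the log format, parse expression, and field names are made up for the example.

```python
import re
from collections import Counter

# Raw, unstructured machine data: no schema is imposed at ingest time.
RAW_LOGS = [
    '2019-12-04T10:00:01Z web-1 GET /api/orders 200 35ms',
    '2019-12-04T10:00:02Z web-2 GET /api/orders 500 812ms',
    '2019-12-04T10:00:03Z web-1 POST /api/login 401 12ms',
    '2019-12-04T10:00:04Z web-2 GET /api/orders 500 790ms',
]

# The schema is applied "on read": a parse expression supplied at query time.
ACCESS_PATTERN = re.compile(
    r'(?P<ts>\S+) (?P<host>\S+) (?P<method>\S+) (?P<path>\S+) '
    r'(?P<status>\d{3}) (?P<latency_ms>\d+)ms'
)

def query(logs, parse_pattern, where=None):
    """Parse each raw line with the given pattern and yield matching records."""
    for line in logs:
        m = parse_pattern.match(line)
        if not m:
            continue  # lines that don't fit the ad-hoc schema are skipped
        record = m.groupdict()
        if where is None or where(record):
            yield record

# "Show me server errors by host" without ever having defined a table.
errors = list(query(RAW_LOGS, ACCESS_PATTERN, where=lambda r: r['status'] == '500'))
by_host = Counter(r['host'] for r in errors)
print(by_host)  # Counter({'web-2': 2})
```

The design point is that a different question tomorrow just means a different parse expression against the same raw stream; nothing was lost by deferring the schema.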
So, what Sumo tries to help solve, do, is, solve that intelligence gap in this new intelligence economy by providing an intelligence platform we call "continuous intelligence". So what do customers do? So, some of our customers use Sumo to monitor and troubleshoot their cloud workloads. So whether it's, you know, the Netflix team themselves, right, because they're born and bred in the cloud or it's Hudl, who's trying to provide, you know, analytics and intelligence for players and coaches, right, to insurance companies that are going through the migration journey to the cloud, Hartford Insurance, New York Life, to sports and media companies, Major League Baseball, with the whole cyber SOC, and what they're trying to do there on the backs of Sumo, to even trucking companies like Packard, who's trying to do driverless, autonomous cars. It doesn't matter what industry you're in, everyone is trying to do through the digital transformation or be disrupted. Everyone's trying to gain that intelligence or not just be left behind but be lapped, and so what Sumo really helps them do is provide one single intelligence platform across dev, sec, and ops, bringing these teams together to be able to collaborate much more efficiently and effectively through the true multi-tenant SaaS platform that we've optimized for 10 years on AWS. >> Justin: So we heard from Andy yesterday that one of the important ways to drive that transformational change is to actually have the top-down support for that. So you mentioned that you're able to provide that one layer across multiple different teams who traditionally haven't worked that well together, so what are you seeing with customers around, when they put in Sumo Logic, where does that transformational change come from? Are we seeing the top-down driven change? 
Is that were customers come from, or is it a little bit more bottom-up, were you have developers and operations and security all trying to work together, and then that bubbles up to the rest of the organization? >> Ramin: Well, it's interesting, it's both for us because a lot of times, it depends on the size of the organization, where the responsibilities reside, so naturally, in a larger enterprise where there's a lot of forces of mass because of the different siloed organizations, you have to, often times, start with the CISO, and we make sure the CISO is a transformation agent, and if they are the transformation agent, then we partner with them to really help get a handle and control on their cybersecurity and threat, and then he or she typically sponsors us into other parts of the line of business, the DevOps teams, like, for example, we've seen with Hartford Insurance, right, or that we saw with F5 Networks and many more. But then, there's a flip side of that where we actually start in, let's use another example, uh, you know, with, for example, Hearst Media, right. They actually started because they were doing a lift-and-shift to the cloud and their DevOps team, in one line of business, started with Sumo, and expanded the usage and growth. They migrated 32 applications over to AWS, and then suddenly the security teams got wind of it and then we went top-down. Great example of starting, you know, bottom-up in the case of Hearst or top-down in the case of other examples. So, the trick here is, as we look at embarking upon these journeys with our customers, we try to figure out which technology partners are they using. It's not only in the cloud provider, but it's also which traditional on-premise tools versus potentially cloud-native services and SaaS applications they're adopting. Second is, which sort of organizational models are they adopting? So, a lot of people talk about DevOps. 
They don't practice DevOps, and then you can understand that very quickly by asking them, "What tools are you using?" "Are you using GitHub, Jenkins, Artifactory?" "Are you using all these other tools, and how are you actually getting visibility into your pipeline, and is that actually speeding the delivery of services and digital applications, yes or no?" It's a very binary answer, and if they can't answer that, you know they're aspiring to be. So therefore, it's a consultative sale for us in that mode. If they're already embarking upon that, however, then we use a different approach, where we're trying to understand how they're challenged, what they're challenged with, and show other customers, and then it's really more of a partnership. Does that make sense? >> Justin: Yeah, makes perfect sense to me. >> So, one of the debates we had coming into this show is, there's a lot of discussion about multicloud around the industry. Of course, Amazon doesn't talk specifically about multicloud all that much. If you look historically, attempts to manage lots of different environments under a single pane of glass, we always say "pane is spelled P-A-I-N" when you try to do that. There's been great success, though. If you look at VMware in the data center, VMware didn't cover the entire environment, but vCenter was the center of your, you know, admin's world, and you would handle edge cases to manage some of the other environments there. It feels like AWS is extending their footprint with things like Outposts into those environments, but there are lots of things that won't be on Amazon, whether it be a second cloud provider, my legacy data center pieces, or anything else there. Sounds like you touch many of the pieces, so I'm curious if you'd just weigh in on what you hear from customers, how they get their arms around the heterogeneous mess that IT traditionally is, and what we need to do as an industry to make things better.
>> You know, for a long time, many companies have been bi-modal, and now they're tri-modal, right, meaning that, you know, they have their traditional and their new aspects of IT. Now they're tri-modal in the sense of, now they have a third leg of that complexity in the stool, which is public cloud, and so, it's a reality, regardless of Amazon or GCP or Azure, that customers want flexibility and choice, and in fact, we see that with our own data. Every year, as you guys well know, we put out an intelligence report that actually shows year-over-year the adoption of not only various technologies, but adoption of technologies used across one cloud provider versus multiple cloud providers, and earlier this year in September, when we put the new release of the report out, we saw that year-over-year there was more than 2x growth in the use of Kubernetes in production, and it was almost three times growth year-over-year in the use of Kubernetes across multiple cloud providers. That tells you something. That tells you that they don't want lock-in. That tells you that they also want choice. That tells you that they're trying to abstract away from the IaaS layer, the infrastructure-as-a-service layer, so they have portability, so to speak, across different types of providers for the different types of workload needs, as well as the data sovereignty needs they have to constantly manage because of regulatory requirements, compliance requirements, and the like. And so, this actually benefits someone like Sumo, to provide that agnostic platform to customers so they can have the choice, but also, most importantly, the value, and this is something that we announced also at this event, where we introduced editions to our Cloud Flex licensing model that allows you to not only address multiple tiers of data, but also allows you to have choice of where you run those workloads, and have choice of different types of data for different types of use cases at different cost models.
So again, delivering on that need for customers to have flexibility and choice, as well as, you know, the promise of options to move workloads from provider to provider without having to worry about the headache of compliance and audit and security requirements, 'cause that's what Sumo uniquely does versus point tools. >> Well, Ramin, I think that's a perfect point to end on. Thank you so much for joining us again. >> Thanks for having me. >> Stu: And looking forward to catching up with Sumo in the future. >> Great to be here. >> All right, we're at the midway point of three days, wall-to-wall coverage here in Las Vegas. AWS re:Invent 2019. He's Justin Warren, I'm Stu Miniman, and you're watching theCUBE. (upbeat music)
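The CloudTrail-driven security use case Ramin described earlier in the interview, detecting and responding to suspicious activity from event logs, can be sketched roughly like this. The record fields used here (eventName, userIdentity, responseElements) follow AWS's published CloudTrail record format, but the triage rules themselves are illustrative assumptions, not Sumo Logic's detection logic.

```python
# Minimal sketch of CloudTrail-style event triage. The sample records are
# hand-written in the shape of real CloudTrail events; the rules are toy ones.
SAMPLE_EVENTS = [
    {"eventName": "ConsoleLogin",
     "userIdentity": {"type": "IAMUser", "userName": "alice"},
     "responseElements": {"ConsoleLogin": "Failure"}},
    {"eventName": "DeleteTrail",
     "userIdentity": {"type": "Root"}},
    {"eventName": "DescribeInstances",
     "userIdentity": {"type": "IAMUser", "userName": "bob"}},
]

def triage(event):
    """Return a list of findings for one CloudTrail event (empty if benign)."""
    findings = []
    identity = event.get("userIdentity", {})
    if (event.get("eventName") == "ConsoleLogin"
            and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"):
        findings.append("failed console login")
    if identity.get("type") == "Root":
        findings.append("root account activity")
    if event.get("eventName", "").startswith("Delete"):
        findings.append("destructive API call")
    return findings

for e in SAMPLE_EVENTS:
    hits = triage(e)
    if hits:
        print(e["eventName"], "->", hits)
# ConsoleLogin -> ['failed console login']
# DeleteTrail -> ['root account activity', 'destructive API call']
```

In a real pipeline, findings like these would feed the automated response step Ramin mentions, rather than just being printed.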
SUMMARY :
Brought to you by Amazon Web Services and one of our Cube alumni are back on the program. of the Sumo-- I remember a number of years you actually had live sumos-- At the event, so you know, bring us, the sixth year and business folks from DevOps to security Anything specific jump out from you from their side, and also operating in the cloud in this new world. and how the Sumo Logic connection to CloudTrail and how to go troubleshoot those. and more importantly, automate the resolution process, so the automation side of things I think from the streaming data services, how to be able I wonder if you could bring us inside some or it's Hudl, who's trying to provide, you know, so what are you seeing with customers around, and then you can understand that very quickly and you would edge cases to manage to have flexibility and choice, as well as, you know, Well, Ramin, I think that's a perfect point to end on. Stu: And looking forward to catching up with Sumo and you're watching theCUBE.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Ramin Sayar | PERSON | 0.99+ |
Justin | PERSON | 0.99+ |
Ramin | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Andy | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Packard | ORGANIZATION | 0.99+ |
Hartford Insurance | ORGANIZATION | 0.99+ |
Hearst Media | ORGANIZATION | 0.99+ |
F5 Networks | ORGANIZATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
Sumo Logic | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
16 zettabytes | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
New York Life | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
32 applications | QUANTITY | 0.99+ |
Second | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
three days | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Sumo | ORGANIZATION | 0.99+ |
eighth year | QUANTITY | 0.99+ |
six years ago | DATE | 0.99+ |
Stu | PERSON | 0.98+ |
three | QUANTITY | 0.98+ |
sixth year | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Seventh year | QUANTITY | 0.98+ |
Sumo | PERSON | 0.98+ |
over 60,000 | QUANTITY | 0.97+ |
a decade ago | DATE | 0.97+ |
next year | DATE | 0.97+ |
third leg | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
DevOps | TITLE | 0.97+ |
first | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
more than 2x | QUANTITY | 0.96+ |
second cloud | QUANTITY | 0.96+ |
one layer | QUANTITY | 0.96+ |
Cloud Flex | TITLE | 0.95+ |
AKS | ORGANIZATION | 0.94+ |
one thing | QUANTITY | 0.94+ |
earlier this year | DATE | 0.93+ |
Cube | ORGANIZATION | 0.93+ |
EC2 | TITLE | 0.91+ |
Breaking Analysis: The Transformation of Dell Technologies
From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. Hello, everyone, and welcome to this week's episode of theCUBE Insights, powered by ETR. This past week we attended the Dell Technologies industry analyst event, and in this Breaking Analysis I want to summarize the key takeaways and discuss some of the macro trends in the industry that are affecting Dell. I'll also discuss some of the fundamental assumptions that Dell is making in its operating model, and I'll talk about some of the challenges that I see for the company going forward, hopefully in a frank manner. Now, let me start with the event itself. It was held in Austin, Texas, and it's clear that Austin is becoming the epicenter of Dell post-acquisition of EMC. It's shifting strongly back to Texas, while the legacy of EMC remains what is the most critical part of Dell's portfolio, thanks to VMware. The energy of Dell emanates from its founder, Michael Dell. The event was attended by about 250 press and analysts over a two-day period. It was very well run, with strong levels of executive access, which is always very important to the analysts, and lots of transparency and, I thought, clarity of message. Now, the number one takeaway is this: Dell, in four years, has gone from irrelevance to a dominant and highly relevant player in enterprise tech, especially with CIOs, and it's one of the most amazing transformations of a company that, personally, I've ever seen, and I've seen several. There were four other key takeaways for me that I'll show on this first slide, Alex, if you bring it up. First, Michael Dell has put forth a set of moonshot goals for 2030. Let me give you some examples. By 2030, Dell says that for every product they sell, they're going to recycle an equivalent product. By 2030, 50 percent of the global workforce of Dell will be women, and 40 percent of the managers of people will be women. 25 percent of the U.S.
workforce will be either Hispanic or African American. Now, most tech stories today are negative, and this is a great positive message. I'm not going to spend a lot of time on this, because there's much more that Dell laid out, but kudos to Dell for making these initiatives a priority, particularly the women in tech and the diversity in the minorities; I think it's excellent. The second takeaway is that Dell is being driven by Jeff Clarke, and this guy is on a mission to simplify the portfolio. Dell claims it's reduced its product portfolio from 88 platforms down to 20 of the Power platforms; Power is the new brand. Now, the reality is Dell really hasn't deprecated 68 products. Many, if not most, are still around, but the R&D energy is all going into the new stuff. Now, the third takeaway was a big announcement around PowerOne. PowerOne is Dell's new platform for the next generation of converged infrastructure. Now, a lot of people might look at this and say, well, this is converged infrastructure without Cisco. Well, it is, actually, and while that's true, PowerOne, according to Dell, is a much more developer-friendly, API- and microservices-based platform with a lot of automation software built in. It's essentially going to be Dell's go-forward platform for customers that don't want to roll their own infrastructure. The expectation, or inference, that we took away was that PowerOne will integrate most, if not all, future storage, networking, and server products. Dell is positioning this as a complement to HCI, or hyperconverged infrastructure, which comprises VxRail and VxFlex, which is the ScaleIO, and of course the OEM'd Nutanix. So you can see Dell still has some work to do in terms of streamlining its portfolio, and here's my lock of the day: they'll be phasing out the Nutanix OEM relationship. You can take that one to the bank. Now, the fourth takeaway was that Dell's cloud strategy is really coming into focus. Is it a winning strategy? I honestly can't say at this
point, but in my view it's the only option that Dell has, and because of VMware they have a fighting chance. Dell is in a much better position than other suppliers that rely on on-prem install bases, because of VMware. VMware is not only Dell's piggy bank; it also gives Dell strategic levers with CIOs and with partners like, for instance, AWS. Now, later on I'm going to share some ETR data that will give you some context, but the bottom line is that the cloud is having an impact on everyone's business, including Dell's. And let me add, Dell's cloud strategy, in addition to relying on VMware, is completely dependent on the assumptions that the world is going to be hybrid, which is a good assumption, and that multicloud is going to evolve from what I've described today as a symptom of multi-vendor into a fundamental priority for CIOs. Again, not a bad assumption, and because of VMware, Dell has more than a fighting chance to compete for share. Now, finally, there's the assumption that Dell is going to be able to capitalize on the edge. Personally, I think this is the biggest wildcard. What I do think is that developers are going to be a crucial part of the edge, and at this point in time, Dell and VMware are not really top of mind in the developer community. Now, the event involved keynotes from Michael Dell and other execs, including the CFO, Tom Sweet, and many other breakout sessions, you know, the normal one-on-ones as well. Now, I don't have time to go into all of this, but there are some things that I want to share about Jeff Clarke's presentation specifically. He's the person that took over from David Goulden a couple of years ago. He's been at Dell for more than 30 years; he was there when I think it was called PC's Limited, so a long time. He's a trusted operational executive of Michael Dell's. I'm very impressed with this guy. He doesn't use a teleprompter when he talks; in fact, he has some notes, but he's got these facts and figures in his head that he
rattles off at a staccato pace. He's an ops exec. So let me summarize his discussion now and bring up this slide. The big picture is that the datasphere is going to grow to 175 zettabytes, and half of that is going to be created at the edge; of that, 30% is going to require real-time processing. Now, he talked about the mandate for simplification, and he called this the "easy button." Now, in Q&A I asked him why it took them so long to figure out something so obvious, which is kind of a snarky analyst question. To his credit, he didn't throw his predecessors under the bus. Rather, what he did is he focused on the future. He shared the figures that I stated earlier about taking 88 platforms down to 20, and he focused on the priorities of the future. So he didn't say it, but I'm going to say it for him: he inherited a very messy portfolio, and he had to clean up the crime scene. Let me tell you what a buyer said about EMC back in 2018. This is from the ETR VENN survey, where they go out and probe specific customers and talk to them. This guy says, "NetApp has done a really good job of advertising and positioning itself within the cloud and within data centers themselves. They've got a broad portfolio." Now, I don't want to make comments about NetApp, and I'm not sure I agree with all of this, but okay, come back to his statements: "and they've integrated fairly well." Here's what's relevant. What he said was, "EMC, on the other hand, is not as well integrated. They've got a broad portfolio, but it's not necessarily easy to pick and choose from the different categories." Okay, so I agree with that. Look, the mega-launch, product-du-jour approach worked for EMC. It allowed them to carry on for another five or six years after the downturn, but the lack of integration eventually caught up to them, and it will always catch up to large companies who rely on either lots of M&A or spinning out new
products with lots of overlap. Anyway, I digress. The third thing that Clarke talked about was the big market size and the share gains. PCs are a 200-billion-dollar market, servers are an 80-billion-dollar market, and external storage is a 26-billion-dollar market. Dell has gained 600 basis points, according to Clarke, in PCs over the last six years, 400 of which came in the last three years, and 375 basis points in storage in the past two years. Now, of course, what he didn't mention was that this came after a dismal performance a few years earlier, so they had a pretty easy compare. But my point is this: when you talk to Michael Dell, when you talk to Tom Sweet, when you talk to Jeff Clarke and all the folks in the company, share gains are critical to Dell's strategy, especially because the cloud is taking so much share of wallet in the enterprise. I'll make some other comments on that. Now, finally, there are two fundamental beliefs that Dell has that I want to share with you. One is that they can be a consolidator of these core markets. In a downturn, Dell thinks they can hold their breath, so to speak, longer than the competitors, and of course, in an up market, they think they can accelerate their leverage points, which leads to the second belief that Jeff Clarke talked about, which is how Dell will deliver differentiation and value. He cited four items there. One, they've got 40,000 direct sellers, so they've got a big go-to-market presence. They've got 35,000 service professionals, a 66-billion-dollar supply chain, and then Dell's financial services arm, which, you know, forces Dell to carry a lot of debt, but that debt throws off cash, and it's not really part of Dell's core debt from the EMC acquisition. Now, others have that too, but Dell's got a big presence there. All right, so I want to pivot to the ETR data and see how Dell looks in the spending survey, and since market share is so important to Dell, why don't we take a look at how they're doing. So, Alex, this slide that I'm showing here is what ETR refers to as market
share. Market share is defined by ETR as vendor citations in the survey, excluding replacements (so customers that are adopting, spending the same, spending more, or spending less), divided by the total number of respondents in the survey. So it's a measure of how pervasive the vendor is in the data set. What I'm showing in this slide is Dell's market share in its three most important business lines, namely VMware, Dell EMC, and Dell's laptop business, and I'm showing this from the January '17 survey to October '19. Now, notice the survey sample overall is 964 respondents, and the three brands show roughly 800, 622, and 302 shared Ns within that 964. So there are two points. One, Dell is doing pretty well; I'd say it's better than holding serve, and as you can see, it's steadily gaining. The second point is, look at the net scores here. They're okay, especially for VMware and Dell's laptop business, but for Dell EMC, specifically their server, storage, and networking business, not so much. So there's a mixed story here. So let me make some comments on the macro and on things that I've discussed with ETR, and my narrative on demand overall, some of which I've shared with you before. As we've discussed in past Breaking Analyses, spending is reverting back to pre-2018 levels, but it's not falling off a cliff. We're seeing fewer adoptions of new tech and more replacements of old tech, so combine this with lower levels of spending and more citations overall, and we're seeing net scores go down relative to previous surveys. So here's what we think is happening. There's less experimentation going on with the digital initiatives, which started back in 2016, so you're seeing fewer adoptions of new tech as customers start placing their bets, and they're retiring legacy systems that they were keeping on as a hedge, and they're narrowing their spend on the new stuff and unplugging the stuff they
don't need anymore and they're going at the serious production mode with the pocs so that means overall spending is softer it's not a disaster but it's lower than expected then coming into this year storage is on the back burner in a lot of accounts because of cloud and the big flash injection that I've talked about giving him more Headroom servers are really soft for Dell especially because they have a tough compared with previous with last year PC is actually pretty good all things being considered so where is the spending action well it's in the cloud now q how many vendors tell me that there's a big rebate repatriation trend happening ie people have cloud remorse and they're all moving back on pram not all but many M say it doesn't happen but at the macro-level its noise compared to the spending that's happening in the cloud just do the math all you got to do is look at AWS and Microsoft and what they report and compare it to any enterprise company that relies on on-prem selling I mean I don't want to argue about it you believe what you want but I would much prefer to look at the data so let's do that so here's a slide that shows ETR data on customer spending on the cloud so you got a AWS Azure and Google spenders and how their spending patterns have changed over time for dell emc servers so you got six hundred and thirty six cloud accounts 175 to 200 shared dell emc server accounts over the past three periods and yet net scores of 24% down to 16% so look at the gray bar versus the yellow bar gray is October 18 yellow is October 19 okay you get the picture the next slide is the same view for Dell EMC storage the gray bar is last year yellow bar is this year's survey so look at it 22% down to 5% that's not good so storage is getting hit by cloud and that's going to continue all right so let me conclude with some comments in general overall I like to tell strategy you know honestly without VMware I'm probably not gonna fly to Austin this week just being honest 
but with VMware Dell is far more important to our community so I pay more attention to it I haven't shared many thoughts on Dells financials but I think they have some upside here as they continue to pay down their debt by the way every five billion of dollars that they retire in debt it drops twenty five cents right to earnings per share Dell throws off a lot of cash it's a very well-run company they got an excellent management team we talked about their share gain lever they'll have a public cloud so they got to make on Prem as simple as possible and ideally is cloud like as they can you know the on-premise experience frankly is well behind that of the cloud but but cloud you know getting less simple and it's not cheap so on Prem in my view doesn't have to be exactly cloud it's just got to be good enough now Dell this week also refreshed its on demand pricing but it's good and it's obviously relevant to cloud not have time to go into all the detail but suffice to say that near-term there on-demand stuff it's it's going to be a small factor in their business but longer-term I think it's going to play in it's particularly to the cloud model Dell is also betting on hybrid and multi cloud they have to and but they're up against several competitors Microsoft is the is really strong in this space Microsoft's also a partner of course but you got IBM and Red Hat Cisco Google sort of and some others but VMware it gives Dell an advantage and that is the key the big hole that I see in Dell I'm going to come back to innovation you know Dell spends billions of dollars on R&D I think it's the numbers 20 billion over the last four years so that's good but you know innovation this industry is being delivered delivered by developers no those are the drivers and and it's they're taking advantage of data applying machine intelligence and cloud for scale and Dell is clearly well positioned for the data trend you know could partner for cloud it can certainly play an AI but what it 
lacks in my opinion is appeal to the developer community and just as Dell has become relevant to CIOs it needs this a similar type of relevance with the devs and that's a different ballgame so it's hopes are leaning on VMware and is of course its acquisition of pivotal but if I were Dell I would not sit back and wait for pivotal and VMware to figure it out here's what I would do if I were Dell I would deploy at least a thousand engineers they got twenty thousand engineers take a thousand or fifteen hundred them and point them toward developing open source tools and build applications and tools around all these hot emerging trends that we hear about multi-cloud multi cloud management edge all the innovations going on at edge autonomous vehicles etc AI workloads machine intelligence machine learning I would open-source that work and make a big commitment to the developer community big contributions and that would build hooks in from my hardware into these tools to make my hardware run better faster cheaper on these systems I want to thank my friend Peter burrows for forgiving me that idea but I think it's a great idea I think it's radical but it makes sense in this world that is really being driven by developers okay this is Dave Volante signing out from this episode of cube insights powered by ETR thanks for watching we'll see you next time
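The ETR "market share" metric defined in this episode (vendor citations excluding replacements, divided by total survey respondents) can be sketched as a quick calculation. This is a hedged illustration: the function name and the breakdown of citation types are hypothetical simplifications of ETR's methodology, and the example numbers are invented apart from the 806-of-964 shared-N figure quoted above.

```python
# Toy sketch of an ETR-style "market share" (pervasiveness) metric: the share
# of all survey respondents who cite a vendor, excluding replacements.
# The citation-type split below is a hypothetical simplification; the input
# counts are made up except for the 806/964 total cited in the episode.

def etr_market_share(adding, flat, spending_more, spending_less, total_respondents):
    citations = adding + flat + spending_more + spending_less  # excludes replacements
    return citations / total_respondents

# e.g. a vendor cited by 806 of 964 respondents, as in the VMware example
share = etr_market_share(adding=100, flat=400, spending_more=200,
                         spending_less=106, total_respondents=964)
assert round(share, 3) == round(806 / 964, 3)   # ≈ 0.836
```

The point of the metric, as the episode notes, is pervasiveness in the data set rather than revenue share, which is why it can rise even while net scores fall.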
SUMMARY :
from the January 17 survey to October 19
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Clark | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
October 19 | DATE | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
January 17 | DATE | 0.99+ |
2018 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
October 18 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
second point | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Austin Texas | LOCATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
24% | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
two points | QUANTITY | 0.99+ |
20 billion | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
35,000 service professionals | QUANTITY | 0.99+ |
22% | QUANTITY | 0.99+ |
Clarke | PERSON | 0.99+ |
30% | QUANTITY | 0.99+ |
David | PERSON | 0.99+ |
Austin | LOCATION | 0.99+ |
26 billion dollar | QUANTITY | 0.99+ |
Texas | LOCATION | 0.99+ |
66 billion-dollar | QUANTITY | 0.99+ |
Dells | ORGANIZATION | 0.99+ |
2030 | DATE | 0.99+ |
16% | QUANTITY | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
80 billion dollar | QUANTITY | 0.99+ |
175 | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
88 platforms | QUANTITY | 0.99+ |
twenty thousand engineers | QUANTITY | 0.99+ |
David David Gordon | PERSON | 0.99+ |
40 percent | QUANTITY | 0.99+ |
68 products | QUANTITY | 0.99+ |
adele | PERSON | 0.99+ |
second takeaway | QUANTITY | 0.99+ |
964 | QUANTITY | 0.99+ |
375 basis points | QUANTITY | 0.99+ |
third takeaway | QUANTITY | 0.99+ |
jeff clark | PERSON | 0.99+ |
Austin Texas | LOCATION | 0.99+ |
600 basis points | QUANTITY | 0.99+ |
200 billion dollar | QUANTITY | 0.99+ |
three brands | QUANTITY | 0.99+ |
Boston Massachusetts | LOCATION | 0.99+ |
400 | QUANTITY | 0.98+ |
twenty five cents | QUANTITY | 0.98+ |
200 | QUANTITY | 0.98+ |
Peter burrows | PERSON | 0.98+ |
Tim Ferris, GreenPages | CUBE Conversation, September 2019
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. (electronic music) >> Hi everybody. Welcome to this special CUBE conversation sponsored by Hewlett Packard Enterprise. This is part of our partner series. You know, the partner business has changed quite dramatically over the years. It used to be you could make a lot of money pushing hardware and get some pretty good margins there. But increasingly, partners are becoming system integrators. They're becoming much more specialized in helping organizations transform, supporting their digital transformations, their infrastructure modernization, moving to the cloud, hybrid cloud, security. It really runs the gamut. And here to talk to me about that is Tim Ferris, who's a solutions architect at GreenPages. Tim, good to see you. Thanks for coming on. >> Great to be here. Thank you. >> So tell me a little bit about GreenPages. It's kind of a cool name. Where did that come from? And what are you guys all about? >> Oh God, I'm going to be killed for not knowing the history here. But I think back in the old days, we used to hand out a neon green catalog. So back when we were doing cold calls, you'd probably get a lot of, okay, we shipped you a catalog, did you get that? Oh, I'm not quite sure, it may be buried under there. A neon green catalog you could not lose. (laughs) I think we do our invoices on neon green paper now. >> That's good, green, the color of money. So tell us about your role as a solutions architect. What does that entail? And what's your background? >> Sure. So I'm a solutions architect. We have a number of different solutions architects at GreenPages who have a number of different specialties. My specialty is storage, disaster recovery, and data management and protection and DR automation, and then there's compute, hyperconvergence infrastructure, and hybrid cloud.
So specialization, a little bit wide, but we have other architects who are very deep in networking and hybrid cloud networking and that sort of thing as well. >> So let's get into some of that. Looking at your website, you guys are into everything. You've got software-defined. You've got cloud. You've got security. You've got DevOps, and it really runs the gamut. Well, sometimes in this industry we suffer from acronym soup. The reality is that things are changing quite dramatically. I mean, it used to be you'd build an infrastructure to support a single application. You'd harden that infrastructure and that was it. It became a silo, and people don't want that anymore. They want their data to be shared. They want it out of the silos, but at the same time it has to be protected. So what are some of the big trends that you're seeing in the marketplace? And let's get into it. >> Sure. So yeah, many years ago that one app, one server, one application thing went the way of the dodo. You just got back from VMworld. I paid my dues during the Wave One virtualization boom, when people were transforming racks and racks of servers into virtual machines. And so it used to be so easy to impress a customer. You'd show them a vMotion and it was like magic. You move the server from this server to that server without missing a beat. Now people are looking at hybrid cloud. So not just cloud, but hybrid cloud. Everybody we're talking to, we hear some people say that this is the last major hardware purchase that I want to make. Now, I don't know the reality of that. That's debatable, right? But I think people want to have a roadmap to move their infrastructure to cloud or cloud services. Not just infrastructure as a service, you know, lift and shift, but software as a service, and take advantage of that. So helping our customers manage that hybrid cloud journey is a big part of what GreenPages does.
>> And of course, what the customer is really telling you is, we don't want to spend a lot of time provisioning LUNs anymore because it doesn't add value to our business. We want to focus on building new apps or our digital transformation, etc. And I think you're right. It's sort of aspirational that, okay, we're not going to buy any more hardware. To me the key is, can the industry, through R&D, simplify what's on-prem? And you know, let's face it, those mission-critical apps, you don't just want to throw them into the cloud. I mean, they're working. You don't want to have to refactor them and migrate. That's sort of an evil word. So to the extent that the industry can deliver that cloud-like experience on-prem, you can start to see this hybrid cloud vision evolve. What are your thoughts on that? >> Sure. So I think it's fortuitous that we're here with HPE. I think they're doing a couple of things with some of their products and services that help push that. So it used to be that storage was relatively complicated. There were a lot of knobs and dials on storage that you could push and rotate in order to increase performance. You could have a number of different RAID levels, you know, the 3PAR chunklets and this sort of thing. There was a lot of customization you could do, that you could use as a customer in order to properly set up your array for your workloads. People appreciate that level of detail that you can put into that, but they want it easier. So I'm seeing a trend toward less customization and more ready, set-it-and-forget-it arrays. The 3PAR array was highly available, a very good array, very fast, but a little bit higher end to operate. With HPE's acquisition of Nimble, they've taken that operational complexity down significantly. Not only with operating the array and provisioning LUNs, but managing it, maintaining it, and performing predictive analytics through InfoSight and that sort of thing.
So at the storage level I think Nimble, in that paradigm, is transforming storage. And HPE's GreenLake technologies, that is very much an answer to the private cloud: having that hyperscale feel, that ability to expand elastically and get out of the hardware maintenance business by using the GreenLake service. >> So actually, a little bit of history here. So 3PAR, the company, was formed in the early 2000s, before the term cloud computing really came out. They used, I think, utility computing in their S-1 registration. But what 3PAR did is it really simplified that high end. And then 3PAR reached escape velocity by going after the high-end EMC base, and did very well, and of course famously got acquired by Hewlett Packard, HP at the time, which then became HPE. Nimble now is bringing sort of a new level, where you're talking about intelligent automation and AI managing infrastructure, predictive analytics, and that drives more automation, which I think, Tim, has really got to be a theme of hybrid cloud. I mean, cloud is all about automation, so hybrid cloud, on-prem, and public, some kind of interconnection, has to be highly automated, doesn't it? >> It absolutely does, and people don't have time to turn the dials and to optimize their storage. They need systems that will do that for them. And the level-one, level-two support that you get through those predictive analytics of InfoSight is critical to customers. You know, a lot of customers don't have time for full-time storage admins anymore. And these technologies are what's freeing up those resources, those people resources, to do other strategic things for the business. >> Especially in small and mid-sized businesses. >> Absolutely. >> Where they're generalists really, not specialists at one thing. I want to come back to the hybrid cloud, you know, thinking about data governance and management and security.
Are we at the point where you can start to see sort of a consistent framework across clouds? You're smiling. (laughs) So what's the journey there? How are we going to get there? I mean (mumbling) (laughs) >> Yeah, I would say we're certainly early days there. I think, you know, customers need to be much more cognizant of the tools that they use and buy. They can't necessarily be proprietary on-prem tools. The best use of your money is to buy tools that can be used to manage and secure hybrid infrastructures. So that should be a main qualifier for what people are looking for in security technologies and that sort of thing. It's not quite the wild west, though we still see, you know, there's that shared governance model, that shared responsibility in the cloud. I think there are still some who haven't woken up to that basic concept, that just because I moved the workload to the cloud doesn't mean it's no longer my responsibility to secure that data. We're still talking with people today who may be under that misimpression. >> You're right, Tim. I mean, that is not well understood, and people think, if I move to the cloud, I'm good. But there is a shared responsibility model, whether it's for security or governance, etc. And when you talk to chief information security officers, they'll tell you, yeah, you know, the cloud vendor might secure the storage device, but it's really our responsibility to do everything else. And the list of everything else is still quite long. >> Absolutely, you know, rights, roles and responsibilities, those sorts of things, firewall rules. They provide the firewall. They make sure the firewall is up to date on its firmware. But you're setting the rules. You're setting the ingress, egress. So yes, it's very much still a shared responsibility. And yeah, it's eye-opening still to some. >> Let's talk about your partnership with HPE. We talked about some of the products, but what do you look for in a partner?
Obviously, as I said before, you know, it used to be you'd sell boxes. You want margin, and I'm sure you still want margin, but there's got to be more, right? >> Well yeah, I mean, we've known for quite a while, we've seen the writing on the wall. I remember the glory, I don't know, glory days, the old days, back when people could make a fortune selling memory, back before the turn of the century. Turn of the century. (laughs) I'm dating myself. But it's true, you could make quite a bit of money selling memory back then. But today, and certainly over the past 20 years, our clients are choosing partners based on, not just the cheapest price, but people who can talk to them about a solution, not just a product. Hear their business problems and turn that into technology solutions that help them address those problems. So that's what I would look for in a partner, and we look to HPE for the same thing. Not just pushing product, but solving business problems. And I think that HPE is listening, they're hearing their clients. They were listening to them with the acquisition of Nimble. They're listening to them in how they're expanding InfoSight from just the Nimble platform to 3PAR and ProLiant and other things, and expanding some of those things.
They can do that through AI, through analytics and through digital transformation in general. I think we've seen a, you know there's always been this upward curve to storage growth. But it's dramatically increased I think. It's upward, predicted to be upward of 40 zettabytes, or something like that by the year 2022. And that's because more and more businesses are using this data more creatively. They're saving it more and not only is that growing, the usable data, but they need to retain it for longer. You've got to retain it, you've got to protect it and we've still got data protection problems. Not just storing it and providing the right performance level for it. But it's really difficult. And then you've got to secure all that extra data, as well. >> Well, I think you're right too. The curve is getting non-linear. I mean it used to be, I've said this often on the theCUBE that we for decades, we've marched to the cadence of Moore's law. But now the innovation sandwich, if you will, it's about applying machine intelligence to data and then automating, whether it's public cloud or on-prem cloud-like, it's being able to scale. >> Right. >> And it's those three pieces of the sandwich that are now driving innovation. No longer the doubling of transistors every 18 months. >> Yeah, so do people want to scale on-prem? Do they want to scale to the cloud and the cloud market itself as it's very elastic, very easy to grow and shrink and contrast? Or can you do some of those types of things on-prem? You know with GreenLake and with some other programs that let you have your on-prem security blanket and your on-prem performance with the hands-off operational paradigm and the elastic growth that you have in cloud. I think that's the best of both worlds for some. >> Let's end with a call to action. So what advice would you give to practitioners, clients that are looking to modernize their infrastructure? They're trying to support their digital transformation. 
They want to get from point A to point B. They don't want to spend a billion dollars doing it. They've got to go on a journey. How do they get there? What's your advice? >> My advice is, certainly, I'm jaded here, but I would say engage professionals who have done this many, many times. Don't learn on the job here. You can make some expensive mistakes moving workloads to the cloud. And we've seen anecdotal evidence, and in-person evidence, of people moving to the cloud, doing it the wrong way, and then having to migrate that back. That's a costly mistake. So make sure you do your planning. Migrate in phases. Move your data there in phases. Bite off some smaller chunks first to make sure, if you have growing pains, teething pains, that that happens with a non-critical application. Build your knowledge base and then make some better decisions. Engage people like GreenPages to help you roadmap your journey, your hybrid cloud journey. And don't go in with a preconceived notion of where you need to end, right? The applications, their performance requirements, and that assessment work up front should dictate where the best place is for those workloads. >> Great advice. Tim Ferris from GreenPages. Thanks so much for coming on theCUBE. It's great to have you. >> Thank you. >> And thank you for watching everybody. We'll see you next time. This is Dave Vellante. We're out. (electronic music)
SUMMARY :
in Boston Massachusetts, it's the theCUBE. You know the partner business has changed Great to be here. And what are you guys all about? for not knowing the history here. That's good, green, the color of money. and that sort of thing as well. You got DevOps and really runs the gamut. You move the server from this server to that server And of course, what the customer is really telling you So at the storage level I think Nimble, in that paradigm and that drives more automation which I think, Tim that you get through those predictive analytics Are we at the point where you can start to see I think you know customers need to be much more cognizant And the list of everything else is still quite long. They make sure the firewall is up to date on it's firmware. You want margin and I'm sure you still want margin. But it's true you could make quite a bit of money Yeah, the pendulum has swung after the dot.com boom how do I enable the business with technology? or something like that by the year 2022. But now the innovation sandwich, if you will, No longer the doubling of transistors every 18 months. and the elastic growth that you have in cloud. clients that are looking to modernize their infrastructure? to help you roadmap your journey, your hybrid cloud journey. It's great to have you. And thank you for watching everybody.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave Velllante | PERSON | 0.99+ |
Tim Ferris | PERSON | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Hewlett Packard | ORGANIZATION | 0.99+ |
September 2019 | DATE | 0.99+ |
Tim | PERSON | 0.99+ |
Hewlett Packard Enterprise | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
GreenPages | ORGANIZATION | 0.99+ |
Boston Massachusetts | LOCATION | 0.99+ |
Nimble | ORGANIZATION | 0.99+ |
three pieces | QUANTITY | 0.99+ |
ninth year | QUANTITY | 0.99+ |
one server | QUANTITY | 0.99+ |
one application | QUANTITY | 0.99+ |
40 zettabytes | QUANTITY | 0.99+ |
Infosite | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
early 2000s | DATE | 0.98+ |
one app | QUANTITY | 0.98+ |
tenth | QUANTITY | 0.98+ |
both worlds | QUANTITY | 0.97+ |
2022 | DATE | 0.97+ |
billion dollars | QUANTITY | 0.96+ |
single application | QUANTITY | 0.96+ |
GreenLake | ORGANIZATION | 0.96+ |
HPE Nimble | ORGANIZATION | 0.95+ |
one thing | QUANTITY | 0.95+ |
first | QUANTITY | 0.94+ |
dot.com | ORGANIZATION | 0.92+ |
SiliconANGLE | ORGANIZATION | 0.91+ |
GreenLake | TITLE | 0.9+ |
level two | QUANTITY | 0.89+ |
dodo | TITLE | 0.88+ |
level one | QUANTITY | 0.87+ |
many years ago | DATE | 0.87+ |
Wave One | COMMERCIAL_ITEM | 0.85+ |
Moore | PERSON | 0.84+ |
3PAR | TITLE | 0.81+ |
Prolient | ORGANIZATION | 0.76+ |
decades | QUANTITY | 0.76+ |
3PAR | OTHER | 0.75+ |
past 20 years | DATE | 0.73+ |
S-1 | COMMERCIAL_ITEM | 0.72+ |
every 18 months | QUANTITY | 0.7+ |
vMotion | TITLE | 0.69+ |
3PAR | ORGANIZATION | 0.66+ |
EMC | ORGANIZATION | 0.55+ |
theCUBE | ORGANIZATION | 0.53+ |
point | OTHER | 0.49+ |
DevOps | TITLE | 0.49+ |
CUBE | ORGANIZATION | 0.34+ |
Phil Buckellew, IBM | Actifio Data Driven 2019
>> From Boston, Massachusetts, it's theCUBE! Covering Actifio 2019 Data Driven. Brought to you by Actifio. >> Here we are in Boston, Massachusetts. I'm Stu Miniman, this is theCUBE at the special, at Data Driven '19, Actifio's user event. Happy to bring on a CUBE alum who's a partner of Actifio, Phil Buckellew, who's General Manager of IBM Cloud Object Storage. Phil, thanks for coming back. >> Great, great to be here Stu. >> All right, so object storage. Why don't you give us first just kind of an encapsulation of kind of the state of your business today. >> Sure, object storage is really an extremely important business for the industry today because really it's a new way accessing data, it's been around obviously for a decade or so but really, it's increasingly important because it's a way to cost-effectively store a lot of data, to really to be able to get access to that data in new and exciting ways, and with the growth in the volume of data, of particularly unstructured data, like 103 zettabytes by 2023 I think I heard from the IDC guys, that really kind of shows how important being able to handle that volume of data really is. >> So Phil, I go back, think about 12 years ago, all the technologists in this space were like, "The future of storage is object," and I was working at one of the big storage companies and I'm like, "Well we've been doing block and file," and there was this big gap out there, and kind of quietly object's taken over the world because underneath a lot of the cloud services there, object's there, so IBM made a big acquisition in this space. Talk about, you know, customers that I talk to it's not like they come out and say, "Oh jeez, I'm buying object storage, "I'm thinking about object storage." They've got use cases and services that they're using that happen to have object underneath. Is that what you hear from your users? >> Yeah, there's a couple of different buying groups that exist in the object storage market today. 
The historic market is really super large volumes. I mean, we're unique in that IBM acquired the Cleversafe company back in 2015 and that technology is technology we've expanded upon and it really, it's great because it can go to exabyte scale and beyond and that's really important for certain use cases. So some customers that have high volumes of videos and other unstructured data, that is really a super good fit for those clients. Additionally, clients that really have the need for highly resilient, because the other thing that's important the way that we built our object storage is to be able to have a lot of resiliency, to be able to run across multiple data centers, to be able to use erasure coding to ensure the data's protected, that's really a large part of the value, and because you can do that at scale without having downtime when you upgrade, those are really a lot of core benefits of object storage. >> Right, that resiliency is kind of built into the way we do it and that was something that was just kind of a mind shift as opposed to, okay I've got to have this enterprise mindset with an HA configuration and everything with N plus whatever version of it. Object's going to give you some of that built-in. The other thing I always found really interesting is storing data is okay, there's some value there, but how do I gain leverage out of the data? And there's the metadata underneath that helps. You talk about video, you talk about all these kinds there. If I don't understand what I've got and how I'd leverage it, it's not nearly as valuable for me, and that's something, you know really that one of the key topics of this show is, how do I become data driven, is the show, and that I have to believe is something critically important to your customers. 
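The erasure-coding resiliency Phil describes can be illustrated with a toy sketch. This is a deliberate simplification: production systems like the Cleversafe technology behind IBM Cloud Object Storage use k-of-n Reed-Solomon-style dispersal codes that tolerate multiple simultaneous slice losses, whereas the single XOR parity slice below only survives one.

```python
# Toy illustration of erasure-coded resilience: split an object into k data
# slices plus one XOR parity slice, so the loss of any single slice (e.g. a
# failed node or data center) can be rebuilt from the survivors.
# Hypothetical simplification: real dispersed storage uses Reed-Solomon-style
# k-of-n codes, not single-parity XOR.
from functools import reduce

def xor_all(slices):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), slices)

def encode(data: bytes, k: int):
    """Split data into k equal slices and append one XOR parity slice."""
    assert len(data) % k == 0, "pad data to a multiple of k in real use"
    n = len(data) // k
    slices = [data[i * n:(i + 1) * n] for i in range(k)]
    return slices + [xor_all(slices)]

def recover(slices):
    """Rebuild exactly one missing slice (marked None) from the survivors."""
    missing = [i for i, s in enumerate(slices) if s is None]
    assert len(missing) == 1, "single parity only survives one loss"
    slices[missing[0]] = xor_all([s for s in slices if s is not None])
    return slices

original = b"object-data!"          # 12 bytes -> three 4-byte slices
stored = encode(original, k=3)      # four slices total, including parity
stored[1] = None                    # simulate losing one slice
assert b"".join(recover(stored)[:3]) == original
```

The key property, which single-parity RAID shares, is that protection costs one extra slice rather than a full replica; dispersed erasure codes generalize this so an object spread across many slices and sites can survive several losses at a fraction of the raw capacity that full replication would need.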
>> Absolutely, and really object storage is the foundation for modern cloud-native data lakes, if you will, because it's cost-effective enough you can drop any kind of storage in there and then you can really get value from those assets wherever you are, and wherever you're accessing the data. We've taken the same technology that was the exabyte scale on-premise technology, and we've put it in the IBM public cloud, and so that really allows us to be able to deliver against all kinds of use cases with the data sets that clients want, and there's a lot of great innovation that's happening especially on the cloud side. We've got the ability to query that data, any kind of rectangular data with standard ANSI SQL statements, and that just really allows clients to unlock the potential of those data sets, so really good innovation going on in that space to unlock the value of the data that you put inside of object storage. >> All right, Phil let's make the connection. Actifio's here, IBM OEM's the solution. So, talk about the partnership and what customers are looking for when they're looking at their IPs. Sure, so, quite a ways prior to the partnership our object storage team partnered up with the Actifio team at a large financial services customer that recognized the growth in the volume of the data that they had, that had some unique use cases like cyber resiliency. They get attacked with ransomware attacks, they needed to have a standard way to have those data sets and those databases running in a resilient way against object storage that can still be mounted and used, effectively immediately, in case of ransomware attacks, and so that plus a lot of other traditional backup use cases is what drew the IBM Cloud Object Storage team and the Actifio team together. Successful deployments at large customers are really where we got our traction. 
And with that we also really began to notice the uptick in clients that wanted to use, they wanted to do test data management, they wanted, they needed to be able to have DevOps team that needed to spin up a replica of this database or that database very fast, and, you know, what we found was the combination of the Actifio product, which we've OEM'd as IBM Virtual Data Pipeline, allows us to run those virtual databases extremely cost-effectively backed by object storage, versus needing to make full replicas on really expensive block storage that takes a long time. >> Well yeah, we'd actually done research on this a number of years ago. Copies are great, but how do I leverage that right? From the developer team it's, I want to have something that mirrors what I have in production, not just some test data, so the more I can replicate that, the better. Phil, please, go ahead. >> There's some really important parts of that whole story, of being able to get that data flow right, to be able to go do point-in-time recoveries of those databases so that the data is accurate, but also being able to mask out that PII or sensitive information, credit card data or others that you really shouldn't be exposing to your testers and DevOps people. Being able to have the kind of-- (Phil laughs) >> Yeah, yeah, shouldn't because, you know, there's laws and lawsuits and security and all these things we have. >> Good, good, absolutely. >> So, Phil, we're talking a lot about data, you've actually got some new data to share with us, a recent survey that was done, should we share some of your data with us? >> Yeah, we did some, we did a, the ESG guys actually worked with us to build out a piece of research that looked at what would it cost to take a 50 terabyte Oracle 12c database and effectively spin up five copies the way you traditionally would so that different test teams can hammer away against that data set. 
And we compared that to running the VDP offering with our Cloud Object Storage solution. You know, distances apart, we had one where the source database is in Dallas and the destination database is in Washington, D.C. over a 10 gigabit link, and we were able to show that you could set up five replicas of the database in like 90 minutes, compared with the two weeks that it would take to do full replication, because you were going against object storage, which runs about 2.3 cents per gigabyte per month, versus block storage fully loaded, which runs about 58 cents per gigabyte per month. The economics just blow it away. And the fact that you could even do queries, because object storage is interesting. Yes, if you need microsecond response times for small queries you've got to run some of that content on block storage, but for traditional queries, we looked at really big queries that would run against 600 rows, and we were half the time that you would need on traditional block storage. So, for those DevOps use cases where you're doing that test and development you can have masked data, five different copies, and you can actually point back in time, because really, the Actifio technology is really super in that it can go do point-in-time; it was able to store the right kind of data so the developers can get the most recent copies of the data. All in, it was like 80% less than what you would have paid doing it the traditional way. >> Okay, so Phil, you started talking a little bit about some of the cloud pieces; you know, Actifio in the last year launched their first SaaS offering, Actifio GO. How much of these solutions are for the cloud versus on-premises these days?
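Before Phil answers that, the storage prices he just quoted can be turned into back-of-the-envelope arithmetic. The "five virtual copies share one object-storage backing copy" simplification below is ours, not the ESG study's full model, which also counted compute, licensing, and staff time; that is why this capacity-only math lands well above the all-in 80% figure from the study.

```python
# Back-of-the-envelope version of the comparison: five test copies of a
# 50 TB database, priced with the per-GB-per-month figures from the interview.
DB_GB = 50 * 1000          # 50 TB expressed in GB (decimal units)
COPIES = 5
BLOCK_PER_GB = 0.58        # fully loaded block storage, $/GB/month
OBJECT_PER_GB = 0.023      # object storage, $/GB/month

full_replicas_cost = COPIES * DB_GB * BLOCK_PER_GB   # five physical replicas
virtual_copies_cost = 1 * DB_GB * OBJECT_PER_GB      # one shared backing copy

savings = 1 - virtual_copies_cost / full_replicas_cost
print(f"${full_replicas_cost:,.0f}/mo vs ${virtual_copies_cost:,.0f}/mo "
      f"({savings:.0%} less)")
```

Even allowing for everything the simplification leaves out, the two-orders-of-magnitude gap in per-gigabyte price explains why virtual copies on object storage win this use case.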
>> Absolutely, so one of the benefits of using a virtual data approach is being able to leverage cloud economics, 'cause a lot of clients, you know, want to be able to do test and dev, which has ups and downs, peaks and valleys in when you need to use those resources, and the cloud is really an ideal way to do those types of workloads. And so, the integration work that we've done with the Actifio team around VDP allows you to replicate or have virtual copies of those databases in the cloud where you want to do your testing, or we can do it in traditional on-prem object storage environments. Really, whatever makes the most sense for the client is where we can stand up those environments. >> The other thing I wonder if you could expand on a little bit more: you talked about, like, cloud-native deployment and what's happening there. How does that tie into this discussion? >> Well, obviously modern architectures and Agile ways of building things, cloud-native with microservices, those are all extremely important, but you've got to be able to access the data, and it's that core data that, no matter how much you do with putting Kubernetes around all of your existing applications, you've still got to be able to access, often systems-of-record data, which is sitting on these standard databases of record. And so being able to have the VDP technology replicate those, stand those up, like in our public cloud right next to our Kubernetes service and all the other technologies, it gives you the kind of full stack that you need to go do that dev and test, or run production workloads if you prefer, from a public cloud environment, without having all of the burdens of running the data centers and maintaining things on your own. >> Okay, so Phil, everybody here for this two day event is going to get a nice, you know, jolt of where Actifio fits. You know, lots of orange here at the show.
Give us the final word: what does it mean with orange and blue coming together? >> Well absolutely, we think this is going to be great for our clients. We've got, you know, tons of interested clients in this space because they see the value of being able to take what Actifio's done, to be able to virtualize that data, combine it with some of the technologies we've got for object storage or even block storage, to be able to serve up those environments in a super cost-effective way, all underpinned by one of our core values at IBM, which is really trust and being responsible. And so, we often say that there's no AI, which all of this data leads up to, without information architecture, and that's really where we specialize: providing that governance, all the masking, all of the things that you need to feel confident that the data you've got is in the right hands, being used the right way, to be able to give you maximum advantage for your business. So we're super excited about the partnership. >> Phil, definitely a theme we heard at IBM Think: there is no AI without the IA. So, Phil Buckellew, thanks so much for joining us, sharing all the updates on what IBM is doing here with Actifio. >> Great, great to be here. >> All right, and we'll be back with more coverage here in Boston, Massachusetts at Actifio Data Driven 2019. I'm Stu Miniman and thanks for watching theCUBE. (futuristic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Phil Buckellew | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dallas | LOCATION | 0.99+ |
Phil | PERSON | 0.99+ |
Cleversafe | ORGANIZATION | 0.99+ |
Actifio | ORGANIZATION | 0.99+ |
90 minutes | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
600 rows | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
Washington, D.C. | LOCATION | 0.99+ |
two day | QUANTITY | 0.99+ |
two weeks | QUANTITY | 0.99+ |
2023 | DATE | 0.99+ |
Boston, Massachusetts | LOCATION | 0.99+ |
50 terabyte | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
10 gigabyte | QUANTITY | 0.99+ |
103 zettabytes | QUANTITY | 0.99+ |
five copies | QUANTITY | 0.98+ |
five replicas | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
a decade | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.96+ |
ESG | ORGANIZATION | 0.96+ |
one | QUANTITY | 0.95+ |
DevOps | TITLE | 0.94+ |
Stu | PERSON | 0.92+ |
CUBE | ORGANIZATION | 0.91+ |
IDC | ORGANIZATION | 0.91+ |
today | DATE | 0.91+ |
Agile | TITLE | 0.9+ |
IBM Cloud | ORGANIZATION | 0.89+ |
five | QUANTITY | 0.87+ |
12 years ago | DATE | 0.84+ |
IBM Think | ORGANIZATION | 0.82+ |
about 58 cents per gigabyte per | QUANTITY | 0.8+ |
Actifio GO | TITLE | 0.78+ |
Virtual Data Pipeline | COMMERCIAL_ITEM | 0.78+ |
Oracle | ORGANIZATION | 0.78+ |
about 2.3 cents per gigabyte per | QUANTITY | 0.77+ |
of years ago | DATE | 0.75+ |
Data | EVENT | 0.74+ |
Actifio 2019 | TITLE | 0.63+ |
2019 | DATE | 0.63+ |
theCUBE | ORGANIZATION | 0.59+ |
VDP | TITLE | 0.57+ |
tons | QUANTITY | 0.57+ |
DevOps | ORGANIZATION | 0.52+ |
Data Driven 2019 | EVENT | 0.46+ |
Actifio | TITLE | 0.44+ |
12c | TITLE | 0.41+ |
Data Driven | EVENT | 0.32+ |
'19 | EVENT | 0.3+ |
Amit Walia, Informatica | CUBEConversations, May 2019
(funky guitar music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is theCUBE conversation. >> Everyone, welcome to this CUBE conversation here in Palo Alto, California, CUBE studios. I'm John Furrier, the host of theCUBE. We're with CUBE alum, special guest Amit Walia, President of Products & Marketing at Informatica. Amit, it's great to see you. It's been a while. It's been a couple of months, how's things? >> Good to be back as always. >> Welcome back. Okay, Informatica World is coming up, we have a whole segment on that, but we have been covering you guys for a long long time; data is at the center of the value proposition again and again, it's more amplified now, the fog is lifting. >> Sure. >> And the world is now seeing what we were talking about four years ago. (giggles) >> Yeah. >> With data, what's new? What are the big trends going on that you guys are doubling down on? What's new, what's changed? Give us the update. >> Sure. I think we have been talking the last couple of years, and I think you're right, data has become more and more important. I think there are three things we see a lot. One is obviously, you saw this whole world of digital transformation. I think that has definitely picked up so much steam now. I mean, every company is going digital and obviously that creates a whole new paradigm shift for companies to almost recreate themselves, rebuild themselves, so data becomes the new definition. And that's what we call, those things you saw at Informatica even before Data 3.0, but data is the center of everything, right? And you see the volume of data growth, you know, the utilization of data to make decisions, whether it's, you know, decisions on the shop floor, decisions basically related to cyber security or whatever it is. And the key difference you see now is the whole AI-assisted data management.

I mean the scale of complexity, the scale of growth, you know, multi-cloud, multi-platform, all the stuff that is in front of us; it's really difficult to run the old way of doing things, so one thing that we see a whole lot is AI becoming a lot more mainstream. Still early days, but it's assisting the whole ability for companies to, what I call, exploit data to really become a lot more transformative. >> You have been on this for a while; again, we can go back to theCUBE archives, we can almost pull out clips from two years ago that would be relevant today, you know, the data control, understanding >> Yeah. >> Understanding where the data governance is-- >> Sure. >> That's always a foundational thing, but you guys nailed the chat bots, you have been doing AI with previous announcements; this is putting a lot of pressure on you, the president of the products, you got to get this out there. >> What's new? What's happening inside Informatica? Paddling as fast as you can? What are some of the updates? >> No. >> Give us the-- >> The best example always is like a duck, right? Things look calm at the top and then underneath you are really paddling. No, I think it's great for us. I look at AI like this: there is so much FUD [fear, uncertainty and doubt] around it and machine learning. We look at it in two different ways. One is how we leverage machine learning within our products to help our customers. Making it easy for them; like I said, so many different data types, think of IoT data, unstructured data, streaming data; how do you bring all that stuff together and marry it with your existing transactional data to make sense of it? So, we're leveraging a lot of machine learning to make the products a lot easier to consume, a lot smarter, a lot richer. The second thing is what we call our AI, CLAIRE, which we unveiled, if you remember, a couple of years ago at Informatica World.

How that then helps our customers make smarter decisions, you know, in data science and all of these data workbenches; you know, the old statistical models are only as good as they can ever be. So, we are helping our customers see the value proposition of our AI, CLAIRE, to do things that, you know, find patterns that statistical models cannot. So, to me I look at both of those: really leveraging ML to shape our products, which is where we do a lot of innovation, and then creating our AI, CLAIRE, to help customers make smarter decisions, easier decisions, complex decisions, which, as I said, humans or statistical models really cannot. >> Well this is the balance with machines and humans >> Right. >> working together; you guys have nailed this before, and I think this was two years ago. I started to hear the words land, adopt, expand from you guys, right? Which is, you got to get adoption. >> Right. >> And so, as you're iterating on this product focus, you got to get it working, make your products secure-- >> Big, big maniacal focus on that one. >> So, tell me what you have learned there because that's a hard thing. >> Right. >> You guys are doing well at it. You got to get adoption, which means you got to listen to customers, you got to do the course correction. >> Yeah. >> What are the learnings coming out of that? >> That's actually such a good point. We've always been a customer-centric company, but as you said, like, as the whole world shifted towards a new subscription cloud model, we've really focused on helping our customers adopt our products, and you know, in this new world, customers are struggling with new architectures and everything, so we doubled down on what we call customer success. Making sure we can help our customers adopt the products, and by the way it's to our benefit. Our customers get value really quickly and of course we believe in what we call a customer for life.

Our ability to then grow with our customers and help them deliver value becomes a lot better. So, we have customer success managers globally, across the board; we really invest in our customers. The moment a customer buys a product from us, we directly engage with them to help them understand, for this use case, how you implement the product. >> It's not just self-service, that's one thing that I appreciate, 'cause I know how hard it is to build products these days, especially with the velocity of change, but it's also hard when you have large-scale data. >> Yeah. >> You need automation, you got to have machine learning, you got to have these disciplines. >> Sure. >> And this is both on your end but also on the customer's. >> Yes. >> Any updates on CLAIRE, and some customer learnings you're seeing that are turning into use cases or best practices? What are some of them? >> So many of them. So take a simple example, right? I mean, we take these things for granted, right? We don't talk about IoT these days, right? All these cell towers, all this streaming data, right? Or even robots on the shop floor. So much of that data has no schema, no structure, no definition; it's just coming, right? Net-net, for customers there is a lot of volume in it, and a lot of it could be junk, right? So, how do you first take that volume of data and create some structure to it for you to do analytics? You can only do analytics if you put some structure to it, right? So, the first thing is, we've leveraged CLAIRE to help our customers create, what I call, a schema, so you can create some structure to it. Then, through CLAIRE, it can naturally bring what we have for data quality on top of it: how much of it is irrelevant, how much of it is noise, how much of it really makes sense. So then, as you said it, signal from the noise; we are helping our customers get signal from the noise of data.

That's where AI comes in very handy, because it's very manual, cumbersome, time consuming and sometimes very difficult to do. So, that's an area where we have leveraged creating structure and data quality on top, and finding rules that probably didn't naturally exist, that you and I wouldn't be able to see. Machines are able to do it, and to your point, our belief is, and this is my 100% belief, that AI assists the humans. We have given the value of CLAIRE to our users, so it complements you, and that's where we are trying to help our users get more productive and deliver more value to you faster. >> Productivity is multifold; it's also efficiency, people wasting time on projects that can be automated, so you can focus that valuable resource somewhere else. >> Yeah. >> Okay, let's shift gears onto Informatica World coming up. Let's spend some time on that. What's the focus this year? The show is coming up, right around the corner; what's going to be the focus? What's going to be the agenda? What's on the plate?
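Before turning to the event: the "create structure, then score quality" flow Amit describes above, inferring a schema from schemaless feeds and flagging the junk, can be sketched crudely. This hand-rolled toy is our illustration of the idea, not CLAIRE's actual algorithm; the records and field names are invented.

```python
import json
from collections import Counter

# Hand-rolled illustration of "create structure, then score quality" for
# schemaless records; not CLAIRE's actual algorithm.
raw = [
    '{"device": "s-01", "temp": 21.5}',
    '{"device": "s-02", "temp": null}',
    '{"device": "s-01", "temp": 22.1, "rpm": 900}',
    'garbage line from a flaky sensor',
]

records, bad = [], 0
for line in raw:
    try:
        records.append(json.loads(line))
    except json.JSONDecodeError:
        bad += 1  # noise: cannot even be parsed

# Inferred schema: the union of fields seen, with the types observed for each.
schema = {}
for rec in records:
    for field, value in rec.items():
        schema.setdefault(field, Counter())[type(value).__name__] += 1

quality = 1 - bad / len(raw)  # crude signal-vs-noise score
print(sorted(schema))      # → ['device', 'rpm', 'temp']
print(round(quality, 2))   # → 0.75
```

A real AI-assisted catalog would go much further (inferring semantics, matching fields across sources, learning quality rules), but even this toy shows the two steps Amit names: impose a schema first, then measure how much of the feed is usable.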
80 plus sessions, pretty much 90% lead by customers. We have 70 to 80 customers presenting. >> Technical sessions or going to be a Ctrack? >> Technical, business, we have all kinds of tracks, we have hands on labs, we have learnings, customers really want to learn our products, talk with the experts, some want to the product managers, some want to talk to the engineers, literally so many hands on labs, so, it's going to be a full blown couple of days for us. >> What's the pitch for someone watching that never been Informatica World? Why should they come for the show? >> I'll always tell them three things. Number one is that, it's a user conference for our customers to learn all things about data management and of course in that context they learn a lot about. So, they learn a lot about the industry. So, day one we kick it off by market perspectives. We are giving a sense on how the market is going, how everybody is stepping back from the day to and understanding, where are these digital transformation, AI, where is all the world of data going. We've got some great annalists coming, talkings, some customers talking, we are talking about futures over there. Then it is all about hands on learning, right?, learning about the product. Hearing from some of these experts, right?, from the industry experts as well as our customers, teaching what to do and what not to do and networking, it's always go to network, right, it's a great place for people to learn from each other. So, it's a great forum for all those three things but the theme this year is all about AI. I talked about CLAIRE, I'll in fact our tagline this year is, Clarity Unleashed. We really want, basically, AI has been developing over the last couple of years, it's becoming a lot more mainstream, for us in our offerings and this year we're really taking it mainstream, so, it's kind of like, unleashing it for everybody can genuinely use it, truly use it, for the day to day data management activities. 
>> Clarity is a great theme, I mean, it plays on CLAIRE but this is what we're starting to see some visiblility into some clear >> Yeah. >> Economic benefits, business benefits. >> Yep. >> Technical benefits, >> Yep. >> Kind of all starting to come in. How would you categorize those three areas because you know, generally that's the consensus these days that what was once a couple years ago was, like, foggy when you see, now you're starting to see that lift, you're seeing economic, business and technical benefits. >> To me it's all about economic and business. So, technology plays a role in driving value for the business, right, I'm a full believer in that, right, and if you think about some of the trends today, right, a billion users are coming into play that will be assisted by AI. Data is doubling every year, you know the volume of data, >> Yep. >> The amount of, and I always say business users today, I mean, I run a business, I want, I always say, tomorrow data, yesterday to make a decision today. It's just in time and that's where AI comes into play. So our goal is to help organizations transform themselves, truly be more productive, reduce operation cost, by the way governance and compliance, that's becoming such a mainstream topic. It's not just basically making analytical decisions. How do you make sure your data is safe and secure, you don't want to get basically get hit by all of these cyber attacks, they're all are coming after data. So, governance, compliance of data that's becoming very, so, those-- >> Again you guys are right on the data thing. >> Yeah. >> I want to get your reaction, you mentioned some stats. >> Sure. >> I've got some stats here. Data explosion, 15.3 zettabytes per year >> Yeah, in global traffic. >> Yeah. 
>> 500 million business data users and growing 20 billion in connected devices, one billion workers will be assisted by machine learning, so, thanks for plugging those stats but I want to get your reaction to some of these other points here. 80% of enterprises are looking at multicloud, their really evaluating where the data sits in that equation >> Sure. And the other thing is the responsibility and role of the Chief Data Officer >> Yes. >> These are new dynamics, I think you guys will be addressing that into the event. >> Absolutely, absolutely. >> Because organizational dynamics, skill gaps are issues but also you have multicloud. So your thoughts on those to. >> That's a big thing, look at, in the old world, John, Hidrantes is always still in large enterprises, right, and it's going to stay here. In fact I think it's not just cloud, think of it this way, on-premise is still here, it's not going a way. It's reducing in scope but then you have this multicloud world, SAS apps, PAS apps, infrastructure, if I'm a customer, I want to do all of it but the biggest problem is that my data is everywhere, how do I make sense of it and then how do I govern it, like my customer data is sitting somewhere in this SAS app, in that platform, on this on-prem application transaction app I'm running, how do I connect the three and how do I make sense it doesn't get, I can have a governance control around it. That's when data management becomes more important but more complex but that's why AI comes in to making it easier. What are the things we've seen a lot, as you touched upon, is the rise of CDO. In fact we have Daniel from Nissan, she is the CDO of Nissan North America, on main stage, talking about her role and how they have leveraged data to transform themselves. 
That is something we're seeing a lot more because you know, the role of the CDO is making sure that is not only a sense of governance and compliance, a sense of how do we even understand the value of data across an enterprise. Again, I see, one of the things we going to talk about is system thinking around data. We call it System Thinking 3.0, data is becoming a platform. See, there was OSA-D hardware layer whether it is server, or compute, we believe that data is becoming a platform in itself. Whether you think about it in terms of scale, in terms of governance, in terms of AI, in terms of privacy, you have to think of data as a platform. That's the other big thing. >> I think that is a very powerful statement and I like to get your thoughts, we had many conversations on camera, off camera, around product, Silicon Valley, Venture Capital, how can startups create value. On of the old antigens use to be, build a platform, that's your competitive strategy, you were a platform company and that was a strategic competitive advantage. >> Yes. >> That was unique to the company, they created enablement, Facebook is a great example. >> Yeah. >> They monetized all the data from the users, look where they are. >> Sure. >> If you think about platforms today. >> Sure. >> It seems to be table steaks, not as a competitive advantage but more of a foundational. >> Sure. >> Element of all businesses. >> Yeah. >> Not just startups and enterprises. This seems to be a common thread, do you agree with that, that platforms becoming table steaks, 'cause of if we have to think like systems people >> Mm-hmm. >> Whether it's an enterprise. >> Sure. >> Or a supplier, then holistically the platform becomes table steaks on premer or cloud. Your reaction to that. Do you agree? >> No, I think I agree. I'll say it slightly differently, yes. 
I think platform is a critical component for any enterprise when they think of their end to end technology strategy because you can't do piece meals otherwise you become a system integrator of your own, right? But it's no easy to be a platform player itself, right, because as a platform player, the responsibility of what you have to offer your customer becomes a lot bigger. So, we obviously has this intelligent data platform but the other thing is that the rule of the platform is different too. It has to be very modular and API driven. Nobody wants to buy a monolithic platform. I don't want to, as a enterprise, I don't buy all now, I'm going to implement five years of platform. You want it, it's going to be like a Lego block, okay you, it builds by itself. Not monolithic, very API driven, maybe microservices based and that's our belief that in the new world, yes, platform is very critical for to accelerate your transformational journeys or data driven transformational journeys but the platform better be API driven, microservices based, very nimble that is not a percussor to value creation but creates value as you go along. >> It's all, kind of up to, depends on the customer it could have a thin foundational data platform, from you guys for instance, then what you're saying, compose. >> Of different components. >> On whatever you need. >> For example you have data integration platform, you can do data quality on top, you can do master data management on top, you can provide governance, you can provide privacy, you can do cataloging, it all builds. >> Yeah. >> It's not like, oh my gosh, I have go do all these things over the course of five years, then I get value. You got to create value all along. >> Yeah. >> Today's customers want value like, in two months, three months, you don't want to wait for a year or two. >> This is the excatly the, I think, the operating system, systems mindset. >> Yes. 
>> You were referring too, this is kind of how enterprises are behaving now. There is the way you see on-premise, >> Yep. >> Thinking around data, cloud, multicloud emerging, it's a systems view distributed computing, with the right Lego blocks. >> That's what our belief is. That's what we heard from customers. See our, I spend most of my time talking to customers and are we trying to understand what customers want today and you know, some of this latent demands that they have, sometimes can't articulate, my job, I always end up on the road most of the time, just hearing customers, that's what they want. They want exactly to your point, a platform that builds, not monolithic, but they do want a platform. They do want to make it easy for them not to do everything piece meal. Every project is a data project. Whether it's a customer experience project, whether it's a governance project, whether it's nothing else but a analytical project, it's a data project. You don't repeat it every time. That's what they want. >> I know you got a hard stop but I want to get your thoughts on this because I have heard the word, workload, mentioned so many more times in the past year, if there was a tag cloud of all theCUBE conversations where the word workload was mentioned, it would be the biggest font. (laughs) >> Yes. >> Workload has been around for a while but now you are seeing more workloads coming on. >> Yeah. >> That's more important for data. >> Yes. >> Workloads being tied into data. >> Absolutely. >> And then sharing data across multiple workloads, that's a big focus, do you see that same thing? >> We absolutely see that and the unique thing we see also is that newer workloads are being created and the old workloads are not going away, which is where the hybrid becomes very important. See, we serve large enterprises and their goal is to have a hybrid. 
So, you know, I'm running a old transaction workload order here, I want to have a experimental workload, I want to start a new workload, I want all of them to talk to each other, I don't want them to become silos and that's when they look to us to say connect the dots for me, you can be in the cloud, as an example, our cloud platform, you know last time, we talked about a 5 trillion transactions a month, today is double that, eight to ten trillion transactions a month. Growing like crazy but our traditional workload is also still there so we connect the dots for our customers. >> Amit, thank you for coming on sharing your insights, obviously you guys are doing well. You've got 300,000 developers, billions in revenue, thanks for coming on, appreciate the insight and looking forward to your Informatica World. >> Thank you very much. >> Amit Walia here inside theCUBE, with theCUBE conversation, in Palo Alto, thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Daniel | PERSON | 0.99+ |
Simon Guggenheimer | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
May 2019 | DATE | 0.99+ |
Amit Walia | PERSON | 0.99+ |
20 billion | QUANTITY | 0.99+ |
70 | QUANTITY | 0.99+ |
Scott Guthrie | PERSON | 0.99+ |
eight | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Simon | PERSON | 0.99+ |
Thomas Kurian | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
Amit | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
three months | QUANTITY | 0.99+ |
Nissan | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
last year | DATE | 0.99+ |
Credit Suisse | ORGANIZATION | 0.99+ |
Tableau | ORGANIZATION | 0.99+ |
300,000 developers | QUANTITY | 0.99+ |
a year | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
tomorrow | DATE | 0.99+ |
CUBE | ORGANIZATION | 0.99+ |
two months | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
CLAIRE | PERSON | 0.99+ |
One | QUANTITY | 0.99+ |
Informatica World | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Ali | PERSON | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
data3.org | OTHER | 0.99+ |
Informatica | ORGANIZATION | 0.99+ |
80 plus sessions | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
two years ago | DATE | 0.99+ |
one billion workers | QUANTITY | 0.99+ |
Google Cloud | ORGANIZATION | 0.98+ |
80 customers | QUANTITY | 0.98+ |
Francois Ajenstat | PERSON | 0.98+ |
Nissan North America | ORGANIZATION | 0.98+ |
billions | QUANTITY | 0.98+ |
second thing | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
first | QUANTITY | 0.97+ |
three | QUANTITY | 0.97+ |
Infomatica | ORGANIZATION | 0.97+ |
Anil | PERSON | 0.97+ |
one thing | QUANTITY | 0.97+ |
two different ways | QUANTITY | 0.96+ |
Today | DATE | 0.96+ |
Lego | ORGANIZATION | 0.96+ |
20th year | QUANTITY | 0.96+ |
three things | QUANTITY | 0.96+ |
CDO | TITLE | 0.96+ |
ten trillion transactions | QUANTITY | 0.96+ |
Palo Alto, California | LOCATION | 0.95+ |
Venture Capital | ORGANIZATION | 0.94+ |
last couple of years | DATE | 0.94+ |
theCUBE | ORGANIZATION | 0.93+ |
President | PERSON | 0.93+ |
Michael St-Jean, Red Hat Storage | Dell Technologies World 2019
(funky music) >> Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2019, brought to you by Dell Technologies and its ecosystem partners. >> Welcome to theCUBE. Day three of our live coverage from Dell Technologies World 2019 continues. Lisa Martin with my co-host Stu Miniman, and we're welcoming to theCUBE for the first time Michael St-Jean, Principal Marketing Manager for Red Hat Storage. Michael, welcome. >> Thanks Lisa. Hi Stu. >> So day three, and this event is still pretty loud around us. This has, we're hearing, upwards of fifteen thousand people. A lot of partners. Give us your perspective on Dell Technologies World 2019. >> I got to tell you, this is an awesome show. The energy, not just in the sessions but out on the show floor as well, is amazing. And some of the conversations that we've been having out there are around things like emerging technologies, emerging workflows around artificial intelligence, machine learning, things like that. And the whole adoption around hybrid cloud really speaks to all of the things that we're doing, the initiatives that we're leading at Red Hat. So it's a great validation of all of the things that we've been working on for the past 10, 15, 20 years. >> And you've had a long-standing relationship with Dell. >> Oh yeah, absolutely. >> 18 years or so? >> Yeah, we've had not just a long relationship but a very collaborative relationship with Dell over the past 18 years. If you take a look at some of the initiatives that we've been working on, we have ready architectures around OpenStack, around OpenShift. We're highlighting a few things here around Microsoft SQL Server, around SAP HANA. And actually, we're talking a lot around OpenShift and a ready architecture that we've developed, with architecture guides and deployment guides all around OpenShift and OpenShift Container Storage for Dell hardware.
And actually, next week at our Red Hat Summit event, you should really take a look on Wednesday morning at our keynote. Our EVP Paul Cormier will be talking about some great, new, very interesting initiatives that we've been working with Dell on. >> Alright, well Michael, I'm excited we're going to have theCUBE at Red Hat Summit in Boston. It's our sixth year there. I'll be one of the hosts there. John Walls will be there with me. We're going to have Paul Cormier on the program. (laughs) Jim Whitehurst hacking the keynote. It's actually not a secret that Satya Nadella and Ginni Rometty will both be up on the main stage there. And just from my perspective, you were talking about hybrid cloud. As you said, I've been to Red Hat Summit for many years. That hybrid cloud, that adoption, with both OpenStack at the infrastructure layer and OpenShift up at the application layer. Something we've been hearing for years, and you're right, the general themes seem to echo and resonate here with what I've been hearing at Red Hat. Can you help expand a little bit on those conversations you're having here? I'd love you to talk about some of that app modernization and analytics that are going on there. How does that fit into the ready architectures that Dell's offering? >> Sure. Well, I represent our storage business unit. So a lot of times, the conversations I'm having over there at the booth are kind of revolving around storage and storage growth. How data is expanding, how do we deal with the scalability of that? How do we deal with persistence of storage in containers for stateful applications, things of that nature. But really, at the end of the day, as I'm listening to some of the other conversations that my colleagues are having over there, it's really about how do we get work done? How do we now move into these areas where we need that cloud-like experience, not just in a public cloud or even in a private cloud but everywhere that we touch infrastructure.
We need to have that simplified cloud-like experience. >> So just a point on your subject area. Talk about containerization and what's happening with the storage pieces. Give us that layer between the infrastructure layer, because I believe the t-shirt I saw said "Linux is containers, containers are Linux." So Linux has lived on Dell hardware for a long time. But is there anything that users should understand about the differentiation between whether they were bare metal or virtualized in the past and containerized environments today? >> Yeah, well I like to say that you can't spell Red Hat without storage. (laughs) I don't know that that's particularly true but (laughing) >> It sells good. >> It sells good. Yeah, so storage is near and dear to my heart, but really at the end of the day, you can't have storage sitting on an island. It has to integrate and be collaborative with the rest of the portfolio that we're expanding out for our customers, solving real issues, real problems. And so we've been watching industry trends, and certainly these are things that, from an industry perspective, we've been looking at over the past five, 10 years, so nothing new, but we see the evolution of certain things. For example, developers and data analysts, data scientists: these people are really charged with going out there and making dramatic differences, transforming their companies, their organizations. And that transformational application and service development, or bringing back insights on data, is really integral to a company's ability to transform or differentiate in the industry. They have to be much, much more agile. And it seems that they are more and more taking over a lot of the role where we would normally see traditional I.T. managers making the purchasing decisions. A lot of the industry trends show that these folks, developers and data analysts, are actually making some of those I.T. decisions now. And of course, everything is really being developed as cloud native.
So we see cloud native as being more of the new norm. And if you look at the expansion of data, Lisa Spellman a couple of days ago said, "Hey look. We've seen data double in the past two years, but we're only using two percent of that data." >> Two percent? >> Two percent. >> Wow, that's not very much. >> Yeah. And IDC mentioned that the data sphere has now grown to over 33 zettabytes. A zettabyte is a billion gigabytes. So put that into perspective. Alright, 33 zettabytes. By 2025, they project that we're going to grow to 175 zettabytes. How can we make better use of that data? A lot of that data is coming from IoT-type applications. You look at trends, traffic trends and how they might be correlated to weather activity or other events that are going on, or archeological digs, or all sorts of information that is brought back. How do we make the best use of that information? And so the need for scalability in a hybrid cloud environment has become more and more of a key industry trend as the data sphere continues to grow. And I think across all three of those, that's really driving this need for hyperconvergence, and not just hyperconvergence in the traditional sense. We've seen hyperconvergence in the field for probably about five, 10 years now. But initially it was kind of a niche play and it was based on appliances. Well, over the past two years, you've seen the Gartner reports on hyperconvergence really talking about how it is moving and evolving to more of a software-defined nature. And in fact, in the past Magic Quadrant around hyperconvergence, you see Red Hat show up. Something that is probably not well known is that Red Hat has hyperconverged offerings. It's something that we didn't get into just because the analysts were suggesting it.
We had customers come to us who were trying to put together Red Hat Enterprise Linux, Red Hat Virtualization, storage, et cetera, with varying degrees of success, because they were doing it more or less as a project. And so we took it upon ourselves to develop that, put it into a product, and build it out with things like Ansible for deployment management. We have dedupe and compression with our Virtual Data Optimizer, virtual GPUs, et cetera. So we're really in that space now too. >> Yeah, Michael, from our standpoint it was a natural extension. If you look at what hyperconverged was, it was simplification, and it had to be tight integration down at the OS level or the virtualization level. As a matter of fact, when we first wrote our research on it, we called it server SAN, because it was the benefits of a storage area network but built at the server level. So we looked at those OS manufacturers. Now I have to admit, I called out VMware and Microsoft as the ones that I considered the biggest. But it's a natural fit that Red Hat would grow out of that environment, and if you look at the leaders in the marketplace today, we're here, VMware is here, their software is the piece. Techtonic has transitioned to be a software company. So yeah, welcome to the party. It's been a fun ride to watch that over the last five years. >> Yeah, absolutely. >> So let's talk about customers and this spirit of collaboration. You just mentioned the entrance into HCI being really driven by the voice, the actions, and the needs of Red Hat customers. You have three major pillars, themes, that you have been delivering at Dell Technologies World. Talk to us a little bit about this and how your customers are helping to drive what you're delivering here and what you'll be delivering in the future. >> Yes, certainly. I mean, that's the whole open source model.
And we don't just contribute to the open source community; we develop enterprise-grade infrastructure solutions for customers based on the open source way. And so essentially, as I think of these market trends I was talking about, it's not that we're leading them or that we're following them. We're tightly integrated with them, because all of these industry trends are being formulated as we're in progress. It's a great opportunity for Red Hat to really express what we can do with our customers, with our partners, our developers, the folks that we have on our staff who are working directly in the community. Most products that we work on, we're the number one contributor for. So it's a very special opportunity for us. I would say from a storage perspective, what we've really focused on this year is around three main pillars. One is around data portability for those application portability projects that we see in OpenShift. So, being able to offer enterprise-grade persistent storage for stateful applications that are running in these containerized environments. Another area is around hybrid cloud scalable storage. Being able to scale that storage to hundreds of petabytes is kind of a big deal (laughs), especially as we see a lot of the workloads that we've been working with customers on around data analytics and now artificial intelligence and machine learning. Those data lake-type projects where, by using OpenStack or OpenShift, we're able to do multi-tenant workload isolation of the work that all of these people are doing while having a shared data context underneath with Red Hat storage. And then the third is around hyperconvergence. I think we've touched on that already. >> Yeah, so Michael, before letting you go I have to touch on the hot thing that everybody needs to understand what's going on. The ripple that will be felt throughout the industry.
And I'm not talking about a certain 34 billion dollar pending acquisition. (laughs) For most of my career there has been a certain constant, a logo that I would see at every conference, and that's Red Hat's. I got my first one, I don't know, 15, 16 years ago. So the Shadowman has been deprecated. There's a new Red Hat logo. >> Oh yeah, yeah. We just brought out the new logo today. A great segue. Actually, it was last night: they pulled down the old logos and put the new logos on the buildings, pretty much around the world. I think it's May Day in Europe, so maybe some of that will happen tomorrow, or, trying to think of what time it is, probably tonight. So yeah, it's a great new logo. Our old logo was around for 19 years, since 2000. And the change came from a lot of feedback from customers, but also from people who didn't know Red Hat, didn't know what we did. And quite honestly, some of them said that Shadowman looked a little sneaky. (laughing) >> I guess given the rise of all those cyber challenges, maybe they're right. >> (laughs) So we have a new logo, just launched today. We're very proud of it, and we're looking forward to working with everybody in the industry and going forward with all these new, wonderful opportunities that we have. >> I look forward to pointing out to all the vendors that they're now using the old Red Hat logo, just like they do for every other vendor in this space when it changes. >> As of how many hours ago. (laughing) >> Well, it'll be interesting to see and hear what Stu and team uncover at the summit next week in terms of the impact of this brand. We thank you so much for your time Michael, >> Absolutely. >> joining Stu and me on theCUBE. I guess it is just about the end of day three. It's hard to tell, right, it's all blending together. (laughs) Well, we thank you for your time and your insight. >> Thank you very much, and see you next week, Stu. >> Exactly.
For Stu Miniman, I am Lisa Martin, and you're watching theCUBE live from day three of our coverage of Dell Technologies World 2019. Thanks for watching. (light music)
SUMMARY :
Live from Dell Technologies World 2019 in Las Vegas, Lisa Martin and Stu Miniman welcome Michael St-Jean, Principal Marketing Manager for Red Hat Storage, to theCUBE for the first time. He describes Red Hat's long, collaborative partnership with Dell, including ready architectures for OpenStack, OpenShift, Microsoft SQL Server, and SAP HANA on Dell hardware. The conversation covers persistent storage for stateful containerized applications, hybrid cloud storage that scales to hundreds of petabytes for analytics and AI workloads, and Red Hat's software-defined hyperconverged offerings, set against the growth of the data sphere to over 33 zettabytes, of which only two percent is used today. The segment closes with a look ahead to Red Hat Summit the following week, Paul Cormier's keynote, and the new Red Hat logo that retires the Shadowman.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Spellman | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jim Whitehurst | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Michael | PERSON | 0.99+ |
Ginni Rometty | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Paul Cormier | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Michael St-Jean | PERSON | 0.99+ |
two percent | QUANTITY | 0.99+ |
Wednesday morning | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Two percent | QUANTITY | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
sixth year | QUANTITY | 0.99+ |
175 Zetabytes | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
33 Zettabytes | QUANTITY | 0.99+ |
next week | DATE | 0.99+ |
Red Hat Summit | EVENT | 0.99+ |
hundreds of petabytes | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
Techtonic | ORGANIZATION | 0.99+ |
2025 | DATE | 0.99+ |
15 | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
SAP HANA | TITLE | 0.99+ |
Red Hat Storage | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
fifteen thousand people | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
today | DATE | 0.98+ |
tonight | DATE | 0.98+ |
over 33 Zettabytes | QUANTITY | 0.98+ |
Red Hat Enterprise Linux | TITLE | 0.98+ |
third | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Dell Technologies World 2019 | EVENT | 0.98+ |
last night | DATE | 0.98+ |
18 years | QUANTITY | 0.97+ |
three | QUANTITY | 0.97+ |
first one | QUANTITY | 0.97+ |
IDC | ORGANIZATION | 0.97+ |
Red Hat | TITLE | 0.96+ |
a billion gigabytes | QUANTITY | 0.96+ |
day three | QUANTITY | 0.96+ |
19 years | QUANTITY | 0.95+ |
VMware | ORGANIZATION | 0.95+ |
first | QUANTITY | 0.95+ |
Day three | QUANTITY | 0.93+ |
2019 | DATE | 0.93+ |
16 years ago | DATE | 0.93+ |
Gardner | PERSON | 0.92+ |
15 | QUANTITY | 0.92+ |
2000 | DATE | 0.91+ |
Chhandomay Mandal, Dell EMC | Dell Technologies World 2019
(upbeat music) >> Live from Las Vegas, it's theCUBE covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners. >> Welcome back everyone to theCUBE's live coverage of Dell Technologies World here in Las Vegas, Nevada. I'm your host, Rebecca Knight, along with my co-host, Dave Vellante. We are joined by Chhandomay Mandal, the Director of Solutions Marketing for Dell EMC. Thanks so much for coming on theCUBE. >> Happy to be here. >> Direct from Boston. This is a Boston panel, I love it. >> Yes, and we were on the same flight yesterday. >> (laughing) There you go! >> Ah, so half of Hopkinton. >> Yeah. So, we're here at Dell Technologies World, but you're here to talk to us about SAP. Explain to our viewers a little bit about the connection between your companies. >> Sure, so SAP connects a lot of our customers. They are running their ERP, CRM, digital procurement, HR systems, and many other workloads on SAP, and we, Dell Technologies, as a company, have a portfolio of solutions to support SAP workloads. So, that's the big connection. SAP and Dell EMC, we are big partners, and we work hand in hand as well. >> Talk a little bit about what SAP customers are doing. You know, everybody knows the stories of SAP multi-year implementations, very complicated, although driving business value, but today people want to be more agile: cloud, and Hana, which has been around now for quite a number of years. SAP is obviously pushing hard for a number of reasons. What are you seeing in the customer base? >> Yeah, SAP customers are on a journey. As you mentioned, SAP landscape implementations are complex. In fact, in 2016, greater than fifty percent of SAP landscapes were running on Oracle. SAP has come up with the in-memory database, SAP Hana, and there is a mandate that by 2025, customers need to be running on SAP Hana to run any SAP workload.
So, customers need to go through that transition, and as data explodes from IoT, big data, blockchain, and next-gen intelligent applications, they are driving a lot of analytics, and SAP has come up with a platform called SAP Leonardo for machine learning. So, customers are trying to consolidate their old SAP landscapes on an agile, modern infrastructure. They are planning to migrate all the older databases to SAP Hana. At the same time, they are looking into deploying SAP Leonardo to take advantage of IoT, AI, blockchain, all those things. >> So SAP is dangling the carrot. With Hana, it's in-memory, performance, efficiency. With Leonardo, it's the promise of machine intelligence, but there are challenges in migrating off of Oracle. How are customers dealing with that? Are you guys in a position to help with the partnership with SAP? Can you talk about that a little bit? >> Yes, SAP implementations, as you know, are fairly complex and take many months, even years, and customers have been running SAP for a long time, so their challenge is, "How do we keep our businesses running while we transition from what we have to these SAP Hana-based deployments?" They are looking into modern infrastructures that will be able to consolidate all of their applications with the same SLAs, and at the same time, when they migrate one application after the next to SAP Hana, that platform should be able to scale up and deliver all the SLAs. So, refactoring what they have onto SAP Hana is really big for all of our customers, along with how to have a better-performing platform, how to deliver agility and simplification, as well as lower TCO. These are the projects that CIOs are running for our customers. >> So, as we know, simpler is always better. Can you talk about some of the ROI? What are companies actually seeing in terms of these benefits? >> So, let's take specific examples. Dell EMC PowerMax has been the backbone for running SAP applications for a long time.
Our previous generations, VMAX and VMAX All Flash, and now our PowerMax, have the highest scalability for SAP Hana. It can actually run 162 SAP Hana nodes on a single array, but that's not the end game. The thing is, it can consolidate traditional SAP workloads, SAP Hana, as well as other mixed workloads while delivering the same performance and meeting the SLAs, with its built-in machine learning capabilities. Now, what does that translate to? We have several customers seeing benefits out of this. For example, for a big sports equipment manufacturer, when they moved to this platform, their software quality assurance process, which used to take like ten days on the old infrastructure, now runs on this new platform in two days. That's literally an eighty percent improvement, because of the higher performance and the greater consolidation that they were able to achieve. So that's one example just from the performance perspective, but if you look at consolidation and being simpler to run, there are other examples I can walk you through. >> So, I want to double-click on that, because every storage company wants to partner with SAP, target that stuff, because Oracle's not that friendly these days. They have their own hardware, right? They're trying to elbow you out with Exadata. So, talk a little bit more about the differentiation that Dell EMC brings relative to some of your other storage competitors, specifically within SAP environments. >> Sure, so first, Dell comes in with a portfolio of solutions. As you are mentioning, these are fairly complex deployments, and customers are looking for a trusted partner with professional services, experience, and a portfolio of solutions, not just one solution that fits all. Just to continue on that aspect, I talked about Dell EMC PowerMax.
It's great for consolidation, for running Hana and the existing workloads, but then when you look at the next generation of applications, the IoT, AI, blockchain, unstructured world, Dell EMC Isilon is a great platform which has already been in the market and at the forefront of AI workloads. Dell, as a company, offers a portfolio of solutions, and it's not piecemeal. We see the broader picture and plug in all the right pieces, with the right consulting services as well, so that customers can run their applications day in and day out, and transition as well as bring in new deployments like SAP Leonardo. I'll give you one example here. Another big service provider's analytics, their SAP APO runs, used to take like 32 hours of run time, and they could only do them on weekends. Now, with these Dell EMC storage solutions, they are actually down to, give or take, seven hours. So that's like a 78% improvement in terms of how fast they can run these analytics, and this is turning into better decision making for the procurement manager, for the business analyst, and they are able to drive value, time to market, time to value, from all the data that's captured in these SAP landscapes. >> And these are realtime or near-realtime analyses that are going on, right? But then ultimately you have to persist the data, and that's where things like PowerMax come in, and then sometimes you've got to bring it back in, so are you guys architecting high-speed interconnects and InfiniBand and all kinds of crazy stuff? >> All kinds of things-- >> NVMe's... >> And actually, you brought up a very good point. SAP Hana is an in-memory database, so everything is running at memory speed. Why do you need a high-performing array like Dell EMC PowerMax? Guess what? Everything is in memory, but these are all critical databases.
Everything needs to be persisted back to the storage array, and then when something reboots, you cannot do anything until all the data is back from the storage array into memory. So, persisting the data quickly and fast reboots are also necessary. That drives the need for throughput like what PowerMax provides, 150 gigabits per second, and that's where the connection comes in. >> So the throughputs you're describing really were unthinkable five years ago. Can you reflect on that a little bit in terms of what you've seen the technology do that you really couldn't have imagined it doing, even in very recent times? >> In fact, that's a very good point. One of the customers that participated in this ROI study mentioned they had wanted to go to the cloud, the public cloud. At the time they wanted to go, the maximum database size you could get was 2.5 terabytes, and they already had a 4 terabyte SAP database, so there was no way they could go to a public cloud. What they were really looking for was the cloud operating model, so that you can be flexible with your infrastructure and consume as you go, and we were able to help in that transition with all of our solutions. >> Great. So where do you think we're going to be going? I mean, in terms of next year's Dell Technologies World 2020, which will be big just because it's a cool number. What do you think we'll be talking about at next year's conference?
The data that's seven days old, putting them in the right tier, accessing them, and driving the value from your data, from this past amount of data, so that you can make decisions, you can gather intelligence, and take this value to drive competitive differentiation will be where we are. And the form factor? Yes, everybody will be able to do all of this pretty much like realtime in phones or even smaller devices. >> It's the march to 2025, when everybody's going to be off Oracle. >> Well exactly! You're right. >> Oh, that's your mandate. >> Anyway, @dvellante if you want to talk about that. We've got a lot pf research on it, so... >> Exactly. >> Not trivial. >> Well Chhandomay, thank you so much for coming on theCUBE. It was a pleasure having you. >> Same here. Thank you. >> Thank you. >> I'm Rebecca Knight for Dave Vellante. We will have much more of theCUBE's live coverage of Dell Technologies World coming up in just a little bit. (upbeat electronic music)
SUMMARY :
Live from Dell Technologies World 2019 in Las Vegas, Rebecca Knight and Dave Vellante talk with Chhandomay Mandal, Director of Solutions Marketing for Dell EMC, about the Dell EMC and SAP partnership. Mandal explains that customers must move off Oracle-based SAP landscapes to the in-memory SAP Hana database by 2025 while keeping their businesses running, consolidating onto modern infrastructure such as Dell EMC PowerMax, which can run 162 SAP Hana nodes on a single array, with Dell EMC Isilon serving next-generation AI, IoT, and blockchain workloads like SAP Leonardo. Customer results include a software QA process cut from ten days to two and SAP APO analytics runs reduced from 32 hours to roughly seven. The conversation closes with a look toward 2020, when 30 billion devices are predicted to generate 44 zettabytes of data, and the challenge of tiering and extracting value from it all.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Chhandomay | PERSON | 0.99+ |
32 hours | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
2016 | DATE | 0.99+ |
Boston | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
ten days | QUANTITY | 0.99+ |
Chhandomay Mandal | PERSON | 0.99+ |
78% | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
two days | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
4 terabyte | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
eighty percent | QUANTITY | 0.99+ |
Las Vegas, Nevada | LOCATION | 0.99+ |
seven billion people | QUANTITY | 0.99+ |
seven hours | QUANTITY | 0.99+ |
2.5 terabyte | QUANTITY | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
greater than fifty percent | QUANTITY | 0.99+ |
One | QUANTITY | 0.98+ |
next year | DATE | 0.98+ |
five years ago | DATE | 0.98+ |
one example | QUANTITY | 0.98+ |
PowerMax | COMMERCIAL_ITEM | 0.98+ |
SAP Hana | TITLE | 0.97+ |
30 billion devices | QUANTITY | 0.97+ |
Dell Technologies World | ORGANIZATION | 0.97+ |
single array | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
@dvellante | PERSON | 0.96+ |
Dell Technologies World 2020 | EVENT | 0.96+ |
SAP Leonardo | TITLE | 0.96+ |
one solution | QUANTITY | 0.95+ |
Leonardo | ORGANIZATION | 0.93+ |
one application | QUANTITY | 0.92+ |
Exadata | ORGANIZATION | 0.91+ |
Dell Technologies World 2019 | EVENT | 0.91+ |
Dell Technologies World | EVENT | 0.91+ |
today | DATE | 0.89+ |
seven days old | QUANTITY | 0.87+ |
150 gigabits per second | QUANTITY | 0.85+ |
SAP | TITLE | 0.85+ |
Hopkinton | PERSON | 0.85+ |
march | DATE | 0.83+ |
44 zettabytes of data | QUANTITY | 0.83+ |
EMC | COMMERCIAL_ITEM | 0.8+ |
DD, Cisco + Han Yang, Cisco | theCUBE NYC 2018
>> Live from New York, It's the CUBE! Covering theCUBE, New York City 2018. Brought to you by SiliconANGLE Media and its Ecosystem partners. >> Welcome back to the live CUBE coverage here in New York City for CUBE NYC, #CubeNYC. This coverage of all things data, all things cloud, all things machine learning here in the big data realm. I'm John Furrier and Dave Vellante. We've got two great guests from Cisco. We got DD who is the Vice President of Data Center Marketing at Cisco, and Han Yang who is the Senior Product Manager at Cisco. Guys, welcome to the Cube. Thanks for coming on again. >> Good to see ya. >> Thanks for having us. >> So obviously one of the things that has come up this year at the Big Data Show, used to be called Hadoop World, Strata Data, now it's called, the latest name. And obviously CUBE NYC, we changed from Big Data NYC to CUBE NYC, because there's a lot more going on. I heard hallway conversations around blockchain, cryptocurrency, Kubernetes has been said on theCUBE already at least a dozen times here today, multicloud. So you're seeing the analytical world try to be, in a way, brought into the dynamics around IT infrastructure operations, both cloud and on premises. So interesting dynamics this year, almost a dev ops kind of culture to analytics. This is a new kind of sign from this community. Your thoughts? >> Absolutely, I think data and analytics is one of those things that's pervasive. Every industry, it doesn't matter. Even at Cisco, I know we're going to talk a little more about the new AI and ML workload, but for the last few years, we've been using AI and ML techniques to improve networking, to improve security, to improve collaboration. So it's everywhere. >> You mean internally, in your own IT? >> Internally, yeah. Not just in IT, in the way we're designing our network equipment. 
We're storing data that's flowing through the data center, flowing in and out of clouds, and using that data to make better predictions for better networking application performance, security, what have you. >> The first topic I want to talk to you guys about is around the data center. Obviously, you do data center marketing, that's where all the action is. The cloud, obviously, has been all the buzz, people going to the cloud, but Andy Jassy's announcement at VMworld really is a validation that we're seeing, for the first time, hybrid multicloud validated. Amazon announced RDS on VMware on-premises. >> That's right. This is the first time Amazon's ever done anything of this magnitude on-premises. So this is a signal from the customers voting with their wallet that on-premises is a dynamic. The data center is where the data is, that's where the main footprint of IT is. This is important. What's the impact of that dynamic, of data center, where the data is with the option of a cloud. How does that impact data, machine learning, and the things that you guys see as relevant? >> I'll start and Han, feel free to chime in here. So I think those boundaries between this is a data center, and this a cloud, and this is campus, and this is the edge, I think those boundaries are going away. Like you said, data center is where the data is. And it's the ability of our customers to be able to capture that data, process it, curate it, and use it for insight to take decision locally. A drone is a data center that flies, and boat is a data center that floats, right? >> And a cloud is a data center that no one sees. >> That's right. So those boundaries are going away. We at Cisco see this as a continuum. It's the edge cloud continuum. The edge is exploding, right? There's just more and more devices, and those devices are cranking out more data than ever before. Like I said, it's the ability of our customers to harness the data to make more meaningful decisions. 
So Cisco's take on this is the new architectural approach. It starts with the network, because the network is the one piece that connects everything- every device, every edge, every individual, every cloud. There's a lot of data within the network which we're using to make better decisions. >> I've been pretty close with Cisco over the years, since '95 timeframe. I've had hundreds of meetings, some technical, some kind of business. But I've heard that term edge the network many times over the years. This is not a new concept at Cisco. Edge of the network actually means something in Cisco parlance. The edge of the network >> Yeah. >> that the packets are moving around. So again, this is not a new idea at Cisco. It's just materialized itself in a new way. >> It's not, but what's happening is the edge is just now generating so much data, and if you can use that data, convert it into insight and make decisions, that's the exciting thing. And that's why this whole thing about machine learning and artificial intelligence, it's the data that's being generated by these cameras, these sensors. So that's what is really, really interesting. >> Go ahead, please. >> One of our own studies pointed out that by 2021, there will be 847 zettabytes of information out there, but only 1.3 zettabytes will actually ever make it back to the data center. That just means an opportunity for analytics at the edge to make sense of that information before it ever makes it home. >> What were those numbers again? >> I think it was like 847 zettabytes of information. >> And how much makes it back? >> About 1.3. >> Yeah, there you go. So- >> So a huge compression- >> That confirms your research, Dave. >> We've been saying for a while now that most of the data is going to stay at the edge. There's no reason to move it back. The economics don't support it, the latency doesn't make sense. >> The network cost alone is going to kill you. >> That's right. 
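The ratio in that exchange is worth making concrete; a quick back-of-the-envelope check on the figures quoted above (847 zettabytes generated at the edge, 1.3 zettabytes returned):

```python
# Figures quoted from the Cisco study above (2021 projection)
generated_zb = 847     # zettabytes generated at the edge
returned_zb = 1.3      # zettabytes that ever reach the data center

fraction_back = returned_zb / generated_zb
print(round(fraction_back * 100, 2))   # ~0.15 percent ever makes it home
```

Roughly one part in 650, which is why the analytics has to happen where the data is generated.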
>> I think you really want to collect it, you want to clean it, and you want to correlate it before ever sending it back. Otherwise you're sending back useless information, status reports that everything is wonderful. Well, that's not very valuable. And 99.9 percent of the time, "things are going well." >> Temperature hasn't changed. (laughs) >> If it really goes wrong, that's when you want to alert or send more information. How did it go bad? Why did it go bad? Those are the more insightful things that you want to send back. >> This is not just for IoT. I mean, cat pictures moving between campuses cost money too, so why not just keep them local, right? But the basic concepts of networking. This is what I want to get in my point, too. You guys have some new announcements around UCS and some of the hardware and the gear and the software. What are some of the new announcements that you're announcing here in New York, and what does it mean for customers? Because they want to know not only speeds and feeds. It's a software-driven world. How does the software relate? How does the gear work? What's the management look like? Where's the control plane? Where's the management plane? Give us all the data. >> I think the biggest issue starts from this. Data scientists, their task is to explore different data sources, find out the value. But at the same time, IT is somewhat lagging behind. Because as the data scientists go from data source A to data source B, it could be 3 petabytes of difference. IT is like, 3 petabytes? That's only from Monday through Wednesday? That's a huge infrastructure requirement change. So Cisco's way to help the customer is to make sure that we're able to come out with blueprints. Blueprints enabling the IT team to scale, so that the data scientists can work beyond their own laptop. As they work through the petabytes of data that come in from all these different sources, they're able to collaborate well together and make sense of that information.
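Han's collect, clean, and correlate recipe for edge data can be sketched in a few lines. This is a minimal illustration, not Cisco code; the readings, threshold, and function name are hypothetical:

```python
def clean_and_correlate(readings, threshold=1.0):
    """Collect raw sensor readings, drop obvious junk (clean),
    and forward only values that changed meaningfully since the
    last forwarded value (correlate). Everything else stays at
    the edge."""
    forwarded = []
    last_sent = None
    for value in readings:
        if value is None:          # clean: drop missing samples
            continue
        if last_sent is None or abs(value - last_sent) >= threshold:
            forwarded.append(value)   # meaningful change: send it home
            last_sent = value
    return forwarded

raw = [20.0, 20.1, None, 20.2, 25.0, 25.1, 20.0]
print(clean_and_correlate(raw, threshold=1.0))  # [20.0, 25.0, 20.0]
```

Only the meaningful changes (and, in a real deployment, the "how and why it went bad" context) get sent back; steady-state readings stay at the edge.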
Only by scaling with IT helping the data scientists to work at scale, that's the only way they can succeed. So that's why we announced a new server. It's called the C480 ML. It happens to have 8 GPUs from Nvidia inside, helping customers that want those deep learning kinds of capabilities. >> What are some of the use cases on these as products? It's got some new data capabilities. What are some of the impacts? >> Some of the things that Han just mentioned. For me, I think the biggest differentiation in our solution is things that we put around the box. So the management layer, right? I mean, this is not going to be one server and one data center. It's going to be multiple of them. You're never going to have one data center. You're going to have multiple data centers. And we've got a really cool management tool called Intersight, and this is supported in Intersight, day one. And Intersight also uses machine learning techniques to look at data from multiple data centers. And that's really where the innovation is. Honestly, I think every vendor can bend sheet metal around the latest chipset, and we've done the same. But the real differentiation is how we manage it, how we use the data for more meaningful insight. I think that's where some of our magic is. >> Can you add some color to that, in terms of infrastructure for AI and ML, how is it different than traditional infrastructures? So is the management different? The sheet metal is not different, you're saying. But what are some of those nuances that we should understand? >> I think especially for deep learning, multiple scientists around the world have pointed out that if you're able to use GPUs, they're able to run the deep learning frameworks faster by roughly two orders of magnitude. So that's part of the reason why, from an infrastructure perspective, we want to bring in the GPUs. But for the IT teams, we didn't want them to just add yet another infrastructure silo just to support AI or ML.
Therefore, we wanted to make sure it fits in with a UCS-managed unified architecture, enabling the IT team to scale but without adding more infrastructure silos just for that new workload. But having that unified architecture helps the IT team to be more efficient and, at the same time, better supports the data scientists. >> The other thing I would add is, again, the things around the box. Look, this industry is still pretty nascent. There are lots of start-ups, there are lots of different solutions, and when we build a server like this, we don't just build a server and toss it over the fence to the customer and say "figure it out." No, we've done validated design guides. With Google, with some of the leading vendors in the space to make sure that everything works as we say it would. And so it's all of those integrations, those partnerships, all the way through our systems integrators, to really understand a customer's AI and ML environment and fine tune it for the environment. >> So is that really where a lot of the innovation comes from? Doing that hard work to say, "yes, it's going to be a solution that's going to work in this environment. Here's what you have to do to ensure best practice," etc.? Is that right? >> So I think some of our blueprints or validated designs are basically enabling the IT team to scale. Scale their storage, scale their CPU, scale their GPU, and scale their network. But do it in a way so that we work with partners like Hortonworks or Cloudera. So that they're able to take advantage of the data lake. And adding in the GPU so they're able to do the deep learning with TensorFlow, with PyTorch, or whatever curated deep learning framework the data scientists need to be able to get value out of those multiple data sources. These are the kinds of solutions that we're putting together, making sure our customers are able to get to that business outcome sooner and faster, not just a-- >> Right, so there's innovation at all altitudes.
There's the hardware, there's the integrations, there's the management. So it's innovation. >> So not to go too much into the weeds, but I'm curious. As you introduce these alternate processing units, what is the relationship between traditional CPUs and these GPUs? Are you managing them differently, kind of communicating somehow, or are they sort of fenced off architecturally? I wonder if you could describe that. >> We actually want it to be integrated, because by having it separated and fenced off, well, that's an IT infrastructure silo. You're not going to have the same security policy or the storage mechanisms. We want it to be unified so it's easier on IT teams to support the data scientists. So therefore, the latest software is able to manage both CPUs and GPUs, as well as having a new file system. Those are the solutions that we're putting forth, so that our IT folks can scale and our data scientists can succeed. >> So IT's managing a logical block. >> That's right. And even for things like inventory management, or going back and adding patches in the event of some security event, it's so much better to have one integrated system rather than silos of management, which we see in the industry. >> So the hard news is basically UCS for AI and ML workloads? >> That's right. This is our first server custom built from the ground up to support these deep learning, machine learning workloads. We partnered with Nvidia, with Google. We announced earlier this week, and the phone is ringing constantly. >> I don't want to say god box. I just said it. (laughs) This is basically the power tool for deep learning. >> Absolutely. >> That's how you guys see it. Well, great. Thanks for coming out. Appreciate it, good to see you guys at Cisco. Again, deep learning dedicated technology around the box, not just the box itself. Ecosystem, Nvidia, good call. Those guys really get the hot GPUs out there. Saw those guys last night, great success they're having. They're a key partner with you guys.
>> Absolutely. >> Who else is partnering, real quick before we end the segment? >> We've been partnering on the software side; we partner with folks like Anaconda, with their Anaconda Enterprise, which data scientists love to use as their Python data science framework. We're working with Google, with their Kubeflow, which is an open source project integrating TensorFlow on top of Kubernetes. And of course we've been working with folks like Cloudera as well as Hortonworks to access the data lake from a big data perspective. >> Yeah, I know you guys didn't get a lot of credit. Google Cloud, we were certainly amplifying it. You guys were co-developing the Google Cloud servers with Google. I know they were announcing it, and you guys had Chuck on stage there with Diane Greene, so it was pretty positive. Good integration with Google can make a >> Absolutely. >> Thanks for coming on theCUBE, thanks, we appreciate the commentary. Cisco here on theCUBE. We're in New York City for theCUBE NYC. This is where the world of data is converging with IT infrastructure, developers, operators, all running analytics for future business. We'll be back with more coverage, after this short break. (upbeat digital music)
SUMMARY :
It's the CUBE! Welcome back to the live CUBE coverage here So obviously one of the things that has come up this year but for the last few years, Not just in IT, in the way we're designing is around the data center. and the things that you guys see as relevant? And it's the ability of our customers to It's the edge cloud continuum. The edge of the network that the packets are moving around. is the edge is just now generating so much data, analytics at the edge Yeah, there you go. that most of the data is going to stay at the edge. I think you really want to collect it, (laughs) Those are the more insightful things and the gear and the software. the data scientists to work the scale, What are some of the use cases on these as products? Some of the things that Han just mentioned. So is the management different? it helps the IT to be more efficient in the space to make sure that everything works So is that really where a lot of the data scientists need to be able to get value There's the hardware, there's the integrations, So not to go too much into the weeds, Those are the solutions that we're putting forth, in the event of some security event, and the phone is ringing constantly. This is basically the power tool for deep learning. Those guys really get the hot GPUs out there. to access the data lake from a big data perspective. the Google Cloud servers with Google. This is where the world of data
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Han Yang | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
New York | LOCATION | 0.99+ |
Diane Greene | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Hortonworks | ORGANIZATION | 0.99+ |
2021 | DATE | 0.99+ |
New York City | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
8 GPUs | QUANTITY | 0.99+ |
847 zettabytes | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
99.9 percent | QUANTITY | 0.99+ |
Monday | DATE | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
3 petabytes | QUANTITY | 0.99+ |
Anaconda | ORGANIZATION | 0.99+ |
Wednesday | DATE | 0.99+ |
DD | PERSON | 0.99+ |
first time | QUANTITY | 0.99+ |
one server | QUANTITY | 0.99+ |
Cloudera | ORGANIZATION | 0.99+ |
Python | TITLE | 0.99+ |
first topic | QUANTITY | 0.99+ |
one piece | QUANTITY | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
'95 | DATE | 0.98+ |
1.3 zettabytes | QUANTITY | 0.98+ |
NYC | LOCATION | 0.98+ |
both | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
Big Data Show | EVENT | 0.98+ |
Caldera | ORGANIZATION | 0.98+ |
two waters | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Chuck | PERSON | 0.97+ |
One | QUANTITY | 0.97+ |
Big Data | ORGANIZATION | 0.97+ |
earlier this week | DATE | 0.97+ |
Intersight | ORGANIZATION | 0.97+ |
hundreds of meetings | QUANTITY | 0.97+ |
CUBE | ORGANIZATION | 0.97+ |
first server | QUANTITY | 0.97+ |
last night | DATE | 0.95+ |
one data center | QUANTITY | 0.94+ |
UCS | ORGANIZATION | 0.92+ |
petabytes | QUANTITY | 0.92+ |
two great guests | QUANTITY | 0.9+ |
Tensorflow | TITLE | 0.86+ |
CUBE NYC | ORGANIZATION | 0.86+ |
Han | PERSON | 0.85+ |
#CubeNYC | LOCATION | 0.83+ |
Strata Data | ORGANIZATION | 0.83+ |
Kubeflow | TITLE | 0.82+ |
Hadoop World | ORGANIZATION | 0.81+ |
2018 | DATE | 0.8+ |
Eric Herzog, IBM | DataWorks Summit 2018
>> Live from San Jose in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We have with us Eric Herzog. He is the Chief Marketing Officer and VP of Global Channels at the IBM Storage Division. Thanks so much for coming on theCUBE once again, Eric. >> Well, thank you. We always love to be on theCUBE and talk to all of theCUBE analysts about various topics, data, storage, multi-cloud, all the works. >> And before the cameras were rolling, we were talking about how you might be the biggest CUBE alum in the sense of you've been on theCUBE more times than anyone else. >> I know I'm in the top five, but I may be number one, I have to check with Dave Vellante and crew and see. >> Exactly and often wearing a Hawaiian shirt. >> Yes. >> Yes, I was on theCUBE last week from CISCO Live. I was not wearing a Hawaiian shirt. And Stu and John gave me a hard time about why I was not wearing a Hawaiian shirt. So I made sure I showed up to the DataWorks show- >> Stu, Dave, get a load. >> You're in California with a tan, so it fits, it's good. >> So we were talking a little bit before the cameras were rolling and you were saying one of the points that is sort of central to your professional life is it's not just about the storage, it's about the data. So riff on that a little bit. >> Sure, so at IBM we believe everything is data driven and in fact we would argue that data is more valuable than oil or diamonds or plutonium or platinum or silver, or anything else. It is the most valuable asset, whether you be a global Fortune 500, whether you be a midsize company or whether you be Herzog's Bar and Grill. So data is what you use with your suppliers, with your customers, with your partners.
Literally everything around your company is really built around the data, so you have to manage it most effectively and make sure, A, it's always performant, because when it's not performant, they go away. As you probably know, Google did a survey that after one or two seconds they go off your website, they click somewhere else, so it has to be performant. Obviously in today's 365, 7 by 24 company it needs to always be resilient and reliable and it always needs to be available, otherwise if the storage goes down, guess what? Your AI doesn't work, your Cloud doesn't work, whatever workload, if you're more traditional, your Oracle, SQL, you know, SAP, none of those workloads work if you don't have a solid storage foundation underneath your data driven enterprise. >> So with that ethos in mind, talk about the products that you are launching, that you newly launched and also your product roadmap going forward. >> Sure, so for us everything really is that storage is this critical foundation for the data driven, multi Cloud enterprise. And as I've said before on theCube, all of our storage software's now Cloud-ified so if you need to automatically tier out to IBM Cloud or Amazon or Azure, we automatically will move the data placement around from one premise out to a Cloud and for certain customers who may be multi Cloud, in this case using multiple private Cloud providers, which happens due to either legal reasons or procurement reasons or geographic reasons for the larger enterprises, we can handle that as well. That's part of it, the second thing is we just announced earlier today an artificial intelligence, an AI reference architecture, that incorporates a full stack from the very bottom, both servers and storage, all the way up through the top layer, then the applications on top, so we just launched that today. >> AI for storage management or AI to run a range of applications? >> Regular AI, artificial intelligence from an application perspective.
So we announced that reference architecture today. Basically think of the reference architecture as your recipe, your blueprint, of how to put it all together. Some of the components are from IBM, such as Spectrum Scale and Spectrum Computing from my division, our servers from our Cloud division. Some are open source, TensorFlow, Caffe, things like that. Basically it gives you what the stack needs to be, and what you need to do in various AI workloads, applications and use cases. >> I believe you have distributed deep learning as an IBM capability, that's part of that stack, is that correct? >> That is part of the stack, it's like in the middle of the stack. >> Is it, correct me if I'm wrong, that's containerization of AI functionality? >> Right. >> For distributed deployment? >> Right. >> In an orchestrated Kubernetes fabric, is that correct? >> Yeah, so when you look at it from an IBM perspective, while we clearly support the virtualized world, the VMwares, the Hyper-Vs, the KVMs and the OVMs, and we will continue to do that, we're also heavily invested in the container environment. For example, one of our other divisions, the IBM Cloud Private division, has announced a solution that's all about private Clouds, you can either get it hosted at IBM or literally buy our stack- >> Rob Thomas in fact demoed it this morning, here. >> Right, exactly. And you could create- >> At DataWorks. >> Private Cloud initiative, and there are companies that, whether it be for security purposes or whether it be for legal reasons or other reasons, don't want to use public Cloud providers, be it IBM, Amazon, Azure, Google or any of the big public Cloud providers, they want a private Cloud and IBM either A, will host it or B, with IBM Cloud Private. All of that infrastructure is built around a containerized environment. We support the older world, the virtualized world, and the newer world, the container world.
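For readers unfamiliar with how storage is consumed in that container world: Kubernetes workloads typically request durable storage through a PersistentVolumeClaim, which a pod then mounts. A minimal sketch; the claim name, size, and storage class are hypothetical, and the class backing any particular vendor's array is deployment-specific:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analytics-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce               # mounted read/write by a single node
  resources:
    requests:
      storage: 100Gi
  storageClassName: block-storage # assumed class provisioned by the array
```

A pod that references this claim in its volumes section keeps its data across container restarts and rescheduling.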
In fact, our storage, allows you to have persistent storage in a container's environment, Dockers and Kubernetes, and that works on all of our block storage and that's a freebie, by the way, we don't charge for that. >> You've worked in the data storage industry for a long time, can you talk a little bit about how the marketing message has changed and evolved since you first began in this industry and in terms of what customers want to hear and what assuages their fears? >> Sure, so nobody cares about speeds and feeds, okay? Except me, because I've been doing storage for 32 years. >> And him, he might care. (laughs) >> But when you look at it, the decision makers today, the CIOs, in 32 years, including seven start ups, IBM and EMC, I've never, ever, ever, met a CIO who used to be a storage guy, ever. So, they don't care. They know that they need storage and the other infrastructure, including servers and networking, but think about it, when the app is slow, who do they blame? Usually they blame the storage guy first, secondarily they blame the server guy, thirdly they blame the networking guy. They never look to see that their code stack is improperly done. Really what you have to do is talk applications, workloads and use cases which is what the AI reference architecture does. What my team does in non AI workloads, it's all about, again, data driven, multi Cloud infrastructure. They want to know how you're going to make a new workload fast AI. How you're going to make their Cloud resilient whether it's private or hybrid. In fact, IBM storage sells a ton of technology to large public Cloud providers that do not have the initials IBM. We sell gobs of storage to other public Cloud providers, both big, medium and small. It's really all about the applications, workloads and use cases, and that's what gets people excited. You basically need a position, just like I talked about with the AI foundations, storage is the critical foundation. 
We happen to be, knocking on wood, let's hope there's no earthquake, since I've lived here my whole life, and I've been in earthquakes, I was in the '89 quake. Literally fell down a bunch of stairs in the '89 quake. If there's an earthquake, as great as IBM storage is, or any other storage or servers, it's crushed. Boom, you're done! Okay, well you need to make sure that your infrastructure, really your data, is covered by the right infrastructure and that it's always resilient, it's always performing and is always available. And that's what IBM Storage is about, that's the message, not about how many gigabytes per second in bandwidth or what's the- Not that we can't spew that stuff when we talk to the right person but in general people don't care about it. What they want to know is, "Oh, that SAP workload took 30 hours and now it takes 30 minutes?" We have public references that will say that. "Oh, you mean I can use eight to ten times less storage for the same money?" Yes, and we have public references that will say that. So that's what it's really about, so storage has really moved on from being a speeds and feeds, numbers sort of thing, and now all the numbers guys are doing AI and Caffe and TensorFlow and all of that, they're all hackers, right? It used to be storage guys who used to do that and to a lesser extent server guys and definitely networking guys. That's all shifted to the software side so you've got to talk the languages. What can we do with Hortonworks? By the way, we were named in Q1 of 2018 as the Hortonworks infrastructure partner of the year. We work with Hortonworks all the time, at all levels, whether it be with our channel partners, whether it be with our direct end users, however the customer wants to consume, we work with Hortonworks very closely and other providers as well in that big data analytics and the AI infrastructure world, that's what we do.
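The two customer claims just quoted reduce to simple arithmetic; a quick check (the 100 TB figure is a hypothetical capacity, used only to illustrate the ratio):

```python
# "That SAP workload took 30 hours and now it takes 30 minutes"
before_minutes = 30 * 60
after_minutes = 30
print(before_minutes / after_minutes)   # 60.0 -> a 60x faster batch window

# "eight to ten times less storage for the same money"
raw_need_tb = 100                       # hypothetical capacity requirement
purchased_tb = (raw_need_tb / 10, raw_need_tb / 8)
print(purchased_tb)                     # (10.0, 12.5) TB actually bought
```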
>> So the containerization side of the IBM AI stack, then the containerization capabilities in Hortonworks Data Platform 3.0, can you give us a sense for how you plan to, or do you plan at IBM, to work with Hortonworks to bring these capabilities, your reference architecture, into more, or bring their environment for that matter, into more of an alignment with what you're offering? >> So we haven't made an exact decision on how we're going to do it, but we interface with Hortonworks on a continual basis. >> Yeah. >> We're working to figure out what's the right solution, whether that be an integrated solution of some type, whether that be something that we do through an adjunct to our reference architecture or some reference architecture that they have, but we always make sure, again, we are their partner of the year for infrastructure, named in Q1, and that's because we work very tightly with Hortonworks and make sure that what we do ties out with them, hits the right applications, workloads and use cases, the big data world, the analytic world and the AI world, so that we're tied off, you know, together to make sure that we deliver the right solutions to the end user, because what matters most is what gets the end users fired up, not what gets Hortonworks or IBM fired up, it's what gets the end users fired up. >> When you're trying to get into the head space of the CIO, and get your message out there, I mean what is it, what would you say is it that keeps them up at night? What are their biggest pain points and then how do you come in and solve them? >> I'd say the number one pain point for most CIOs is application delivery, okay? Whether that be to the line of business, put it this way, let's take an old workload, okay? Let's take that SAP example, that CIO was under pressure because they were trying, in this case it was a giant retailer who was shipping stuff every night, all over the world. Well guess what?
The green undershirts, in the wrong size, went to Paducah, Kentucky, and then one of the other stores, in Singapore, which needed those green shirts, they ended up with shoes, and the reason is, they couldn't run that SAP workload in a couple hours. Now they run it in 30 minutes. It used to take 30 hours. So since they're shipping every night, you're basically missing a cycle, essentially, and you're not delivering the right thing from a retail infrastructure perspective to each of their nodes, if you will, to their retail locations. So they care about what do they need to do to deliver to the business the right applications, workloads and use cases on the right timeframe, and they can't go down, people get fired for that at the CIO level, right? If something goes down, the CIO is gone, and obviously for certain companies that are more in the modern mode, okay? People who are delivering stuff and their primary transactional vehicle is the internet, not retail, not through partners, not through people like IBM, but their primary transactional vehicle is a website, if that website is not resilient, performant and always reliable, then guess what? They are shut down and they're not selling anything to anybody, which is not true if you're Nordstroms, right? Someone can always go into the store and buy something, right, and figure it out? Almost all old retailers have not only a connection to core but they literally have a server and storage in every retail location, so if the core goes down, guess what, they can transact. In the era of the internet, you don't do that anymore. Right? If you're shipping only on the internet, you're shipping on the internet, so whether it be a new workload, okay? An old workload if you're doing the whole IOT thing. For example, I know a company that I was working with, it's a giant, private mining company. They have those giant, like three story dump trucks you see on the Discovery Channel.
Those things cost them a hundred million dollars, so they have five thousand sensors on every dump truck. It's a fricking dump truck, but guess what, they've got five thousand sensors on there so they can monitor and make sure they take proactive action, because if that truck goes down, whether these be diamond mines or uranium mines or whatever it is, it costs them hundreds of millions of dollars to have a thing go down. That's, if you will, trying to take it out of the traditional high tech area, which we all talk about, whether it be Apple or Google or IBM, okay great, now let's put it to some other workload. In this case, this is the use of IoT, in a big data analytics environment with AI-based infrastructure, to manage dump trucks. >> I think you're talking about what's called "digital twins" in a networked environment for materials management, supply chain management and so forth. Are those requirements growing in terms of industrial IoT requirements of that sort, and how does that affect the amount of data that needs to be stored, the sophistication of the AI and the stream computing that needs to be provisioned? Can you talk to that? >> The amount of data is growing exponentially. It's growing at yottabytes and zettabytes a year now, not at just exabytes anymore. In fact, look at everybody on their iPhone or their laptop. I've got a 10GB phone, okay? My laptop, which happens to be a PowerBook, has two terabytes of flash, on a laptop. So just imagine how much data's being generated if you're doing that in a giant factory, whether you be in the warehouse space, whether you be in healthcare, whether you be in government, whether you be in the financial sector. And now with all those additional regulations, such as GDPR in Europe and other regulations across the world about what you have to do with your healthcare data, what you have to do with your finance data, the amount of data being stored just keeps growing.
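The proactive monitoring pattern described above, thousands of sensor readings per truck with alerts raised before a failure, can be sketched as a simple rolling-baseline check. This is a minimal illustration, not anything from IBM's actual stack; the window size, threshold, and readings are invented:

```python
from collections import deque

def make_monitor(window=50, sigma=3.0, min_baseline=10):
    """Return a checker that flags readings far outside the recent rolling mean."""
    history = deque(maxlen=window)

    def check(reading):
        anomalous = False
        if len(history) >= min_baseline:
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5
            # Flag only large deviations from the recent baseline.
            anomalous = std > 0 and abs(reading - mean) > sigma * std
        history.append(reading)
        return anomalous

    return check

# Hypothetical hydraulic-temperature feed: small normal jitter, then a spike.
check = make_monitor()
readings = [70.0 + 0.5 * (i % 2) for i in range(40)] + [250.0]
flags = [check(r) for r in readings]
print(flags[-1])  # True: only the spike is flagged
```

In a real deployment something like this would run per sensor on a streaming platform, but the core idea is the same: compare each new reading against a recent baseline and flag large deviations before the truck fails.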
And then on top of it, quite honestly, from an AI big data analytics perspective, the more data you have, the more valuable it is and the more you can mine it. It's like oil. Suppose the world ran on oil alone, and forget the pollution side, let's assume oil didn't cause pollution. Okay, great, then guess what? You would be using oil everywhere, and you wouldn't be using solar, you'd be using oil, and by the way you'd need more and more and more, and how much oil you have and how you control it would be the power. That right now is the power of data, and if anything it's getting more and more and more. So again, you always have to be able to be resilient with that data, you always have to interact with things, like we do with Hortonworks or other application workloads. Our AI reference architecture is another perfect example of the things you need to do to provide, you know, at the base infrastructure, the right foundation. If you have the wrong foundation to a building, it falls over. Whether it be your house, a hotel, this convention center, if it had the wrong foundation, it falls over. >> Actually, to follow the oil analogy just a little bit further, the more of this data you have, the more PII there is, and the more the workloads need to scale up, especially for things like data masking. >> Right. >> When you have compliance requirements like GDPR, you want to process the data but you need to mask it first. Therefore you need clusters that conceivably are optimized for high volume, highly scalable masking in real time, to feed the downstream applications and to feed the data scientists, you know, data lakes, whatever, and so forth and so on? >> That's why you need things like incredible compute, which IBM offers with the Power platform. And why you need storage that, again, can scale up. >> Yeah.
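Masking before processing, as raised in the GDPR exchange above, usually means replacing direct identifiers with stable pseudonyms so downstream applications and data scientists can still join records without ever seeing the PII. A minimal sketch, where the field names and salt handling are purely illustrative, not from any IBM product:

```python
import hashlib

SALT = b"rotate-me-per-environment"   # illustrative only; real systems manage keys properly
PII_FIELDS = {"name", "email", "card_number"}

def mask_record(record):
    """Replace PII fields with a stable salted hash so records stay joinable."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # Same input always yields the same token, so joins still work.
            masked[key] = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:16]
        else:
            masked[key] = value  # non-PII passes through for analytics
    return masked

r1 = mask_record({"name": "Ada", "email": "ada@example.com", "amount": 42.0})
r2 = mask_record({"name": "Ada", "email": "ada@example.com", "amount": 7.5})
print(r1["name"] == r2["name"])  # True: same person masks to the same token
```

The point of the exchange above is that this step sits in the hot path: every record has to be masked before anything downstream can touch it, which is why the masking cluster itself has to scale.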
>> Can get as big as you need it to be. For example, in our reference architecture we use what we call Spectrum Scale, which is a big data analytics workload performance engine; it's multi-threaded, multi-tasking. In fact, one of the largest banks in the world, if you happen to bank with them, your credit card fraud detection is being done on our stuff, okay? But at the same time we have what's called IBM Cloud Object Storage, which is an object store. You want to take every one of those searches for fraud, and when they find out that no one stole your MasterCard or Visa, you still want to put it in there, because then you mine it later and see patterns of how people are trying to steal stuff, because it's all being done digitally anyway. You want to be able to do that. So you A, want to handle it very quickly and resiliently, but then you want to be able to mine it later, as you said, mining the data. >> Or do high value anomaly detection in the moment, to be able to tag the more anomalous data that you can then sift through later, or maybe act on in the moment for real-time mitigation. >> Well that's highly compute intensive, it's AI intensive and it's highly storage intensive on the performance side, and then what happens is you store it all for, let's say, further analysis so you can tell people, "When you get your Amex card, do this and they won't steal it." Well, the only way to do that is you use AI on this ocean of data, where you're analyzing all this fraud that has happened, to look at patterns, and then you tell me, as a consumer, what to do. Whether it be in the financial business, in this case the credit card business, healthcare, government, manufacturing. One of our resellers actually developed an AI-based tool that can scan boxes and cans for faults on an assembly line, and has actually sold it to a beer company and to a soda company, so that instead of people looking at the cans, like you see on the Food Channel, to pull it off, guess what? It's all automatically done.
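The fraud flow described above, score in the moment, tag the anomalies, and retain everything for later pattern mining, can be sketched like this. The scoring rule and thresholds are deliberately crude, invented for illustration, and stand in for whatever model a real bank would run:

```python
archive = []  # everything is retained, flagged or not, for later pattern mining

def score_transaction(txn, typical_amount=100.0):
    """Crude in-the-moment score: distance from a customer's typical spend."""
    return abs(txn["amount"] - typical_amount) / typical_amount

def process(txn, threshold=5.0):
    txn = dict(txn, fraud_score=score_transaction(txn))
    txn["flagged"] = txn["fraud_score"] > threshold  # tag the anomaly in the moment
    archive.append(txn)  # keep it either way so patterns can be mined later
    return txn

process({"card": "1234", "amount": 80.0})
flagged = process({"card": "1234", "amount": 9000.0})
print(flagged["flagged"])  # True
print(len(archive))        # 2
```

The design point matches the conversation: the fast path (score and tag) and the slow path (mine the full archive for patterns) are two different workloads hitting two different storage profiles, a performance engine and an object store.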
There are no people pulling the can off, saying "Oh, that can is damaged," and looking at it, and by the way, with people sometimes they slip through. Now, using cameras and this AI-based infrastructure from IBM, with our storage underneath the hood, they're able to do this. >> Great. Well Eric, thank you so much for coming on theCUBE. It's always been a lot of fun talking to you. >> Great, well thank you very much. We love being on theCUBE and appreciate it, and hope everyone enjoys the DataWorks conference. >> We will have more from DataWorks just after this. (techno beat music)
David Hatfield, Pure Storage | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. >> Welcome back to theCUBE, we are live at Pure Storage Accelerate 2018 in San Francisco. I'm Lisa Prince Martin with Dave The Who Vellante, and we're with David Hatfield, or Hat, the president of Pure Storage. Hat, welcome back to theCUBE. >> Thank you Lisa, great to be here. Thanks for being here. How fun is this? >> The orange is awesome. >> David: This is great. >> Super fun. >> Got to represent, we love the orange here. >> Always a good venue. >> Yeah. >> There's not enough orange. I'm not as blind yet. >> Well it's the Bill Graham, I mean it's a great venue. But not generally one for technology conferences. >> No it's not. You guys are not conventional. >> So far so good. >> But then-- >> Thanks for keeping us out of Las Vegas for a change. >> Over my dead body, I think I've said once or twice before. >> Speaking of-- Love our customers in Vegas. Unconventional, you've said recently this is not your father's storage company. What do you mean by that? >> Well we just always want to do things a little bit less conventional. We want to be modern. We want to do things differently. We want to create an environment where it's community so our customers and our partners, prospective customers can get a feel for what we mean by doing things a little bit more modern. And so the whole orange thing is something that we all opt in for. But it's more about really helping transform customer's organizations think differently, think out of the box, and so we wanted to create a venue that forced people to think differently, and so the last three years, one was on Pier 48, we transformed that. Last year was in a big steelworkers, you know, 100-year-old steel manufacturing and shipbuilding yard, which is now long since gone.
But we thought the juxtaposition of that, big iron rust relative to what we're doing from a modern solid state perspective, was a good metaphor. And here it's about making music, and how can we together as an industry, develop new things and develop new songs and really help transform organizations. >> For those of you who don't know, spinning disk is known as spinning rust, right? Eventually, so very clever sort of marketing. >> The more data you put on it the slower it gets and it gets really old and we wanted to get rid of that. We wanted to have everything be online in the data center, so that was the point. >> So Hat, as you go around and talk to customers, they're going through a digital transformation, you hear all this stuff about machine intelligence, artificial intelligence, whatever you want to call it, what are the questions that you're getting? CEO's, they want to get digital right. IT professionals are wondering what's next for them. What kind of questions and conversations are you having? >> Yeah, I think it's interesting, I was just in one of the largest financial services companies in New York, and we met with the Chief Data Officer. The Chief Data Officer reports into the CEO. And he had right next to him the CIO. And so they have this development of a recognition that moving into a digital world and starting to harness the power of data requires a business context. It requires people that are trying to figure out how to extract value from the data, where does our data live? But that's created the different organization. It drives devops. I mean, if you're going to go through a digital transformation, you're going to try and get access to your data, you have to be a software development house. And that means you're going to use devops. 
And so what's happened from our point of view over the last 10 years is that those folks have gone to the public cloud because IT wasn't really meeting the needs of what devops needed and what the data scientists were looking for, and so what we wanted to create not only was a platform and a tool set that allowed them to bridge the gap, make things better today dramatically, but have a platform that gets you into the future, but also create a community and an ecosystem where people are aware of what's happening on the devops side, and connect the dots between IT and the data scientists. And so we see this exploding as companies digitize, and somebody needs to be there to help kind of bridge the gap. >> So what's your point of view and advice to that IT ops person who may be really good at provisioning LUNs, should they become more dev like? Maybe ops dev? >> Totally, I mean I think there's a huge opportunity to kind of advance your career. And a lot of what Charlie talked about and a lot of what we've been doing for nine years now, coming up on nine years, is trying to make our customers heroes. And if data is a strategic asset, so much so they're actually going to think about putting it on your balance sheet, and you're hiring Chief Data Officers, who knows more about the data than the storage and infrastructure team. They understand the limitations that we had to go through over the past. They've recognized they had to make trade offs between performance and cost. And in a shared accelerated storage platform where you have tons of IO and you can put all of your applications (mumbles) at the same time, you don't have to make those trade offs. But the people that really know that are the storage leads. And so what we want to do is give them a path for their career to become strategic in their organization. Storage should be self driving, infrastructure should be self driving.
These are not things that in a boardroom people care about, gigabytes and petabytes and petaflops, and whatever metric. What they care about is how they can change their business and have a competitive advantage. How they can deliver better customer experiences, how they can put more money on the bottom line through better insights, etc. And we want to teach and work with and celebrate data heroes. You know, they're coming from the infrastructure side and connecting the dots. >> So the value of that data is obviously something that's new in terms of it being front and center. So who determines the value of that data? You would think it's the business line. And so there's got to be a relationship between that IT ops person and the business line. Which maybe heretofore was somewhat adversarial. Business guys are calling, the clients are calling again. And the business guys are saying, oh IT, they're slow, they say no. So how are you seeing that relationship changing? >> It has to come together because, you know, it does come down to what are the insights that we can extract from our data? How much more data can we get online to be able to get those insights? And that's a combination of improving the infrastructure and making it easy and removing those trade offs that I talked about. But also being able to ask the right questions. And so a lot has to happen. You know, we have one of the leaders in devops speaking tomorrow to go through, here's what's happening on the software development and devops side. Here's what the data scientists are trying to get at. So our IT professionals understand the language, understand the problem set. But they have to come together. We have Dr. Kate Harding as well from MIT, who's brilliant and thinking about AI. Well, only .5% of all the data has actually been analyzed. You know, it's all in these piggy banks as Burt talked about onstage.
And so we want to get rid of the piggy banks and actually create it and make it more accessible, and get more than .5% of the data to be usable. You know, bring as much of that online as possible, because it's going to provide richer insights. But up until this point storage has been a bottleneck to making that happen. It was either too costly or too complex, or it wasn't performing enough. And with what we've been able to bring through solid state natively into sort of this platform is an ability to have all of that without the trade offs. >> That number of half a percent, or less than half a percent of all data in the world actually able to be analyzed, is really really small. I mean we talk about, often you'll hear people say data's the lifeblood of an organization. Well, it's really a business catalyst. >> David: Oil. >> Right, but catalysts need to be applied to multiple reactions simultaneously. And that's what a company needs to be able to do to maximize the value. Because if you can't do that there's no value in that. >> Right. >> How are you guys helping to kind of maybe abstract storage? We hear a lot, we heard the word simplicity a lot today from Mercedes Formula One, for example. How are you partnering with customers to help them identify, where do we start narrowing down to find those needles in the haystack that are going to open up new business opportunities, new services for our business? >> Well I think, first of all, we recognize at Pure that we want to be the innovators. We want to be the folks that are, again, making things dramatically better today, but really future-proofing people for what applications and insights they want to get in the future. Charlie talked about the three-legged stool, right? There's innovation that's been happening in compute, there's innovation that has been happening over the years in networking, but storage hasn't really kept up.
It literally was sort of the bottleneck that was holding people back from being able to feed the GPUs in the compute that's out there to be able to extract the insights. So we wanted to partner with the ecosystem, but we recognized an opportunity to remove the primary bottleneck, right? And if we can remove the bottleneck and we can partner with firms like NVIDIA and firms like Cisco, where you integrate the solution and make it self driving so customers don't have to worry about it. They don't have to make the trade offs in performance and cost on the backend, but it just is easy to stamp out, and so it was really great to hear Service Now and Keith walk through his story where he was able to get a 3x level improvement and something that was simple to scale as their business grew without having an impact on the customer. So we need to be part of an ecosystem. We need to partner well. We need to recognize that we're a key component of it because we think data's at the core, but we're only a component of it. The one analogy somebody shared with me when I first started at Pure was you can date your compute and networking partner but you actually get married to your storage partner. And we think that's true because data's at the core of every organization, but it's making it available and accessible and affordable so you can leverage the compute and networking stacks to make it happen. >> You've used the word platform, and I want to unpack that a little bit. Platform versus product, right? We hear platform a lot today. I think it's pretty clear that platforms beat products and that allows you to grow and penetrate the market further. It also has an implication in terms of the ecosystem and how you partner. So I wonder if you could talk about platform, what it means to you, the API economy, however you want to take that.
>> Yeah, so, I mean a platform, first of all I think if you're starting a disruptive technology company, being hyper-focused on delivering something that's better and faster in every dimension, it had to be 10x in every dimension. So when we started, we said let's start with tier one block, mission critical data workloads with a product, you know our Flash Array product. It was the fastest growing product in storage I think of all time, and it still continues to be a great contributor, and it should be a multi-billion dollar business by itself. But what customers are looking for is that same consumer like or cloud like experience, all of the benefits of that simplicity and performance across their entire data set. And so as we think about providing value to customers, we want to make sure we capture as much of that 99.5% of the data and make it online and make it affordable, regardless of whether it's block, file, or object, or regardless if it's tier one, tier two, and tier three. We talk about this notion of a shared accelerated storage platform because we want to have all the applications hit it without any compromise. And in an architecture that we've provided today you can do that. So as we think about partnering, we want to go, in our strategy, we want to go get as much of the data as we possibly can and make it usable and affordable to bring online and then partner with an API first open approach. There's a ton of orchestration tools that are out there. There's great automation. We have a deep integration with ACI at Cisco. Whatever management and orchestration tools that our customer wants to use, we want to make those available. And so, as you look at our Flash Array, Flash Deck, AIRI, and Flash Blade technologies, all of them have an API open first approach. 
And so a lot of what we're talking about with our cloud integrations is how do we actually leverage orchestration, and how do we now allow and make it easy for customers to move data in and out of whatever clouds they may want to run from. You know, one of the key premises of the business was, with this exploding data growth, whether it's 30, 40, 50 zettabytes of data over the next, you know, five years, there's only two and a half or three zettabytes of internet connectivity in that same period of time. Which means that companies, and there's not enough data platform or data resources to actually handle all of it, so the temporal nature of the data, where it's created, what a data center looks like, is going to be highly distributed, and it's going to be multi-cloud. And so we wanted to provide an architecture and a platform that removed the trade offs and the bottlenecks while also being open and allowing customers to take advantage of Red Shift and Red Hat and all the container technologies and platform as a service technologies that exist that are completely changing the way we can access the data. And so we're part of an ecosystem and it needs to be API and open first.
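An "API first" platform of the kind described here means orchestration tools drive provisioning through calls rather than a GUI. As a sketch of the shape of such a call, here is a hypothetical request builder; the endpoint, field names, and sizes are invented for illustration and are not Pure's actual API:

```python
import json

def make_volume_request(name, size_gb):
    """Build the request an orchestration tool might POST to a storage API.

    The endpoint path and field names here are hypothetical."""
    payload = {"name": name, "provisioned_size_bytes": size_gb * 1024 ** 3}
    return {"method": "POST", "path": "/api/volumes", "body": json.dumps(payload)}

req = make_volume_request("dev-scratch-01", 500)
body = json.loads(req["body"])
print(req["method"], req["path"])  # POST /api/volumes
```

The design point is that once provisioning is just a structured call, any orchestration layer, Kubernetes, Ansible, a CI pipeline, can wrap it, which is what makes the ecosystem integrations discussed above possible.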
You know, if you look at any of the competitive technologies that are out there, you know, they have a different operating system and a different customer experience for their block products, their file products, and their object products, etc. So we wanted to have a shared system that had these similar attributes from a storage perspective and then provide a very consistent customer experience with our cloud-based Pure One platform. And so the combination of our systems, you hear Bill Cerreta talk about, you have to do different things for different protocols to be able to get the efficiencies in the data servers as people want. But ultimately you need to abstract that into a customer experience that's seamless. And so our Pure One cloud-based software allows for a consistent experience. The fact that you'll have a, one application that's leveraging block and one application that's leveraging unstructured tool sets, you want to be able to have that be in a shared accelerated storage platform. That's why Gartner's talking about that, right? Now you can do it with a solid state world. So it's super key to say, hey look, we want consistent customer experience, regardless of what data tier it used to be on or what protocol it is and we do that through our Pure One cloud-based platform. >> You guys have been pretty bullish for a long time now where competition is concerned. When we talk about AWS, you know Andy Jassy always talks about, they look forward, they're not looking at Oracle and things like that. What's that like at Pure? Are you guys really kind of, you've been also very bullish recently about NVME. Are you looking forward together with your partners and listening to the voice of the customer versus looking at what's blue over the corner? >> Yes, so first of all we have a lot of respect for companies that get big. One of my mentors told me one time that they got big because they did something well. 
And so we have a lot of respect for the ecosystem and companies that build a scale. And we actually want to be one of those and are already doing that. But I think it's also important to listen and be part of the community. And so we've always wanted to the pioneers. We always wanted to be the innovators. We always wanted to challenge conventions. And one of the reasons why we founded the company, why Cos and Hayes founded the company originally was because they saw that there was a bottleneck and it was a media level bottleneck. In order to remove that you need to provide a file system that was purpose built for the new media, whatever it was going to be. We chose solid state because it was a $40 billion industry thanks to our consumer products and devices. So it was a cost curve where I and D was going to happen by Samsung and Toshiba and Micron and all those guys that we could ride that curve down, allowing us to be able to get more and more of the data that's out there. And so we founded the company with the premise that you need to remove that bottleneck and you can drive innovation that was 10x better in every dimension. But we also recognize in doing so that putting an evergreen ownership model in place, you can fundamentally change the business model that customers were really frustrated by over the last 25 years. It was fair because disk has lots of moving parts, it gets slower with the more data you put on, etc., and so you pass those maintenance expenses and software onto customers. But in a solid state world you didn't need that. So what we wanted to do was actually, in addition to provide innovation that was 10x better, we wanted to provide a business model that was evergreen and cloud like in every dimension. Well, those two forces were very disruptive to the competitors. And so it's very, very hard to take a file system that's 25 years old and retrofit it to be able to really get the full value of what the stack can provide. 
So we focus on innovation. We focus on what the markets are doing, and we focus on our customer requirements and where we anticipate the use cases to be. And then we like to compete, too. We're a company of folks that love to win, but ultimately the real focus here is on enabling our customers to be successful, innovating forward. And so less about looking sideways, who's blue and who's green, etc. >> But you said it before, when you were a startup, you had to be 10x better because those incumbents, even though it was an older operating system, people's processes were wired to that, so you had to give them an incentive to do that. But you have been first in a number of things. Flash itself, the sort of All-Flash at a spinning disk price. Evergreen, you guys set the mark on that. NVME, you're doing it again with no premium. I mean, everybody's going to follow. You can look back and say, look we were first, we led, we're the innovator. You're doing some things in cloud which are similar. Obviously you're doing this on purpose. But it's not just getting close to your customers. There's got to be a technology and architectural enabler for you guys. Is that right? >> Well yeah, it's software, and at the end of the day if you write a file system that's purpose built for a new media, you think about the inefficiencies of that media and the benefits of that media, and so we knew it was going to be memory, we knew it was going to be silicon. It behaves differently. Reads are effectively free. Writes are expensive, right? And so that means you need to write something that's different, and so you know, it's NVME that we've been plumbing and working on for three years that provides 44,000 parallel access points. Massive parallelism, which enables this next generation of applications. So yeah, we have been talking about that and inventing ways to be able to take full advantage of that.
There's 3D XPoint and SCM and all kinds of really interesting technologies that are coming down the line that we want to be able to take advantage of and future proof for our customers, but in order to do that you have to have a software platform that allows for it. And that's where our competitive advantage really resides, is in the software. >> Well there are lots more software companies in Silicon Valley and outside Silicon Valley. And you guys, like I say, have achieved that escape velocity. And so that's pretty impressive, congratulations. >> Well thank you, we're just getting started, and we really appreciate all the work you guys do. So thanks for being here. >> Yeah, and you just announced Q1FY19 a couple of days ago: 40% year-over-year growth, and you added 300 more customers. Now what, 4800 customers globally. So momentum. >> Thank you, thank you. Well we only do it if we're helping our customers one day at a time. You know, I'll tell you that this whole customer first philosophy, a lot of customers, a lot of companies talk about it, but it truly has to be integrated into the DNA of the business from the founders, and you know, Cos's whole pitch at the very beginning of this was we're going to change the media, which is going to be able to transform the business model. But ultimately we want to make this as intuitive as an iPhone. You know, infrastructure should just work, and so we have this focus on delivering simplicity and delivering ownership that's future proofed from the very beginning. And you know that sort of permeates, and so you think about our growth, our growth has happened because our customers are buying more stuff from us, right? If you look underneath the covers at our growth, 70 plus percent of our growth every single quarter comes from customers buying more stuff, and so, as we think about how we partner and we think about how we innovate, you know, we're going to continue to build and innovate in new areas.
We're going to keep partnering. You know, the data protection space, we've got great partners like Veeam and Cohesity and Rubrik that are out there. And we're going to acquire. We do have a billion dollars of cash in the bank to be able to go do that. So we're going to listen to our customers on where they want us to do that, and that's going to guide us to the future. >> And expansion overseas. I mean, North America's 70% of your business? Is that right? >> Rough and tough. Yeah, we had 28%-- >> So it's some upside. >> Yeah, yeah, any mature B2B systems company should line up to be 55/45 North America to international, in line with GDP and in line with IT spend, so we made investments from the beginning knowing we wanted to be an independent company, and knowing that to support global 200 companies you have to have operations across multiple countries. And so globalization is always going to be key for us. We're going to continue our march on doing that. >> Delivering evergreen from an orange center. Thanks so much for joining Dave and me on the show this morning. >> Thanks Lisa, thanks Dave, nice to see you guys. >> We are theCUBE Live from Pure Accelerate 2018 from San Francisco. I'm Lisa Martin for Dave Vellante, stick around, we'll be right back with our next guests.
Day One Afternoon Keynote | Red Hat Summit 2018
[Music] >> Announcer: Ladies and gentlemen, please welcome Red Hat senior vice president of engineering, Matt Hicks. [Music] >> Matt Hicks: Welcome back. I hope you're enjoying your first day of Summit. You know, for us it is a lot of work throughout the year to get ready to get here, but I love the energy walking into Summit on that first opening day. Now, this morning we kicked off with Paul's keynote, and you saw just how evolved every aspect of open hybrid cloud has become, based on an open source innovation model. The power and potential of open source is really what brought me to Red Hat. But at the end of the day, the real value comes when we're able to make customers like yourself successful with open source. And as much passion and pride as we put into the open source community, that requires more than just Red Hat. Given the complexity of your various businesses and the solution set you're building, that requires an entire technology ecosystem: from system integrators that can provide the skills and domain expertise, to software vendors that are going to provide the capabilities for your solutions, even to the public cloud providers, whether it's on the hosting side or consuming their services. You need an entire technology ecosystem to be able to support you and your goals, and that is exactly what we are going to talk about this afternoon: the technology ecosystem we work with that's ready to help you on your journey. Now, you know, this year's Summit, as we talked about earlier, is about ideas worth exploring, and we want to make sure you have all of the expertise you need to make those ideas a reality. So with that, let's talk about the first partner we have today, and that first partner is IBM. When I talk about IBM, I have a little bit of nostalgia, and that's because 16 years ago I was at IBM. It was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer. It's actually where I did my first professional Linux development as well. And that work on Linux really was the spark that showed me the potential open source could have for enterprise customers. Now, IBM has always been a steadfast supporter of Linux and a great Red Hat partner. In fact, this year we are celebrating 20 years of partnership with IBM, but even after two decades, I think we're working on some of the most innovative work we ever have before. So please give a warm welcome to Arvind Krishna from IBM to talk with us about what we are working on. Arvind. [Applause] >> Arvind Krishna: Hey, my pleasure to be here. Thank you. >> So, two decades, huh? You know, I think anything in this industry going for two decades is special. What would you say that link is that's made Red Hat and IBM so successful? >> Look, I've got to begin by first saying something that I've been waiting to say for years: what a long, strange trip it's been. The San Francisco folks will get the connection. You know, I was just thinking, you said 16; it is strange, because I probably met Red Hat 20 years ago, so that's a little bit longer than you, but that was out in Raleigh, and it was a much smaller company. And when I think about the connection, look, IBM has had a long investment in, and has long been a fan of, open source. And when I think of Linux, Linux really lights up our hardware. I think of the Power box that you were showing this morning, as well as the mainframe, as well as all our other hardware: Linux really brings that to life, and I think that's been at the root of our relationship. >> Yeah, absolutely. Now, as I alluded to a little bit earlier, we're working on some new stuff, and this time it's a little bit higher in the software stack than we have before. So what would you say spearheaded that? >> Right. So when we think of software, many people know, and some people don't realize, that a lot of what are called critical systems, you know, like reservation systems, ATM systems, retail banking, a lot of those systems run on IBM software
and when I say IBM software, names such as WebSphere and MQ and Db2 all sort of come to mind as being some of that software stack. And really, when I combine that with some of what you were talking about this morning around hybrid, and this thing called containers, which you guys know a little about, combining the two, we think, is going to make magic. >> Yeah, and I certainly know containers, and I think, for myself, seeing the rise of containers from just the introduction of the technology to customers consuming it at mission-critical capacities, it's been probably one of the fastest technology cycles I've ever seen. >> Look, we completely agree with that. When you think back to what Paul talked about this morning on hybrid, we have made a firm commitment to containers: all of our software will run on containers, and all of our software runs on RHEL. You put those two together with this belief in hybrid, and containers give you that hybrid motion, so that you can pick where you want to run all the software. That is really, I think, what has brought us together now even more than before. >> Yeah, and the best part, I think, is that we haven't just done the product and downstream alignment; we've been so tied in our technology approach that we've been aligned all the way to the upstream communities. >> Absolutely. Look, participating upstream, participating in these projects, really bringing all the innovation to bear: you know, when I hear all of you talk about it, you can't just be in a single company. You've got to tap into the world of innovation, and everybody should contribute. We firmly believe that, and helping to do that is kind of why we're here. >> Yeah, absolutely. Now, the best part: we're not just going to tell you about what we're doing together, we're actually going to show you. So why don't you tell the audience a little bit more about what we're doing, and I will go get the demo team ready in the back. >> Okay. So look, we're doing a lot here together. We're taking our software and we
are beginning to put it on top of Red Hat and OpenShift, and really, that's what I'm here to talk about for a few minutes, and then we'll go show it to you live; the demo gods should be with us, so hopefully it'll go well. So when we look at extending our partnership, it's really based on three fundamental principles. One: it's a hybrid world. Every enterprise wants the ability to span across public, private, and their own on-premise world. Number two: containers are strategic to both of us. Enterprises need agility; you need a way to easily port things from place to place, and containers are more than just wrapping something up. Containers give you all of the security, the automation, the deployability, and we really firmly believe that. And three: innovation is the path forward. You've got to bring all the innovation to bear, whether it's around security or all of the things we heard this morning about going across multiple infrastructures, public or private. Those are three firm beliefs that both of us have together. So then, explicitly, what we'll be doing here: number one, all the IBM middleware is going to be certified on top of OpenShift and RHEL, and through IBM Cloud Private. So that's number one: all the middleware is going to run in RHEL containers on OpenShift on RHEL, with all the Cloud Private automation and deployability in there. Number two, we are going to make it so that this is the complete stack. When you think about it, from hardware to hypervisor to OS to the container platform to all of the middleware, it's going to be certified up and down, all the way, so that you can get comfort that this is certified against all the cybersecurity attacks that come your way. Three, because we do the certification, that means the complete stack can be deployed wherever OpenShift runs. That way you get complete flexibility, and you no longer have to worry about it. The development lifecycle
is extended all the way from inception to production, and the management plane then gives you all of the delivery and operations support needed to lower that cost. And lastly, professional services, through the IBM Garages as well as the Red Hat Innovation Labs. I think this combination really speaks to the power of both companies coming together, both of us working together to give all of you that flexibility and those deployment capabilities. I can't help it: one architecture chart, and that's the only architecture chart, I promise you. So if you look at it, from the bottom, this speaks to what I'm talking about. You begin at the bottom with a choice of infrastructure: the IBM Cloud, as well as other infrastructure as a service, virtual machines, as well as IBM Power and the IBM mainframe as the infrastructure choices underneath. You choose what is best suited for the workload, with the container service, the OpenShift platform, managing all of that environment and giving you the orchestration that Kubernetes provides, up to the platform services from IBM Cloud Private. It contains the catalog of all middleware, both IBM's as well as open source; it contains all the deployment capability to go deploy that; and it contains all the operational management, so things like coming back up if things go down, auto-scaling, all those features that you want come to you from there. And that is why that combination is so powerful. But rather than just hearing me talk about it, I'm also going to bring up a couple of people to talk about it. And what are they going to show you? They're going to show you how you can deploy an application on this environment. You can think of that as a cloud-native application, but you can also think about how you modernize an application using microservices. But you don't want to just keep your application always within its walls; you also, many times, want to access different cloud
services from this, and how do you do that? I'm not going to tell you which ones; they're going to come and tell you. And how do you tackle the complexity of hybrid data, data that crosses from the private world to the public world, as well as targeting the extra workloads that you want? So that's the sense of what you're going to see through the demonstrations. With that, I'm going to invite Chris and Michael to come up. I'm not going to tell you which one's from IBM and which one's from Red Hat; hopefully you'll be able to make the right guess. So with that, Chris and Michael. [Music] >> Chris: So, thank you, Arvind. Hopefully people can guess which one's from Red Hat based on the shoes. You know, it's some really exciting stuff that we just heard there. What I'm most excited about, when I look out upon the audience and the opportunity for customers, is that with this announcement there are quite literally millions of applications that can now be modernized and made available on any cloud, anywhere, with the combination of IBM Cloud Private and OpenShift. And I'm most thrilled to have Mr.
Michael Elder, a distinguished engineer from IBM, here with us today. You know, Michael, would you maybe describe for the folks what we're actually going to go over today? >> Michael: Absolutely. When you think about how to carry forward existing applications, and how to build new applications as well, you're creating microservices that always need a mixture of data, messaging, and caching. This example application shows Java-based microservices running on WebSphere Liberty, each of which is leveraging things like IBM MQ for messaging, IBM Db2 for data, and Operational Decision Manager, all of which is fully containerized and running on top of the Red Hat OpenShift Container Platform. And in fact, we're even going to enhance Stock Trader to help it understand how you feel. >> Okay, hang on, I'm a little slow to the draw sometimes. You said we're going to have an application tell me how I feel? >> Exactly, exactly. Think about your enterprise apps: you want to improve customer service, and understanding how your clients feel can help you do that. >> Okay, well, I'd like to see that in action. >> All right, let's do it. So the first thing we'll do is take a look at the catalog. Here in the IBM Cloud Private catalog is all of the content that's available to deploy into this hybrid solution. We see workloads for IBM, and we'll see workloads for other open source packages, etc. Each of these is packaged up as a Helm chart that deploys a set of images that will be certified for Red Hat Linux. In this case, we're going to start with a simple example with Node. We'll click a few actions here, and we'll give it a name. Now, do you have your console up over there? >> I certainly do. >> All right, perfect. So we'll deploy this into the "new world" namespace, and we'll deploy Node. Okay? >> All right. Anything happening? >> Of course, it's come right up. And so, you know, what I really like about this is that, regardless of whether I'm used to using IBM Cloud Private or I'm used to working with
OpenShift, the experience works well with whatever tool I'm used to dealing with on a daily basis. But, I mean, you know, I've got to tell you, we deploy Node ourselves all the time. What about... when was the last time you deployed MQ on OpenShift? >> Maybe never. >> All right, let's fix that. MQ, obviously, is a critical component of messaging for lots of highly transactional systems. Here we'll deploy it as a container on the platform. Now, I'm going to deploy this one, again, into "new world." I'm going to disable persistence, and since my application is going to need a queue manager, I'm going to have it automatically set up my queue manager as well. Now, this will deploy a couple of things. What do you see? >> I see IBM MQ. >> All right, so there's your StatefulSet running MQ, and of course there are a couple of other components that get stood up as needed here, including things like credentials, secrets, the service, etc. But all of this is there, out of the box. >> Okay, so impressive, right? But what I'm really looking at is how well this is running. You know, what else does this partnership bring when I look at IBM Cloud Private on OpenShift? >> Well, that's a key reason why it's not just about IBM middleware running on OpenShift, but also IBM Cloud Private: ultimately you need that common management plane. When you deploy a container, the next thing you have to worry about is: how do I get its logs? How do I manage its health? How do I manage license consumption? How do I have a common security plane? So Cloud Private is that enveloping wrapper around IBM middleware that provides those capabilities in a common way. And so here we'll switch over to our dashboard. This is our Grafana and Prometheus stack, now also deployed on Cloud Private, running on OpenShift, and we're looking at a different namespace: the Stock Trader namespace. We'll go back to this app here momentarily.
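The MQ deployment clicked through in the demo UI has a CLI equivalent. The sketch below is hedged: the chart repository URL, chart name, and value keys (`persistence.enabled`, `queueManager.name`) are assumptions modeled on IBM's public Helm charts of that era, not a verified interface, and the namespace name is illustrative.

```shell
# Hedged CLI sketch of the demo's MQ deployment; repo URL, chart name,
# and value keys are assumptions, not a verified IBM interface.
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/

# The "new world" namespace from the demo (name is illustrative).
oc new-project new-world

# Deploy MQ with persistence disabled and a queue manager created
# automatically, mirroring the choices made on stage.
helm install ibm-charts/ibm-mqadvanced-server-dev \
  --name mq-demo \
  --namespace new-world \
  --set license=accept \
  --set persistence.enabled=false \
  --set queueManager.name=QM1

# The objects the demo showed: a StatefulSet, pods, a service, secrets.
oc get statefulset,pods,svc,secrets -n new-world
```

These commands require a live OpenShift cluster with Helm configured, so they are shown as an ops sketch rather than a runnable script.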
We can see all the different pieces. >> What if you switch over to the Stock Trader workspace on OpenShift? >> Yeah, I think we might be able to do that here. Hey, there it is. >> All right, and so what you're going to see here are all the different pieces of this app, right? There's Db2 over here, I see the Portfolio Java microservice running on WebSphere Liberty, I see my Redis cache, I see MQ. All of these are the components we saw in the architecture picture a minute ago. >> Yeah, this is really great. So maybe let's take a look at the actual application. I see we have a fine Stock Trader app here. Now, we mentioned understanding how I feel... >> Exactly. >> You know, well, I feel good that this is a brand-new Stock Trader app, versus the one from ten years ago that feels like we've used it forever. >> So the key thing is, this app is actually all of those microservices, in addition to things like business rules, etc., to help understand the loyalty program. One of the things we could do here is enhance it with an AI service from Watson. This is Tone Analyzer; it helps me understand how that user actually feels, and we'll be able to go through and submit some feedback to understand that user. >> Okay, well, let's see if we can take a look at that. So I tried to click on you... >> Clearly you're not very happy right now. Here, I'll do one quick thing over here. >> Go for it. >> We'll clear a cache for our sample app. >> So look, you guys don't actually know this: Michael and I just wrote this Node.js front end backstage while Arvind was talking with Matt, and we deployed it in real time using the continuous integration and continuous delivery that we have available with OpenShift. >> Well, the great thing is, it's a live demo, right? So we're going to do it all live, all the time. All right, so you mentioned it'll tell me how I'm feeling, right? So if we look at... right there, it looks like they're pretty angry, probably because our cache hadn't been cleared before we started the demo. >> Maybe. >> Well, that would make me angry, but I should be happy
because, I mean, I have a lot of money. >> Well, it's more than I get today, for sure. >> But, you know, again, I don't want to remain angry. So does Watson actually understand Southern? I know it speaks like eighty different languages, but... >> Well, you know, I'm from South Carolina, so it'll understand South Carolina Southern, but I don't know about your North Carolina Southern. >> All right, well, let's give it a go here: "Y'all done a real, real..." no, no profanity now, this is live. "I've done a real, real nice job on this here fancy demo." All right. Hey, it likes me now. All right, cool. >> And the key thing is, just a quick note, it's showing you've got a free trade. So we can integrate those business rules and then decide: do I give out one free trade? If you're angry, you get more. It's all brought together into one platform, all running on OpenShift. >> Yeah, and I can see the possibilities, right? We've not only deployed services, but we're getting that feedback from our customers to understand how well the services are being used and whether people are really happy with what they have. Hey, listen, Michael, this was amazing. I really appreciate you joining us today. I hope you guys enjoyed this demo as well. So, all of you know who this next company is. As I look out through the crowd, based on what I can actually see with the sun shining down on me right now, I can see their influence everywhere. You know, sports is in our everyday lives, and these guys are equally innovative in that space as they are with hybrid cloud computing, and they use that to help maintain and spread their message throughout the world. Of course, I'm talking about Nike. I think you'll enjoy this next video about Nike and their brand, and then we're going to hear directly from Mike Witig about what they're doing with Red Hat technology. >> Video: New developments in the top story of the day: the world has stopped turning on its axis. Top scientists are currently racing to come up with a solution. Everybody going this way... [Music] the wrong way! [Music] >> Announcer: Please welcome Nike vice
president of infrastructure engineering, Mike Witig. [Music] >> Mike Witig: Hi, everybody. Over the last five years at Nike, we have transformed our technology landscape to allow us to connect more directly to our consumers: through our retail stores, through Nike.com, and through our mobile apps. The first step in doing that was redesigning our global network to give us direct connectivity into both Azure and AWS, in Europe, in Asia, and in the Americas. Having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy, instead of having to design around latency concerns. Now, some of those workloads are very elastic, things like our SNKRS app, for example, that needs to burst out during certain hours of the week. There are certain moments of the year when we have our high-heat product launches, and for those types of workloads we write that code ourselves, and we use native cloud services. But being hybrid has allowed us to not have to write everything that would go into that app, but rather just the parts that are in that consumer-facing experience. There are other back-end systems, certain core functionalities like order management, warehouse management, finance, and ERP, and those are workloads, third-party applications, that we host on RHEL. Over the last 18 months, we have started to deploy certain elements of those core applications into both Azure and AWS, hosted on RHEL. At first we were pretty cautious, so we started with development environments, and what we realized after those first successful deployments is that the impact of those cloud migrations on our operating model was very small. That's because the tools that we use for monitoring, for security, for performance tuning didn't change, even though we moved those core applications into Azure and AWS, because of RHEL under the covers. Getting to the point where we have that flexibility is a real enabler. As an infrastructure team, that allows us to just
be in the "yes" business. It really doesn't matter where we want to deploy a given workload, with either cloud provider or on-prem, anywhere on the planet; it allows us to move much more quickly and stay much more connected to our consumers. And so having RHEL at the core of our strategy is a huge enabler for that flexibility and for allowing us to operate in this hybrid model. Thanks very much. [Applause] >> Matt Hicks: What a great example. It's really nice to hear a Nike story of using RHEL as that foundation to enable their hybrid cloud, to enable their infrastructure. And there's a lot to that story: we spent over ten years making it possible for RHEL to be that foundation, and we've learned a lot in doing it. But let's circle back for a minute to the software vendors, and to what kicked off the day today with IBM. IBM has one of the largest software portfolios on the planet, but we learned through our journey on RHEL that you need thousands of vendors to be able to support you across all of your different industries, to solve any challenge that you might have. And you need those vendors aligned with your technology direction. This is doubly important when the technology direction is changing, like with containers. We saw that two years ago, when Red Hat introduced our container certification program. Now, this program was focused on allowing you to identify vendors that had those shared technology goals. But identification by itself wasn't enough in this fast-paced world, so last year we introduced trusted content. We introduced our Container Health Index, publicly grading the Red Hat images that form the foundation for those vendor images. And that was great, because those of you who are familiar with containers know that you're taking software from vendors, combining it with software from companies like Red Hat, and putting those into a single container, and for you to run those in a mission-critical capacity, you have to know that we can both stand by and support those deployments. But even trusted content
wasn't enough. So this year, I'm excited that we are extending once again, to introduce trusted operations. Now, last week at KubeCon, the Kubernetes conference, we announced the Kubernetes Operator SDK. The goal of Kubernetes Operators is to allow any software provider on Kubernetes to encode how that software should run. This is a critical part of a container ecosystem: not just being able to find the vendors that you want to work with, not just knowing that you can trust what's inside the container, but knowing that you can efficiently run that software. Now, the exciting part is that, because this is so closely aligned with the upstream technology, today we already have four partners that have functioning Operators: specifically, Couchbase, Dynatrace, Crunchy, and Black Duck. So right out of the gate, you have security, monitoring, and data store options available to you. These partners are really leading the charge in terms of what it means to run their software on OpenShift. But behind these four we have many more; in fact, this morning we announced over 60 partners that are committed to building Operators. They're taking their domain expertise and the software that they wrote, that they know, and extending that into how you are going to run it on containers, in environments like OpenShift. This really brings the power of being able to find the vendors, being able to trust what's inside, and knowing that you can run their software as efficiently as anyone else on the planet. But instead of just telling you about this, we actually want to show it to you in action. So why don't we bring back up the demo team to give you a little tour of what's possible. Guys? >> Chris: Thanks, Matt. So, Matt talked about the concept of Operators, and when I think about Operators and what they do, it's taking OpenShift-based services and making them even smarter, giving you insight into how they do things. For example, had we had an Operator for the Node.js service that I was running earlier, it would have detected the problem and fixed itself.
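Operators like the ones just described are built with the Operator SDK announced at KubeCon. A rough sketch of scaffolding one is below, assuming the early (v0.0.x) SDK command layout; the flags changed in later releases, and the project, API group, and image names are purely illustrative.

```shell
# Rough sketch using the early Operator SDK CLI; flags differ in later
# releases, and all names here are illustrative.
operator-sdk new vendor-operator \
  --api-version=app.example.com/v1alpha1 \
  --kind=VendorApp

cd vendor-operator

# Build the operator image, push it, and deploy its generated manifests.
operator-sdk build quay.io/example/vendor-operator:v0.0.1
docker push quay.io/example/vendor-operator:v0.0.1
oc create -f deploy/
```

The scaffolded project encodes the reconcile loop where a vendor captures their operational knowledge, which is exactly the "encode how that software should run" idea above.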
When I look at what Operators really do from an ecosystem perspective, for ISVs they're going to be a catalyst that allows them to make their services as manageable, as flexible, and as maintainable as any public cloud service, no matter where OpenShift is running. And to help demonstrate this, I've got my buddy Rob here. Rob, are we ready on the demo front? >> Rob: We're ready. >> Awesome. Now, I notice this screen looks really familiar to me, but you know, I think we want to give folks here a dev preview of a couple of things. What we want to show you is the first substantial integration of the CoreOS Tectonic technology with OpenShift, and then we're going to dive a little bit more into Operators and their usefulness. So, Rob? >> Yeah, so what we're looking at here is the Service Catalog that you know and love in OpenShift, and we've got a few new things in here. We've actually integrated Operators into the Service Catalog, and I'm going to take this filter and give you a look at some of the ones we have today. You can see we've got a list of Operators exposed, and this is the same way that your developers are already used to integrating with products: they're right in your catalog, and now these are actually smarter services. >> But how can we maybe look at that? I mentioned that there's maybe a new view; I'm used to seeing this as a developer, but I hear we've got some really cool stuff if I'm the administrator of the console. >> Yeah, we've got a whole new side of the console for cluster administrators, to get a look at the infrastructure, versus this dev-focused view that we're looking at today. So let's go take a look at it. The first thing you see here is that we've got a really rich set of monitoring and health status, so we can see that we've got some alerts firing, our control plane is up, and we can even do capacity planning, anything that you need to do to maintain your cluster. >> Okay, so it's
not only for the services in the cluster, doing things that I, as a human operator, would normally have to do; this console view also gives me insight into the infrastructure itself, right? Like the nodes, and maybe handling the security context. Is that true? >> Yes, these are new capabilities that we're bringing to OpenShift: the ability to do node management, things like draining and unscheduling nodes for day-to-day maintenance, as well as security constraints and things like role bindings, for example. And the exciting thing is that this is a view you've never been able to see before: it's cross-cutting, across namespaces. So here we've got a number of admin bindings, and we can see that they're connected to a number of namespaces, and these would represent our engineering teams, all the groups that are using the cluster. We've never had this view before; this is a perfect way to audit your security. >> You know, it actually is pretty exciting. I mean, I've been fortunate enough to be on the OpenShift team since day one, and I know that operations view is something that we've strived for, so it's really exciting to see that we can offer that now. But really, we want to get into what Operators do and what they can do for us, so maybe show us what the Operator console looks like. >> Yeah, so let's jump on over and see all the Operators that we have installed on the cluster. You can see that these mirror what we saw in the Service Catalog earlier. What we care about, though, is this Couchbase Operator, and we're going to jump into the demo namespace; as I said, a number of different teams can share a cluster, so we're going to jump into this namespace. >> Okay, cool. So now, what we want to show you: when we think about Operators, we're going to have a scenario here where there are multiple replicas of a Couchbase service running in the cluster, and then we're going to have a StatefulSet.
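The node-management actions the new admin console surfaces (draining and unscheduling nodes) correspond to standard CLI operations underneath. The commands below are a sketch with a placeholder node name; the console wraps this same workflow in a UI.

```shell
# Day-2 node maintenance, as exposed in the new cluster admin console.
# The node name is a placeholder.
oc adm cordon node-2.example.com        # mark the node unschedulable

oc adm drain node-2.example.com \
  --ignore-daemonsets --delete-local-data   # evict its pods safely

# ...perform maintenance on the node...

oc adm uncordon node-2.example.com      # return the node to service
```

Cordoning first prevents new pods from landing on the node while the drain evicts the existing ones, which is why the console pairs the two operations.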
stateful set. And what's interesting is, those two things are not enough if I'm really trying to run this as a true service that's highly available and persistent. There are things that, as a DBA, I'm normally going to have to do if there's some sort of node failure, and so what we want to demonstrate to you is how operators, combined with the power that was already within OpenShift, are now coming together to keep this particular database service highly available and something that we can continue using. So, Rob, what have you got there? Yeah, so as you can see, we've got our Couchbase demo cluster running here, and we can see that it's up and running: we've got three members, and we've got an auth secret, which is what's controlling access to a UI that we're gonna look at in a second. But what really shows the power of the operator is looking at this view of the resources that it's managing. You can see that we've got a service that's doing load balancing into the cluster, and then, like you said, we've got our pods that are actually running the software itself. Okay, so that's cool. So maybe, for everyone's benefit, so we can show that this is happening live, could we bring up the Couchbase console, please, and keep up the OpenShift console, both sides? There we go. So what we see on the right-hand side is obviously the same console Rob was working in on the left-hand side, as you can see by the actual names of the pods that are there, the Couchbase services that are available. And so, Rob, let's kill something; that's always fun to do on stage. Yeah, this is the power of the operator: it's going to recover it. So let's browse on over here and kill node number two. We're gonna forcefully kill this and kick off the recovery, and I see right away that, because of the integration that we have with operators, the Couchbase console immediately picked up that something has changed in the environment. Now why is that important? Normally a
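The recovery behavior in this demo follows the general shape of any Kubernetes operator: a reconcile loop that compares desired state against observed state and acts on the difference. Below is a minimal sketch in Python; the class and function names are hypothetical, and it uses an in-memory set rather than the real Kubernetes API, purely to illustrate the control loop and the extra rebalance step a stateful database needs.

```python
# A toy sketch of the control loop behind an operator like the Couchbase
# one in this demo. Names are hypothetical; a real operator would watch
# and mutate resources through the Kubernetes API.

class CouchbaseCluster:
    """Toy stand-in for the operator's custom resource plus its pods."""

    def __init__(self, size):
        self.desired_size = size          # spec: what the user asked for
        self._next_id = size              # counter for unique member names
        self.members = {f"cb-{i}" for i in range(size)}  # status: observed pods

    def kill(self, name):
        # Simulate a node failure, like forcefully killing pod number two.
        self.members.discard(name)


def reconcile(cluster):
    """One pass of the reconcile loop: drive observed state toward the spec."""
    actions = []
    while len(cluster.members) < cluster.desired_size:
        # A DBA would normally react to an alert and do this by hand;
        # the operator recreates the missing member automatically.
        name = f"cb-{cluster._next_id}"
        cluster._next_id += 1
        cluster.members.add(name)
        actions.append(f"recreated {name}")
    if actions:
        # For a stateful service, recreating the pod is not enough:
        # the data has to be rebalanced across the members.
        actions.append("rebalance data across members")
    return actions
```

Running `reconcile` after killing a member restores the cluster to three members and queues a rebalance; running it again does nothing, and that idempotence is what makes the loop safe to trigger on every event.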
human being would have to get that alert, right? And so with operators now, we've taken that capability, and we've recognized that there has been a new event within the environment; this is not something that Kubernetes or OpenShift by itself would be able to understand. Now, I'm presuming we're gonna end up doing something else; it's not just seeing that it failed. And sure enough, there we go. Remember, when you have a stateful application, rebalancing that data and making it available is just as important as ensuring that the disk is attached. So, Rob, thank you so much for driving this for us today and being here. And not only Couchbase: as was mentioned by Matt, we also have Crunchy Data, Dynatrace, and Black Duck. I would encourage you all to go visit their booths out on the floor today and understand what they have available, all here as a dev preview, and then talk to the many other partners we have that are also looking at operators. So again, Rob, thank you for joining us today. Matt, come on out.

Okay, this is gonna make for an exciting year of just what it means to consume container-based content. I think containers change how customers can get that content; I believe operators are gonna change how much they can trust running that content. Let's circle back to one more partner. This next partner has changed the landscape of computing, specifically with their work on hardware design and work on core Linux itself. In fact, I think they've become so ubiquitous with computing that we often overlook the technological marvels that they've been able to overcome. Now, for myself, I studied computer engineering, so in the late '90s I had the chance to study processor design; I actually got to build one of my own processors. In my case, it was the most trivial processor that you could imagine: an 8-bit subtractor, which means it can subtract two numbers 256 or smaller. But in that process I learned the sheer complexity that
goes into processor design: things like wire placements that are so close that electrons can cut through the insulation and short, and then doing those wire placements across three dimensions, in multiple layers, jamming in as many logic components as you possibly can. And again, in my case, this was to make a processor that could subtract two numbers. But once I was done with this, the second part of the course was studying the Pentium processor. I'll remember that moment forever, because looking at what the Pentium processor was able to accomplish, it was like looking at alien technology. And the incredible thing is that Intel, our next partner, has been able to keep up that alien-like pace of innovation twenty years later. So we're excited to have Doug Fisher here; let's hear a little bit more from Intel.

>> Narrator: For business, wide open skies, an open mind. No matter the context, the idea of being open almost always suggests the potential of infinite possibilities, and that's exactly the power of open source, whether it's expanding what's possible in business, the sciences, technology, or for the greater good. Which is why open source requires the involvement of a truly diverse community of contributors to scale and succeed, creating infinite possibilities for technology and, more importantly, what we do with it.

[Music]

You know, at Intel one of our core values is risk-taking, and I'm gonna go just a bit off script for a second and say I was just backstage and I saw a gentleman that looked a lot like Scott Guthrie, who runs all of Microsoft's cloud and enterprise efforts, wearing a red shirt, talking to Cormier. I'm just saying; I don't know, maybe I need some more sleep, but that's what I saw. As we approach Intel's 50th anniversary, these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago. "Don't be encumbered by history" is about breaking boundaries in technology, and "go off and do something wonderful" is about innovation, and driving innovation in our industry. And at
Intel, we're constantly looking to break boundaries to advance our technology. In the cloud and enterprise space, that is no different, so I'm going to talk a bit about some of the boundaries we've been breaking and innovations we've been driving at Intel, starting with our Intel Xeon platform. Our Intel Xeon Scalable platform, which we launched several months ago, marked the biggest and most advanced movement in this technology in over a decade. We were able to drive critical performance capabilities, unmatched agility, and added necessary and sufficient security to that platform. I couldn't be happier with the work we do with Red Hat in ensuring that those hero features we drive into our platform are fully exposed to all of you, to drive that innovation, to go off and do something wonderful. Whether it's performance and agility features like our Advanced Vector Extensions, AVX-512, or Intel QuickAssist, those technologies are fully embraced by Red Hat Enterprise Linux; or whether it's security technologies like TXT, or Trusted Execution Technology, they're fully incorporated. And we look forward to working with Red Hat on their next release to ensure that our advancements continue to be exposed in their platform.

All these workloads that are driving the need for us to break boundaries in our technology are also driving more and more need for flexibility in computing, and that's why we're excited about Intel's family of FPGAs to help deliver that additional flexibility for you to build those capabilities in your environment. We have a broad set of FPGA capabilities, from our power-efficient MAX product line all the way to our performance product line with Stratix 10. As I've been talking to customers, what's really exciting is to see the combination of using our Intel Xeon Scalable platform together with FPGAs, in addition to the acceleration development capabilities we've given to software developers,
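As an aside, on a RHEL system you can check whether features like AVX-512 are actually exposed on your hardware by inspecting the flags line of /proc/cpuinfo. A small illustrative sketch follows; the sample flags string is invented for the example, and real output varies by processor.

```python
# Check a /proc/cpuinfo-style flags line for a CPU feature.
# The sample line below is illustrative, not from a real processor dump.

def has_feature(flags_line, feature):
    """Return True if `feature` appears as a whole token in the flags line."""
    return feature in flags_line.split()

sample = "flags : fpu vme sse4_2 avx2 avx512f avx512dq"
print(has_feature(sample, "avx512f"))   # True for this sample line
print(has_feature(sample, "avx512bw"))  # False: not in the sample
```

On a live system you would read the file with `open("/proc/cpuinfo")` and apply the check to any line starting with "flags"; splitting into whole tokens avoids false positives from prefixes like "avx" matching "avx512f".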
combining all that together to deliver better and better solutions, whether it's helping to accelerate data compression, pattern recognition, or data encryption and decryption. One of the things I saw in a data center recently was taking our Intel Xeon Scalable platform and utilizing the capabilities of an FPGA to do data encryption between servers behind the firewall. By using the FPGA to do that, they preserved those precious CPU cycles to ensure they delivered the SLA to the customer, yet provided more security for their data in the data center.

One of the edges in cybersecurity is innovation, and the root of trust starts at the hardware. We recently renewed our commitment to security with our security-first pledge. There are really three elements to our security-first pledge. First is customer-first urgency: we have now completed the release of the microcode updates for protection on our Intel platforms going back nine-plus years since launch, to protect against things like the side-channel exploits. Second, transparent and timely communication: we are going to communicate timely and openly on our intel.com website, whether it's about our patches, performance, or other relevant information. And then, ongoing security assurance: we drive security into every one of our products, and we redesigned a portion of our processor to add a partition capability, which adds additional walls between applications and user-level privileges to further secure that environment from bad actors. I want to pause for a second and thank everyone in this room involved in helping us work through our security-first pledge. This isn't something we do on our own; it takes everyone in this room to help us do that. The partnership and collaboration was second to none; it's the most amazing thing I've seen since I've been in this industry, so thank you.

We don't stop there; we continue to advance our security capabilities with cross-platform solutions. We recently had a discussion at RSA where we talked about Intel Security
Essentials, where we deliver a framework of capabilities that are in our silicon, available for our customers and the security ecosystem to innovate on the platform in a consistent way, delivering the assurance that those capabilities will be on that platform. We also talked about our Threat Detection Technology, something that we believe in and that we launched at RSA. It incorporates several elements. One is the ability to utilize our integrated graphics to accelerate some of the memory-scanning capabilities; we call this Accelerated Memory Scanning. It allows you to use the integrated graphics to scan memory, again preserving those precious cycles on the core processor. Microsoft adopted this, and it's now incorporated into their Defender product and shipping today. We also launched our threat SDK, which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads. So we'll continue to drive differentiated experiences into our platform for our ecosystem to innovate on and deliver more and more capabilities.

One of the key aspects you have to protect is data. By 2020, the projection is that 44 zettabytes of data will be available; by 2025, they project that will grow to 180 zettabytes. A massive amount of data, and what all of you want to do is drive value from that data. Driving value from that data is absolutely critical, and to do that you need to have that data closer and closer to your computation. This is why we've been working at Intel to break the boundaries in memory technology. With our investment in 3D NAND, we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing. We're also innovating on form factors. We have here what we call our ruler form factor, designed to pack as much density as you can into a 1U rack. We're going to continue
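A quick sanity check on those projections: going from 44 zettabytes in 2020 to 180 zettabytes by 2025 implies a compound annual growth rate of roughly a third per year. The figures are from the talk; the arithmetic is just a CAGR.

```python
# Implied compound annual growth rate (CAGR) for the quoted projection:
# 44 ZB in 2020 growing to 180 ZB by 2025 (5 years).

def implied_annual_growth(start, end, years):
    """The constant yearly rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1 / years) - 1

rate = implied_annual_growth(44, 180, 5)
print(f"{rate:.1%}")  # roughly 32.5% growth per year
```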
to advance the capabilities to drive one petabyte of data, at low power consumption, into this ruler SSD form factor. So our innovation continues. The biggest breakthrough in memory media technology in the last 25 years was done by Intel; we call it our 3D XPoint technology, and 3D XPoint is now going to be driven into SSDs, as well as into a persistent-memory form factor on the memory bus, giving you the speed characteristics of memory as well as the characteristics of storage, a new tier of memory for developers to take full advantage of. And as you can see, Red Hat is fully committed to integrating this capability into their platform, so I want to thank Paul and team for engaging with us to make sure that that's available for all of you to innovate on.

So we're breaking boundaries in technology across a broad set of elements that we deliver. That's what we're about, and we're going to continue to do that: not be encumbered by the past. Your role is to go off and do something wonderful with that technology. All ecosystems are embracing this and driving it, including open source. Open source is a hub of innovation; it's been that way for many, many years, and that innovation being driven in open source is starting to transform many, many businesses. It's driving business transformation. We're seeing this coming to light in the transformation of 5G: driving 5G into the network environment is a transformational moment, and open source is playing a pivotal role in it. With OpenStack, ONAP, OPNFV, and other open source projects we're contributing to and participating in, we're helping drive that transformation in 5G as you build software-defined networks on our barrier-breaking technology.

We're also seeing this transformation rapidly occurring in the enterprise cloud. Enterprise clouds are growing rapidly and innovation continues. Our work with
virtualization and KVM continues; we are aggressive in adopting technologies to advance and deliver more capabilities in virtualization. As we look at this with Red Hat, we're now working on KubeVirt to help move virtualized workloads onto these platforms, so that they can be managed in an open platform environment, and KubeVirt provides that. So between Intel, Red Hat, and the community, we're investing resources to make certain that comes to product. As containers, a critical feature in Linux, become more and more prevalent across the industry, the growth of container deployments continues at a rapid, rapid pace. One of the things that we wanted to bring to that is the ability to provide isolation without impairing the flexibility, the speed, and the footprint of a container. With our Clear Containers efforts, combined with Hyper's runV, we created what we call Kata Containers. We launched this at the end of last year; Kata Containers is designed to keep that container element available while adding elements like isolation.

Both of these efforts need an orchestration and management capability, and Red Hat's OpenShift provides that capability for these workloads, whether containerized or using KubeVirt capabilities with virtual environments. Red Hat OpenShift is designed to take that commercial capability to market, and we've been working with Red Hat for several years now to develop what we call our Intel Select Solutions. Intel Select Solutions are Intel technology optimized for downstream workloads: as we see growth in a workload, we'll work with a partner to optimize a solution on Intel technology and deliver the best solution that can be deployed quickly. Our effort here is to accelerate the adoption of these types of workloads in the market, working with Red Hat. So now we're going to be deploying an Intel Select Solution designed and optimized around Red Hat OpenShift. We expect the industry to start deploying this capability very rapidly, and I'm excited to announce
today that Lenovo is committed to being the first platform company to deliver this solution to market; the Intel Select Solution will be delivered to market by Lenovo.

Now, I've talked about what we're doing in industry and how we're transforming businesses. Our technology is also utilized for the greater good, and there's no better example of this than the work done by Dr. Stephen Hawking. It was a sad day on March 14th of this year when Dr. Stephen Hawking passed away, but not before Intel had a 20-year relationship with Dr. Hawking, driving breakthrough capabilities, innovating with him, and bringing those robust capabilities to the rest of the world. One of our Intel engineers, an Intel Fellow, which is the highest technical achievement you can reach at Intel, got to spend 10 years with Dr. Hawking looking at innovative things they could do together with our technology and his breakthrough, innovative thinking. So I thought it'd be great to bring up our Intel Fellow, Lama Nachman, to talk about her work with Dr. Hawking and what she learned in that experience. Come on up, Lama.

[Music]

Great to see you. Thanks. So we've been talking about breaking boundaries with Intel technology; talk about how you used that in your work with Dr.
Hawking. Absolutely. So the most important part was to really make that technology contextually aware, because for people with disability, every single interaction takes a long time. So whether it was adapting, for example, the language model of his word predictor to understand whether he's going to talk to people or whether he's writing a book on black holes, or even understanding what specific application he might be using, we had to make sure that we're surfacing only the actions that were relevant, to reduce that amount of interaction. The tricky part is really to make all of that contextual awareness happen without totally confusing the user, because it's constantly changing underneath them.

So how has your work involved open source? Well, the problem with assistive technology in general is that it needs to be tailored to the specific disability, which really makes it very hard and very expensive, because it can't utilize economies of scale. So basically, with the system that we built, what we wanted to do is really enable unleashing innovation in the world. You could take that framework and tailor it to a specific sensor, for example a brain-computer interface or something like that, where you could then support a different set of users. So that makes open source a perfect fit, because you could actually build on it and tailor it. And you spoke with Dr.
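The context-aware prediction Lama describes can be caricatured in a few lines: the predictor filters its candidate words by whatever the system believes the user is currently doing. This is only a toy sketch; the contexts and vocabularies below are invented for illustration, and the real system (which Intel later open sourced as ACAT, the Assistive Context-Aware Toolkit) is far more sophisticated.

```python
# Toy context-aware word predictor: suggestions depend on the active
# context, reducing the number of interactions needed to pick a word.
# Contexts and vocabularies are invented for illustration.

CONTEXT_VOCAB = {
    "lecture": ["black", "hole", "radiation", "entropy", "horizon"],
    "conversation": ["hello", "thanks", "yes", "no", "please"],
}

def suggest(prefix, context, limit=3):
    """Return up to `limit` completions of `prefix`, filtered by context."""
    words = CONTEXT_VOCAB.get(context, [])
    return [w for w in words if w.startswith(prefix)][:limit]

print(suggest("h", "lecture"))       # ['hole', 'horizon']
print(suggest("h", "conversation"))  # ['hello']
```

The same prefix yields different suggestions in different contexts, which is the point: fewer irrelevant candidates means fewer interactions for the user.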
Hawking; what was his view of open source, was it relevant to him? Yeah, so Stephen was adamant from the beginning that he wanted a system to benefit the world and not just himself. He spent a lot of time with us to actually build this system, and he was adamant from day one that he would only engage with us if we committed to actually open sourcing the technology. That's fantastic. And you had the privilege of working with him for 10 years, and I know you have some amazing stories to share, so thank you so much for being here. Thank you so much.

In order for us to scale, and that's what we're about at Intel, really scaling our capabilities, it takes this community; it takes this community of diverse capabilities; it takes diverse thought. The diverse thought of Dr. Hawking couldn't be more relevant, but we are also proud at Intel to lead efforts of diverse thought, like Women in Linux and Women in Big Data, where Intel feels that that diversity of thinking and engagement is critical for our success. So as we look at Intel, not to be encumbered by the past but to break boundaries, to deliver the technology that you all will go off and do something wonderful with, we're going to remain committed to that, and I look forward to continuing to work with you. Thank you, and have a great conference.

[Applause]

Thank you, Doug. Now we have one more customer story for you today. When you think about customers' challenges in the technology landscape, it is hard to ignore the public cloud these days. The public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry, and our next customer, they actually had that same challenge: they wanted to tap into that innovation, but they were also making bets for the long term. They wanted flexibility in providers, and they had to integrate with the systems they already have, and they have done a phenomenal job in executing on this. So please give a warm welcome to Kerry Pierce from Cathay
Pacific. Kerry, come on out. Thanks very much, Matt. Hi, everyone. Thank you for giving me the opportunity to share a little bit about our cloud journey. Let me start by telling you a little bit about Cathay Pacific. We're an international airline based in Hong Kong, and we serve a passenger and cargo network to over 200 destinations in 52 countries and territories. Over the last seventy years, we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs. We invest in what matters most to our customers, to you, focusing on our exemplary service and our great product, both on the ground and in the air. We're also investing in expanding our network: beyond our multiple frequencies to financial districts such as Tokyo, New York, and London, we're connecting Asia and Hong Kong with key tech hubs like San Francisco, where we have multiple flights daily, and we're connecting Asia and Hong Kong to places like Tel Aviv and our upcoming destination of Dublin. In fact, 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth, and in September we will be launching our longest flight, from Hong Kong direct to Washington, D.C., using a state-of-the-art Airbus A350-1000 aircraft.

So that's a little bit about Cathay Pacific; let me tell you about our journey to the cloud. I'm not going to go into technical details; there are far smarter people out in the audience who will be able to do that for you. I'll just focus a little bit on what we were trying to achieve, and on the people side of it that helped us get there. A couple of years ago we had, no doubt, the same issues that many of you do; I don't think we're unique. We had a traditional, on-premise, non-standardized, fragile infrastructure. It didn't meet our infrastructure needs and it didn't meet our development needs. It was costly to maintain, it was costly to grow, and it really inhibited innovation. Most importantly, it slowed
the delivery of value to our customers. At the same time, you had the hype of cloud over the last few years: cloud this, cloud that, cloud's going to fix the world. We were really keen on making sure we didn't get wound up in that, so we focused on what we needed. We started bottom-up with a strategy. We knew we wanted to be cloud-agnostic. We wanted active-active on-premise data centers with a single network and fabric, and we wanted public clouds that were trusted and acted as an extension of that environment, not independently. We wanted to avoid single points of failure, and we wanted to reduce interdependencies by having loosely coupled designs. And finally, we wanted to be scalable; we wanted to be able to cater for sudden surges of demand. In a nutshell, we kind of just wanted to make everything easier. At a management level, we wanted to be a broker of services: not one size fits all, because that doesn't work, but also not one of everything. We wanted to standardize on a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud, not against it.

So we started on a journey with Red Hat. We implemented Red Hat CloudForms and Ansible to manage our hybrid cloud. We also implemented Red Hat Satellite to maintain and manage our environment. We built a Red Hat OpenStack on-premise environment to give us an alternative, and at the same time we migrated a number of customer applications to a production public cloud OpenShift environment. But it wasn't all Red Hat; you'll have heard today that Red Hat fits within an overall ecosystem. We looked at a number of third-party tools and services and looked at developing those into our core solution. I think at last count we had tried and tested somewhere past eighty different tools, and at the moment we still have around 62 in our environment that help us through that journey. But let me put the technical solution aside a little bit, because it doesn't matter how good your technical solution
is if you don't have the culture and the people to get it right. As a group, we needed to be aligned for delivery, and we focused on three core behaviors: accountability, agility, and collaboration. Now, I was really lucky; we've got a pretty fantastic team for whom that was actually pretty easy. But again, don't underestimate the importance of getting the culture and the people right, because all the technology in the world doesn't matter if you don't have that right. I asked the team what we did differently, because in our situation we didn't go out and hire a bunch of new people, and we didn't go out and hire a bunch of consultants; we had the staff that had been with us for 10, 20, and in some cases 30 years. So what did we do differently? It was really simple: we just empowered and supported our staff. We knew they were the smart ones; they were the ones dealing with the legacy environment, and they had the passion to make the change. So as a team we encouraged suggestions and contributions from our overall IT community, from the bottom up. We started small, we proved the case, we told the story, and then we got buy-in; only then did we implement wider.

The benefits for our staff were a huge increase in staff satisfaction, a reduction in application and platform outage support incidents, risk-free and failsafe application releases, work-life balance with no more midnight deployments, and our application and infrastructure people could really focus on delivering customer value, not on firefighting. And for our end customers, the people that travel with us, it was really simple: we could provide a stable service that allowed for faster releases, which meant we could deliver value faster. In terms of stats, we migrated 16 production B2C applications to a public cloud OpenShift environment in 12 months. We decreased provisioning time from weeks, or occasionally months when we were waiting for hardware, to minutes, and we had a hundred percent availability of our key
customer-facing systems. But most importantly, it was about people. We'd built a culture, a culture of innovation, on a foundation of collaboration, agility, and accountability, and that permeated throughout the IT organization, not just those people that were involved in the project. Everyone within IT could see what good looked like, and could see what it looked like to work together, and that was a key foundation for us. As for the future, you will have heard today that everything's changing, so we're going to continue to develop our open hybrid cloud, onboard more public cloud service providers, continue to build more modern applications and leverage the emerging technology, integrate and automate everything we possibly can, and leverage more open source products, with the great support of the open source community. So there you have it; that's our journey. I think we succeeded by not being overawed and by starting with the basics. The technology was key, obviously; it's a core component. But most importantly, it was the way we approached our transition: we had a clear strategy that was actually developed bottom-up by the people that were involved day to day, and we empowered those people to deliver, and that provided benefits to both our staff and to our customers. So thank you for giving me the opportunity to share, and I hope you enjoy the rest of the Summit.

[Applause]

Thanks. What a great story, what a great customer story to close on. And we have one more partner to come up, and this is a partner that all of you know: Microsoft. Microsoft has gone through an amazing transformation, and we've built an incredibly meaningful partnership with them, all the way from our open source collaboration to what we do on the business side. We started with support for Red Hat Enterprise Linux on Hyper-V, and that was truly just the beginning. Today we're announcing one of the most exciting joint product offerings on the market. Let's please give a
warm welcome to Paul Cormier and Scott Guthrie to tell us about it. Guys, come on out.

You know, Scott, welcome, welcome to Red Hat Summit. Thanks for coming; I really appreciate it. Great to be here. You know, it surprised a lot of people when we published the list of speakers and you were on it, and now you and I are on stage here. It's really important and exciting to us, this exciting new partnership. We've worked together a long time, from the hypervisor up to common support, and now around hybrid cloud. Maybe, from your perspective, a little bit of what led us here? Well, I think the thing that's really led us here is customers. At Microsoft we've been on kind of a transformation journey the last several years, where we really try to put customers at the center of everything that we do. And as part of that, you quickly learn from customers, and I'm including everyone here, that you've got a hybrid estate, both in terms of what you run on premises, where there's a lot of Red Hat software and a lot of Microsoft software, and then, as they take the journey to the cloud, a hybrid estate in terms of how you run that between on-premises and a public cloud provider. And so I think the thing that both of us recognized, and certainly our focus here at Microsoft, has been how do we really meet customers where they're at and where they want to go, and make them successful in that journey. It's been fantastic working with Paul and the Red Hat team over the last two years in particular; we've spent a lot of time together, and I'm really excited about the journey ahead. So maybe you can share a bit more about the announcement we're about to make today? Yeah, it's a really exciting announcement, and really, I think, a first of its kind, in that we're delivering a Red Hat OpenShift on Azure service that we're jointly
developing and jointly managing together. So this is different than a traditional offering, where it's just running inside VMs and it's sort of two vendors working separately; this is really a jointly managed service that we're providing, with full enterprise support and a full SLA, where there's a single throat to choke, if you will, although collectively it's both our throats, in terms of making sure that it works well. And it's really uniquely designed around this hybrid world, in that it will support both Windows and Linux containers, and it's the same OpenShift that runs both in the public cloud on Azure and on-premises. It's something that we hear a lot about from customers; I know there are a lot of people here that have asked both of us for this, and we're super excited to be able to talk about it today. We're gonna show off the first demo of it in just a bit. Okay, well, I'm gonna ask you to elaborate a bit more about how this fits into the bigger Microsoft picture, and I'll get out of your way. So thanks again; thank you for coming. Here we go. Thanks, Paul.

So I thought I'd spend just a few minutes talking about some of the work that we're doing with Microsoft Azure and the overall Microsoft cloud, and then go deeper in terms of the new offering that we're announcing today together with Red Hat, and show a demo of it actually in action in a few minutes. At a high level, in terms of some of the work we've been doing at Microsoft the last couple of years, it's really been around this journey to the cloud that we see every organization going on today. And specifically with Microsoft Azure, we've been providing a cloud platform that delivers the infrastructure, the application, and kind of the core computing needs that organizations have as they want to take advantage of what the cloud has to offer. In terms of our focus with Azure, we deliver
lots and lots of different services and features, but we've focused in particular on four key themes, and we see these four key themes aligning very well with the journey Red Hat has been on; it's partly why we think the partnership between the two companies makes so much sense. For us, the thing we've been really focused on with Azure is how do we deliver a really productive cloud, meaning how do we enable you to take advantage of cutting-edge technology, and how do we accelerate the successful adoption of it, whether it's around the integration of managed services that we provide, both in the application space and the data space, the analytics and AI space, but also in terms of the end-to-end management and development tools and how all those services work together, so that teams can adopt them and be super successful. We deeply believe in hybrid, and believe that the world is going to be a multi-cloud and a multi-distributed world, so how do we enable organizations to take the existing investments that they already have, easily integrate them with a public cloud environment, and get immediate ROI on day one, without having to rip and replace tons of solutions. We're moving very aggressively in the AI space, and are looking to provide a rich set of AI services: both finished AI models, things like speech detection, vision detection, object motion, et cetera, that any developer, even non-data scientists, can integrate to make applications smarter; and then a rich set of AI tooling that enables organizations to build custom models and integrate them as part of their applications and with their data. And then we invest very, very heavily in trust. Trust is at the core of Azure, and we now have more compliance certifications than any other cloud provider, we run in more countries than any other cloud provider, and we
really focus on unique promises around data residency, data sovereignty, and privacy that are really differentiated across the industry. In terms of where Azure runs today, we're in 50 regions around the world. A region, for us, is typically a cluster of multiple data centers that are grouped together, and you can see we're on pretty much every continent, with the exception of Antarctica, today. The beauty is that you're going to be able to take the Red Hat OpenShift service and run it on Azure in each of these different locations, and really have a truly global footprint as you look to build and deploy solutions. We've seen this focus on productivity, hybrid, intelligence, and trust really resonate in the market: about 90 percent of Fortune 500 companies today are deployed on Azure, and you heard Nike talk a little bit earlier this afternoon about some of their journey as they've moved to the public cloud. This is a small logo slide of just a couple of the companies that are on Azure today. What I'll do, actually, even before we dive into the OpenShift demo, is show a quick video about one of those companies; there are actually several people from that organization here today. Deutsche Bank has been working with both Microsoft and Red Hat for many years, with Microsoft on the Azure side and Red Hat both on the RHEL side and on the OpenShift side, and it's one of these customers that have helped bring the two companies together to deliver this managed OpenShift service on Azure. So I'm just going to play a quick video of some of the folks at Deutsche Bank talking about their experiences and what they're trying to get out of it. If we could roll the video, that'd be great. >> Technology is at the absolute heart of Deutsche Bank. We recognized that the cost of running our infrastructure was particularly high; there was an enormous amount of under-utilization. We needed a platform which was open to polyglot architecture, supporting
any kind of application workload across the various business lines of the firm. We analyzed over 60 different vendor products, and we ended up with Red Hat OpenShift. I'm super excited Microsoft is supporting Linux so strongly and adopting a hybrid approach. We chose Azure because Microsoft was the ideal partner to work with on constructs around security, compliance, and business continuity, and Azure is in all the places, geographically, that we need to be. We have applications now able to go from a proof of concept to production in three weeks; that is already breaking records. OpenShift, with Kubernetes and containers, allows us to apply the same sets of processes and automation across a wide range of our application landscape. On any given day, we run between seven and twelve thousand containers across three regions. We started seeing huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers. OpenShift gives us an abstraction layer which allows us to move our applications between providers without having to reconfigure or recode those applications. What's really exciting for me about this journey is the way that both Red Hat and Microsoft have embraced not just what we're doing, but what each other are doing, and have worked together to build OpenShift as a first-class citizen with Microsoft. [Applause] >> In terms of what we're announcing today: it's a new, fully managed OpenShift service on Azure, and it's really the first fully managed OpenShift service provided end-to-end across any of the cloud providers. It's jointly engineered, operated, and supported by both Microsoft and Red Hat, and that means, again, one service, one SLA, and both companies standing firmly behind it, really focusing on how we make customers successful. As part of that, we're providing enterprise-grade SLAs, but also support and integration testing, so you can take advantage of all your RHEL and Linux-based containers and all of
your Windows Server-based containers, and run them in a joint way with a common management stack, taking advantage of one service to get maximum density, get maximum code reuse, and take advantage of a containerized world in a better way than ever before. This customer focus is very much at the center of what both companies are centered around. So what I thought would be fun, rather than just talking about OpenShift, is to actually show off a little bit of the journey in terms of what taking advantage of it looks like. I'd like to invite Brendan and Chris onstage, who are going to show off a live demo of OpenShift on Azure in action, and really walk through how to provision the service and how to start taking advantage of it using the full OpenShift ecosystem. So please welcome Brendan and Chris, who are going to join us on stage for a demo. >> Thanks, Scott. It's been a good afternoon. So, what we want to get into right now: first, I'd like to thank Brendan Burns for joining us from Microsoft Build. It's a busy week for you; I'm sure you're on stage there a few times as well. What I like most about what we just announced is not only the business and technical aspects, but the operational aspect: the uniqueness, the expertise that Red Hat has for running OpenShift, combined with the expertise that Microsoft has within Azure, and customers are going to get this joint offering, if you will, with Red Hat OpenShift on Microsoft Azure. And so, with that, Brendan, I really appreciate you being here; maybe talk to the folks about what we're going to show. >> Yeah, so we're going to take a look at what it looks like to deploy OpenShift onto Azure via the new OpenShift service, and the real selling point, the really great part of this, is the deep integration with the cloud-native API. So the same tooling that you would use to create virtual machines, to create disks, to create
databases is now the tooling that you're going to use to create an OpenShift cluster. To show you this, first we're going to create a resource group here. We're going to create that resource group in East US using the az tool; that's the Azure command-line tooling. A resource group is sort of a folder on Azure that holds all of your stuff, so that's going to come back in a second. I've created my resource group in East US, and now we're going to use that exact same tool, calling into Azure APIs, to provision an OpenShift cluster. So here we go: we have az openshift, that's our new command-line tool, putting it into that resource group, and I'm going to put it in East US. All right, so it's going to take a little bit of time to deploy that OpenShift cluster. It's doing a bunch of work behind the scenes, provisioning all kinds of resources, as well as credentials to access a bunch of different Azure APIs. >> So are we actually able to see this? >> Yeah, we can cut over to that resource group in just a second and do a reload. >> So Brendan, while we're loading: the beauty of what the teams have been doing together already is the fact that now OpenShift is a first-class citizen, as it were, within the Azure tooling. So I presume not only can I do a deployment, but I can do things like scale and check my credentials, and pretty much everything that I could do with any other service? >> That's exactly right. Anything that you were used to doing via the... my computer has locked up. There we go, the demo gods are totally with me. Oh, there we go. Oh no, I hit reload. >> That was just evil timing on the house. This is another use for operators, as we talked about earlier today. >> That's right. My dashboard should be coming up. Do I dare click on something? That's awesome, it was there. There we go. >> Good job. So what's really interesting about this: I've also heard that it deploys in as little as five to six minutes, which is really
good for customers who want to get up and running with it. >> All right, there we go, there it is. We managed to make it. See, that shows that it's real, right? You see the sweat coming off of me there. But there you can see the various resources that are being created in order to create this OpenShift cluster: virtual machines, disks, all of the pieces, provisioned for you automatically via that one single command-line call. Now, of course, it takes a few minutes to create the cluster, so in order to show the other side of that integration, the integration between OpenShift and Azure, I'm going to cut over to an OpenShift cluster that I already have created. All right, so here you can see my OpenShift cluster that's running on Microsoft Azure. I'm going to actually log in over here, and the first sign you're going to see of the integration is that it's actually using my credentials, my login, and going through Active Directory and any corporate policies that I may have around smart cards, two-factor auth, anything like that, to authenticate myself to that OpenShift cluster. So I'll accept that it can access my account, and now we're going to load up the OpenShift web console. >> Now, this looks familiar to me. >> Oh yeah, if anybody out there has used OpenShift, this is the exact same console. What we're going to show, though, is how this console, via the Open Service Broker and the Open Service Broker implementation for Azure, integrates natively with OpenShift. All right, so we can go down here, and we can actually see: I want to deploy a database. I'm going to deploy Mongo as the key-value store that I'm going to use. But, as we talk about management and having an OpenShift cluster that's managed for you, I don't really want to have to manage my database either. So I'm actually going to use Cosmos DB. It's a native Azure service, it's a multi-model database that offers me the ability to access my data in a variety of different formats, including MongoDB, fully managed, replicated around the world,
a pretty incredible service. So I'm going to go ahead and create that. >> Now, Brendan, what's interesting to me is that we talked about the operational aspects, and clearly it's not you and I running the clusters, but you do need that way to interface with it. So when customers are able to deploy this, all of this is out of the box? >> There's no additional componentry; this is what you get when you use that tool to create that OpenShift cluster, this is what you get, with all of that integration. Okay, great, I'll step through here, and go ahead, I don't have any IP ranges, there we go, all right, and we create that binding. And so now, behind the scenes, OpenShift is integrated with the Azure APIs, with all of my credentials, to go ahead and create that distributed database. Once it's done provisioning, all of the credentials necessary to access the database are going to be automatically populated into Kubernetes, available for me inside of OpenShift via service discovery, to access from my application without any further work. So I think that really shows not only the power of integrating OpenShift with an Azure-based API, but actually the power of integrating the Azure APIs inside of OpenShift, to make a truly seamless experience for managing and deploying your containers across a variety of different platforms. >> Hey, Brendan, this is great. I know you've got a flight to catch, because I think you're back onstage in a few hours, but really appreciate you joining us today. >> Absolutely. I look forward to seeing what else we do. >> Yeah, absolutely, thank you so much. Thanks, guys. Matt, you want to come back on up? >> Thanks a lot, guys. If you have never had the opportunity to do a live demo in front of 8,000 people, it'll give you a new appreciation for standing up there and doing it, and that was really good. You know, every time I get the chance to take a step back and think about the technology that we have at our command today, I'm in awe. Just
the progress over the last 10 or 20 years is incredible, and to think about what might come in the next 10 or 20 years really is unthinkable. Forget 10 years, even what might come in the next five years, even the next two years. This can create a lot of uncertainty about what's to come, but I am certain about one thing, and that is: if ever there was a time when any idea is achievable, it is now. Just think about what you've seen today, every aspect of open hybrid cloud. You have the world's infrastructure at your fingertips, and it's not stopping. You've heard about the innovation of open source, how fast that's evolving and improving this capability. You've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey, and you've heard from customer after customer that has already started their journey, and the successes that they've had. One of the neat parts about this afternoon, and later this week, is that you will actually get to put your hands on all of this technology together in our live audience demo. This is what Summit is all about for us. It's a chance to bring together the technology experts that you can work with to help formulate how to pull off those ideas. We have the chance to bring together technology experts, our customers, and our partners, and really create an environment where everyone can experience the power of open source, that same spark that I talked about from when I was at IBM, where I understood the potential that open source had for enterprise customers. We want to create the environment where you can have your own spark, that same inspiration. In tomorrow's keynote, actually, you will hear a story about how open source is changing medicine as we know it and literally saving lives. It is a great example of expanding the ideas of what might be possible that we came into this event with. So let's make this the best Summit ever. Thank you very much for being here. Let's kick things off right: head down to the Welcome Reception in the expo hall, and please enjoy the Summit. Thank you all so much. [Music]
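The provisioning flow Brendan demos boils down to two CLI calls with the same tooling used for any other Azure resource. A rough sketch of that sequence follows; note that the `az openshift` subcommand and its flag spellings are assumptions based on what was said on stage, not documented syntax, so check the current Azure CLI reference before relying on them.

```python
import subprocess
from shutil import which

# Resource group and region match the demo; the "az openshift" flag names
# are assumptions -- verify against the current Azure CLI documentation.
def provision_commands(group, location, name):
    """The two invocations from the demo: create a resource group,
    then provision an OpenShift cluster into it with the same tool."""
    create_group = ["az", "group", "create",
                    "--name", group, "--location", location]
    create_cluster = ["az", "openshift", "create",
                      "--resource-group", group,
                      "--name", name, "--location", location]
    return [create_group, create_cluster]

def run(commands, dry_run=True):
    """Dry-run by default; only shell out when the Azure CLI is installed."""
    for cmd in commands:
        if dry_run or which("az") is None:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

run(provision_commands("openshift-demo", "eastus", "demo-cluster"))
```

The point of the demo is exactly this symmetry: the cluster is created, scaled, and inspected with the same `az` tooling as virtual machines and disks.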
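The binding step Brendan walks through ends with the database credentials "automatically populated into Kubernetes, available via service discovery." A minimal sketch of what that looks like from the application's side is below: Kubernetes secrets are base64-encoded maps, and a binding projects them into the pod's environment. The key names (`MONGO_URI`, etc.) are illustrative, not the Open Service Broker's actual key names.

```python
import base64
import os

# A Kubernetes secret stores values base64-encoded; the service broker
# writes the database credentials into one when the binding is created.
def decode_secret(secret_data):
    """Decode a secret's data map the way a pod sees it once the values
    are projected into environment variables."""
    return {k: base64.b64decode(v).decode() for k, v in secret_data.items()}

def inject_env(decoded):
    """Project the decoded credentials into the process environment,
    mimicking an env-from-secret projection in the pod spec."""
    os.environ.update(decoded)
    return decoded

binding_secret = {  # what the broker might populate (names are made up)
    "MONGO_URI": base64.b64encode(b"mongodb://cosmos.example:10255").decode(),
    "MONGO_PASSWORD": base64.b64encode(b"s3cret").decode(),
}

creds = inject_env(decode_secret(binding_secret))
# The application then reads os.environ["MONGO_URI"] with no further wiring --
# the "without any further work" part of the demo.
```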
Dinesh Nirmal, IBM | Machine Learning Everywhere 2018
>> Announcer: Live from New York, it's theCUBE, covering Machine Learning Everywhere: Build Your Ladder to AI. Brought to you by IBM. >> Welcome back to Midtown, New York. We are at Machine Learning Everywhere: Build Your Ladder to AI being put on by IBM here in late February in the Big Apple. Along with Dave Vellante, I'm John Walls. We're now joined by Dinesh Nirmal, who is the Vice President of Analytics Development and Site Executive at the IBM Silicon Valley lab, soon. Dinesh, good to see you, this morning, sir. >> Thank you, John. >> Fresh from California. You look great. >> Thanks. >> Alright, you've talked about this, and it's really your world: data, the new normal. Explain that. When you say it's the new normal, what exactly... How is it transforming, and what are people having to adjust to in terms of the new normal. >> So, if you look at data, I would say each and every one of us has become a living data set. Our age, our race, our salary. What our likes or dislikes, every business is collecting every second. I mean, every time you use your phone, that data is transmitted somewhere, stored somewhere. And, airlines for example, is looking, you know, what do you like? Do you like an aisle seat? Do you like to get home early? You know, all those data. >> All of the above, right? >> And petabytes and zettabytes of data is being generated. So now, businesses' challenge is that how do you take that data and make insights out of it to serve you as a better customer. That's where I've come to, but the biggest challenge is that, how do you deal with this tremendous amount of data? That is the challenge. And creating sites out of it. >> That's interesting. I mean, that means the definition of identity is really... For decades it's been the same, and what you just described is a whole new persona, identity of an individual. 
>> And now, you take the data, and you add some compliance or provisioning like GDPR on top of it, all of a sudden how do-- >> John: What is GDPR? For those who might not be familiar with it. >> Dinesh: That's the regulatory term that's used by EU to make sure that, >> In the EU. >> If me as a customer come to an enterprise, say, I don't want any of my data stored, it's up to you to go delete that data completely, right? That's the term that's being used. And that goes into effect in May. How do you make sure that that data gets completely deleted by that time the customer has... How do you get that consent from the customer to go do all those... So there's a whole lot of challenges, as data multiplies, how do you deal with the data, how do you create insights to the data, how do you create consent on the data, how do you be compliant on that data, how do you create the policies that's needed to generate that data? All those things need to be... Those are the challenges that enterprises face. >> You bring up GDPR, which, for those who are not familiar with it, actually went into effect last year but the fines go into effect this year, and the fines are onerous, like 4% of turnover, I mean it's just hideous, and the question I have for you is, this is really scary for companies because they've been trying to catch up to the big data world, and so they're just throwing big data projects all over the place, which is collecting data, oftentimes private information, and now the EU is coming down and saying, "Hey you have to be able to, if requested, delete that." A lot of times they don't even know where it is, so big challenge. Are you guys, can you help? >> Yeah, I mean, today if you look at it, the data exists all over the place. I mean, whether it's in your relational database or in your Hadoop, unstructured data, whereas you know, optics store, it exists everywhere. 
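The GDPR deletion requirement Dinesh describes, knowing where a subject's data lives and removing it everywhere on request, can be sketched as a catalog that fans a deletion out to every registered store. All names here are hypothetical; IBM's governance catalog has its own API, and this is only the shape of the flow.

```python
# Minimal sketch of a right-to-erasure flow: a catalog records which stores
# hold data for each subject, and a deletion request fans out to all of them.
class DataStore:
    def __init__(self, name):
        self.name = name
        self.records = {}

    def delete_subject(self, subject_id):
        return self.records.pop(subject_id, None) is not None

class GovernanceCatalog:
    """Tracks which stores hold data for which subject (hypothetical API)."""
    def __init__(self):
        self.locations = {}  # subject_id -> set of DataStore

    def register(self, subject_id, store, record):
        store.records[subject_id] = record
        self.locations.setdefault(subject_id, set()).add(store)

    def erase(self, subject_id):
        """Honor a deletion request: remove the subject everywhere at once."""
        stores = self.locations.pop(subject_id, set())
        return [s.name for s in stores if s.delete_subject(subject_id)]

catalog = GovernanceCatalog()
warehouse, hadoop = DataStore("warehouse"), DataStore("hadoop")
catalog.register("user42", warehouse, {"age": 34})
catalog.register("user42", hadoop, {"clicks": 120})
deleted_from = catalog.erase("user42")  # the subject is gone from both stores
```

The hard part in practice, as the conversation notes, is the first half: knowing every place the data exists before you can delete it.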
And you have to have a way to see where the data is, and whether the customer has given consent for you to look at the data, for you to delete the data, all those things. We have tools that we have built, and we have been in the business for a very long time, for example our governance catalog, where you can see all the data sources, the policies that's associated with it, the compliance, all those things. So you can look through that catalog, and you can see which data is GDPR compliant, which data is not, which data you can delete, which data you cannot. >> We were just talking in the open, Dave made the point that many companies, you need all-stars, not just somebody who has a specialty in one particular area, but maybe somebody who's in a particular regiment and they've got to wear about five different hats. So how do you democratize data to the point that you can make these all-stars? Across all kinds of different business units or different focuses within a company, because all of a sudden people have access to great reams of information. I've never had to worry about this before. But now, you've got to spread that wealth out and make everybody valuable. >> Right, really good question. Like I said, the data exists everywhere, and most enterprises don't want to move the data. Because it's a tremendous effort to move from an existing place to another one and make sure the applications work and all those things. We are building a data virtualization layer, a federation layer, by which, let's say you're a business unit, you can get access to that data. Now you can use that federated data virtualization layer without moving data, to go and grab that small piece of data. If you're a data scientist, let's say, you want only a very small piece of data that exists in your enterprise. You can go after it, without moving the data, just pick that data, do your work, and build a model, for example, based on that data.
So that data virtualization layer really helps because it's basically an SQL statement, if I were to simplify it. That can go after the data that exists, whether it's at relational or non-relational place, and then bring it back, have your work done, and then put that data back into work. >> I don't want to be a pessimist, because I am an optimist, but it's scary times for companies. If they're a 20th century organization, they're really built around human expertise. How to make something, how to transact something, or how to serve somebody, or consult, whatever it is. The 21st century organization, data is foundational. It's at the core, and if my data is all over the place, I wasn't born data-driven, born in the cloud, all those buzzwords, how do traditional organizations catch up? What's the starting point for them? >> Most, if not all, enterprises are moving into a data-driven economy, because it's all going to be driven by data. Now it's not just data, you have to change your applications also. Because your applications are the ones that's accessing the data. One, how do you make an application adaptable to the amount of data that's coming in? How do you make accuracy? I mean, if you're building a model, having an accurate model, generating accuracy, is key. How do you make it performant, or govern and self-secure? That's another challenge. How do you make it measurable, monitor all those things? If you take three or four core tenets, that's what the 21st century's going to be about, because as we augment our humans, or developers, with AI and machine learning, it becomes more and more critical how do you bring these three or four core tenets into the data so that, as the data grows, the applications can also scale. >> Big task. If you look at the industries that have been disrupted, taxis, hotels, books, advertising. >> Dinesh: Retail. >> Retail, thank you. 
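Going back to the virtualization layer Dinesh describes as "basically an SQL statement" over data left in place: a toy way to see the idea is sqlite's `ATTACH`, where one query spans two separate databases without copying either. This stands in for a real federation engine that would reach into Db2, Hadoop, object stores, and so on; the table and column names are made up.

```python
import os
import sqlite3
import tempfile

# Toy illustration of federation: one SQL statement spanning two databases
# that stay where they are. ATTACH stands in for the virtualization layer.
def build_source(path, table, rows):
    conn = sqlite3.connect(path)
    conn.execute(f"CREATE TABLE {table} (id INTEGER, value TEXT)")
    conn.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

tmp = tempfile.mkdtemp()
warehouse_db = os.path.join(tmp, "warehouse.db")
lake_db = os.path.join(tmp, "lake.db")
build_source(warehouse_db, "customers", [(1, "alice"), (2, "bob")])
build_source(lake_db, "clicks", [(1, "homepage"), (1, "pricing")])

conn = sqlite3.connect(warehouse_db)
conn.execute(f"ATTACH DATABASE '{lake_db}' AS lake")  # the "federation layer"
rows = conn.execute(
    """SELECT c.value, COUNT(k.id)
       FROM customers c JOIN lake.clicks k ON c.id = k.id
       GROUP BY c.value"""
).fetchall()
# rows joins data from both stores without moving either dataset.
```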
Maybe less now, and you haven't seen that disruption yet in banks, insurance companies, certainly parts of government, defense, you haven't seen a big disruption yet, but it's coming. If you've got the data all over the place, you said earlier that virtually every company has to be data-driven, but a lot of companies that I talk to say, "Well, our industry is kind of insulated," or "Yeah, we're going to wait and see." That seems to me to be the recipe for disaster, what are your thoughts on that? >> I think the disruption will come from three angles. One, AI. Definitely that will change the way, blockchain, another one. When you say, we haven't seen in the financial side, blockchain is going to change that. Third is quantum computing. The way we do compute is completely going to change by quantum computing. So I think the disruption is coming. Those are the three, if I have to predict into the 21st century, that will change the way we work. I mean, AI is already doing a tremendous amount of work. Now a machine can basically look at an image and say what it is, right? We have Watson for cancer oncology, we have 400 to 500,000 patients being treated by Watson. So AI is changing, not just from an enterprise perspective, but from a socio-economic perspective and a from a human perspective, so Watson is a great example for that. But yeah, disruption is happening as we speak. >> And do you agree that foundational to AI is the data? >> Oh yeah. >> And so, with your clients, like you said, you described it, they've got data all over the place, it's all in silos, not all, but much of it is in silos. How does IBM help them be a silo-buster? >> Few things, right? One, data exists everywhere. How do you make sure you get access to the data without moving the data, that's one. But if you look at the whole lifecycle, it's about ingesting the data, bringing the data, cleaning the data, because like you said, data becomes the core. Garbage in, garbage out. 
You cannot get good models unless the data is clean. So there's that whole process, I would say if you're a data scientist, probably 70% of your time is spent on cleaning the data, making the data ready for building a model or for a model to consume. And then once you build that model, how do you make sure that the model gets retrained on a regular basis, how do you monitor the model, how do you govern the model, so that whole aspect goes in. And then the last piece is visualizational reporting. How do you make sure, once the model or the application is built, how do you create a report that you want to generate or you want to visualize that data. The data becomes the base layer, but then there's a whole lifecycle on top of it based on that data. >> So the formula for future innovation, then, starts with data. You add in AI, I would think that cloud economics, however we define that, is also a part of that. My sense is most companies aren't ready, what's your take? >> For the cloud, or? >> I'm talking about innovation. If we agree that innovation comes from the data plus AI plus you've got to have... By cloud economics I mean it's an API economy, you've got massive scale, those kinds of, to compete. If you look at the disruptions in taxis and retail, it's got cloud economics underneath it. So most customers don't really have... They haven't yet even mastered cloud economics, yet alone the data and the AI component. So there's a big gap. >> It's a huge challenge. How do we take the data and create insights out of the data? And not just existing data, right? The data is multiplying by the second. Every second, petabytes or zettabytes of data are being generated. So you're not thinking about the data that exists within your enterprise right now, but now the data is coming from several different places. Unstructured data, structured data, semi-structured data, how do you make sense of all of that? 
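The lifecycle Dinesh outlines above, clean the data (the 70% part), build a model, then monitor and retrain it on a regular basis, can be sketched in a few lines. The "model" here is just a running mean, a deliberately trivial stand-in for whatever estimator an enterprise actually uses; the drift threshold and function names are invented for illustration.

```python
# Toy sketch of the clean -> train -> monitor -> retrain loop.
def clean(rows):
    """The 70% part: drop records with missing or non-numeric values."""
    return [float(r) for r in rows
            if r is not None and str(r).replace(".", "", 1).isdigit()]

class MeanModel:
    """Stand-in estimator: predicts the mean of its training data."""
    def fit(self, data):
        self.prediction = sum(data) / len(data)
        return self

    def error(self, actual):
        return abs(self.prediction - actual)

def monitor_and_retrain(model, history, new_point, threshold=1.0):
    """Retrain whenever observed error exceeds the drift threshold."""
    history.append(new_point)
    if model.error(new_point) > threshold:
        model.fit(history)  # retrain on the grown history
        return True
    return False

data = clean([10, None, "11", "n/a", 12])   # garbage in filtered out
model = MeanModel().fit(data)
retrained = monitor_and_retrain(model, data, 20.0)  # drift triggers retrain
```

The retraining trigger is the piece that often gets skipped in practice, which is why model accuracy decays silently after deployment.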
That is the challenge the customers face, and, if you have existing data, on top of the newcoming data, how do you predict what do you want to come out of that. >> It's really a pretty tough conundrum that some companies are in, because if you're behind the curve right now, you got a lot of catching up to do. So you think that we have to be in this space, but keeping up with this space, because the change happens so quickly, is really hard, so we have to pedal twice as fast just to get in the game. So talk about the challenge, how do you address it? How do you get somebody there to say, "Yep, here's your roadmap. "I know it's going to be hard, "but once you get there you're going to be okay, "or at least you're going to be on a level playing field." >> I look at the three D's. There's the data, there's the development of the models or the applications, and then the deployment of those models or applications into your existing enterprise infrastructure. Not only the data is changing, but that development of the models, the tools that you use to develop are also changing. If you look at just the predictive piece, I mean look from the 80's to now. You look at vanilla machine learning versus deep learning, I mean there's so many tools available. How do you bring it all together to make sense which one would you use? I think, Dave, you mentioned Hadoop was the term from a decade ago, now it's about object store and how do you make sure that data is there or JSON and all those things. Everything is changing, so how do you bring, as an enterprise, you keep up, afloat, on not only the data piece, but all the core infrastructure piece, the applications piece, the development of those models piece, and then the biggest challenge comes when you have to deploy it. Because now you have a model that you have to take and deploy in your current infrastructure, which is not easy. 
Because you're infusing machine learning into your legacy applications, your third-party software, software that was written in the 60's and 70's, it's not an easy task. I was at a major bank in Europe, and the CTO mentioned to me that, "Dinesh, we built our model in three weeks. "It has been 11 months, we still haven't deployed it." And that's the reality. >> There's a cultural aspect too, I think. I think it was Rob Thomas, I was reading a blog that he wrote, and he said that he was talking to a customer saying, "Thank god I'm not in the technology industry, "things change so fast I could never, "so glad I'm not a software company." And Rob's reaction was, "Uh, hang on. (laughs) "You are in the technology business, "you are a software company." And so there's that cultural mindset. And you saw it with GE, Jeffrey Immelt said, "I went to bed an industrial giant, "woke up a software company." But look at the challenges that industrial giant has had transforming, so... They need partners, they need people that have done this before, they need expertise and obviously technology, but it's people and process that always hold it up. >> I mean technology is one piece, and that's where I think companies like IBM make a huge difference. You understand enterprise. Because you bring that wealth of knowledge of working with them for decades and they understand your infrastructure, and that is a core element, like I said the last piece is the deployment piece, how do you bring that model into your existing infrastructure and make sure that it fits into that architecture. And that involves a tremendous amount of work, skills, and knowledge. >> Job security. (all laugh) >> Dinesh, thanks for being with us this morning, we appreciate that and good luck with the rest of the event, here in New York City. Back with more here on theCUBE, right after this. (calming techno music)
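The deployment gap in the bank anecdote, a model built in three weeks that sits undeployed for eleven months, is often an interface problem: legacy applications can't be recoded around each new model. One common pattern, sketched below purely as an illustration (this is not IBM's deployment tooling), is a stable, versioned scoring facade that legacy callers hit while models are swapped behind it.

```python
# Sketch of a stable scoring facade: legacy code calls one unchanging
# entry point while model versions are deployed and rolled back behind it.
class ScoringService:
    def __init__(self):
        self.models = {}   # version -> callable model
        self.active = None

    def deploy(self, version, model_fn):
        """Register a new model version and make it live."""
        self.models[version] = model_fn
        self.active = version

    def rollback(self, version):
        """Revert to an earlier version without touching the callers."""
        if version in self.models:
            self.active = version

    def score(self, features):
        """The one entry point legacy applications are wired to."""
        return self.models[self.active](features)

svc = ScoringService()
svc.deploy("v1", lambda f: sum(f))           # first model ships
svc.deploy("v2", lambda f: sum(f) / len(f))  # retrained model swaps in
v2_score = svc.score([2.0, 4.0])             # legacy caller is unchanged
svc.rollback("v1")
v1_score = svc.score([2.0, 4.0])
```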
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Marta | PERSON | 0.99+ |
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Chris Keg | PERSON | 0.99+ |
Laura Ipsen | PERSON | 0.99+ |
Jeffrey Immelt | PERSON | 0.99+ |
Chris | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Chris O'Malley | PERSON | 0.99+ |
Andy Dalton | PERSON | 0.99+ |
Chris Berg | PERSON | 0.99+ |
Dave Velante | PERSON | 0.99+ |
Maureen Lonergan | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Paul Forte | PERSON | 0.99+ |
Erik Brynjolfsson | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Andrew McCafee | PERSON | 0.99+ |
Yahoo | ORGANIZATION | 0.99+ |
Cheryl | PERSON | 0.99+ |
Mark | PERSON | 0.99+ |
Marta Federici | PERSON | 0.99+ |
Larry | PERSON | 0.99+ |
Matt Burr | PERSON | 0.99+ |
Sam | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Dave Wright | PERSON | 0.99+ |
Maureen | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Cheryl Cook | PERSON | 0.99+ |
Netflix | ORGANIZATION | 0.99+ |
$8,000 | QUANTITY | 0.99+ |
Justin Warren | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
2012 | DATE | 0.99+ |
Europe | LOCATION | 0.99+ |
Andy | PERSON | 0.99+ |
30,000 | QUANTITY | 0.99+ |
Mauricio | PERSON | 0.99+ |
Philips | ORGANIZATION | 0.99+ |
Robb | PERSON | 0.99+ |
Jassy | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Mike Nygaard | PERSON | 0.99+ |
Prem Ananthakrishnan, Druva | Future of Cloud Data Protection & Management
>> Welcome back, everyone, to our special live Silicon Angle presentation with Druva's special live event. This is the Druva Cloud tech preview section with Prem Ananthakrishnan, who is the VP of Product. Prem, welcome to this segment and giving a preview of the Druva Cloud Platform. We've got a demo coming up. But first, tell us, what is the Druva Cloud Platform? >> First of all, John, great to be here. Let me start off by summarizing what Jaspreet said earlier. So, Druva Cloud Platform brings to market a project called Druva One that we have been working on internally for more than 18 months. What this provides is a single pane of glass for organizations to protect, govern, and intelligently manage their data, irrespective of where that data resides. And if you really think about where the enterprise data is today, going back to the conversation that you had with Matt and Dave, a lot of the data is now distributed and highly fragmented, right? It's not really sitting behind the four walls of the firewall like it used to. And when you think about how do you manage data that is so distributed and decentralized, you have to think about a centralized approach to manage that data, and cloud becomes the obvious choice for managing that data. That's what Druva Cloud Platform does. It really unifies that management experience by bringing together data across end points, infrastructure, SaaS, and cloud applications, like Office 365, and providing the unified single pane of glass experience again for our customers. And, more importantly, unlike, you know, the solutions that are out there on the market, which really force you to manage data in silos by using different application stacks or management consoles or even multiple logins, Druva provides a single unified interface from where you can easily manage this data. >> I get the unified approach. Let's drill into the as-a-service delivery piece. Why does it matter, and what's the impact to the customer?
>> That's a great question. First of all, we are the only solution in the market that can provide data protection and management as a service. And the as-a-service piece, you know, there are multiple advantages to it. Let's start with the obvious one. The obvious one is where people can save a lot of money and also save on the total cost of ownership by really eliminating hardware and the infrastructure costs. But when you start thinking about what's going on in the market, you know, with cloud washing and also with people really overusing the term cloud, you have to really think about how your customers would really see the difference between the benefits you would get from an as-a-service solution versus just software that's hosted in the cloud. And you know, I got to say when you start looking at people who have gone down the path of hosting software in the cloud, a lot of times they underestimate the cost and complexity that comes with maintaining as well as deploying and supporting software in the cloud. And what the end result is, you know, they get a huge check from the cloud provider, and then they're all upset. They are like, wait a minute, this is not what I was promised and not what I expected, right? Because if you think about what really goes behind this, when you start putting software in the cloud, you're still leasing infrastructure from your cloud provider. But you, the customer, are responsible for managing the application stack, which means you're responsible for patching software, upgrading it, security, ensuring the service availability with that software. All those things still fall on you. And that stuff still costs you. People don't realize that. >> Yeah, and what's interesting, too, with DevOps, we've got this whole infrastructure as code concept, so the cloud is attractive from that standpoint because all these hidden costs around the glue layer, if you will, APIs, microservices. 
You've still got to put them together in an effective way, which is also going to be hard. How does the cloud platform you guys have with Druva help facilitate the customer journey to be simpler to execute if they're all API based or they love DevOps? How do you guys fit into that? >> That's a great question again. But first it starts with the as-a-service model itself. When you think about a true software as a service solution, like Druva, what we do is we bring together that customer experience. It's not just about, you know, throwing software in the cloud and using it, as I mentioned earlier. You basically have a promise of SLA, our service guarantee. You also have a predictable cost. And you also have, you know, an underlying architecture that really supports all of that, right? And that's where this notion of APIs and microservices also comes in. When you talk about microservices, for example, that really allows our customers to scale pretty much infinitely to millions of users over zettabytes of data without having to worry about bottlenecks in performance or reliability or even resiliency. And that is huge, right? I mean, this kind of promise again you get with the cloud, but also with a true as-a-service experience in a true cloud world. >> Well, the big news here in this event we're doing digitally with you guys is obviously funding, but also the introduction of the Druva Cloud Platform preview. So let's get into the demo. You want to walk us through the solution? >> Absolutely. Let me switch over and walk you through the demo. So John, what you see here is the dashboard that an administrator would see once they log in to the Druva Cloud Platform management console. As you can see here, the dashboard gives you a quick summary of the total data protected and managed by Druva, with a clear breakdown of that data based on different data sources, such as your cloud applications, data on your end points, file servers, as well as virtual servers.
Again, bringing together that single pane of glass management experience across all your data sources. Once again, this is huge, right? When you start thinking about the legacy solutions, they offer this piecemeal. We're able to bring this unified experience, and do it on a single management console, allowing our customers to protect and manage and govern all this data. And when you look at the service utilization piece here, that really tells you the value an organization can get from this platform. Not just in terms of your classic backups and restores, but also in terms of how their internal teams can use this platform to solve their use cases around e-discovery or compliance. As you scroll down here, you can see some of the other elements of SaaS and, you know, the software as a service benefits that I talked about earlier. Things like service availability, supportability, and also a great user and learning experience. So when you talk about service availability, as you can see here, you can pretty much get a bird's-eye view of where your data is located anywhere in the world and also the operational status of the data center of a region. And once again, Druva is very uniquely positioned in the market when it comes to being able to spin up new data centers anywhere in the world on very short notice, maybe in just minutes or hours. And the reason, again, we are able to do that is because we're not constrained by the limitations of a software solution where you have to still install that on some server and bring up your application stack. We can pretty much orchestrate this anywhere in the world, where we also obviously leverage the global footprint from our public cloud partners, like Amazon and Microsoft. >> So both clouds are there. I see Ransomware on there, that's cool. Is there any kind of Steve Jobs, one more thing, kind of feature you can show us? >> There is definitely that. >> You've got a one more thing?
(laughs) >> There's always the one more thing. So let me get into that. (John laughs) Before I go into that, I want to mention one more thing. And then I'm going to dive into that real quick. So what you see here in the central panel are also the different microservices, right? So again, the microservices not only provide a great way for our customers to scale to terabytes of data and millions of users, they also give Druva a great way to bring new products and services to our customers with agility and great go-to-market efficiency. So our customers can easily consume something that we bring to them right off of this console. They can subscribe to it. Just like you would go to Amazon today, log in to that portal, and consume, let's say, a storage service like S3, our customers can come to Druva and consume data protection at scale with a single click. Now with that, I'm going to go to the Steve Jobs question. There's always one more thing. >> John: One more thing! Saving the best for last! >> Prem: (laughs) Always! To think about the administrative challenges a lot of people go through when they manage products and go through the day-to-day administration, they always struggle with navigating the different sections of the product or the product documentation, because that's how enterprise products are. They are fairly complex. They actually have multiple workflows. And then, especially when you think about remote offices, or locations where you have employees with a limited IT skill set, then you have to think about how do they really get started? How do they really know where to go? How do you get from point A to point B? And we took this problem statement to our engineering team and told them to solve it. Our brilliant engineers came up with this really cool search utility that we are calling CAS, or context aware search. And Jaspreet sort of alluded to this earlier in the day.
And if you look at what this does, as I start searching for any keyword, this is the kind of experience I'm sure you and I have seen with consumer websites. Let's say you go to a shopping site like Walmart or Amazon and when you're searching for whatever you're shopping for, the search tool uses your history, also has an intelligence of what other people have been looking for, and it comes back with results. >> John: Kind of like Google search for the enterprise. >> Prem: Exactly, but think about this, though. What this is doing now is Druva is bringing that consumer-like search experience into the enterprise. And now we're using that to solve this problem of administrators having to navigate through different parts of the product. So what you are able to do with this is now with a single click, you can easily navigate to any part of the product or the product documentation. So as an example, I'm just going to click on, I'll go back to that. I'm going to go back to Legal Hold. And I'm going to click on the Manage Legal Hold link. As you can see, with a single click, it takes me directly into that section of the product from where I can manage Legal Hold. Let's try another example. In this case, let's assume I'm not really ready to manage anything yet, but I still want to learn about Office 365 and how Druva integrates with Office 365. So as you can see, the search results have also been cleanly categorized into two sections. You have actions for configuration and you also have information links. So now I'm clicking on this link which allows me to quickly go to our documentation page and see how Druva can integrate with Office 365. So once again, the goal here is to make that administrative experience easier, intuitive, and allow them to navigate to any part of the product or product documentation with one single click. >> John: Truly a single pane of glass for the user. Discovery, learning, and all the knowledge center in there. Congratulations. 
So the question is, when can people get started? >> Great question. People can get started today with our end-user data protection as well as SaaS data protection and the infrastructure data protection products. There are free trials available at Druva.com. The Druva Cloud Platform will be available towards the latter half of this calendar year, in Q4. But we are also starting early trials as early as next month. >> Prem, thanks so much. Great demo. Congratulations on the tech preview. And our next segment will be talking about the $80 million financing with the CFO and the big time investors. Be right back.
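As an aside, the "context aware search" ranking Prem demoed above can be sketched as a toy scoring function. Everything below is an illustrative assumption made for this sketch (the weights, the item list, and the `cas_score` helper), not Druva's actual CAS implementation:

```python
# Toy sketch of a context-aware search ranking in the spirit of what
# Prem describes: a result's score combines a plain text match with the
# administrator's own history and what other users have searched for.
# All weights and data here are made-up illustrations, not Druva's code.
def cas_score(query, item, my_history, global_counts):
    text_match = 1.0 if query.lower() in item["title"].lower() else 0.0
    history_boost = 0.5 if item["title"] in my_history else 0.0
    popularity = 0.1 * global_counts.get(item["title"], 0)
    return text_match + history_boost + popularity

items = [
    {"title": "Manage Legal Hold", "kind": "configuration action"},
    {"title": "Office 365 integration guide", "kind": "information link"},
]

ranked = sorted(
    items,
    key=lambda it: cas_score("legal hold", it,
                             my_history=["Manage Legal Hold"],
                             global_counts={"Manage Legal Hold": 3}),
    reverse=True,
)
# "Manage Legal Hold" ranks first; results could then be grouped by
# "kind" into configuration actions vs. information links, as in the demo.
```

The two-section result layout from the demo falls out of the `kind` field: rank first, then group.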
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Steve Jobs | PERSON | 0.99+ |
Prem Ananthakrishnan | PERSON | 0.99+ |
Druva | TITLE | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Office 365 | TITLE | 0.99+ |
Jaspreet | PERSON | 0.99+ |
$80 million | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Druva Cloud Platform | TITLE | 0.99+ |
next month | DATE | 0.99+ |
Druva | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
two sections | QUANTITY | 0.99+ |
more than 18 months | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Prem | PERSON | 0.98+ |
later half of this calendar year | DATE | 0.98+ |
millions of users | QUANTITY | 0.98+ |
DevOps | TITLE | 0.97+ |
one single click | QUANTITY | 0.97+ |
Q4 | DATE | 0.97+ |
First | QUANTITY | 0.97+ |
single | QUANTITY | 0.96+ |
one more thing | QUANTITY | 0.96+ |
ORGANIZATION | 0.96+ | |
Druva One | TITLE | 0.95+ |
single click | QUANTITY | 0.92+ |
One more thing | QUANTITY | 0.91+ |
single management console | QUANTITY | 0.89+ |
single pane | QUANTITY | 0.87+ |
S3 | TITLE | 0.87+ |
Ransomware | TITLE | 0.83+ |
Silicon Angle | EVENT | 0.74+ |
Druva Cloud | TITLE | 0.74+ |
single pane of | QUANTITY | 0.74+ |
terabytes of data | QUANTITY | 0.73+ |
Legal Hold | TITLE | 0.66+ |
point B | OTHER | 0.66+ |
Druva.com | ORGANIZATION | 0.6+ |
one | QUANTITY | 0.56+ |
Legal Hold | OTHER | 0.48+ |
zettabytes | QUANTITY | 0.45+ |
Ben Newton, Sumo Logic | AWS Summit 2017
>> Announcer: Live, from Manhattan. It's theCUBE! Covering AWS Summit New York City 2017. Brought to you by Amazon web services. >> And welcome back here on theCUBE, the flagship broadcast of SiliconANGLE TV, where our colleague John Furrier likes to say we extract the signal from the noise. Doing that here at AWS Summit in midtown along with Stu Miniman. I'm John Walls and we're joined now by Ben Newton, who's the analytics lead at Sumo Logic. And I said, Ben, what is an analytics lead? If you were to give me the elevator speech on that? You said you're the geek who stays up all night and fiddles with stuff. >> That's why I joined Sumo Logic. I love finding the things that other people didn't find. And when I first joined, I was staying up until 2:00 a.m. every night playing around with the data. My wife started getting worried about me. (laughter) But that was the path that I set on. >> You're the guy that looks at the clouds and sees the man's nose, right? >> Yeah exactly, exactly. >> It's just, it's in data, that's all. >> Yeah, yeah. >> So I hear this concept. But we'll jump in here about continuous intelligence, right? >> Ben: Yeah. >> It's machine data and there's just this constant stream. I mean, how do you see that? How do you define that? And how does that play with how you, what you do? >> Yeah, no absolutely. So, I've been around a little while. And when I started out, there was a particular set of problems we were trying to solve. You know, we had the $100,000 Sun Microsystem servers. You drop 'em on the floor, somebody gets fired. But it was a very particular problem set. What's happened now is that the market is really changing. And so, the amount of data is just growing exponentially. So I kind of have my own conjoined triangle slide that I like to show people. But basically, things are getting smaller and smaller and smaller. We're going from these monolithic services to microservices, IOT.
And the scale is just getting bigger and bigger and bigger. And what that means is that the amount of data being produced is bigger than anyone ever imagined. I was just looking up some numbers; Barkley says it's going to be 16 zettabytes. I had to look that up. That's a billion terabytes by 2020. That's like watching the whole Netflix catalog 30 million times. (laughter) That's the amount of data that customers are dealing with, and that's what's exciting about this space I think. >> So, I remember at Re:Invent. You see Sumo's like the booth when you walk in. They actually had sumo wrestlers one year. (laughter) Remind me, just wrestling. I've got all that data. How do I take advantage of that? How do I democratize the analytics on data? What are the big challenges? You said customers used to be dropping a server on the floor. How are they getting their arms around this? How are they really leveraging their data? And leveraging analytics more? >> Yeah, I got to wrestle one of those sumos. (laughter) He let me win a little bit. (laughter) And then it was over. >> Did you have to wear the outfit? >> Luckily no. That was good for everybody. Yeah, you know, I think ... A few years ago, it was all about big data. And it was all about how much data they could get in. And I think you saw some announcements from AWS today. Really people are getting their hands around it. Now it's all about fast data. Like what can I do in real time? And that's what people are struggling with. They have this massive amount of data that's just sitting there unused. And people weren't actually getting value out of it to drive the business. And that's really the next goal I think over the next few years: how can our customers and these companies get more value out of the data they have without having to invest in all this costly infrastructure to do it? >> I think a few years ago, it was big data. I'm going to take the compute and I'm going to move it to the data. >> Yeah.
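A quick unit check on the scale Ben cites above: a zettabyte is 10^21 bytes and a terabyte is 10^12 bytes, so one zettabyte is a billion terabytes, and the 16 ZB figure works out to 16 billion TB:

```python
# Unit check on the "16 zettabytes by 2020" figure quoted above.
# 1 ZB = 10**21 bytes and 1 TB = 10**12 bytes, so 1 ZB = a billion TB.
ZETTABYTE = 10**21  # bytes
TERABYTE = 10**12   # bytes

terabytes_in_16_zb = 16 * ZETTABYTE // TERABYTE
print(terabytes_in_16_zb)  # 16000000000, i.e. 16 billion terabytes
```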
>> Now, last year at Re:Invent, we talked to a lot of the companies. They're working with Hadoop and the like, and they said the data lakes are now in the public cloud. >> Ben: Yes. >> But now I've got edge computing. I kind of have the data side, the public cloud, and the edge. And I'm never going to get all my data in the same place, so how am I managing all of those various pools of data? >> Ben: Yeah. >> How do I make sure I get the right data in the right place so I can make the decisions that I need to when I need to? >> Yeah, it's a good question. So, a lot of what we're trying to do now is trying to help customers get the data in the way they want it. Just like you said. So, before, it might have been about here's our standard way. And here's our agent. You go install that. Now we're trying to provide ways for them to get the data in the way they want. We're providing APIs and basically trying to move towards becoming more of a platform. So the customers are sending us data with the third-party tools they like. Because I was talking to one of my developers. And I asked him, if somebody came and said to you, you need to change the way you produce your data to use this product, what is he going to say? And he used a four letter word I can't repeat. That's how they think about it. They don't want to have to change the way they do things. So what we do is we provide lots of different ways of getting data in, from multiple clouds, from multiple tools. Open source tools. We don't care. Making it as easy as possible to get the data in.
>> You know absolutely John. So I think there's a couple different ways. One is making it easy to get the data in like we just talked about. Another way is actually building a COSMO that matches how you use the data. The typical way that analytics tools have done it in the past, including us before, was kind of a one size fits all model. So last year we announced our unified logs and metric product which was trying to appeal to long term trending. And so now, what we're moving towards as well is providing a model that allows our customers, we call it cloud flex. It allows them to organize their data in the way that makes the most sense. So, maybe you want to keep your security data for a year. But you want to keep your operational data for seven days. That's fine. But organizing the way that makes most sense to you and match your cost to your data. I mean, this is the path that I think AWS has really set. That we're basically meeting customers where they're at. Allowing them to use it. And the second thing is also making easy for their customers to get to that data. And use it in the way they like. So you can make it easy to get in, cost efficient model, and then make it really easy for the user to get to that data. >> Ben, who are you working with the most? Maybe you're working across all these but Amazon was talking a lot about the data scientist this morning. All the ETL challenges >> Yeah. >> that are happening. I know there's a big boost for developers. I expect there's probably something with Lambda >> Yeah. >> that you're involved in. But what are some of those hot button issues that you're seeing across some of the customer roles? >> Sure, sure. I think one thing where you say that with data scientist. I mean we all know that there's a data scientist shortage. We have data scientists at Sumo Logic. They're hard to find. And so part of this is making it, one of the hot button issues is can I get people that don't have that background access to the data? 
And so, I may want to geek out and write queries, and stay up to 2:00 a.m. writing them. Most people don't. That's (mumble), right? Not surprising. >> Stu: Right. >> So, a lot of that is how can you make it easier for our developers, for example, that have another job to do. This is not their main job. To get access to that data and use it. And so for example, one of the things we've done for customers, we did for ourselves at Sumo, is even making that data accessible to other parts of the business. So for example, our sales reps at Sumo Logic actually use that data to drive the customer interactions. So they can go to a customer and say, hey, we're seeing how you're using the tool. We think you could get value out of these other five things. And work with them in a constructive way. For example, a couple of other clients I've worked with. They're actually using the data in their marketing departments and their sales departments and putting this up on the wall, so that other parts of the business are getting access to it beyond dev ops and IT ops, which is huge value to them, right? >> Sumo, I'm just curious. Sumo Logic, umm, where from the name? What's the genesis of that? >> Well the official story is that it's about Sumo, big data. The real story is that our founder Christian loves dogs. And he has a dog named Sumo. And so, it really fit well. It fit the name cause of big data, but it also fit because he had a >> Alright. >> he had a dog named Sumo. >> I'll buy that. Just curious. Ben, thanks for being with us. We appreciate the time here on theCUBE, and you could have taken him, I know, if you really wanted to. >> I appreciate that. >> You could have, no doubt. (laughter) Ben Newton, analytics lead at Sumo Logic, joining us here on theCUBE. Back with more from AWS Summit in New York right after this break. (upbeat techno music)
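The tiered retention Ben describes a bit earlier, security data kept for a year, operational data for seven days, with cost matched to the data, can be sketched generically. This is a hypothetical illustration of the idea only; the categories, the flat rate, and the `monthly_storage_cost` helper are assumptions for the sketch, not Sumo Logic's cloud flex API or pricing:

```python
# Hypothetical sketch of matching storage cost to per-category retention,
# in the spirit of the "cloud flex" idea described above. The rates and
# categories are illustrative assumptions, not Sumo Logic's pricing.
RETENTION_DAYS = {
    "security": 365,   # keep security data for a year
    "operational": 7,  # keep operational data for a week
}
COST_PER_GB_DAY = 0.0001  # assumed flat storage rate, $ per GB per day

def monthly_storage_cost(daily_ingest_gb):
    """Steady-state cost: each category holds roughly
    (daily ingest x retention window) GB at any moment."""
    resident_gb = sum(
        gb * RETENTION_DAYS[cat] for cat, gb in daily_ingest_gb.items()
    )
    return resident_gb * COST_PER_GB_DAY * 30  # roughly one month

# 50 GB/day of security data vs. 500 GB/day of operational data:
# the long retention on the smaller stream dominates the footprint
# (50 * 365 = 18,250 GB resident vs. 500 * 7 = 3,500 GB resident).
cost = monthly_storage_cost({"security": 50, "operational": 500})
```

The point of the sketch is the shape of the model: cost tracks what you keep and for how long, per category, rather than a one size fits all bucket.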
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stu Miniman | PERSON | 0.99+ |
Ben Newton | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
$100,000 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Sumo | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Sumo Logic | ORGANIZATION | 0.99+ |
Ben | PERSON | 0.99+ |
seven days | QUANTITY | 0.99+ |
John Walls | PERSON | 0.99+ |
2020 | DATE | 0.99+ |
last year | DATE | 0.99+ |
John | PERSON | 0.99+ |
Manhattan | LOCATION | 0.99+ |
Stu | PERSON | 0.99+ |
Sumo | PERSON | 0.99+ |
30 million times | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
five things | QUANTITY | 0.99+ |
one thing | QUANTITY | 0.99+ |
2:00 a.m. | DATE | 0.98+ |
Sun Microsystem | ORGANIZATION | 0.98+ |
a year | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Netflix | ORGANIZATION | 0.98+ |
16 zettabytes | QUANTITY | 0.98+ |
Hadoop | ORGANIZATION | 0.98+ |
today | DATE | 0.97+ |
second thing | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
Re:Invent | EVENT | 0.96+ |
one year | QUANTITY | 0.94+ |
SiliconANGLE TV | ORGANIZATION | 0.94+ |
AWS Summit 2017 | EVENT | 0.93+ |
Re:Invent | ORGANIZATION | 0.92+ |
AWS Summit | EVENT | 0.89+ |
billion terabytes | QUANTITY | 0.88+ |
every | QUANTITY | 0.86+ |
four letter | QUANTITY | 0.85+ |
few years ago | DATE | 0.84+ |
COSMO | ORGANIZATION | 0.84+ |
couple | QUANTITY | 0.83+ |
Barkley | PERSON | 0.8+ |
this morning | DATE | 0.79+ |
Logic | PERSON | 0.78+ |
AWS Summit New York City 2017 | EVENT | 0.78+ |
a few years ago | DATE | 0.73+ |
next few years | DATE | 0.72+ |
sumo | PERSON | 0.66+ |
Christian | OTHER | 0.6+ |
Lambda | ORGANIZATION | 0.54+ |
sumos | PERSON | 0.5+ |
Scott Dietzen, Pure Storage | Pure Accelerate 2017
>> Announcer: Live from San Francisco, It's The Cube. Covering Pure Accelerate 2017. Brought to you by Pure Storage. >> Welcome back to Pier 70 in San Francisco, everybody. This is The Cube, the leader in live tech coverage. I'm Dave Vellante with Stu Miniman. Scott Dietzen is here, the CEO of Pure Storage, hot off the keynote. Scott, great to see you. >> Great to be back on The Cube. >> So I love the nickname. I grew up in a town where everybody had a nickname. We got Dietz, we got Hat, we got Danzig, we got Kicks, I dunno. You can call me V. He's, I guess, just S-tu. >> V works. >> I mean, that's it, you know. So, again, great show here, I love the venue. How'd you guys pick this place? >> So I can't say I was involved in the choice and this place has a really illustrious history. I mean, it goes back to the 1800's. And actually they manufactured steel here during World War II. I think they were turning out two battleships a week. But another piece of history that maybe isn't as nice is this is the last time this venue's going to be used. So it is scheduled to be brought down to make way for new condos I guess. So we really wanted to celebrate the venue and its history. It's just a great industrial feel to it. >> And they're tearing down a bunch, the new Warriors facility is going to be in Dogpatch, right? >> Yes, and so, yeah, we can't feel too bad about it because we are indeed celebrating the Warriors success. >> You needed a bigger house for all those trophies. (Scott laughs) >> I think they're poised to have a really good run. But I think Cleveland's going to be there contending with them for the next several years to come and it's really exciting. >> Well, hopefully my Celtics will get there in the next four or five years with some draft picks. So, I want to talk about sort of the ascendancy of Pure. When we first met you, you had a pretty simple message. It was like, look, we think we can deliver way better performance for lower cost. I mean, boom. 
It wasn't the same cost. I remember you were very forceful. I said, "About the same, right?" You said, "No, no, lower. We have the best data reduction technology in the business." I remember talking to you at Oracle OpenWorld about that. >> Yep. >> And that's fundamentally what happened. And you attacked the legacy install base. And you won that game. But you're not resting on that, you've got to take it now to a next level. Talk about that next level. Well, talk about where you came from and then the next level of data and beyond just sort of public cloud. >> You guys have talked about this too, right. If you look at the curve of Moore's law. I mean, mechanical disk doesn't follow Moore's law. And so the cost reduction curves, we did the math and we said, look, we're going to be able to drive down the cost of storage. We're going to be able to drive up the density and save on power, cooling, and space. Put simply, you can dramatically reduce the cost of storage. But Flash is going to help us, right? You know, we've gotten to the point where Flash is, you know, even with a tighter component market, it's cheaper to buy raw than fast disks. And way cheaper to deploy. World Bank talked about saving millions of dollars by deploying Pure Storage and getting a 5x performance boost at the same time. So if we can help customers pay for their storage both in terms of cost savings as well as new business value, that's a great outcome. >> Wikibon's been on the right side of that prediction since early on. >> That's very true, I've used your data. >> We're very aggressive about that. But the thing that excited us most was the second thing you said. Which was the business impact, the business value. So I want to come back a little bit and get a history. It used to be I would buy EMC for block and NetApp for file. You're sort of attacking that premise. Talk about that. >> Well, so we started in the performance end of the storage market, which is dominated by block.
Because we knew that one was going to be the first to shift to all Flash. And we've already seen that play out. I mean, even the legacy vendors and their install base are inclined to use Flash. Cause it's actually cheaper than 15k disk to put in. That tech is about to hit a wall, though, because SSDs are getting bigger. You know, we've grown SSDs almost 400-fold since Pure got started. But we haven't changed the pipe, right? So if you make a vessel 400 times larger but you have the same pipe going in and out of it, you're losing a lot of access to data. This is the new sea change to new protocols, where we're shedding all of the disk legacy. And I think the second big change is we're bringing the same wave to big data. Right, so we've been playing in the block market, now we're playing in the file and object market. Because big data workloads, especially those that require deep learning, you just need massively parallel storage. And you're never going to be able to get that with, you know, 20-plus-year-old storage designs. >> So, Scott, when you talk to your customers, especially when you're talking to the C-suite, how does storage fit into that discussion? I loved in the keynote, there's a lot of discussion of, you know, next generation applications. Everything from the, you know, buzzwords of the AI and ML type pieces out there. But, you know, what are the big challenges that your customers are facing? And how much is it a storage discussion? How much is it kind of a digital transformation? >> Yeah, I think we see all of it. We'll talk to customers that find that they can't innovate quickly, right? And they want to get so much more value from their data. One of the studies we cited in the keynote today was 80% of companies think they can make 20% more on the top line if they can just get insights out of their current data. I mean, that's a staggering statistic. 20% top line for every company if they could just get more out of their data. We want to make that possible.
They're constrained by very expensive legacy technologies that simply can't give them access to the data. They don't have the performance to mine those insights. And the infrastructure is so cumbersome, they just can't evolve and move their business forward. And so providing that recipe, you know, giving customers the ability to get dramatically more value out of their data and do it for lower cost, is working. >> Yeah, and it's been interesting to watch kind of the data center to the cloud, and now cloud to the edge. And you've got solutions that are spanning across them. How do you see that maturing, and really the vision to expand where Pure fits in the discussion? >> So, you know, from early on we targeted the cloud market. Because we knew that this is where the future lies, right? Even traditional enterprises still want all the benefits of the cloud inside of their own IT environments. >> And when you say cloud, you're meaning SaaS providers, service providers, as well as, you know? >> Yeah. We talk about the model that the big three are using. But, you know, this is very popular in many other clouds. The world is not moving to three data centers. Companies like Apple and Facebook are very committed to their own data center investment. And we seek to be a supplier to that consumer internet. The software-as-a-service and infrastructure-as-a-service providers. Because that's where the data center's going. But, you know, what we've seen recently with the proliferation of internet of things and sensor data is customers are just growing these huge data footprints that are just too big to move across public networks. So we talked about, in the keynote, in three years only one out of every 20 bytes that's generated can fit on the internet that year. >> 2.5 out of 50, I think was the number. >> 2.5 out of 50 zettabytes. 50 zettabytes will be produced that year, but only 2.5 is going to be transferred across the internet for the entire year.
So we've got to get better as an industry at helping customers capture that data where it's generated, right? We call that edge. Sometimes it'll be on the devices, or it'll be in data centers that are close to the edge. And they've got to mine insights from it right there. >> Dave: Absolutely. >> One of the exciting demos we're showing here is actually AI co-processing with the public cloud. So we've got an edge data center that we're running deep learning in. But then we're selecting particular data sets through the deep learning to transfer up to the public cloud for more machine learning. >> Those key nuggets, the needles, maybe, you transfer. Cause otherwise it's too expensive to transfer all the data. >> You can't transfer all of it. So if it's a self-driving car, you know, if I'm just routinely driving along, no big deal, you keep the data. But if I slam on the brakes because a dog's in the crosswalk, that's the thing you want to do the training on. >> That can't be an asynchronous operation, right? So, okay, you're already getting the hook, I can't believe it, he just got here. (Scott laughs) The Cube is a comfortable place but we got to throw some hard questions at you. So >> Please. >> Stu asked me the other day, or, actually, today, "Who's going to reach a billion dollars first? Nutanix or Pure?" And you don't have to predict, you can leave that to us. Okay, so talk about HCI. You made some comments up on stage about hyper-converged. Said that, you know, it's good for its own specific use cases. What's your point of view on that? >> So first of all, Nutanix has built a great business. >> Dave: Awesome, yeah, sure. >> We're absolutely fans. I will say, in the markets, those two new markets that we're playing in, in the cloud market and in the next gen applications and deep learning, we don't see hyper-converged infrastructure. We do see hyper-converged in business and enterprises. But it's usually the smaller scale deployments.
The reason is, at scale, you don't want to co-locate applications, data, and storage all in a single tier. It limits the ability to easily scale independently. You know, if you need more capacity, or more application compute versus data compute, you want to be able to flex those independently. Which is why all the big clouds and enterprise data centers run converged rather than hyper-converged. But the change that's coming is fast networks are changing this even more. So what I believe is going to turn hyper-converged inside out is that it's now more efficient to access remote storage than it is the same storage on your local chassis. And that's because we're offloading compute to the network cards on the servers. So these new protocols, NVMe over fabrics, are actually making the network finally really the computer. There's no longer a chassis that's even meaningful. >> Big fan of that infrastructure and NVMe over fabric. Okay, next tough question is the narrative, from the big guy, EMC in particular: Pure is small, they're losing money. And your counter-narrative is to tell EMC they're large, they're slow, they're outdated and confused. Okay, we love that, you know, it gets the juices flowing. But here's my question. A lot of customers are large and slow and outdated and confused. So how do you get that fat middle to move faster and become a tailwind for you guys? >> So I think it's happening. I mean, customers just want technology to be made easy. I mean, one of the disrupters that's really helped is the AWS user experience, right? AWS has reset the bar for IT everywhere because people are like, why am I paying for consultants to visit my data center and take care of this mainframe or client-server era technology that used to be so expensive? You know, consultants coming along with it, and permanently staying with it, was okay. That's not okay, right?
The world needs to move to self-driving infrastructure, and they need radically better performance if they're going to use these new techniques. And so I think the key motivation is customers need to get more value from their data and they need to drive down costs. And we're in the sweet spot of being able to provide it. And these 20-plus-year-old designs can't. There's no way. >> So it's inevitable is really what I'm taking away from that. And you've got a lead that you can sustain, in your view. >> You know, it's been very interesting to watch our competitors talk about the new FlashArray//X, with all NVMe, and the new FlashBlade. They've said these are science projects that won't be real for three years. And, yet, we've won one of the biggest AI platforms in the world. You know, 25% or more of our business is coming from cloud customers. So, you know, from where we sit, things are going exactly as we'd hoped. >> Love it, we're talking about the edge, you're pushing the envelope at the edge. Alright, Scott, we'll give you the last word. I know you're super busy, but give us the wrap up. The bumper sticker on Accelerate 2017. >> Oh, it's such a phenomenal group coming together to talk about innovation. We've already shipped the new hardware form factors this year, with our new FlashArray and the new FlashBlade. But the thing that I'm so excited about is we've got more than two years of software innovation teed up that we've been very quiet about. So when you can bring two years of innovation and pack it into six months like we have this year, it makes things really exciting. >> Well congratulations on getting to this point. We're really excited about the future. Scott "Dietz" Dietzen, thanks for coming on The Cube. Great to see you again. >> Thank you, always good to be on The Cube. >> Alright, keep it right there, buddy. We'll be back with our next guest. This is Pure Accelerate, live from San Francisco. We'll be right back. (soft electronic music)
Natalia Vassilieva & Kirk Bresniker, HP Labs - HPE Discover 2017
>> Announcer: Live from Las Vegas, it's the CUBE! Covering HPE Discover 2017. Brought to you by Hewlett Packard Enterprise. >> Hey, welcome back, everyone. We are live here in Las Vegas for SiliconANGLE Media's CUBE exclusive coverage of HPE Discover 2017. I'm John Furrier, with my co-host, Dave Vellante. Our next guest is Kirk Bresniker, fellow and VP chief architect of Hewlett Packard Labs, and Natalia Vassilieva, senior research manager, Hewlett Packard Labs. Did I get that right? >> Yes! >> John: Okay, welcome to theCUBE, good to see you. >> Thank you. >> Thanks for coming on, really appreciate you guys coming on. One of the things I'm most excited about here at HPE Discover is, I always like to geek out on the Hewlett Packard Labs booth, which is right behind us. If you go to the wide shot, you can see the awesome display. But there are two things in there that I love. The Machine is in there, and I love the new branding, by the way, love that pyramid coming out of the, the phoenix rising out of the ashes. And also Memristor, really game-changing. This is underlying technology, but what's powering the business trends out there that you guys are kind of doing the R&D on is AI, and machine learning, and software's changing. What are your thoughts as you look at the labs, you look out on the landscape, and you do the R&D, what's the vision? >> One of the things that is so fascinating is the transitional period we're in. We look at the kind of technologies that we've had to date, and that I certainly spent a whole part of my career on, and yet all these technologies that we've had so far, they're all kind of getting about as good as they're going to get. You know, the Moore's Law semiconductor process steps, general-purpose operating systems, general-purpose microprocessors, they've had fantastic productivity growth, but they all have a natural life cycle, and they're all maturing. And part of The Machine research program has been, what do we think is coming next?
And really, what's informing us as to what we have to set as the goals are the kinds of applications that we expect. And those are data-intensive applications, not just petabytes, exabytes, but zettabytes. Tens of zettabytes, hundreds of zettabytes of data out there in all those sensors out there in the world. And when you want to analyze that data, you can't just push it back to the individual human, you need to employ machine learning algorithms to go through that data to call out and find those needles in those increasingly enormous haystacks, so that you can get that key correlation. And when you don't have to reduce and redact and summarize data, when you can operate on the data at that intelligent edge, you're going to find those correlations, and that machine learning algorithm is going to be that unbiased and unblinking eye that's going to find that key relationship that'll really have a transformational effect. >> I think that's interesting. I'd like to ask you just one follow-up question on that, because I think, you know, it reminds me back when I was in my youth, around packets, and you'd get the buffers, and the speeds, and feeds. At some point there was a wire speed capability. Hey, packets are moving, and you can do all this analysis at wire speed. What you're getting at is, data processing at the speed of, as fast as the data's coming in and out. Is that, if I get that right, is that kind of where you're going with this? Because if you have more data coming, potentially an infinite amount of data coming in, the data speed is going to be so high-velocity, how do you know what a needle looks like? >> I think that's a key, and that's why the research Natalia's been doing is so fundamental, is that we need to be able to process that incredible amount of information and be able to afford to do it.
And the way that you will not be able to have it scale is if you have to take that data, compress it, reduce it, select it down because of some pre-determined decision you've made, transmit it to a centralized location, do the analysis there, then send back the action commands. Now, we need that cycle of intelligence, measurement, analysis, and action to be microseconds. And that means it needs to happen at the intelligent edge. I think that's where the understanding of machine learning algorithms comes in: algorithms that you don't program, you train, so that they can work off of this enormous amount of data, voraciously consume it, and produce insights. That's where machine learning will be the key. >> Natalia, tell us about your research in this area. Curious. Your thoughts. >> We started to look at existing machine learning algorithms, and at whether there are limiting factors in today's infrastructure which don't allow machine learning algorithms to progress fast enough. So, one of the recent advances in AI is the appearance, or revival, of artificial neural networks. Deep learning. There's a very large hype around those types of algorithms. Every speech assistant which you get, Siri in your phone, Cortana, or whatever, Alexa by Amazon, all of them use deep learning to train speech recognition systems. If you go to Facebook and suddenly it starts to propose that you tag the faces of your friends, that's face detection, face recognition, also deep learning. So that's a revival of the old artificial neural networks. Today we are capable of training models that are good enough for those types of tasks, but we want to move forward. We want to be able to process larger volumes of data, to find more complicated patterns, and to do that, we need more compute power. Again, today, the only way you can add more compute power is to scale out. So there is no compute device on Earth today which is capable of doing all the computation.
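The scale-out route Natalia describes has a well-known ceiling: splitting each training step across more nodes shrinks the compute time, but the exchange of results between nodes grows with the cluster size. A toy cost model makes the trade-off concrete; every constant in it is an illustrative assumption, not a measurement of any real cluster:

```python
# Toy cost model for synchronous data-parallel training.
# All constants are assumed for illustration, not measured.

def step_time(nodes, compute_one_node=1.0, comm_per_node=0.04):
    """Seconds per training step on a cluster of `nodes` workers."""
    compute = compute_one_node / nodes   # the batch is split across nodes
    communicate = comm_per_node * nodes  # naive gradient exchange grows with nodes
    return compute + communicate

best = min(range(1, 65), key=step_time)
print(best)  # 5 -- beyond this size, adding nodes makes each step slower
```

A shared pool of memory that every node can address directly attacks the `comm_per_node` term, which is why a memory-driven architecture moves that ceiling.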
You need to have many of them interconnected together, all crunching numbers for the same problem. But at some point, the communication between those nodes becomes a bottleneck. You need to let the neighboring nodes know what you achieved, and you can't scale out anymore. Adding yet another node to the cluster won't lead to a reduction of the training time. With The Machine, where we have the memory-driven computing architecture, all data sits in the same shared pool of memory, and all computing nodes have an ability to talk to that memory. We don't have that limitation anymore. So for us, we are looking forward to deploying those algorithms on that type of architecture. We envision significant speedups in the training. And it will allow us to retrain the model on new data as it comes in, to not do training offline anymore. >> So how does this all work? When HP split into two companies, Hewlett Packard Labs went to HPE and HP Labs went to HP Inc. So what went where, and then, first question. Then second question is, how do you decide what to work on? >> I think in terms of how we organize ourselves, obviously, things that were around printing and personal systems went to HP Inc. Things that were around analytics, enterprise, hardware and research, went to Hewlett Packard Labs. The one thing that we both found equally interesting was security, 'cause obviously, personal systems, enterprise systems, we all need systems that are increasingly secure because of the advanced persistent threats that are constantly assaulting everything from our personal systems up through enterprise and public infrastructure. So that's how we've organized ourselves. Now in terms of what we get to work on, you know, we're in an interesting position. I came to Labs three years ago. I used to be the chief technologist for the server global business unit. I was in the world of big D, tiny R.
Natalia and the research team at Labs, they were out there looking out five, 10, 15, or 20 years. Huge R, and then we would meet together occasionally. I think one of the things that's happened with our Machine advanced development and research program is, I came to Labs not to become a researcher, but to facilitate that communication, to bring in the engineering, the supply chain team, that technical and production prowess, our experience from our services teams, who know how things actually get deployed in the real world. And I get to sit them at the bench with Natalia, with the researchers, and I get to make everyone unhappy. Hopefully in equal amounts. The development teams realize we will end up with fantastic progress and products, both conventional systems as well as new systems, but it will be a while. We need to get through, that's why we had to build our prototype. To say, "No, we need a constructive proof of these ideas." At the same time, with Natalia and the research teams, they were always looking for that next horizon, that next question. Maybe we pulled them a little bit closer, got a few answers out of them rather than the next question. So I think that's part of what we've been doing at the Labs is understanding, how do we organize ourselves? How do we work with the Hewlett Packard Enterprise Pathfinder program, to find those little startups who need that extra piece of something that we can offer as that partnering community? It's really a novel approach for us to understand how do we fill that gap, how do we still have great conventional products, how do we enable breakthrough new-category products, and have it in a timeframe that matters? >> So, much tighter connection between the R and the D. And then, okay, so when Natalia wants to initiate a project, or somebody wants Natalia to initiate a project around AI, how does that work?
Do you say, "Okay, submit an idea," and then it goes through some kind of peer review? And then, how does it get funded? Take us through that. >> I think I'll give my perspective, and I would love to hear what you have from your side. For me, it's always been organic. The ideas that we had on The Machine, for me, my little thread, one of thousands that's been brought in to get us to this point, started back about 2003, when we were getting ready for, we were midway through, BladeSystem c-Class. A category-defining product. An absolute home run in defining what a Blade system was going to be. And we were partway through that, and you realize you've got a success on your hands. You think, "Wow, nothing gets better than this!" Then you start to worry, what if nothing gets better than this? And you start thinking about that next set of things. Now, I had some insights of my own, but when you're a technologist and you have an insight, that's a great feeling for a little while, and then it's a little bit of a lonely feeling. No one else understands this but me, and is it always going to be that way? And then you have to find that business opportunity. So that's where talking with our field teams, talking with our customers, coming to events like Discover, where you see business opportunities, and you realize, my ingenuity and this business opportunity are a match. Now, the third piece of that is someone who can say, a business leader who can say, "You know what? Your ingenuity and that opportunity can meet in a finite time with finite resources. Let's do it." And really, that's what Meg and the leadership team did with us on The Machine. >> Kirk, I want to shift gears and talk about the Memristor, because I think that's a showcase that everyone's talking about. Actually, The Machine has been talked about for many years now, but Memristor changes the game. It kind of goes back to old-school analog, right?
We're talking about, you know, log-n, n-log-n kinds of performance that we've never seen before. So it's a completely different take on memory, and this kind of brings up your vision and the team's vision of memory-driven computing. Which, some are saying, can scale machine learning. 'Cause now you have data response times in microseconds, as you said, and provisioning containers in microseconds is actually really amazing. So, the question is, what is memory-driven computing? What does that mean? And what are the challenges in deep learning today? >> I'll do the machine learning-- >> I will do deep learning. >> You'll do the machine learning. So, when I think of memory-driven computing, it's the realization that we need a new set of technologies, and it's not just one thing. If it were just "can't we just do" dot-dot-dot, we would've done that one thing. This is more taking a holistic approach, looking at all the technologies that we need to pull together. Now, memories are fascinating, and our Memristor is one example of a new class of memory. But they also-- >> John: It's doing it differently, too, it's not like-- >> It's changing the physics. You want to change the economics of information technology? You change the physics you're using. So here, we're changing physics. And whether it's our work on the Memristor with Western Digital and the resistive RAM program, whether it's the phase-change memories, whether it's the spin-torque memories, they're all applying new physics. What they all share, though, is the characteristic that they can continue to scale. They can scale in the layers inside of a die. The die is inside of a package. The package is inside of a module, and then when we add photonics, a transformational information communications technology, now we're scaling from the package, to the enclosure, to the rack, across the aisle, and then across the data center. All that memory accessible as memory. So that's the first piece. Large, persistent memories.
The second piece is the fabric, the way we interconnect them, so that we can have great computational, great memory, great communication devices available on industry open standards; that's the Gen-Z Consortium. The last piece is software. New software, as well as adapting existing productive programming techniques, enabling people to be very productive immediately. >> Before Natalia gets into her piece, I just want to ask a question, because this is interesting to me. Sorry to get geeky here, but this is really cool because you're going analog with signaling. So, going back to the old concepts of signaling theory. You mentioned neural networks. It's almost a hand-in-glove situation with neural networks. Here, you have the next question, which is, connect the dots to machine learning and neural networks. This seems to be an interesting technology game-changer. Is that right? I mean, am I getting this right? What's this mean? >> I'll just add one piece, and then hear from Natalia, who's the expert on the machine learning. For me, it's bringing that right ensemble of components together. Memory technologies, communication technologies, and, as you say, novel computational technologies. 'Cause transistors are not going to get smaller for very much longer. We have to think of something more clever to do than just stamp out another copy of a standard architecture. >> Yes, you asked about challenges of deep learning. We look at the landscape of deep learning today, and the set of tasks which are solved today by those models. We see that although there is a variety of tasks solved, most of them are from the same area. So we can analyze images very efficiently, we can analyze video, that's all visual data, and we can also do speech processing. There are a few examples in other domains, with other data types, but they're much fewer. There's much less knowledge about which models to train for those applications.
So one of the challenges for deep learning is to expand the variety of applications for which it can be used. It's known that artificial neural networks are very well applicable to data where there are many hidden patterns underneath, and to multi-dimensional data, like data from sensors. But we still need to learn what's the right topology of neural networks to do that, and what's the right algorithm to train it. So we need to broaden the scope of applications which can take advantage of deep learning. Another aspect, which I mentioned before, is the computational power of today's devices. If you think about the well-known analogy between artificial neural networks and our brain, the sizes of the models which we train today are much, much, much smaller than the analogous thing in our brain. Many orders of magnitude. It was shown that if you increase the size of the model, you can get better accuracy for some tasks. You can process a larger variety of data. But in order to train those large models, you need more data and you need more compute power. Today, we don't have enough compute power. Actually, we did some computation: in order to train a model which is comparable in size with our human brain, and to train it in a reasonable time, you will need a compute device which is capable of performing 10 to the power of 26 floating-point operations per second. We are far, far-- >> John: Can you repeat that again? >> 10 to the power of 26. We are far, far below that point now. >> All right, so here's the question for you guys. There's all this deep learning source code out there. It's open bar for open source right now. All this goodness is pouring in. Google's donating code, you guys are donating code. It used to be like, you had to build your code from scratch. Borrow here and there, and share in open source. Now it's a tsunami of greatness, so I'm just going to build my own deep learning.
How do customers do that? It's too hard. >> You are right on point to the next challenge of deep learning, which I believe is out there. We have so many efforts to speed up the infrastructure, and we have so many open source libraries. So now the question is, okay, I have my application at hand. What should I choose? What is the right compute node for deep learning? Everybody uses GPUs, but is that true for all models? How many GPUs do I need? What is the optimal number of nodes in the cluster? We have a research effort towards answering those questions as well. >> And a breathalyzer for all the drunk coders out there, open bar. I mean, a lot of young kids are coming in. This is a great opportunity for everyone. And in all seriousness, we need algorithms for the algorithms. >> And I think that's where it's so fascinating. We think of some classes of things, like recognizing handwriting, recognizing voice, but then we want to apply machine learning and algorithms to the volume of sensor data, so that not only every manufactured item, but every factory, can be fully instrumented, with machine learning understanding how it can be optimized. And then, what are the business processes that are feeding that factory? And then, what are the overall economic factors that are feeding that business? Instrumenting all of that and having this learning, this unblinking, unbiased eye, examining it to find those hidden correlations, those hidden connections, could yield a very much more efficient system at every level of human enterprise. >> And the data's more diverse now than ever. I'm sorry to interrupt, but in voice you mentioned Siri, you see Alexa, you see voice as one dataset. Data diversity is massive, so more needles, and more types of needles, than ever before. >> In that example that you gave, you need a domain expert.
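The "how many GPUs do I need" question Natalia poses can be framed as the same kind of rough arithmetic. This is a minimal sketch under assumed utilization and scaling-efficiency numbers; real answers depend heavily on the model and the interconnect, which is exactly why it is a research question rather than a formula.

```python
import math

def gpus_needed(total_train_flop, deadline_s, peak_flops_per_gpu,
                utilization=0.3, scaling_efficiency=0.8):
    """Rough count of GPUs needed to finish a training job by a deadline.

    `utilization` is the fraction of peak FLOPS a real training job
    sustains on one device; `scaling_efficiency` discounts multi-GPU
    communication overhead. Both are assumed constants here; in practice
    they vary with the model, batch size, and fabric."""
    effective_per_gpu = peak_flops_per_gpu * utilization * scaling_efficiency
    return math.ceil(total_train_flop / (effective_per_gpu * deadline_s))

# Example: a 1e21-FLOP training job, a one-week deadline,
# and GPUs with 1e13 FLOPS (10 TFLOPS) peak throughput.
n = gpus_needed(1e21, 7 * 24 * 3600, 1e13)
print(f"GPUs needed: {n}")
```

The point of the sketch is the sensitivity: halve the assumed scaling efficiency and the count doubles, which is why the "optimal number of nodes" cannot be read off a spec sheet.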
And there's plenty of those, but you also need a big brain to build the model, and train the model, and iterate. And there aren't that many of those. Is the state of machine learning and AI going to get to the point where that problem will solve itself, or do we just need to train more big brains? >> Actually, one of the advantages of deep learning is that you don't need that much effort from the domain experts anymore, for the step which was called feature engineering: what do you do with your data before you throw a machine learning algorithm at it? The really cool thing about deep learning and artificial neural networks is that you can throw almost raw data at them. And there are examples out there of people without any knowledge of medicine winning a drug discovery competition by applying deep neural networks, without knowing all the details about the connections between proteins and things like that. Not domain experts, but they were still able to win that competition, just because the algorithm is that good. >> Kirk, I want to ask you a final question before we break in the segment because, having spent nine years of my career at HP in the '80s and '90s, it's been well-known that there's been great research at HP. The R&D has been spectacular. Maybe too much R, not enough D, not enough applied; you mentioned you're bringing that to market faster. So, the question is, what should customers know about Hewlett Packard Labs today? Your mission, obviously the memory-centric work is the key thing. You've got The Machine, you've got the Memristor, you've got a novel way of looking at things. What's the story that you'd like to share? Take a minute, close out the segment and share Hewlett Packard Labs' mission, and what to expect to see from you guys in terms of your research, your development, your applications. What are you guys bringing out of the kitchen? What's cooking in the oven?
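Natalia's "almost raw data" point can be illustrated with a toy example: a tiny neural network learning XOR directly from the raw inputs, with no hand-crafted features (classically, XOR needs an engineered feature like x1*x2 for a linear model). The architecture, seed, and hyperparameters here are arbitrary illustrative choices, not anything from the interview.

```python
import math
import random

# A tiny 2-4-1 network trained by backpropagation on raw XOR inputs.
random.seed(42)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
H = 4  # hidden units

w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Return (hidden activations, output) for raw input x."""
    h = [math.tanh(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

def train(epochs=5000, lr=0.5):
    global b2
    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            dy = (y - t) * y * (1 - y)  # gradient at the output unit
            for j in range(H):
                dh = dy * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * dy * h[j]
                for i in range(2):
                    w1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * dy

initial_loss = mse()
train()
final_loss = mse()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
preds = [round(forward(x)[1]) for x, _ in DATA]
print(preds)
```

Nothing in the code encodes what XOR "means"; the network discovers the hidden pattern from the raw points, which is the feature-engineering shortcut Natalia describes, in miniature.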
>> I think for us, it is, we've been given an opportunity, an opportunity to take all of those ideas that we have been ruminating on for five, 10, maybe even 15 years. All those things that you thought, this is really something. And we've been given the opportunity to build a practical working example. We just turned on the prototype, with more memory and more computation addressable simultaneously than anyone's ever assembled before. And so I think that's a real vote of confidence from our leadership team, that they said, "Now, the ideas you guys have, "this is going to change the way that the world works, "and we want to see you given every opportunity "to make that real, and to make it effective." And I think everything that Hewlett Packard Enterprise has done to focus the company on being that fantastic infrastructure provider and partner is just enabling us to get this innovation out and make it meaningful. I've been designing printed circuit boards for 28 years now, and I must admit, it is intellectually stimulating on one level, but then when you actually meet someone who's changing the face of Alzheimer's research, or changing the way that we produce energy as a society, and has an opportunity to really create a more sustainable world, then you say, "That's really worth it." That's why I get up and come to Labs every day, work with fantastic researchers like Natalia, work with great customers, great partners, and our whole supply chain, the whole team coming together. It's just spectacular. >> Well, congratulations, thanks for sharing the insight on theCUBE. Natalia, thank you very much for coming on. Great stuff going on, looking forward to keeping up with the progress and checking in with you guys. Always good to see what's going on in the Lab. That's the headroom, that's the future. That's the bridge to the future. Thanks for coming on theCUBE. Of course, more CUBE coverage here at HPE Discover, with the keynotes coming up.
Meg Whitman on stage with Antonio Neri. Back with more live coverage after this short break. Stay with us. (energetic techno music)