Richard Hartmann, Grafana Labs | KubeCon + CloudNativeCon NA 2022


 

>>Good afternoon everyone, and welcome back to theCUBE. I am Savannah Peterson here, coming to you from Detroit, Michigan. We're at KubeCon, day three. Such a series of exciting interviews. We've done over 30, but this conversation is gonna be extra special, don't you think, John?
>>Yeah, this is gonna be a good one. Grafana Labs is here with us. We're getting into the conversation of what's going on in the industry: managing, watching the Kubernetes clusters. These are large-scale conversations this week. It's gonna be a good one.
>>Yeah. Yeah. I'm very excited. He's also got a fantastic Twitter handle, TwitchiH. Please welcome Richie Hartmann, who is the director of community here at Grafana. Richie, thank you so much for joining us.
>>Thanks for having me.
>>How's the show been for you?
>>Busy. I, I mean, I, I—
>>In a word.
>>I have a ton of talks at, like, the maintainer things, and, like, the governing board sessions, the TLC panel. I run Prometheus Day. So it's, it's been busy. Yeah. Monday, I didn't have to run anything. That was quite nice. But there—
>>You, you have your hands in a lot. I'm not even gonna cover it. Looking at your bio, there's, there's so many different things that you're working on. I know that Grafana specifically had some announcements this week. Yeah?
>>Yeah, yeah. We had quite a few. Like, the two largest ones: the first is, we now have a full Kubernetes integration on Grafana Cloud. So our, our approach is generally extremely open source first. So we try to push stuff into the exporters, like into the open source exporters, into mixins, into things which are out there as open source for anyone to use. But that's a little bit like a tool set, not a ready-made solution. So when we talk integrations, we actually talk about things where you get this, like, one-click experience. You log into your Grafana Cloud, you click "I have a Kubernetes," which probably most of us have, and things just work: you ingest the data, you have the dashboards, you have the alerts, you have everything you need to just get started — extremely opinionated dashboards, SLOs, alerts, again, all those things made by experts, so anyone can use them. And you don't have to reinvent the wheel for every single user. So that's the one. The other is—
>>It's a big deal.
>>Oh yeah, it is. Yeah, it is. We, we're investing heavily in integrations, of course. Well, I mean, I don't have to convince anyone that Prometheus is the de facto standard in everything cloud native. But again, it's, it's, it's sometimes a little bit hard to handle, or a little bit not easy to get into. So, so smoothing this, this, this path onto onboarding yourself onto this stack and onto those types of solutions — yes, that is what a lot of people need. 'Cause if you, if you look at the statistics from KubeCon — and we just heard this in the governing board session yesterday — yeah, like 60% of the people here are first-time attendees. So there's a lot of people who just come into this thing and who need, like: this is your path. This is where you should be going. Or at least, if you want to go there, this is how to get there.
>>Here's your runway for takeoff. Yes. Yeah. I think that's a really good point. And I love that you, you had those numbers. I was curious. I, I had seen on Twitter — speaking of Twitter — I had seen that there were a lot of people here coming for the first time. You're a community guy. Are we at an inflection point where this community is about to continue to scale?
>>That's a very good question.
Which I can't really answer. So I mean—
>>Obviously I bet you're gonna try.
>>I— COVID changed a few things. Yeah. Probably most people—
>>A couple things. I mean, you know, "casually" — it's like such a gentle way of putting that. That was—
>>Beautiful. I'm gonna say yes, this is gonna explode. All these new users are gonna learn Prometheus. They're gonna roll in with OpenMetrics, OpenTelemetry. I love it.
>>You know, but, but at the same time, like, KubeCon is, is ramping back up. But if you look at the, if you look at the registration numbers between Valencia and Detroit, it was more or less the same. Interesting. So it didn't go onto this, onto this growth trajectory which it was on, like, up to, up to 2019. I expect this to pick up again. But also with the economic situation, everything, I, I don't think—
>>I think the jury's still out on hybrid. I think there's a lot, lot more hybrid. Let's see how the projects are gonna go. That, I think, is gonna be the telltale sign: how many people are participating, how are the projects advancing, some of the momentum—
>>I mean, from the project level, most of this is online anyway. Of course. That's how open source works, right? I've been working for—
>>Ages.
>>'Cause you don't have any travel budget or, or any office or— it's—
>>Always been that way.
>>Yeah, precisely. So the projects are arguably spearheading this, this development, and the, the online numbers — I, I have some numbers in my head, but I'm, I'm not a hundred percent certain — but they're higher for this time in Detroit than in Valencia, as far as I remember. Cool. So that is growing, and it's grown in parallel, which also is great. 'Cause it's much more accessible, much more inclusive. You don't have to have a budget of at least, let's say, I don't know, two to five K to, to fly over the pond and, and attend this thing. You can just do it from your home. So that is, that's a lot more inclusive. And I expect this to, to basically be a second, more or less orthogonal growth, growth path. But the best thing about KubeCon is the hallway track. I'm just meeting people, talking to people, and that kind of thing is not really possible with—
>>It's, it's great to see people—
>>In person. No, and it makes such a difference. I mean, yeah. Even interviewing people in person too. I mean, and this, this whole — I mean, CNCF, this whole community, every company here is community first. It's how these projects come to be. I think it's awesome. I feel like you got something you want to say, Johnny.
>>Yeah. And I love some of the advancements. Richie, we talked last time about, you know, OpenTelemetry, OpenMetrics. You're involved in dashboards. Yeah. One of the themes here is ease of use, simplicity, developer productivity. Where do you see the ease of use going from a project standpoint? Prometheus, as you mentioned, is everywhere — it's pretty much, it is, it's almost all corners of the world. Yep. And new people coming in. How, how are you making it easier? What's going on? Give us the update on that.
>>So we, funnily enough, hit precisely this topic in the TC panel just a few hours ago: about ease of use, and about how to, how to make things easier to, to handle. How developers currently, like, if they just want to get into the cloud native scene, they have, like — we, we did some napkin math — like maybe 10 tools at least which you have to be somewhat proficient in to just get started, which is honestly horrendous. Yeah. 'Cause,
like, with a server, I just had my server, I installed my thing, and it runs. Maybe I need a database, but that's roughly it. And this needs to change again. Like, it's, it's nice that everything is, is unraveled. And you, you don't have those service boundaries which you had before. You can do all the horizontal scaling, you can do all the automatic scaling — all those things, they're super nice. But at the same time, this complexity, which used to be nicely compartmentalized, was deliberately broken up. And so it's becoming a lot harder to, to— like, we, we need to find new ways to compartmentalize this complexity back to, to human-understandable levels again. In particular, as we keep onboarding new and new and new, new people, of course it's just not good use of anyone's time to, to just, like, learn the basics again and again and again. This is something which should be just compartmentalized and automated away. We're—
>>The three of us were talking to Matt Klein earlier, and he was talking about, as projects become mature and all over the place and have reach and, and usage, you gotta work on the boring stuff. Yes. And when it's boring, that means you have success. Yes. But then you gotta work on the plumbing. What are some of the things that you guys are working on? Because people are relying on the product.
>>Oh yeah. So with my Prometheus head on, the highlight feature is exponential, or native, or sparse histograms — there's, like, three different names for one single concept. If you know Prometheus, you, you currently have hard bucket boundaries, where I say my latency is lower or equal to two seconds, one second, a hundred milliseconds, what have you. And I can put stuff into those histogram buckets according to those predefined levels, which is extremely efficient, but, like, on the, on the code level. But it's not very nice for the humans, 'cause you need to understand your system before you're able to, to, to choose good cutoff points. And if you, if you, if you add new ones, that's completely fine. But if you want to actually change them, 'cause you, you figured out that you made a fundamental mistake, you're going to have a break in the continuity of your observability data. And you cannot undo this into the past. So this is just gone. Native histograms, on the other hand, allow me to— okay, I'm not going to get into the math, but basically you define a single formula, and there comes a good default. If you have good reasons, then you can change it. But if you don't, just don't—
>>If people are into the math, hit him up on Twitter. TwitchiH, he'll get you that math.
>>So the—
>>The thing is, people want the math, believe me.
>>Oh yeah. I mean, we don't have time, but hit him up. Yeah.
>>There's PromCon in two weeks in Munich, and there will be a whole talk about, like, the, the dirty details of all of this stuff. But the, the high-level answer is: it just does what people would expect it to do. And with very little overhead, you get highly, highly— or, high-resolution histograms, which is really important for a lot of use cases. But this is not just Prometheus. With my OpenMetrics head on, the 2.0 feature, like the breaking highlight feature of OpenMetrics 2.0, will be, you guessed it, precisely the same. With my OpenTelemetry head on — lo and behold, the same underlying technology is being put, or has been put, into OpenTelemetry.
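To make the histogram distinction above concrete, here is a minimal Go sketch (not part of the interview) of what the two approaches look like with the Prometheus Go client. It assumes prometheus/client_golang v1.14 or newer, where the still-experimental NativeHistogramBucketFactor option and the EnableOpenMetrics handler option exist, plus a Prometheus server scraping with --enable-feature=native-histograms; metric names and bucket values are illustrative only.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Classic histogram: the bucket cutoffs are hard-coded up front.
	// Changing them later breaks the continuity of the recorded data.
	classicLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "http_request_duration_classic_seconds",
		Help:    "Request latency with hand-picked buckets.",
		Buckets: []float64{0.1, 0.5, 1, 2}, // 100ms, 500ms, 1s, 2s chosen ahead of time
	})

	// Native ("exponential"/"sparse") histogram: no hand-picked cutoffs,
	// only a growth factor; buckets appear on demand as data arrives.
	nativeLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:                        "http_request_duration_native_seconds",
		Help:                        "Request latency as a native histogram.",
		NativeHistogramBucketFactor: 1.1, // each bucket at most ~10% wider than the last
	})
)

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(classicLatency, nativeLatency)

	// Expose both. OpenMetrics text exposition is enabled here; the sparse
	// buckets of the native histogram travel over the protobuf format that
	// a new-enough Prometheus server negotiates on scrape.
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{
		EnableOpenMetrics: true,
	}))

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		classicLatency.Observe(0.042) // record a 42ms request into both histograms
		nativeLatency.Observe(0.042)
		_, _ = w.Write([]byte("ok"))
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The point Richie makes maps directly onto the two option blocks: the classic Buckets slice encodes an up-front guess about the system that cannot be revised without breaking the data, while the native histogram fixes only a relative resolution, so the cutoffs never have to be chosen, or re-chosen, by a human.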
And we've worked for months and months and months, and even longer, between all the different projects to, to assert that we have one single standard which is actually compatible with each other. 'Cause one of the worst things which you can have in the cloud ecosystem is if you have subtly different things and they break in subtly wrong ways. Like, it's much better to just not work than to break in a way which is just a little bit wrong, 'cause you won't figure this out until it's too late. So we spent, like, with all three hats, we spent insane amounts of time on making this happen and, and making this nice.
>>Savannah, one of the things — we have so much going on at KubeCon. I mean, just what you're unpacking is, like, probably another day of Cube. We can't go four days, but OpenTelemetry—
>>I know, I know. I'm the same.
>>OpenTelemetry.
>>Challenge accepted.
>>Sorry, we're gonna stay here.
>>They shut the lights off on us last night.
>>They're literally gonna pull the plug on us. Yeah, yeah, yeah, yeah. They've done that before. It's not the first time. We go until they kick us out. We love, love doing this. But OpenTelemetry has got a lot of news too. So that's— we haven't really talked much about that.
>>We haven't at all.
>>So there's a lot of stuff going on that — I won't call it boring. That's, like, code words. That's Cube talk for "it's working." Yeah. So it's not bad, but there's a lot of stuff going on, like OpenTelemetry, OpenMetrics. This is the stuff that matters, 'cause when you go in at large scale, that's key. It's just — what are we missing in all the, all the stuff?
>>No—
>>What are we missing? What are people missing? What's going on at the show that you think is not actually being reported on? I mean, there's a lot of hype. WebAssembly, for instance, got a lot of hype.
>>Oh yeah, I was gonna say, I'm glad you're asking this, because you, you've already mentioned about seven different hats that you wear. I can only imagine how many hats are actually in your hat cabinet. But you, you are someone with your, with your fingers in a lot of different things. So you can kind of give us a state of the union. Yeah. So go ahead. Let's talk about it.
>>So I think you already hit a few good points. Ease of use is definitely one of them, and, and improving the developer experience, and not having this, like, wall of pain. Yeah. That is one of the really big ones. It's going to be interesting, 'cause it is boring. It is janitorial, and it needs a different type of persona. A lot of, or maybe not most, but a large fraction of developers like the shiny stuff. And we could see this in Prometheus, where, like, initially the people who contributed the most were, like, those restless people who need to fix that one thing — "this is impossible, I'm going to do it." Which changed over the years, where the people who now contribute the most are more of the janitorial type. Like, keep things boring, keep things running — still substantial changes, but, but more on the maintenance level.
>>Yeah. The maintainers. I was just gonna bring that up.
>>Yeah. On the, on the "keep things boring while still pushing them forward." Yeah. And the thing about ease of use is: a lot of this is boring. A lot of this is strategy. A lot of this is toil. A lot of this takes lots of research, also in areas where developers are not really good, like UX, for example, and UI. Like, most software developers are really bad at those, 'cause they just think differently from normal humans, I guess.
>>So that's an interesting observation that you just made.
We could unpack that on a whole 'nother show as well.
>>So the, the thing is, this is going to be interesting for the open source scene, 'cause this needs deliberate investment by companies who assign people to those projects and say, okay, fix that one thing, or make it easier to use, what have you. That is a lot easier with, with first-party products and projects from companies, 'cause they can invest directly into the thing and they see much more of a value prop. It's, it's kind of normal by now to, to allow developers, or even assign developers, onto open source projects. That's not so much the case for the TPMs, for the architects, for the UX and UI people, for the documentation people. There's not as much awareness that this is also driving value for everyone. Yes. And also there's not as much—
>>Yeah, that's a great point. This whole workflow, production system of open source, which has grown and keeps growing and will keep growing, needs to be funded. And one of the things we were talking about earlier in another session is the recession potentially we're hitting, and the global issues, macroeconomics, that might force some of these projects or companies not to get VC—
>>Funding. It's such a theme at the show. So—
>>So to me, I said, it's just not about VC funding. There's other funding mechanisms that are community-oriented. There's companies participating, there's other mechanisms. Richie, if you could have your wish list of how things could progress in open source, what would you want to see happen in terms of how it's, how things are funded, how things are executed? 'Cause developers are going to run businesses. 'Cause ultimately, if you follow digital transformation to completion, IT and developers aren't a department serving the business. They are the business. And that's coming fast. You know, what has to happen, in your opinion? If you had the magic wand, what would you, what would you snap your fingers to make happen?
>>If I had a magic wand, that's very different from, from what is achievable. But let, let's—
>>Go with— okay, go with the magic wand first. 'Cause we'll, we'll, we'll riff on that. So—
>>I'm here for dreams. Yeah, yeah.
>>Yeah. I mean, I, I've been in open source for more than two, two decades by now, and most of the open source is being driven forward by people who are not being paid for it. So for example, Grafana is the first time I'm actually paid by a company to do my community work. It's always been on the side. Of course I believe in it, and I like doing it. I'm also not bad at it. And so I just kept doing it. But it was, like, at night, on the weekends, and everything. And to be honest, it's still at night and on the weekends, but the majority of it is during paid company time, which is awesome. Yeah. Most of the people who have driven this space forward are not in this position. They're doing it at night, they're doing it on the weekends. They're doing it out of dedication to a cause. Yeah.
>>The commitment is insane.
>>Yeah. At the same time, you have companies, mostly hyperscalers, and either they have really big cloud offerings or they have really big advertisement businesses, or both. And they're extracting a huge amount of value which has been created in large part elsewhere. Like, yes, they employ a ton of developers, but a lot of the technologies they build on, and the shoulders of the giants they stand upon, are really poorly paid.
And there are some efforts to, like — I think the core foundations, like, which redistribute a little bit of money and such. But if I had my magic wand, everyone who is in open source and actually drives things forward gets, I don't know, 20% of the value which they create, just magically, somehow. Yeah.
>>Or, or other companies don't extract as much value and, and redistribute more — like, put more full-time engineers onto projects, or whichever. Like, that would be the ideal state, where the people who actually make the thing out of dedication are not more or less left on the sideline. Of course they're too dedicated to just say, okay, I'm, I'm not doing this anymore, you figure this stuff out, and let things tremble and falter. So I mean, it's like with nurses and such, who, who just, like — they, they know they have something which is important, and they keep doing it, of course, because they believe in it.
>>I think this, I think this is an opportunity to start messaging this narrative, because, yeah, absolutely, now we're at an inflection point where there's a big community. There is a shared responsibility, in my opinion, to not just spread the wealth, but make sure that it's equitably balanced, and, and I think there's a way to do that. I don't know how yet, but I see that more than ever. It's not just come in, raid the kingdom, steal all the jewels, monetize it, and throw some token money around.
>>Well, and the burnout. Yeah, I mean, the other thing that I'm thinking about too is, you know, it's, it's the, it's the financial aspect of this, it's the cognitive load. And I'm curious, actually, when I ask you this question — how do you avoid burnout? You do a million different things, and, you know, I'm sure the open source community shares that passion—
>>Yeah. So is it just writing code?
>>Oh, my, my, my software engineering days are firmly over. I'm, I'm, I'm, like, I'm the cat herder and the janitor and, like, this type of thing. I, I don't really write code anymore.
>>It's— how do you avoid burnout?
>>So, I did crash headfirst into burnout a few years ago. It was not nice. But that was still when I had, like, a full day job, and that day job was super intense, and on top I did all the things. But to be honest, a lot of the people who do this are really dedicated and are really bad at setting boundaries between work—
>>And process. That's why I bring it up. Yeah. Literally why I bring it up. Yeah.
>>I, I, I'm firmly in that area, and I'm, I'm— I don't claim I have this fully figured out yet. It's also even more risky, to some extent, because, like, it's, it's good if you're paid for this and you can do it during your work time. But on the other hand, if it's so nice, and, like, if your hobby and your job are almost completely intersectional, it—
>>Becomes really— the lines are blurry.
>>Yeah. And then, yeah, like, with work from home, you, you don't even commute anything anymore. You just sit down at your computer and you just have fun doing your stuff, and all of a sudden it's deep at night and you're still like, I want to keep going.
>>Sounds like— God, something cute.
>>I know. I was gonna say, I was like, passion is something we all have in common here on this.
>>That's the key. That is the key point there. The, the passion project becomes the job. But now the contribution is interesting, because now, yeah, this ecosystem has a commercial aspect. Again, this is the, this is the balance between commercialization and keeping that organic production system that's called open source.
I mean, it's so fascinating, and this is amazing. I want to continue that conversation. It's—
>>Awesome. Yeah. Yeah. This is, this is great. Richard, this entire conversation has been excellent. Thank you so much for joining us. How can people find you? I mean, I'll give 'em your Twitter handle, but if they wanna find out more about Grafana, Prometheus, and the 1,700 things you do—
>>For Grafana, grafana.com; for Prometheus, prometheus.io; for my own stuff, GitHub slash RichiH slash talks. Of course, I track all my talks in there, and, like— I currently don't have a personal website 'cause I stopped bothering, but, like, that repository is, is very much where you find what I do. Like, for example, the recording link will be uploaded to this GitHub.
>>Yeah. Great follow. You also run a lot of events and a lot of community activity. Congratulations to you. Also, I talked about this last time: the largest IRC network on earth — you ran that, built a data center from scratch. What happened, you done that?
>>Haven't done—
>>He even built a cloud hyperscaler to compete with Amazon. That's the next one. Why don't you put that on the—
>>Plate? We'll be sure to feature whatever Richie does next year on theCUBE.
>>I'm game. Yeah.
>>Fantastic. On that note, Richie, again, thank you so much for being here. John, always a pleasure. Thank you. And thank you for tuning in to us here live from Detroit, Michigan, on theCUBE. My name is Savannah Peterson, and here's to hoping that you find balance in your life this weekend.

Published Date : Oct 28 2022


Prakash Darji, Pure Storage


 

(upbeat music) >> Hello, and welcome to this special Cube conversation that we're launching in conjunction with Pure Accelerate. Prakash Darji is here, he's the general manager of Digital Experience. They actually have a business unit dedicated to this at Pure Storage. Prakash, welcome back, good to see you. >> Yeah Dave, happy to be here. >> So a few weeks back, you and I were talking about the shift to an as-a-service economy, which is a good lead-up to Accelerate, held today; we're releasing this video in LA. This is the fifth in-person Accelerate. It's got a new tagline, techfest, so you're making it fun, but still hanging on to the tech, which we love. So this morning you guys made some announcements expanding the portfolio. I'm really interested in your reaffirmed commitment to Evergreen. That's something that got this whole trend started, and the introduction of Evergreen Flex. What is that all about? What's your vision for Evergreen Flex? >> Well, so look, this is one of the biggest moments that I think we have as a company now, because we introduced Evergreen, and that was, and probably still is, one of the largest disruptions to happen to the industry in a decade. Now, Evergreen Flex takes the power of modernizing performance and capacity to storage beyond the box, full stop. So we first started on a project many years ago to say, okay, how can we bring that modernization concept to our entire portfolio? That means if someone's got 10 boxes, how do you modernize performance and capacity across 10 boxes, or across maybe FlashBlade and FlashArray. So with Evergreen Flex, we first are starting to hyper disaggregate performance and capacity, and the capacity can be moved to where you need it. So previously, you could have thought of a box saying, okay, it has this performance or capacity range or boundary, but let's think about it beyond the box. Let's think about it as a portfolio. My application needs performance or capacity for storage, what if I could bring the resources to it? So with Evergreen Flex, within the QLC family with our FlashBlade and our FlashArray QLC products, you could actually move QLC capacity to where you need it. And with FlashArray X and XL, or the TLC family, you could move capacity to where you need it within that family. Now, if you're enabling that, you have to change the business model, because the capacity needs to get billed where you use it. If you use it in a high performance tier, you get billed at a high performance rate. If you use it in a lower performance tier, you get billed at a lower performance rate. So we changed the business model to enable this technology flexibility, where customers can buy the hardware and they get a pay-per-use consumption model for the software and services, but this enables the technology flexibility to use your capacity wherever you need. And we're just continuing that journey of hyper disaggregation. >> Okay, so you solve the problem of having to allocate specific capacity or performance to a particular workload. You can now spread that across whatever products in the portfolio, like you said, you're disaggregating performance and capacity. So that's very cool. Maybe you could double click on that. You obviously talk to customers about doing this. They were in pain a little bit, right? 'Cause they had this sort of stovepipe thing. So talk a little bit about the customer feedback that led you here. 
>> Well, look, let's just say today if you're an application developer or you haven't written your app yet, but you know you're going to. Well, you need that at least say I need something, right? So someone's going to ask you what kind of storage do you need? How many IOPS, what kind of performance capacity, before you've written your code. And you're going to buy something and you're going to spend that money. Now at that point, you're going to go write your application, run it on that box and then say, okay, was I right or was I wrong? And you know what? You were guessing before you wrote the software. After you wrote the software, you can test it and decide what you need, how it's going to scale, et cetera. But if you were wrong, you already bought something. In a hyper disaggregated world, that capacity is not a sunk cost, you can use it wherever you want. You can use capacity of somewhere else and bring it over there. So in the world of application development and in the world of storage, today people think about, I've got a workload, it's SAP, it's Oracle, I've built this custom app. I need to move it to a tier of storage, a performance class. Like you think about the application and you think about moving the application. And it takes time to move the application, takes performance, takes loan, it's a scheduled event. What if you said, you know what? You don't have to do any of that. You just move the capacity to where you need it, right? >> Yep. >> So the application's there and you actually have the ability to instantaneously move the capacity to where you need it for the application. And eventually, where we're going is we're looking to do the same thing across the performance hearing. So right now, the biggest benefit is the agility and flexibility a customer has across their fleet. So Evergreen was great for the customer with one array, but Evergreen Flex now brings that power to the entire fleet. And that's not tied to just FlashArray or FlashBlade. We've engineered a data plane in our direct flash fabric software to be able to take on the personality of the system it needs to go into. So when a data pack goes into a FlashBlade, that data pack is optimized for use in that scale out architecture with the metadata for FlashBlade. When it goes into a FlashArray C it's optimized for that metadata structure. So our Purity software has made this transformative to be able to do this. And we created a business model that allowed us to take advantage of this technology flexibility. >> Got it. Okay, so you got this mutually interchangeable performance and capacity across the portfolio beautiful. And I want to come back to sort of the Purity, but help me understand how this is different from just normal Evergreen, existing evergreen options. You mentioned the one array, but help us understand that more fully. >> Well, look, so in addition to this, like we had Evergreen Gold historically. We introduced Evergreen Flex and we had Pure as a service. So you had kind of two spectrums previously. You had Evergreen Gold on one hand, which modernized the performance and capacity of a box. You had Pure as a service that said don't worry about the box, tell me how many IOPS you have and will run and operate and manage that service for you. I think we've spoken about that previously on theCUBE. >> Yep. >> Now, we have this model where it's not just about the box, we have this model where we say, you know what, it's your fleet. 
You're going to run and operate and manage your fleet and you could move the capacity to where you need it. So as we started thinking about this, we decided to unify our entire portfolio of sub software and subscription services under the Evergreen brand. Evergreen Gold we're renaming to Evergreen Forever. We've actually had seven customers just crossed a decade of updates Forever Evergreen within a box. So Evergreen Forever is about modernizing a box. Evergreen Flex is about modernizing your fleet and Evergreen one, which is our rebrand of Pure as a service is about modernizing your labor. Instead of you worrying about it, let us do it for you. Because if you're an application developer and you're trying to figure out, where should I put my capacity? Where should I do it? You can just sign up for the IOPS you need and let us actually deliver and move the components to where you need it for performance, capacity, management, SLAs, et cetera. So as we think about this, for us this is a spectrum and a continuum of where you're at in the modernization journey to software subscription and services. >> Okay, got it. So why did you feel like now was the right time for the rebranding and the renaming convention, what's behind? What was the thing? Take us inside the internal conversations and the chalkboard discussion? >> Well, look, the chalkboard discussion's simple. It's everything was built on the Evergreen stateless architecture where within a box, right? We disaggregated the performance and capacity within the box already, 10 years ago within Evergreen. And that's what enabled us to build Pure as a service. That's why I say like when companies say they built a service, I'm like it's not a service if you have to do a data migration. You need a stateless architecture that's disaggregated. You can almost think of this as the anti hyper-converge, right? That's going the other way. It's hyper disaggregated. >> Right. >> And that foundation is true for our whole portfolio. That was fundamental, the Evergreen architecture. And then if Gold is modernizing a box and Flex is modernizing your fleet and your portfolio and Pure as a service is modernizing the labor, it is more of a continuation in the spectrum of how do you ensure you get better with age, right? And it's like one of those things when you think about a car. Miles driven on a car means your car's getting older and it doesn't necessarily get better with age, right? What's interesting when you think about the human body, yeah, you get older and some people deteriorate with age and some people it turns out for a period of time, you pick up some muscle mass, you get a little bit older, you get a little bit wiser and you get a little bit better with age for a while because you're putting in the work to modernize, right? But where in infrastructure and hardware and technology are you at the point where it always just gets better with age, right? We've introduced that concept 10 years ago. And we've now had proven industry success over a decade, right? As I mentioned, our first seven customers who've had a decade of Evergreen update started with an FA-300 way back when, and since then performance and capacity has been getting better over time with Evergreen Forever. So this is the next 10 years of it getting better and better for the company and not just tying it to the box because now we've grown up, we've got customers with like large fleets. I think one of our customers just hit 900 systems, right? >> Wow. 
>> So when you have 900 systems, right? And you're running a fleet you need to think about, okay, how am I using these resources? And in this day and age in that world, power becomes a big thing because if you're using resources inefficiently and the cost of power and energy is up, you're going to be in a world of hurt. So by using Flex where you can move the capacity to where it's needed, you're creating the most efficient operating environment, which is actually the lowest power consumption environment as well. >> Right. >> So we're really excited about this journey of modernizing, but that rebranding just became kind of a no brainer to us because it's all part of the spectrum on your journey of whether you're a single array customer, you're a fleet customer, or you don't want to even run, operate and manage. You can actually just say, you know what, give me the guarantee in the SLA. So that's the spectrum that informed the rebranding. >> Got it. Yeah, so to your point about the human body, all you got to do is look at Tom Brady's NFL combine videos and you'll see what a transformation. Fine wine is another one. I like the term hyper disaggregated because that to me is consistent with what's happening with the cloud and edge. We're building this hyper distributed or disaggregated system. So I want to just understand a little bit about you mentioned Purity so there's this software obviously is the enabler here, but what's under the covers? Is it like a virtualizer or megaload balancer, metadata manager, what's the tech behind this? >> Yeah, so we'll do a little bit of a double tech, right? So we have this concept of drives where in Purity, we build our own software for direct flash that takes the NAND and we do the NAND management as we're building our drives in Purity software. Now ,that advantage gives us the ability to say how should this drive behave? So in a FlashArray C system, it can behave as part of a FlashArray C and its usable capacity that you can write because the metadata and some of the system information is in NVRAM as part of the controller, right? So you have some metadata capability there. In a legend architecture for example, you have a distributed Blade architecture. So you need parts of that capacity to operate almost like a single layer chip where you can actually have metadata operations independent of your storage operations that operate like QLC. So we actually manage the NAND in a very very different way based on the persona of the system it's going into, right? So this capacity to make it usable, right? It's like saying a competitor could go ahead name it, Dell that has power max in Isilon, HPE that has single store and three power and nimble and like you name, like can you really from a technology standpoint say your capacity can be used anywhere or all these independent systems. Everyone's thinking about the world like a system, like here's this system, here's that system, here's that system. And your capacity is locked into a system. To be able to unlock that capacity to the system, you need to behave differently with the media type in the operating environment you're going into and that's what Purity does, right? So we are doing that as part of our direct Flex software around how we manage these drives to enable this. >> Well, it's the same thing in the cloud precaution, right? I mean, you got different APIs and primitive for object, for block, for file. 
Now, it's all programmable infrastructure so that makes it easier, but to the point, it's still somewhat stovepipe. So it's funny, it's good to see your commitment to Evergreen, I think you're right. You lay down the gauntlet a decade plus ago. First everybody ignored you and then they kind of laughed at you, then they criticized you, and then they said, oh, then you guys reached the escape velocity. So you had a winning hand. So I'm interested in that sort of progression over the past decade where you're going, why this is so important to your customers, where you're trying to get them ultimately. >> Well, look, the thing that's most disappointing is if I bought 100 terabytes still have to re-buy it every three or five years. That seems like a kind of ridiculous proposition, but welcome to storage. You know what I mean? That's what most people do with Evergreen. We want to end data migrations. We want to make sure that every software updates, hardware updates, non disruptive. We want to make it easy to deploy and run at scale for your fleet. And eventually we want everyone to move to our Evergreen one, formerly Pure as a service where we can run and operate and manage 'cause this is all about trust. We're trying to create trust with the customer to say, trust us, to run and operate and scale for you and worry about your business because we make tech easy. And like think about this hyper disaggregated if you go further. If you're going further with hyper disaggregated, you can think about it as like performance and capacity is your Lego building blocks. Now for anyone, I have a son, he wants to build a Lego Death Star. He didn't have that manual, he's toast. So when you move to at scale and you have this hyper disaggregated world and you have this unlimited freedom, you have unlimited choice. It's the problem of the cloud today, too much choice, right? There's like hundreds of instances of this, what do I even choose? >> Right. >> Well, so the only way to solve that problem and create simplicity when you have so much choice is put data to work. And that's where Pure one comes in because we've been collecting and we can scan your landscape and tell you, you should move these types of resources here and move those types of resources there, right? In the past, it was always about you should move this application there or you should move this application there. We're actually going to turn the entire industry on it's head. It's not like applications and data have gravity. So let's think about moving resources to where that are needed versus saying resources are a fixed asset, let's move the applications there. So that's a concept that's new to the industry. Like we're creating that concept, we're introducing that concept because now we have the technology to make that reality a new efficient way of running storage for the world. Like this is that big for the company. >> Well, I mean, a lot of the failures in data analytics and data strategies are a function of trying to jam everything into a single monolithic system and hyper centralize it. Data by its very nature is distributed. So hyper disaggregated fits that model and the pendulum's clearly swinging to that. Prakash, great to have you, purestorage.com I presume is where I can learn more? >> Oh, absolutely. We're super excited and our pent up by demand I think in this space is huge so we're looking forward to bringing this innovation to the world. >> All right, hey, thanks again. 
Great to see you, I appreciate you coming on and explaining this new model and good luck with it. >> All right, thank you. >> All right, and thanks for watching. This is David Vellante, and appreciate you watching this Cube conversation, we'll see you next time. (upbeat music)

Published Date : May 25 2022

Rob Lee & Rob Walters, Pure Storage | AWS re:Invent 2019


 

>> Voiceover: Live, from Las Vegas it's theCUBE Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> We're back at AWS re:Invent, this is theCUBE, the leader in live tech coverage. I'm Dave Vellante with my co-host, Justin Warren. This is day one of AWS re:Invent. Rob Lee is here, he's the Vice President and Chief Architect at Pure Storage. And he's joined by Rob Walters, who is the Vice President, General Manager of Storage as a Service at Pure. Robs, welcome to theCUBE. >> Thanks for having us back. >> Yep, thank you. >> Dave: You're welcome. Rob, we'll start with, Rob Lee we'll start with you. So re:Invent, this is the eighth re:Invent, I think the seventh for theCUBE, what's happened at the show, any key takeaways? >> Yeah, absolutely it's great to be back. We were here last year obviously big launch of cloud data services, so it's great to be back a year in. And just kind of reflect back on how the year's gone for uptick at cloud data services, our native US. And it's been a banner year. So we saw over the last year CloudSnap go GA Cloud Block Store go GA and you know just really good customer uptake, adoption and kind of interest out of the gate. So it's kind of great to be back. Great to kind of share what we've down over the last year as well as just get some feedback and more interest from future customers and prospects as well. >> So Rob W, with your background in the cloud what's you take on this notion of storage as a service? How do you guys think about that and how do you look at that? >> Sure, well this is an ever more increasingly important way to consume storage. I mean we're seeing customers who've been you know got used to the model, the economic model, the as a service model in the cloud, now looking to get those benefits on-prem and in the hybrid cloud too. Which if you know, you look at our portfolio we have both there, as part of the Pure as a service. >> Right okay, and then so Pure Accelerate you guys announced Cloud Block Store. >> Yeah, that's when we took it GA. Right so we've been working with customers in a protracted beta process over the last year to really refine the fit and use cases for tier one block workloads and so we took that GA in Accelerate. >> So this is an interesting, you're a partner obviously with Amazon I would think many parts of Amazon love Cloud Block Store 'cause you're using EC2, you're front-ending S3 like you're helping Amazon sell services and you're delivering a higher level of availability and performance in certain workloads, relative to EVS. So there's probably certain guys at Amazon that aren't so friendly with you. So that's an interesting dynamic, but talk about the positioning of Cloud Block Store. Any sort of updates on uptake? What are customers excited about? What can you share? >> Yeah, no absolutely You know I'd say primarily we're most pleased with the variety of workloads and use cases that customers are bringing us into. I think when we started out on this journey we saw tremendous promise for the technology to really improve the AWS Echo system and customer experience for people that wanted to consume block storage in the cloud. What we learned as we started working with customers is that because of the way we've architected the product brought a lot of the same capabilities we deliver on our flash arrays today into AWS, it's allowed customers to take us into all the same types of workloads that they put flash arrays into. 
So that's their tier one mission critical environments, their VMware workloads, their Oracle workloads, their SAP workloads. They're also looking at us from everything from to do lift and shift, test and dev in the cloud, as well as DR right, and that again I think speaks to a couple things. It speaks to the durability, the higher level of service that we're able to deliver in AWS, but also the compatibility with which we're able to deliver the same sets of features and have it operate in exactly the same way on-prem and in the cloud. 'Cause look, if you're going to DR the last time, the last point in time you want to discover that there's a caveat, hey this feature doesn't quite work the way you expect is when you have a DR failover. And so the fact that we set out with this mission in mind to create that exact level of sameness, you know it's really paying dividends in the types of use cases that customers are bringing us into. >> So you guys obviously a big partner of VMware, you're done very well in that community. So VMware cloud on AWS, is that a tailwind for you guys or can you take advantage of that at this point? >> Yeah no, so I think the way I look at it is both VMware, Pure, AWS, I think we're all responding to the same market demands and customer needs. Which at the end of the day is, look if I'm an enterprise customer the reality is, I'm going to have some of my workloads running on-premise, I'm going to have some of my workloads running in the cloud, I expect you the vendors to help me manage this diverse, hybrid environment. And what I'd say is, there are puts and takes how the different vendors are going about it but at the end of the day that's the customer need. And so you know we're going about this through a very targeted storage-centric approach because that's where we provide service today. You know and you see VMware going after it from the kind of application, hypervisor kind of virtualization end of things. Over time we've had a great partnership with VMware on-premise, and as both Cloud Block Store and VMware Cloud mature, we'd look to replicate the same motion with them in that offering. >> Yeah, I mean to to extent I mean you think about VMware moving workloads with their customers into the cloud, more mission critical stuff comes into the cloud, it's been hard to get a lot of those workloads in to date and that's maybe the next wave of cloud. Rob W., I have a question for you. You know Amazon's been kind of sleepy in storage over the, S3, EBS, okay great. They dropped a bunch of announcements this year and so it seems like there's more action now in the cloud. What's your sort of point of view as to how you make that an opportunity for Pure? >> The way I've always looked at it is, there's been a way of getting your storage done and delivered on AWS and there's been the way that enterprises have done things on-premise. And I think that was a sort of a longer term bet from AWS that that was the way things will tend to fall towards into the public cloud. And now we see, all of the hyperscalers quite honestly with on-prem, hybrid opportunities. With the like Outpost today, et cetera. The hybrid is a real things, it's not just something people said that couldn't get to the cloud, you know it's a real thing. So I think that actually opens up opportunity from both sides. True enterprise class features that our enterprise class customers are looking for in the cloud through something like CBS are now available. 
But I think you know at Amazon and other hyperscale are reaching back down into the on-prem environments to help with the onboarding of enterprises up into the cloud >> So the as a service side of things makes life a little bit interesting from my perspective, because that's kind of new for Pure to provide that storage as a service, but also for enterprises as you say, they're used to running things in a particular way so as they move to cloud they're kind of having to adapt and change and yet they don't fully want to. Hybrid is a real thing, there are real workloads that need to perform in a hybrid fashion. So what does that mean for you providing storage as a service, and still to Rob Lee's point, still providing that consistency of experience across the entire product portfolio. 'Cause that's quite an achievement and many other as storage providers haven't actually been able to pull that off. So how do you keep all of those components working coherently together and still provide what customers are actually looking for? >> I think you have to go back to what the basics of what customers are actually looking for. You know they're looking to make smart use of their finances capex potentially moving towards opex, that kind of consumption model is growing in popularity. And I think a lot of enterprises are seeing less and less value in the sort of nuts and bolts storage management of old. And we can provide a lot of that through the as a service offering. So had to look past the management and monitoring. We've always done the Evergreen service subscription, so with software and hardware upgrades. So we're letting their sort of shrinking capex budget and perhaps their limited resources work on the more strategically important elements of their IT strategies, including hybrid-cloud. >> Rob Lee, one of the things we've talked about in the past is AI. I'm interested in sort of the update on the AI workloads . We heard a lot obviously today on the main stage about machine learning, machine intelligence, AI, transformations, how is that going, the whole AI push? You guys were first, really the first storage company to sort of partner up and deliver solutions in that area. Give us the update there. Wow's it going, what are you learning? >> Yeah, so it's going really well. So it continues to be a very strong driver of our flash play business, and again it's really driven by it's a workload that succeeds with very large sums of data, it succeeds when you can push those large sums of data at high speed into modern compute, and rinse and repeat very frequently. And the fourth piece which I think is really helping to propel some of the business there, is you know, as enterprises, as customers get further on into the AI deployment journeys what they're finding is the application space evolves very quickly there. And the ability for infrastructure in general, but storage in particular, because that's where so much data gravity exists to be flexible to adapt to different applications and changing application requirements really helps speed them up. So said another way, if the application set that your data scientists are using today are going to change in six months, you can't really be building your storage infrastructure around a thesis of what that application looks like and then go an replace it in six months. 
And so that message, as customers have been through now the first, first and a half iterations of that and really internalized, hey, AI is a space that's rapidly evolving, we need infrastructure that can evolve and grow with us, that's helping drive a lot of second looks and a lot of business back to us. And I would actually tie this back to your previous question, which is the direction that Amazon has taken with some of their new storage offerings and how that ties into storage as a service. If I step back as a whole, what I'd say is, for both Amazon and Pure, what we see is that there's now a demand for multiple classes of service for storage, right. Fast is important, and it's going to continue to get more and more important, whether it's AI, whether it's low latency transactional databases, or some other workload. So fast always matters, and cost always matters. And so you're going to have this stratification, whether it's in the cloud or whether it's on flash with SCM, TLC, QLC; you want the benefits of all of those. What you don't want is to have to manage the complexity of tying and stitching all those pieces together yourself, and what you certainly don't want is a procurement model that locks you out of or into one of these tiers, or into one of these locations. And so if you think about it in the long term, and not to put words in the other Rob's mouth, where I think you see us going with Pure as-a-Service is moving to a model that really shifts the conversation with customers to say: look, the way you should be transacting with storage vendors, and we're going to lead the charge here, is class of service, maybe protocol, and that's about it. It's: where do you want this data to exist? How fast do you want it? Where on the price performance curve do you want to be? How do you want it to be protected? And give us room to take care of it from there. >> That's right, that's right. This isn't about the storage array anymore. You know, you look at the modern data experience message: this is about what you need from your storage, from a storage attribute perspective rather than a physical hardware perspective, and letting us worry about the rest. >> Yeah, you have to abstract that complexity. You guys have; I mean, simplicity is the reason why you were able to achieve escape velocity, along with obviously great product and pretty good management as well. But you'll never sub-optimize simplicity to try to turn some knobs. I've learned that following you guys over the years. That's your philosophy. >> No, absolutely, and what I'd say is, as technology evolves, as the components evolve into this world of multis, multi-protocol, multi-tier, multi-class of service, you know, the focus on that simplicity, and taking even more of it on, becomes ever more important. And that's a place where, getting to your question about AI, we help customers implement AI, and we also do a lot of AI within our own products and our fleet. That's a place where our AI-driven ops really have a place to shine, in delivering that kind of best optimization of price, performance, tiers of service, and so on, within the product lines. >> What are you guys seeing at the macro? I mean that to say: you've achieved escape velocity, check. Now you're entering the next chapter of Pure. You're the big share gainer, but obviously growing slower than you had in previous years. Part of that, we think, is partly your own doing: you put so much flash into the marketplace, it's given people a lot of headroom.
Obviously NAND pricing has been an issue; you guys have addressed that on your calls, but you're still gaining share much, much more quickly than most. Most folks are shrinking. So what are you seeing at the macro? What are customers telling you in terms of their long term strategy with regard to storage? >> Well, so I'll start, and I'll let Rob add in. What I'd say is we see in the macro a shift, a clear shift to flash. We've called that shot since day one, but what I'd say is that it's accelerating. And that's accelerating with pricing dynamics, and you know we talked about a lot of the NAND pricing and all that kind of stuff, but in the macro I think there's a clear realization now that customers want to be on flash. It's just a matter of what's the sensible rate, what's the price curve to get there. And we see a couple of meaningful steps. We saw it originally with our FlashArray line taking out 15K spinning drives, with the 10Ks rapidly falling. With QLC coming online and what we're doing in FlashArray//C, the 7200 RPM drive in the enterprise, you know those days are numbered, right. And I think for many customers at this point it's really a matter of, okay, how quickly can we get there and when does it make sense to move, as opposed to, does it make sense. In many ways it's really exciting, because if you think about it, the focus for so long has been on those tier one environments, but in many ways the tier two environments are the ones that could most benefit from a move to flash, because a couple of things happen there. Because they're considered lower tier, lower cost, they tend to spread like bunnies, and they tend to be more neglected parts of the environment. So having customers now be able to take a second look at modernizing and consolidating those environments is helpful from an operational point of view, and it's also helpful from the point of view of getting them to make that data useful again. >> I would also say that those exact use cases are perfect candidates for an as-a-service consumption model, because we can actually raise the utilization, helping customers manage to a much more utilized set of arrays than the over-consumption, under-consumption game they're trying to play right now with their annual capex cycles. >> And so how aggressively do you see customers wanting to take advantage of that as-a-service consumption model? Is it mixed, or is it like, we want this? >> There are a lot of customers who are just like, we want this and we want it now. We've seen very good traction and adoption, and yeah, it's surprisingly large, complex enterprise customers adopting it as well. >> A lot of enterprises have gotten used to the idea of cloud from AWS. They like that model of dealing with things, and they want to bring that model of operating on site, because they want cloud everywhere. They don't actually want to transform the cloud into the enterprise. >> No, exactly. I mean, if I go back 20 plus years to when I was doing hands-on IT, the idea that we as a team would let go of any of the widgetry that we were responsible for never would have happened. But then you've had this parallel path of public cloud experience, and people are like, well, I don't even need to be doing that anymore. And we get better results. Oh, and it's secure as well? And that list just goes on. And so now, as you say, the enterprise wants to bring it back on-prem for all of those benefits.
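Rob Lee's earlier point about transacting with storage vendors on class of service, protocol, and protection rather than on specific hardware, together with the as-a-service consumption model discussed just above, can be made concrete with a small sketch. This is purely illustrative: the field names, service tiers, and the request_volume helper below are hypothetical and do not correspond to any Pure, AWS, or VMware API. The point is only the shape of a storage request expressed as attributes instead of array models and drive types.

```python
from dataclasses import dataclass

# Hypothetical attribute-based request: the consumer states intent
# (where, how fast, how protected), not which array or drive to buy.
@dataclass
class StorageRequest:
    capacity_tb: float
    location: str           # e.g. "on-prem" or "aws-us-east-1"
    performance: str        # class of service: "premium", "general", "capacity"
    protection: str         # e.g. "snapshots" or "sync-replication"
    protocol: str = "block"

# Illustrative latency targets per class of service; a real offering
# would publish its own SLOs.
LATENCY_TARGET_MS = {"premium": 0.5, "general": 2.0, "capacity": 10.0}

def request_volume(req: StorageRequest) -> dict:
    """Turn an attribute-based request into a (mock) provisioning order."""
    return {
        "capacity_tb": req.capacity_tb,
        "location": req.location,
        "latency_target_ms": LATENCY_TARGET_MS[req.performance],
        "protection": req.protection,
        "protocol": req.protocol,
    }

if __name__ == "__main__":
    order = request_volume(StorageRequest(
        capacity_tb=50,
        location="on-prem",
        performance="premium",
        protection="sync-replication"))
    print(order)
```

Run as-is it simply prints the mock provisioning order; the design point is that nothing in the request names an array model, a drive type, or a RAID layout.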
>> One of the other things that we've been tracking, and maybe it falls in the category of cloud 2.0, is the sort of new workloads forming. And I'll preface it this way: you know, in the early days, the past decade of cloud infrastructure as a service has been about, yeah, I'm going to spin up some EC2, I'm going to need some S3, whatever, I need some storage. But today it seems like there's all this data now, and you're seeing new workloads driven by platforms like Snowflake and Redshift, and clearly throw in some ML tools like Databricks, and it's driving a lot of compute now, but it's also driving insights. People are really pulling insights out of that data. I just gave you cloud examples; are you seeing on-prem examples as well, or hybrid examples, and how do you guys fit into that? >> Yeah, no, absolutely. I think this is a secular trend that was kicked off by open source and the public cloud, but it certainly affects, I would say, the entire tech landscape. You know, a lot of it is just about how applications are built. If you think back to the late '80s, early '90s, you had large monoliths; you had Oracle, and it did everything, soup to nuts. Your transactional system, your data warehouse, ERP: cool, we've got it all. That's not how applications are built anymore. They're built with multiple applications working together. You've got, whether it's Kafka connecting into some scale-out analytics database, connected into Cassandra, and so on, right; it's just the modern way applications are built. And so whether that's connecting data between SaaS services in the cloud, or connecting data between multiple different application sets that are running on-prem, we definitely see that trend. And so when you peel back the covers of that, what we see, what we hear from customers as they make that shift, as they try to stand up infrastructure to meet those needs, is again the need for flexibility. As multiple applications are sharing data, are handing off data as part of a pipeline or as part of a workflow, it becomes ever more important for the underlying infrastructure, the storage array if you will, to be able to deliver high performance to multiple applications. And so in the era of saying, hey look, I'm going to design a storage array to be super optimized for Oracle and nothing else, well, you're only going to solve part of the problem now. And so this is why you see us taking, within Pure, the approach that we do with how we optimize performance, whether it's across FlashArray, FlashBlade, or Cloud Block Store. >> Excellent. Well guys, we've got to leave it there. Thanks so much for coming on theCUBE and sharing your thoughts with us, and have a good rest of re:Invent. >> Thanks for having us back. >> Dave: All right, pleasure. >> Thank you. >> All right, keep it right there everybody. We'll be back to wrap up day one. Dave Vellante for Justin Warren. You're watching theCUBE from AWS re:Invent 2019. Right back. (electronic music)
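Rob Lee's closing point, that modern applications are several cooperating pieces handing data off in a pipeline, which is what puts pressure on a shared storage layer, can be sketched in a few lines. This is a toy in-process example using Python queues as stand-ins for Kafka, a scale-out analytics engine, and Cassandra; the stage names and data are illustrative only.

```python
import queue
import random
import threading

ingest_q, results_q = queue.Queue(), queue.Queue()

def ingest(n=1000):
    """Stand-in for a Kafka-style ingest stage producing events."""
    for _ in range(n):
        ingest_q.put({"sensor": random.randint(1, 5), "value": random.random()})
    ingest_q.put(None)  # end-of-stream marker

def analyze():
    """Stand-in for a scale-out analytics stage aggregating the stream."""
    totals, counts = {}, {}
    while (event := ingest_q.get()) is not None:
        s = event["sensor"]
        totals[s] = totals.get(s, 0.0) + event["value"]
        counts[s] = counts.get(s, 0) + 1
    results_q.put({s: totals[s] / counts[s] for s in totals})

def store():
    """Stand-in for a Cassandra-style sink persisting the results."""
    print("persisting averages:", results_q.get())

stages = [threading.Thread(target=f) for f in (ingest, analyze, store)]
for t in stages:
    t.start()
for t in stages:
    t.join()
```

Each stage only cares about the handoff from its neighbor, which is exactly why the storage underneath has to serve every I/O profile well at once rather than being tuned for a single application.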

Published Date : Dec 4 2019


NVMe: Ready for the Enterprise


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Stu Miniman. >> Hi, I'm Stu Miniman, and welcome to a special theCUBE conversation here in our Boston area studio. Happy to welcome back to the program Danny Cobb, who's with Dell EMC in the CTO office. >> Thanks Stu, great to see you here today. >> Great to see you too. So Danny, we're going to talk about a topic that, like many things in the industry, seems like something that happened overnight, but there's been a lot of hard work going on for quite a lot of years, even going back to, heck, when you and I worked together. >> Danny: That's right. >> At a company that used to be called EMC. NVMe: so first of all, just bring everybody up to speed as to what you work on inside the Dell family. >> Danny: Sure, so my responsibility at what is now Dell EMC has been this whole notion of emerging systems: new technologies, new capabilities that are just coming into broad market adoption, broad readiness, technological feasibility, and those kinds of things, and then making sure that as a company we're prepared for their adoption and inclusion in our product portfolio. So it's a great set of capabilities and a great set of work to be doing, especially if you have a short attention span like I do. >> Danny, I spend a lot of time these days in the open source world. You talk about people moving faster, people trying lots of technologies. You've been doing some really hard work, for the company and the industry, in the standards world. What's the importance of standards these days, and bring us back to how this NVMe stuff started. >> So a great way to get everybody up to speed, as you mentioned when you kicked off: NVMe, an overnight success, almost 11 years in the making now. The very first NVMe standard work dates to about 2007. EMC joined the NVMe consortium in 2008, along with an Austin, Texas computer company called Dell. So Dell and EMC were both in the front row of defining the NVMe standard, and essentially putting in place a set of standards, a set of architectures, a set of protocols, product adoption capabilities, and compatibility capabilities for the entire industry to follow, starting in 2008. Now, you know from our work together that the storage industry likes to make sure that everything's mature, everything works reliably, and everything has broad interoperability standards and things like that. So since 2008, we've largely been about how we continue to build momentum and generate support for a new storage technology that's based on broadly accepted industry standards, in order to allow the entire industry to move forward: not just to achieve the most out of the flash revolution, but to prepare the industry for coming enhancements to storage class memory. >> Yeah, so storage class memory; you mentioned things like flash. One thing we've looked at for a long time is that when flash rolled out, there was a lot of adoption on the consumer side first, and then that drove the enterprise piece. But flash today is still done through the SCSI interface, with SAS or SATA, and I believe we're finally getting rid of that when we go to NVMe, what some in the industry have called the horrible SCSI stack. >> That's right. >> So explain to us a little bit about, first, the consumer piece of where this fits, and how it gets to the enterprise. Where are we in the industry today with that?

Yeah, so as you pointed out, a number of the new media technologies have actually gained broad acceptance and a groundswell of support starting in the consumer space. The rapid adoption of mobile devices, whether initially iPods and iPhones and things like that, or tablets, where the more memory you have, the more songs you can carry and the more pictures you can take: a lot of very virtuous-cycle type things occurred in the consumer space to allow flash to go from a fairly expensive, perhaps niche technology to broad, high volume manufacturing. And with high volume manufacturing comes much lower cost, and so we always knew that flash was fast when we first started working on it at EMC in 2005. It became fast and robust when we shipped in 2008. It went from fast to robust to affordable with technologies like the move from SLC to MLC, and now TLC flash, and the continuing advances of Moore's law. And so flash has been the beneficiary of high volume consumer economics, along with our friend Moore's law, over a number of years. >> Okay, so on the NVMe piece, your friends down in Round Rock at Dell have got not only the storage portfolio but the consumer side as well. There are pieces where, my understanding is, NVMe is already in the market for some part of this today, correct? >> That's right. I think one of the very first adoption scenarios for NVMe was in lightweight laptop devices. The storage stack could be more efficient, the fundamental number of gates in silicon required to implement the stack was more efficient, and power was more efficient. So a whole bunch of things that were beneficial to a mobile, high volume client device like an ultra-light, ultra-portable laptop made it a great place to launch the technology. >> Okay, and so bring us to what that means for storage. Is that available in enterprise storage today? >> Danny: Yeah. >> And where is that today, and where are we going to see it in the coming years?

So here's the progression that the industry has more or less followed. We went from that high volume, ultra-light laptop device to very inexpensive M.2 devices that could be used in laptops and desktops more broadly, and that also gained a fair amount of traction with certain use cases and hyperscalers. And then the spec matured, and the enterprise ecosystem around it matured: broader data integrity type solutions in the silicon itself, and a number of other things that are bread and butter for enterprise class devices. As those began to emerge, we've now seen NVMe move forward from laptop and client devices, to high volume M.2 devices, to full function, full capability, dual ported enterprise NVMe devices, really crossing over this year. >> Okay, so that means we're going to see it not only in the consumer pieces, but we should be seeing a real enterprise rollout in, I'm assuming, things like storage arrays, maybe hyperconverged, all the different flavors, in the not too distant future. >> Absolutely right. The people who get paid to forecast these things, when they look into their crystal balls, have talked about when NVMe gets close enough to its predecessor, SAS, to make the switchover a no-brainer. And oftentimes you get a performance factor where there's more value, or you get a cost factor where suddenly that becomes the way the game is won. In the case of NVMe versus SAS, both of those, value and cost, are more or less a wash right now across the industry. And so there are very few impediments to adoption.

Much like a few years ago, there were very few impediments to the adoption of enterprise SSDs versus high performance HDDs, the 15Ks and the 10K HDDs. Once we got close enough in terms of cost parity, the entire industry went all flash overnight. >> Yeah, it's a little bit different than, say, the original adoption of flash versus HDD. >> Danny: That's right. >> HDD versus SSD. Remember back, you had to have the algebra sheet, and you said, okay, how many devices do I have? What's the power savings that I could get out of that? Plus the performance that I get, and then, does this make sense? It seems like this is a much more broadly applicable type of solution that we'll see. >> Danny: Right. >> For much faster adoption. >> Do you remember those days of "a little goes a long way"? >> Stu: Yeah. >> And then "more is better"? And then it's almost all really good, and so that's where we've come over what seems like a very few years. >> Okay, so we've only been talking about NVMe itself, the thing I know David Floyer's been looking at a lot from an architectural standpoint. We see benefit obviously from NVMe, but NVMe over Fabrics is the thing that has him really excited. If you talk about the architectures, maybe just explain a little bit about what I get with NVMe, and what I'll get added on top with the over Fabrics piece of that. >> Danny: Sure. >> And what does that rollout look like? >> Can I tell you a little story about what I think of as the birth of NVMe over Fabrics? >> Stu: Please. >> Some of your viewers might remember a project at EMC called Thunder. Thunder was PCIe flash with an RDMA over Ethernet front end on it. We took that system to Intel Developer Forum as a proof of concept. Around the corner from me was an engineer named Dave Minturn, an Intel engineer, who had almost exactly the same software stack up and running, except it was an Intel RDMA capable NIC and an Intel flash drive, and of course some changes to the Intel processor stack to support the use case that he had in mind. And we started talking, and we realized that we were both counting the number of instructions from a packet arriving across the network to bytes being read or written on the very fast PCIe device. And we realized that there has to be a better way. And so from that day, I think it was September 2013, maybe it was August, we actually started working together on how we can take the benefits of the NVMe standard that exists mapped onto PCIe, and then map those same parameters as cleanly as we possibly can onto, at that time Ethernet, but also InfiniBand, Fibre Channel, and perhaps some other transports, as a way to get the benefits of the NVMe software stack and build on top of the new high performance capabilities of these RDMA capable interconnects. So it goes way back to 2013. We moved it into the NVMe standard as a proposal in 2014, and again, three, four years later now, we're starting to see solutions roll out that begin to show the promise that we saw way back then. >> Yeah, and the challenge with networking, obviously, is it sounds like you've got a few different transport layers that I can use there, and probably a number of different providers. How baked is the standard? Where do things like interoperability testing fit into the mix? When do customers get their hands on it, and what can they expect the rollout to be? >> We're clearly at the beginning of what's about to be a very, I think, long and healthy future for NVMe over Fabrics. I don't know about you.

I was at Flash Memory Summit back in August in Santa Clara, and there were a number of vendors there starting to talk about NVMe over Fabrics basics: FPGA implementations, system on chip implementations, software implementations across a variety of stacks. The great thing was that NVMe over Fabrics was the phrase of the entire show. The challenging thing was that probably no two of those solutions interoperated with each other yet. We were still at the running-water-through-the-pipes phase, not really checking for leaks and getting to broad adoption. Broad adoption, I think, comes when we've got a number of vendors, broad interoperability, and multi-supplier component availability, the things that let a number of implementations exist and interoperate, because our customers live in a diverse, multi-vendor environment. So that's what it will take to go from interesting proof of concept technology, which I think is what we're seeing in terms of early customer engagement today, to broad-based deployment in both existing Fibre Channel implementations and also in some next generation data center implementations, probably beginning next year. >> Okay, so Danny, I talk to a lot of companies out there. Everyone that's involved in this (mumbles) has been talking about NVMe over Fabrics for a couple of years now. From a user standpoint, how are they going to sort this out? What will differentiate the checkbox of "yes, I have something that follows this" from "oh wait, this will actually help performance so much"? What works with my environment? Where are the pitfalls, and where are the things that are going to help companies? What's going to differentiate the marketplace? >> As engineers, we always get into the speeds and the feeds and the weeds on performance and things like that, and while those are all true, we can talk about fewer and fewer instructions in the network stack, fewer and fewer instructions in the storage stack, more efficient silicon implementations, more affinity for multi-processor, multi-core processing environments, more efficient operating system implementations, and things like that. But that's just the performance side. The broader benefits come from beginning to move to more cost effective data center fabric implementations, where I'm not managing an orange wire and a blue wire unless that's really what I want. There are still a number of people who want to manage their Fibre Channel and will run NVMe over that. They get the compatibility that they want, they get the policies that they want, the switch behavior that they want, and the provisioning model that they want, and all of those things. They'll get that in an NVMe over Fabrics implementation. A new data center, however, will be able to go: you know what, I'm all in, day one, on 25 and 50 gigabit Ethernet as my fundamental connection of choice, and I'm going to 400 gigabit Ethernet ports as soon as Andy Bechtolsheim or somebody gives them to me, and things like that. And so if that's the data center architecture model that I'm in, that's a fundamental implementation decision that I get to make, knowing that I can run an enterprise grade storage protocol over the top of that, and the industry is ready. My external storage is ready, my servers are ready, and my workloads can get the benefit of that. >> Okay, so if I just step back for a second, it sounds like a lot of NVMe is what we would consider the back end improving, and NVMe over Fabrics helps with some of the front end.

From a customer standpoint, what about the application standpoint? Can they work with everything that they have today? Are there things that they're going to want to do to optimize for that? Or does the storage industry just take care of it for them? What do they think about today, and how should they plan for the future, from an application standpoint? >> I think it's a matter of that readiness and what it's going to take. The good news, and this has analogs to the industry change from HDDs to SSDs in the first place, is that you can make that switchover today, and for your data management application, your database application, your warehouse, your analytics or whatever, not one line of software changes. The NVMe device shows up in the block stack of your favorite operating system, and you get lower latency, more IOs in parallel, and more CPU back for your application to run, because you don't need it in the storage stack anymore. So you get the benefits of that just by changing over to this new protocol. For applications that then want to optimize for this new environment, you can start thinking about having more IOs in flight in parallel. You can start thinking about what happens when those IOs are satisfied more rapidly, without as much overhead in interrupt processing, and a number of things like that. You can start thinking about what happens when your application goes from hundred microsecond latencies on IOs, like the flash devices, to 10 microsecond or one microsecond IOs, perhaps with some of these new storage class memory devices that are out there. Those are the benefits that people are going to see when they start thinking about an all-NVMe stack: not just beneficial for existing flash implementations, but fundamentally required and mandatory to get the benefits of storage class memory implementations. So this whole notion of future-ready was one of the things that was fundamental in how NVMe was initially designed over 10 years ago, and we're starting to see that long term view pay benefits in the marketplace. >> Any insight from the customer standpoint? Are there certain applications or verticals where this is really going to help? I think back to the move to SSDs; it was David Floyer who went around the entire news feed saying database, database, database is where we can have the biggest impact. What's NVMe going to impact? >> I think what we always see with these things: first of all, NVMe is probably going to have a very rapid advancement and impact across the industry, much more quickly than the transition from HDD to SSD, so we don't have to go through that phase of a little goes a long way. You can largely make the switch as your ecosystem supports it, as your vendor of choice supports it. You can make that switch and, to a large extent, have the application be agnostic to it. So that's a really good way to start. The other place, and you and I have had this conversation before: if you take out a cocktail napkin and draw an equation that says time equals money, that's an obvious place where NVMe and NVMe over Fabrics benefit someone initially. High speed analytics, real time, high frequency trading, a number of things where more efficiency, my ability to do more work per unit time than yours, gives me a competitive advantage, makes my algorithms better, exposes my IP in a more advantageous way. Those are wonderful places for these types of emerging technologies to get adopted, because the value proposition is just slam dunk simple.

Yeah, so running through my head are all the latest buzzwords. Everything at Wikibon, when we did our predictions for this year: data is at the center of all of it. But machine learning, AI, heck, blockchain, edge computing, all of these things can definitely be affected by that. Is NVMe going to help all of them? >> Oh, machine learning: an incredibly high bandwidth application. Wonderful thing: stream data in, compute on it, get your answers, and things like that. Wonderful benefits for a new, squeaky clean storage stack to run into. Edge, where oftentimes real time is required: the ability to react to a stimulus and provide a response because of a human safety issue or a risk management issue or what have you. Any place where performance lets you get close, or closer, to real time is a win, and the efficiency of NVMe is a significant advantage in those environments. So NVMe is largely able to help the industry be ready just at the time that new processing models are coming in, such as machine learning and artificial intelligence, and new data center deployment architectures like the edge come in, with the new types of telemetry and algorithms that they may be running there. It's really a technology that's arriving just at the time that the industry needs it. >> Yeah, I was reading up on some of the blogs on the Dell sites. Jeff Boudreau said we should expect to see things from 2018. Not expecting you to pre-announce anything, but what should we be looking for from Dell and the Dell family in 2018 when it comes to this space? >> We're very bullish on NVMe. We've been pushing very, very hard in the standards community. Obviously, we have already shipped NVMe for a series of internal use cases in our storage platforms, so we have confidence in the technology, its readiness, and the ability of our software stacks to do what they need to do. We have a robust, multi-supplier supply chain ready to go so that we can service our customers and provide them the choice in capacities and capabilities and things like that that are required to bet your business on, plus long term supply assurance and things like that. So we see the next year or so being the full transition to NVMe, and we're ready for it. We've been getting ready for a long time. Now the ecosystem is there, and we're predicting very big things in the future. >> Okay, so Danny, you've been working on this for 11 years. Give us just a little bit of insight: what have you learned, what has this group learned from previous transitions? What's excited you the most? Give us a little bit of the sausage making. >> What's been funny about this is, we talk about the initial transition to flash and just getting to the point where a little goes a long way; that was a three year journey. We started in 2005, we shipped in 2008. We moved from there: we put flash in arrays as a tier, as a cache, as the places where low latency, high performance media adds value, and those things. Then we saw the industry begin to develop into some server-centric storage solutions. You guys have been at the front of forecasting what that market looks like with software defined storage. We see that in technologies like ScaleIO and VSAN, where the ability to start using the media when it's resident in a server became important, and suddenly that began to grow as a peer to the external storage market; a SAN-alternative market came along with them. Now we're moving even further out, where it seems like we used to ask "why flash?", and we would get asked that. Now it's "why not flash? Why don't we move there?" So what we've seen is a combination of things. As we get more and more efficient, low latency storage protocols, the bottleneck stops being about the network and starts being about something else. As we get more multi-core compute capabilities and Moore's law continues to tick along, we suddenly have enough compute and enough bandwidth, and the next thing to target is the media. As we get faster and more capable media, such as the move to flash and now the move to storage class memory, again the bottleneck moves away from the media, maybe back to something else in the stack. As I advance compute and media and interconnect, suddenly it becomes beneficial for me to rewrite my application or re-platform it, and create an entire new set of applications that exploit the current capabilities of the technologies. And so we are in that rinse, lather, repeat cycle right now in the technology. And for guys like you and me who've been doing this for a while, we've seen this movie before. We know how it ends; it actually doesn't end. There are just new technologies and new bottlenecks and new manifestations of Moore's law and Holmes law and Metcalfe's law that come into play here. >> All right, so Danny, any final predictions from you on what we should be seeing? What's the next thing you're working on that you'll be calling victory on soon, right? >> Yes, so I'm starting to lift my eyes a little bit, and we think we see some really good capabilities coming at us from the device physicists in the white coats with the pocket protectors back in the fabs. We're seeing a couple of storage class memories begin to come to market now, led by Intel and Micron's 3D XPoint, but with a number of other candidates on the horizon, and they will take us from this 100 microsecond world to a 10 microsecond world, maybe to a 100 nanosecond world. And you and I will be back here talking about that fairly soon, I predict. >> Excellent. Well, Danny Cobb, always a pleasure to catch up with you. Thanks so much for walking us through all of the pieces. We'll have lots more coverage of this technology and lots more. Check out theCUBE.net; you can see Dell Technologies World and lots of the other shows there. Thank you so much for watching theCUBE. (uptempo techno music)

Published Date : Mar 16 2018
