Matt Burr, Pure Storage
(Intro Music) >> Hello everyone, and welcome to this special Cube conversation with Matt Burr, who is the general manager of FlashBlade at Pure Storage. Matt, how are you doing? Good to see you. >> I'm doing great. Nice to see you again, Dave. >> Yeah, welcome back. We're going to be broadcasting this at Accelerate. You guys have big news. Of course, FlashBlade S, we're going to dig into it. The famous FlashBlade now has a new letter attached to it. Tell us what it is and what it's all about. >> (laughing) >> You know, it's easy to say it's just the latest and greatest version of the FlashBlade, but obviously it's a lot more than that. We've had a lot of success with FlashBlade across the board, in particular with Meta and their Research SuperCluster, which is one of the largest AI superclusters in the world. But it's not enough to just build on the thing that you had, right? So with the FlashBlade S, we've increased modularity, and we've done things like co-designing software and hardware and leveraging that into something that actually doubles density, performance, and power efficiency. On top of that, you can scale storage, networking, and compute independently, which is a pretty big deal because it gives you more flexibility and a little more granularity around performance or capacity, depending on which direction you want to go. And we believe the end result is the highest-performance, most capacity-optimized unstructured data platform on the market today, without the need for an expensive data caching tier. So we're pretty excited about what we've ended up with here. >> Yeah. I think sometimes people forget how much core engineering Meta does. You go on Facebook and play around and post things, but their backend cloud is just amazing.
So talk a little bit more about the problem targets for FlashBlade. I mean, it's a pretty wide scope, and we're going to get into that, but what's the core of it? >> Yeah. We've talked about that extensively in the past; the use cases generally remain the same. We'll probably explore this a little more deeply, but really what we're talking about here is performance and scalability. We have written an essentially unlimited metadata software layer, which gives us the ability to expand; we're already starting to think about computing at exabyte scale. Okay, so the problem the customer has is, "Hey, I've got a greenfield object environment, or I've got a file environment, and my 10K and 7,200 RPM disk is just spiraling out of control in my environment." It's an environmental problem, it's a management problem. We have effectively simplified the process of bringing together highly performant, very large, multi-petabyte to eventually exabyte-scale unstructured data systems. >> So people are obviously trying to inject machine intelligence, AI, and ML into applications, bring data into applications, bringing those worlds closer together. Analytics is obviously exploding. You see some other things happening in the news, ransomware protection and the like. Where does FlashBlade S fit in terms of some of these new use cases? >> All those things; we're only going wider and broader. We've talked in the past about having a horizontal approach to this market. The unstructured data market has often had vertical specificity. You could see successful infrastructure companies in oil and gas that may not play in media and entertainment, or successful companies that play in media and entertainment but don't play well in financial services, for example.
We're sort of playing the long game here with this, and we're focused on bringing an all-QLC architecture that combines our traditional Pure DFMs with software that is now, I guess, seven years hardened from the original FlashBlade system. And so we look at customers in three categories, right? There are more than three, but you can bucketize it this way: customers that fit into a very traditional space, customers that fit into the EDA/HPC space, and then that data protection space, which I believe ransomware falls under as well. The world has changed, right? Customers want their data back faster. Rapid restore is a real thing. We have customers that come to us and say, "Anybody can back up my data, but if I want to get something back fast, and I mean in less than a week or a couple of days, what do I do?" We can solve that problem. And then, as you accurately pointed out where you started, there is the AI/ML side of things, with the NVIDIA relationship that we have. DGXs are a pretty powerful weapon in that market and in solving those problems, but they're not cheap. And keeping those DGXs running all the time requires an extremely efficient underpinning of a flash system. We believe we have that market as well. >> It's interesting, when Pure was first coming out as a startup, you obviously had some cool new tech, but your stack wasn't as hardened. And now you've got seven years under your belt. The last time you were on theCUBE, we talked about some of the things you were doing differently. We talked about UFFO, unified fast file and object. How does this new product, FlashBlade S, compare to previous generations of FlashBlade in terms of solving unstructured data and some of these other trends we've been talking about? >> Yeah.
I touched on this a little bit earlier, but I want to go a little deeper on this concept of modularity. For those that are familiar with Pure Storage, we have what's called the Evergreen storage program. It's not so much a program as it is an engineering philosophy: the belief that everything we build should be modular in nature, so that we can have a chassis with 100% modular components inside of it, such that we can upgrade all of those features non-disruptively from one version to the next. You should think about it this way: if you have an iPhone, when you go get a new iPhone, what do you do with your old iPhone? You either throw it away or you sell it. Well, imagine if your iPhone just got newer and better each time you renewed your two-year or three-year subscription with Apple. That's effectively what we have as a core operating engineering philosophy within Pure, and that is now a completely full and robust program with this instantiation of the FlashBlade S. What that means for a customer is, "I'm future-proofed for X number of years," knowing that we have a track record of keeping customers current on the FlashArray side from the FA-400 all the way through FlashArray//X and //XL, which is about a 10-year time span. That in and of itself starts to play into customers that have concerns around ESG, right? Last time I checked, power, space, and cooling still mattered in the data center. I have people tell me all the time that power and space clearly don't matter anymore, but at the end of the day, most customers seem to say that they do. You're not throwing away refrigerator-sized pieces of equipment that once held spinning disk; it's something the size of a microwave that's populated with DFMs of QLC flash that you can actually upgrade over time. So if you want to scale more performance, we can do that through adding CPU.
If you want to scale more capacity, we can do that through adding more DFMs. And we're in control of those parameters because we're building our own DFMs, our DirectFlash Modules, on our own storage nodes, if you will. So instead of relying on the consumer packaging of an SSD, we're upgrading our own stuff and growing it as we can. So again, on the ESG side, I think for many customers going into the next decade, it's going to be a huge deal. >> Yeah, interesting comments, Matt. I mean, I don't know if you guys invented it, but you certainly popularized the idea of no-forklift upgrades and sort of set the industry on its head when you really drove that Evergreen strategy. And on that note, you guys talk about simplicity. I remember at the last Accelerate I went deep with Coz on your philosophy of keeping things simple, keeping things uncomplicated. You guys talk about using better science to do that, and there's a lot of talk these days about outcomes. How does FlashBlade S support those claims, and what do you guys mean by better science? >> Yeah. You know, better science is kind of a funny term. It was an internal term. I was on a sales call, actually, and the customer said, "Well, I understand the difference between these two, but could you tell me how we got there?" And I was a little stumped on the answer, and I just said, "Well, I think we have better scientists." And that kind of morphed into better science. A good example of that is our metadata architecture, right? Our scalable metadata allows us to avoid having that caching tier that other architectures have to rely on in order to anticipate which files are going to need to be in read cache, and read misses become very expensive. Now, a good follow-up question there, not to do your job, but it's the question that I always get: well, when you're designing your own hardware and your own software, what's the real material advantage of that?
Well, the real material advantage is that you are in control of the combination and the interaction of those two things. You give up the sort of general-purpose nature, if you will, of the performance characteristics that come along with commodity hardware, and you get a very specific performance profile that's tailored to the software that's married to it. Now, in some instances you could say, well, okay, does that really matter? Well, when you start talking about 20, 40, 50, 100, 500 petabyte data sets, every percentage matters. And those individual percentages equate to space savings. They equate to power and cooling savings. We believe we're going to have industry-best dollars per watt, and we're going to have industry-best dollars per rack unit. So really the whole game here is around scale. >> Yeah. I mean, look, there are clearly places for pure software-defined. And when cloud first came out, everybody said, "Oh, build the cloud on commodity; they don't build custom hardware." Now you see all the hyperscalers building custom hardware-and-software integration, custom silicon. So co-innovation between hardware and software seems as important, if not more important, than ever, especially for some of these new workloads, and who knows what the edge is going to bring. What's the downside of not having that philosophy, in your view? Is it just that you can't scale to the degree that you want, you can't support the new workloads, or performance? What should customers be thinking about there? >> I think the downside plays out in two ways. First is kind of the future and at-scale, as I alluded to earlier, around cost and savings over time. So if you're using a commodity SSD, there's packaging around that SSD that is wasteful both in the environmental sense and in the computing performance sense. So that's one thing.
On the second side, it's easier for us to control the controllables around reliability when you can reduce the number of things that sit in that workflow. And by workflow, I mean from when a write is acknowledged to the host until it gets down to the media; the more control you have over that, the more reliability you have over that piece. >> Yeah. And we talked about ESG earlier. I'm going to talk a little bit more about news from Accelerate with NVIDIA. You've certainly heard Jensen talk about the wasted CPU cycles in the data center. I think he's forecasted 25 to 30% of cycles are wasted on things like storage offload, or certainly networking and security. So that sort of confirms your ESG thought: we can do things more efficiently. But as it relates to NVIDIA and some of the news around AIRI, what is the AIRI? What does it stand for? What's the high-level overview of AIRI? >> So the AIRI has been really successful for both us and NVIDIA. It's a really great partnership, and we're appreciative of it. In fact, Tony Paikeday will be speaking here at Accelerate, so we're really looking forward to that. Look, there are a couple of ways to look at this, and I take the macro view. I know there's an equally good micro example, but I think the macro is really where it's at. We don't have data center space anymore, right? There are only so many data centers we can build. There's only so much power we can create. We are going to reach a point in time where municipalities are going to struggle against the businesses in those municipalities for power. And then you're essentially bidding big corporations against people who have an electric bill, and that's only going to last so long. You know who doesn't win in that? The big corporation doesn't win in that, because elected officials will have to find a way to serve the people so that they can get power, no matter how skewed we think that may be.
That is the reality. And so, as we look at this transition, that first decade of the disk-to-flash transition was really in the block world. The second decade — and we're really fortunate to be a multi-decade company, of course — of riding that wave from disk to flash is about improving space, power efficiency, and density. And we've sort of reached that. It's a long way of getting to the point about NVIDIA: these AI clusters are extremely powerful things, and they're only going to get bigger, right? They're not going to get smaller. It's not like anybody out there is saying, "Oh, it's a fad," or "This isn't going to yield any results or outcomes." They yield tremendous outcomes in healthcare. They yield tremendous outcomes in financial services. They yield tremendous outcomes in cancer research, right? These are not things that we as a society are going to give up. In fact, we're going to want to invest more in them, but they come at a cost, and one of the resources required is power. And so when you look at what we've done, in particular with NVIDIA, you've found something that is extremely power-efficient, that meets the needs — going back to that macro view — of both the community and the business. It's a win-win. >> You know, you're right. It's not going to get smaller. It's just going to continue to gain momentum, but it could get increasingly distributed. You think about the edge, which I talked about earlier; you think about AI inferencing at the edge. I think about Bitcoin mining: it's very distributed, but it consumes a lot of power. And so we're not exactly sure what the next-level architecture is, but we do know that science is going to be behind it. Talk a little bit more about your NVIDIA relationship, because I think you guys were the first — I might be wrong about this, but I think you were the first storage company to announce a partnership with NVIDIA, several years ago, probably four years ago.
How is this new solution with AIRI//S building on that partnership? What can we expect with NVIDIA going forward? >> Yeah. I think what you can expect to see is us putting the foot on the gas on where we've been with NVIDIA. As I mentioned earlier, Meta is, by some measurements, the world's largest research supercluster; they're a huge NVIDIA customer and built on Pure infrastructure. So we see those types of well-known reference architectures — not that everyone's going to have a Meta-scale reference architecture, but the base principles of what they're solving for are the base principles of what we're going to begin to see in the enterprise. I know "begin" sounds like a strange word, because there's already a big business in DGX and there's already a sizable business in performant unstructured data, but those are only going to get exponentially bigger from here. So what we see is a deepening and a strengthening of the relationship, and an opportunity for us to talk jointly to customers that are going to be building these big facilities and big data centers for these types of compute-related problems. And talking about efficiency, right? DGXs are much more efficient, and FlashBlades are much more efficient. It's a great pairing. >> Yeah. I mean, a lot of AI today is modeling in the cloud, and we're seeing HPC and data just slam together into all kinds of new use cases. And these types of partnerships are the only way we're going to solve the future problems and go after these future opportunities. I'll give you the last word. You've got to be excited with Accelerate. What should people be looking for at Accelerate and beyond? >> You know, look, I am really excited. This is going on my 12th year at Pure Storage, which has to be seven or eight Accelerates, whenever we started this thing.
So it's a great time of the year. We maybe took a couple off because of COVID, but I love reconnecting, in particular with partners and customers, and just hearing what they have to say. And this is a nice one. This is four or five years' worth of work for my team, who candidly I'm extremely proud of, for choosing to take on some of the problems that they chose to take on and find solutions for. So as Accelerate rolls around, I think we have some pretty interesting evolutions of the Evergreen program to be announced. We have some exciting announcements in the other product arenas as well, but the big one for this event is FlashBlade. And I think what we will see is: look, no one's going to completely control this transition from disk to flash, right? That's a macro trend. But there are these points in time where individual companies can accelerate the pace at which it's happening, and that happens through cost, and it happens through performance. My personal belief is this will be one of the largest points of that type of acceleration in this transformation from disk to flash in unstructured data. This is such a leap. This is essentially the equivalent of us going from the 400 series on the block side to the //X, for those of you familiar with the FlashArray lines. So it's a huge, huge leap for us, and I think it's a huge leap for the market. And look, I think you should be proud of the company you work for, and I am immensely proud of what we've created here. One of the great joys in life is to be able to talk to customers about things you care about. I've always told people my whole life, inefficiency is the bane of my existence, and I think we've rooted out a ton of inefficiency with this product. I'm looking forward to going and reclaiming a bunch of data center space and power without sacrificing any performance.
>> Well, congratulations on making it into the second decade, and I'm looking forward to the orange in the third decade. Matt Burr, thanks so much for coming back on theCUBE. It's good to see you. >> Thanks, Dave. Nice to see you as well. We appreciate it. >> All right. And thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (outro music)
Matt Burr, Scott Sinclair, Garrett Belschner | The Convergence of File and Object
>> From around the globe, presenting The Convergence of File and Object, brought to you by Pure Storage. Okay. >>
And I think that's one of the things that we're seeing, that's driving this need for convergence, as you put it of having multiple protocols can Solidated onto one platform, but also the need for high performance access to that data. >>Thank you for that. A great setup. I got, like I wrote down three topics that we're going to unpack as a result of that. So Garrett, let me, let me go to you. Maybe you can give us the perspective of what you see with customers is, is this, is this like a push where customers are saying, Hey, listen, I need to converge my file and object. Or is it more a story where they're saying, Garrett, I have this problem. And then you see unified file and object as a solution. >>Yeah, I think, I think for us, it's, you know, taking that consultative approach with our customers and really kind of hearing pain around some of the pipelines, the way that they're going to market with data today and kind of what are the problems that they're seeing. We're also seeing a lot of the change driven by the software vendors as well. So really being able to support a dis-aggregated design where you're not having to upgrade and maintain everything as a single block has been a place where we've seen a lot of customers pivot to where they have more flexibility as they need to maintain larger volumes of data and higher performance data, having the ability to do that separate from compute and cash. And some of those other layers are, is really critical. >>So, Matt, I wonder if you could follow up on that. So, so Gary was talking about this dis-aggregated design, so I like it, you know, distributed cloud, et cetera, but then we're talking about bringing things together in one place, right? So square that circle. How does this fit in with this hyper distributed cloud edge that's getting built out? >>Yeah. 
You know, I mean, I could give you the easy answer on that, but I can also pass it back to Garrett in the sense that, you know, Garrett, maybe it's important to talk about, um, elastic and Splunk and some of the things that you're seeing in, in that world and, and how that, I think the answer today, the question I think you can give, you can give a pretty qualified answer relative to what your customers are seeing. >>Oh, that'd be great, please. >>Yeah, absolutely. No, no problem at all. So, you know, I think with, um, Splunk kind of moving from its traditional design and classic design, whatever you want to, you want to call it up into smart store? Um, that was kind of one of the first that we saw kind of make that move towards kind of separating object out. And I think, you know, a lot of that comes from their own move to the cloud and updating their code to basically take advantage of object object in the cloud. Um, but we're starting to see, you know, with like Vertica Ian, for example, um, elastic other folks taking that same type of approach where in the past we were building out many to use servers. We were jamming them full of, uh, you know, SSDs and then DME drives. Um, that was great, but it doesn't really scale. >>And it kind of gets into that same problem that we see with hyperconvergence a little bit where it's, you know, you're all, you're always adding something maybe that you didn't want to add. Um, so I think it, you know, again, being driven by software is really kind of where we're seeing the world open up there. Um, but that whole idea of just having that as a hub and a central place where you can then leverage that out to other applications, whether that's out to the edge for machine learning or AI applications to take advantage of it. I think that's where that convergence really comes back in. 
Um, but I think like Scott mentioned earlier, it's really folks are now doing things with the data where before I think they were really storing and trying to figure out what are we going to actually do with it when we need to do something with it? So this is making it possible. >>Yeah. And Dave, if I could just sort of tack onto the end of the Garrett's answer there, you know, in particular verdict with beyond mode, the ability to leverage sharted sub clusters, give you, um, you know, sort of an advantage in terms of being able to isolate performance, hotspots you an advantage to that as being able to do that on a flash blade, for example. So, um, sharted, sub clusters allow you to sort of say, I am, you know, I am going to give prioritization to, you know, this particular element of my application in my dataset, but I can still share those, share that data across those, across those sub clusters. So, um, you know, as you see, you know, Vertica with the non-motor, >>You see Splunk advanced with, with smart store, um, you know, these are all sort of advancements that are, you know, it's a chicken and the egg thing. Um, they need faster storage, they need, you know, sort of a consolidated data storage data set. Um, and, and that's what sort of allows these things to drive forward. Yes, >>The verdict eon mode, there was a no, no, it's the ability to separate compute and storage and scale independently. I think, I think Vertica, if they're, if they're not the only one, they're one of the only ones I think they might even be the only one that does that in the cloud and on prem and that sort of plays into this distributed nature of this hyper distributed cloud. I sometimes call it and I'm interested in the, in the data pipeline. And I wonder Scott, if we can talk a little bit about that maybe where unified object and file fund. I mean, I'm envisioning this, this distributed mesh and then, you know, UFO is sort of a note on that, that I can tap when I need it. 
But, but Scott, what are you seeing as the state of infrastructure as it relates to the data pipeline and the trends there? >>Yeah, absolutely. Dave, so w when I think data pipeline, I immediately gravitate to analytics or, or machine learning initiatives. Right. And so one of the big things we see, and this is, it's an interesting trend. It seems, you know, we continue to see increased investment in AI, increase interest and people think, and as companies get started, they think, okay, well, what does that mean? Well, I gotta go hire a data scientist. Okay. Well that data scientist probably needs some infrastructure. And what they end, what often happens in these environments is where it ends up being a bespoke environment or a one-off environment. And then over time organizations run into challenges. And one of the big challenges is the data science team or people whose jobs are outside of it, spend way too much time trying to get the infrastructure, um, to, to keep up with their demands and predominantly around data performance. So one of the, one of the ways organizations that especially have artificial intelligence workloads in production, and we found this in our research have started mitigating that is by deploying flash all across the data pipe. We have. Yeah, >>We have data on this. Sorry to interrupt, but Pat, if you could bring up that, that chart, that would be great. Um, so take us through this, uh, Scott and, and share with us what we're looking at here. >>Yeah, absolutely. So, so Dave, I'm glad you brought this up. So we did this study. Um, I want to say late last year, uh, one of the things we looked at was across artificial intelligence environments. Now, one thing that you're not seeing on this slide is we went through and we asked all around the data pipeline and we saw flash everywhere. But I thought this was really telling because this is around data lakes. 
And when many people think about the idea of a data lake, they think about it as a repository, a place where you keep maybe cold data. What we see here, especially within production environments, is a pervasive use of flash storage. So 69% of organizations are saying their data lake is mostly flash or all flash, and we had 0% that don't have any flash in that environment. So organizations are finding that flash is an essential technology to allow them to harness the value of their data. >> So Garrett, and then Matt, I wonder if you could chime in as well. We talk about digital transformation, and I sometimes call it the COVID forced march to digital transformation. I'm curious as to your perspective on things like machine learning and its adoption, and Scott, you may have a perspective on this as well. You know, we had to pivot: we had to get laptops, we had to secure the endpoints, VDI, those became super high priorities. What happened to injecting AI into my applications, and machine learning? Did that go on the back burner? Was that accelerated along with the need to digitally transform? Garrett, I wonder if you could share with us what you saw with customers last year. >> Yeah, I mean, I think we definitely saw an acceleration. I think folks in my market are still kind of figuring out how they inject that into more of a widely distributed business use case. But again, this data hub is allowing folks to now take advantage of the data that they've had in these data lakes for a long time. I agree with Scott.
I mean, many of the data lakes that we have were somewhat flash-accelerated, but they were typically made up of large-capacity, slower-spinning nearline drives accelerated with some flash. I'm really starting to see folks now look at some of those older Hadoop implementations and leverage new ways to look at how they consume data, and many of those redesign customers are coming to us wanting to look at all-flash solutions. So we're definitely seeing it, and we're seeing an acceleration toward folks trying to figure out how to actually use it in more of a business sense now. Before, it felt a little more skunkworks: people dealing with it in a much smaller situation, maybe in the executive offices, trying to do some testing and things. >> Scott, you're nodding away. Anything you can add in here? >> Yeah. Well, first off, it's great to get confirmation that the stuff we're seeing in our research, Garrett is seeing out in the field and in the real world. As it relates to the past year, it's been really fascinating. One of the things we study at ESG is IT buying intentions: what are the initiatives that companies plan to invest in? At the beginning of 2020, we saw heavy interest in machine learning initiatives. Then you transition to the middle of 2020, in the midst of COVID. Some organizations continued on that path, but a lot of them had to pivot, right? How do we get laptops to everyone? How do we continue business in this new world? Well, now as we enter 2021, and hopefully we're coming out of the pandemic era, we're getting into a world where organizations are pivoting back toward these strategic investments around how do I maximize the usage of data, and actually accelerating them, because they've seen the importance of digital business initiatives over the past year.
>> Yeah. Matt, when we exited 2019, we saw a narrowing of experimentation, and our premise was that organizations were going to start operationalizing all their digital transformation experiments. And then we had a ten-month Petri dish on digital. So what are you seeing in this regard? >> A ten-month Petri dish is an interesting way to describe it. You know, there's another candidate for a pivot in there around ransomware as well. Security entered into the mix, which took people's attention away from some of this too. But look, I'd like to bring this up a level or two, because what we're actually talking about here is progress, right? And progress is an inevitability. Whether you believe it's by 2025, or you think it's 2035 or 2050, it doesn't matter: we're on a forced march to the eradication of disk. And that is happening in many ways due to some of the things Garrett and Scott were referring to, in terms of our customers' demands for how they're going to actually leverage the data that they have. And that brings me to my final point on this, which is that we see customers in three phases. There's the first phase, where they say, hey, I have this large data store, and I know there's value in there, but I don't know how to get to it. Or: I have this large data store, I started a project to get value out of it, and we failed. Those could be customers that marched down the Hadoop path early on. They got some value out of it, but they realized that HDFS wasn't going to be a modern protocol going forward, for any number of reasons.
You know, the first being: hey, if I have gold.master, how do I know that gold.4 is consistent with my gold.master? So data consistency matters. And then you have the third group that says, I have these large datasets, I know how to extract value from them, and I'm already on to the Verticas, the Elastics, the Splunks, et cetera. That latter group are the folks that kept their projects going, because they were already extracting value from them. The first two groups, we're seeing, are saying the second half of this year is when we're going to begin really picking up on these types of initiatives again. >> Well, thank you, Matt, by the way, for hitting the escape key, because I think value from data really is what this is all about. And there are some real blockers there that I want to talk about. You mentioned HDFS. We were very excited, of course, in the early days of Hadoop; many of the concepts were profound, but at the end of the day, it was too complicated. We've got these hyper-specialized roles serving the business, but it still takes too long, and it's too hard to get value from data. One of the blockers is infrastructure: the complexity of that infrastructure really needs to be abstracted, taken up a level. We're starting to see this in the cloud, where some of those abstraction layers are being built by the cloud vendors, but more importantly, a lot of vendors like Pure are saying, hey, we can do that heavy lifting for you, and we have the expertise and engineering to do cloud native. So I'm wondering what you guys see. Maybe Garrett, you could start us off, and the others can weigh in, on some of the blockers to getting value from data and how we're going to address them in the coming decade. >> Yeah.
I mean, I think part of what we're solving here, obviously, is Pure bringing flash to a market that traditionally was utilizing much slower media. The other thing I see that's very nice with FlashBlade, for example, is the ability to grow a blade at a time once you get it set up. A lot of these teams don't have big budgets, and being able to break purchases down into almost blade-size chunks has really allowed folks to get more projects off the ground, because they don't have to buy a full, expensive system to run them. So that's helped a lot. >> I think the wider use cases have helped a lot, too. Matt mentioned ransomware; using SafeMode as a way to help with ransomware has been a really big growth spot for us, and we've got a lot of customers very interested and excited about that. The other thing I would say is bringing DevOps into data is another thing we're seeing. That push toward DataOps, really using automation and infrastructure as code to drive things through the system the way we've seen with automation through DevOps, is an area where we're seeing a ton of growth from a services perspective. >> Guys, any other thoughts on that? I'll tee it up: we are seeing some bleeding edge, which is somewhat counterintuitive, especially from a cost standpoint, organizational changes at some companies. Think of some of the internet companies that do music, for instance, and are adding podcasts, et cetera. Those are different data products.
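The SafeMode capability Garrett mentions, snapshots kept outside the ransomware blast radius that refuse deletion, can be modeled in a few lines. This is a toy sketch of the idea only, not Pure's actual SafeMode implementation or API:

```python
# Toy model of immutable "safe mode" snapshots: once taken, a protected
# snapshot cannot be deleted through the normal admin path, so data can be
# rolled back even if an attacker encrypts the live volume.

class Volume:
    def __init__(self, data):
        self.data = dict(data)
        self._safe_snapshots = []  # kept outside the writable path

    def snapshot(self):
        # Protected snapshots are point-in-time copies of the live data.
        self._safe_snapshots.append(dict(self.data))

    def delete_snapshots(self):
        # In this toy model, protected snapshots simply refuse deletion.
        raise PermissionError("safe-mode snapshots are immutable")

    def restore_latest(self):
        self.data = dict(self._safe_snapshots[-1])

vol = Volume({"report.txt": "Q3 numbers"})
vol.snapshot()
vol.data["report.txt"] = "ENCRYPTED BY ATTACKER"  # ransomware hits live data
vol.restore_latest()                               # roll back from protected copy
print(vol.data["report.txt"])  # → Q3 numbers
```

The design point is that the restore path stays available precisely because the delete path does not: an attacker (or a compromised admin account) cannot destroy the snapshots before demanding payment.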
We're seeing them actually reorganize their data architectures to make them more distributed, and actually put the domain heads, the business heads, in charge of the data and the data pipeline. That is maybe less efficient, but again, it's some of this bleeding edge. What else are you guys seeing out there that might be a harbinger of the next decade? >> I'll go first. Specific to the construct you threw out, Dave, one of the things we're seeing is that the application owner, maybe it's the DevOps person, or the application owner through the DevOps person, is becoming more technical in their understanding of how infrastructure interfaces with their application. What we're seeing on the FlashBlade side is that we're having a lot more conversations with application people than just IT people. It doesn't mean the IT people aren't there; they're still there, for sure, and they still have to deliver the service, et cetera. But the days of IT building up a catalog of services and a business owner subscribing to one of those services, picking whatever sort of fits their need, I don't think that construct holds; I think that's the construct that changes going forward. The application owner is becoming much more prescriptive about how they want the infrastructure to fit into their application, and that's a big change. Certainly folks like Garrett and CDW do a good job with this, being able to get to the application owner and bring those two sides together; there's a tremendous amount of value there. For us it's meant a little bit of retooling: we've traditionally sold to the IT side of the house.
And we've had to teach ourselves how to talk the language of applications. So I think you pointed out a good construct there, and that application owner now playing a much bigger role in what they're expecting from the performance of IT infrastructure is a key change. >> Interesting. I mean, that definitely is a trend that puts you guys closer to the business, where the infrastructure team is serving the business. As opposed to, sometimes I talk to data experts, and they're frustrated, especially data owners or data product builders, who feel like they have to beg the data pipeline team to get new data sources in or get data out. How about the edge? Maybe Scott, you can kick us off. We're seeing the emergence of edge use cases, AI inferencing at the edge, a lot of data at the edge. What are you seeing there, and how does this unified object and file, to bring us back to that, fit? >> Wow, Dave, how much time do we have? >> Tell me, first of all, Scott, why don't you just tell everybody what the edge is? You've got it all figured out. >> How much time do you have? That's a great question, right? If you take a step back, I think it comes back to something you mentioned, Dave: it's about extracting value from data. When you extract value from data, as Matt pointed out, the influencers or the users of data, the application owners, have more power, because they're driving revenue now. So what that means from an IT standpoint is that it's not just, hey, here are the services you get, use them or lose them, or don't throw a fit. It's: no, I have to adapt, I have to follow what my application owners need.
Now, when you bring that back to the edge, what it means is that data is not localized to the data center. I mean, we just went through a nearly twelve-month period where the entire workforce of most of the companies in this country went distributed, and business continued. So if business is distributed, data is distributed: in the data center, at the edge, in the cloud, and in tons of other places. And it also means you have to be able to extract and utilize data anywhere it may be. I think that's something we're going to continue to see. It comes back to key characteristics we've talked about for years, like performance and scale, but we need to start rethinking them, because on one hand we need performance everywhere, and on the other, in terms of scale, and this ties back to getting value from data, there's something I call the massive-success problem. One of the things we see, especially with workloads like machine learning, is that businesses find success with them, and as soon as they do, they say, well, I need about twenty of these projects now. All of a sudden that overburdens IT organizations, especially across core, edge, and cloud environments. So an environment's ability to meet performance and scale demands, wherever data needs to be, is really important. >> You know, Dave, I'd like to tie together two things I heard from Scott and Garrett that I think are important, and it's around this concept of scale. Some of us are old enough to remember the day when a 10-terabyte blast radius was too big a blast radius for people to take on, or a terabyte of storage was considered an exemplary budget environment. Right?
Now we think of terabytes kind of like we used to think of gigabytes; you don't have to explain to anybody what a petabyte is anymore. And what's on the horizon, and it's not far off, are exabyte-scale dataset workloads. You start to think about what could be in that exabyte of data. >> We've talked about how you extract that value, and we've talked about how you start. But if the scale is big, not everybody's going to start at a petabyte or an exabyte, to Garrett's point. The ability to start small and grow into these projects is a really fundamental concept here, because you're not going to just kick off a five-petabyte project; whether you do that on disk or flash, it's going to be expensive. But if you could start at a couple of hundred terabytes, not just as a proof of concept but as something you could get predictable value out of, then you could say, hey, this scales linearly, or non-linearly, in a way that lets me map my investments to how I dig deeper into this. That's how these successful projects are going to start, because the people starting with very large, expansive greenfield projects at multi-petabyte scale are going to find it hard to realize near-term value. >> Excellent. We've got to wrap, but Garrett, I wonder if you could close it out. When you look forward and talk to customers, do you see this unification of file and object? Is it an evolutionary trend? Is it going to be a lever that customers use? How do you see it evolving over the next two to three years and beyond?
>> Yeah, I mean, from our perspective, just from the numbers we're seeing in the market, the amount of growth happening with unstructured data is really just starting to hit this data deluge we've been talking about for so many years. It really does seem to be coming true as things scale out and folks settle into: okay, I'm going to use the cloud to start and maybe train my models, but then I'm going to bring it back on-prem, because of latency or security or whatever the decision points are. This is something that is not going to slow down. And I think folks like Pure, with the tools they give us to use and bring to market with our customers, are really key and critical for us. So I see it as a huge growth area and a big focus for us moving forward. >> Guys, great job unpacking a topic that gets covered a little bit, but I think we covered some ground that is new. So thank you so much for those insights and that data; I really appreciate your time. >> Thanks, Dave. >> Thanks. >> Yeah, thanks, Dave. >> Okay, and thank you for watching The Convergence of File and Object. Keep it right there; we'll be right back after the short break.
Matt Burr, General Manager, FlashBlade, Pure Storage | The Convergence of File and Object
From around the globe, it's theCUBE, presenting The Convergence of File and Object, brought to you by Pure Storage. >> We're back with The Convergence of File and Object, a special program made possible by Pure Storage and co-created with theCUBE. In this series we're exploring the convergence between file and object storage, digging into the trends, the architectures, and some of the use cases for unified fast file and object storage, UFFO. With me is Matt Burr, who's the vice president and general manager of FlashBlade at Pure Storage. Hello, Matt, how are you doing? >> I'm doing great. Morning, Dave, how are you? >> Good, thank you. Hey, let's start with a little 101, kind of the basics. What is unified fast file and object? >> Yeah, so look, I think you've got to start with first principles: the rise of unstructured data. When we think about unstructured data, you think about the projections: 80% of data by 2025 is going to be unstructured, whether that's machine-generated data or AI- and ML-type workloads. You start to see, I don't want to say it's a boom, but it's sort of a renaissance for unstructured data, if you will, where we move away from what we've traditionally thought of as general-purpose NAS and file shares to things that focus on fast object, taking advantage of S3, cloud-native applications that need to integrate with applications on site. AI and ML workloads tend to share data across multiple data sets, and you really need a platform that can deliver both highly performant and scalable fast file and object from one system. >> So talk a little bit more about some of the drivers that bring forth that need to unify file and object. >> Yeah, I mean, look, there's a real challenge in managing bespoke infrastructure or architectures around general-purpose NAS and DAS, et cetera. So if you think about how an architect
sort of looks at an application, they might say, well, okay, I need to have fast DAS storage proximal to the application, but that's going to require a tremendous amount of DAS, which is a tremendous amount of drives, right? Hard drives are historically pretty unwieldy to manage, because you're replacing them relatively constantly at multi-petabyte scale. So you start to look at the complexity of DAS, you start to look at the complexity of general-purpose NAS, and you start to look at, quite frankly, something a lot of people don't really want to talk about anymore: actual data center space. Consolidation matters. The ability to take something the size of a microwave, like a modern FlashBlade or a modern UFFO device, and replace something that might be the size of three or four or five refrigerators. >> So Matt, why is now the right time for this? I mean, for years nobody really paid much attention to object. S3 obviously changed that course, but most of the world's data is still stored in file formats, and you get there with NFS or SMB. Why is now the time to think about unifying object and file? >> Well, because we're moving to things like a contactless society. The things we're going to do are going to require tremendously more compute power, network, and, quite frankly, storage throughput. I can give you two real primary examples here. Warehouses are being taken over by robots, if you will. It's not a war, it's a sort of friendly advancement in how do I store a box in a warehouse. We have a customer who focuses on large big-box distribution warehousing, and a box that carried an object two weeks ago might have a different box size two weeks later. Well, that robot needs to know where the space is in the warehouse in order
to put it, but it also needs to be able to process: hey, I don't want to put the thing I'm going to access the most in the back of the warehouse, I'm going to put that thing in the front of the warehouse. All of those types of data, and you can think of the robot as almost an edge device, it's processing in real time, are unstructured data, and it's object, right? So it's the emergence of these new types of workloads. And I'll give you the opposite example; the other end of the spectrum is ransomware. Today we talk to customers, and they'll quite commonly say, hey, anybody can sell me a backup device; I need something that can restore quickly. If you have the ability to restore at 270 terabytes an hour, or 250 terabytes an hour, that's much faster when you're dealing with a ransomware attack: you want to get your data back quickly. >> You know, I was actually going to ask you about that later, but since you brought it up: what is the right, I guess call it architecture, for ransomware? Explain how unified object and file would support me. I get the fast recovery, but how would you recommend a customer go about architecting a ransomware-proof system? >> Yeah, well, with FlashBlade and with FlashArray there's an actual feature called SafeMode, and SafeMode protects the snapshots and the data from being part of the ransomware event. So if you're in a ransomware situation like this, you're able to leverage SafeMode. What happens in a ransomware attack is that you can't get access to your data, so the bad guy, the perpetrator, is basically saying, hey, I'm not going to give you access to your data until you pay me X in Bitcoin, or whatever it might be. With SafeMode, those snapshots are protected outside of the ransomware blast zone, and you can bring back
those snapshots. Because what's your alternative if you're not doing something like that? Your alternative is either to pay and unlock your data, or to start restoring from tape or slow disk, which could take you days or weeks to get your data back. So leveraging SafeMode in either the FlashArray or the FlashBlade product is a great way to go about architecting against ransomware. >> I've got to put my thinking-like-a-customer hat on. So SafeMode, that's an immutable mode, right? Can't change the data. Can an administrator go in and change that mode? Can he turn it off? Do I still need an air gap, for example? What would you recommend there? >> Yeah, so there are still RBAC, role-based access control, policies around who can access that SafeMode and who can't. >> Right, okay. Anyway, subject for a different day. I want to actually bring up, if you don't object, a topic that I think used to be really front and center and is now becoming front and center again. Wikibon just produced a research note forecasting the future of flash and hard drives, and those of you who follow us know we've done this for quite some time. If you could bring up the chart here, you can see, and we see this happening again: we originally forecast the death of, quote-unquote, high-spin-speed disk drives, which is kind of an oxymoron. You can see on this chart that hard disk had a magnificent journey, but it peaked in manufacturing volume in 2010. The reason that is so important is that volumes now are steadily dropping, you can see that, and we use Wright's Law to explain why this is a problem. Wright's Law essentially says that as your cumulative manufacturing volume doubles, your cost to manufacture declines by a constant percentage. Now, I won't go into too much detail on that, but suffice it to say that flash volumes are growing very
rapidly; HDD volumes aren't. And so flash, because of consumer volumes, can take advantage of Wright's Law and that constant cost reduction, which is what's really important for the next generation, which is always more expensive to build. So this kind of marks the beginning of the end. Matt, what do you think? What does the future hold for spinning disk, in your view? >> Well, I can give you the answer on two levels. On a personal level, it's why I come to work every day: the eradication, or extinction, of an inefficient thing. I like to say that inefficiency is the bane of my existence, and I think hard drives are largely inefficient. I'm willing to accept the long-standing argument that we've seen this transition in block, and we're starting to see it repeat itself in unstructured data, and I'm willing to accept the argument that cost is a vector here; it most certainly is. HDDs have been considerably cheaper than flash storage, even to this day, up to this point. But we're starting to approach the point where you reach about a 3x differentiator between the cost of an HDD and an SSD, and that really is the point in time when you begin to pick up a lot of volume and velocity. That maps directly to what you're seeing here, which is a slow decline that I think is going to become even more rapid, probably starting around next year, where you start to see SSDs really replacing HDDs at a much more rapid clip, particularly on the unstructured data side. And it's largely around cost. The workloads we talked about, robots in warehouses, or other types of advanced machine learning and artificial intelligence applications and workflows, require a degree of performance that a hard drive just can't deliver. We are seeing
the um creative innovative uh disruption of an entire industry right before our eyes it's a fun thing to live through yeah and and we would agree i mean it doesn't the premise there is it doesn't have to be less expensive we think it will be by you know the second half or early second half of this decade but even if it's a we think around a 3x delta the value of of ssd relative to spinning disk is going to overwhelm just like with your laptop you know it got to the point where you said why would i ever have a spinning disc in my laptop we see the same thing happening here um and and so and we're talking about you know raw capacity you know put in compression and dedupe and everything else that you really can't do with spinning discs because of the performance issues you can do with flash okay let's come back to uffo can we dig into the challenges specifically that that this solves for customers give me give us some examples yeah so you know i mean if we if we think about the examples um you know the the robotic one um i think is is is the one that i think is the marker for you know kind of of of the the modern side of of of what we see here um but what we're you know what we're what we're seeing from a trend perspective which you know not everybody's deploying robots right um you know there's there's many companies that are you know that aren't going to be in either the robotic business uh or or even thinking about you know sort of future type oriented type things but what they are doing is greenfield applications are being built on object um generally not on not on file and and not on block and so you know the rise of of object as sort of the the sort of let's call it the the next great protocol for um you know for uh for for modern workloads right this is this is that that modern application coming to the forefront and that could be anything from you know financial institutions you know right down through um you know we've even see it and seen it in oil and gas 
We're also seeing it across healthcare. So as industries take this opportunity to modernize, they're modernizing not on things that are leveraging sort of archaic disk technology, they're really focusing on object, but they still have file workflows that they need to be able to support. And so having the ability to deliver those things from one device, in a capacity orientation or a performance orientation, while at the same time dramatically simplifying the overall administration of your environment, both physically and non-physically, is a key driver. >> So the great thing about object is it's simple. It's kind of a get-put metaphor, it scales out because it's got metadata associated with the data, and it's cheap. The drawback is you don't necessarily associate it with high performance, and as well, most applications don't speak in that language. They speak in the language of file, or, as you mentioned, block. So I see real opportunities here. If I have some data that's not necessarily frequently accessed every day, but, whether it's end of quarter or whatever it is, or machine learning, I want to apply some AI to that data, I want to bring it in and then apply a file format for performance reasons. Is that right? Maybe you could unpack that a little bit. >> Yeah, so I think you described it well, but I don't think object necessarily has to be slow, nor does it have to be, because you brought up a good point with metadata, right? Being able to scale to billions of objects is of value. And I think people do traditionally associate object with slow, but it's not necessarily slow anymore. We did a sort of unofficial survey of our customers and our employee base, and when people described object, they thought of it as like law firms storing a Word doc, if you will. And that's just, I think there's a lack of understanding, or a misnomer, around what modern object has become. And performant object, particularly at scale, when we're talking about billions of objects, that's the next frontier, right? Is it at pace, performance-wise, with the other protocols? No, but it's making leaps and bounds. >> So talk a little bit more about some of the verticals that you see. I mean, when I think of financial services, I think transaction processing, but of course they have tons of unstructured data. Are there any patterns you're seeing by vertical market? >> We're not, and that's the interesting thing. As a company with a block heritage, or a block DNA, those patterns were pretty easy to spot, right? There were a certain number of databases that you really needed to support, Oracle, SQL, some Postgres work, etc., then kind of the modern databases around Cassandra and things like that. You knew that there were going to be VMware environments. You could sort of see the trends and where things were going. Unstructured data is such a broader, horizontal thing. Inside of oil and gas, for example, you have specific applications and bespoke infrastructures for those applications. Inside of media and entertainment, the same thing. The trend that we're seeing, the commonality that we're seeing, is the modernization of object as a starting point for all of the net new workloads within those industry verticals. That's the most common request we see: what's your object roadmap, what's your object strategy, where do you think
object is going. So there isn't any one path; it's really just kind of a wide-open field in front of us, with common requests across all industries. >> So the amazing thing about Pure, just as a kind of quasi armchair historian of the industry, is that Pure was really the only company in many, many years to be able to achieve escape velocity, break through a billion dollars. I mean, 3PAR couldn't do it, Isilon couldn't do it, Compellent couldn't do it, I could go on, but Pure was able to achieve that as an independent company. And so you become a leader, you look at the Gartner Magic Quadrant, you're a leader in there. I mean, if you've made it this far, you've got to have some chops. And so, of course, it's very competitive. There are a number of other storage suppliers that have announced products that unify object and file. So I'm interested in how Pure differentiates. Why Pure? >> It's a great question, and it's one that, having been a longtime Puritan, I take pride in answering. And it's actually a really simple answer: it's business model, innovation, and technology. The technology that goes behind how we do what we do, right? And I don't mean just the product. Innovation is product, but also having a better support model, for example, or, on the business model side, Evergreen Storage, right, where we sort of look at your relationship to us as a subscription. We're going to take the thing that you've had and we're going to modernize that thing in place over time, such that you're not rebuying that same terabyte or petabyte of storage that you've paid for over time. So sort of three legs of the stool that have made Pure clearly differentiated, and I think the market has recognized that. You're right, it's hard to break through to a billion dollars, but I look forward to the day that we have two billion-dollar products, and I think with that rise in unstructured data, growing to 80 percent by 2025, and the massive transition that you guys have noted in your HDD slide, I think it's a huge opportunity for us on the other, unstructured data side of the house. >> You know, the other thing I'd add, Matt, and I've talked to Coz about this, is it's simplicity first. I've asked them, why don't you do this, why don't you do that, and the answer is always the same: that adds complexity, and we put simplicity for the customer ahead of everything else. And I think that's served you very, very well. What about the economics of unified file and object? I mean, if you're bringing additional value, presumably there's a cost to that, but there's got to also be a business case behind it. What kind of impact have you seen with customers? >> Yeah, look, I'll go back to something I mentioned earlier, which is just the reclamation of floor space and power and cooling. People want to search for the sexier element, if you will, when it comes to looking at how you derive value from something, but the reality is, if you're reducing your power consumption by a material percentage, power bills matter in big data centers. Customers typically are facing a paradigm of, well, I want to go to the cloud, but the cloud is ending up more expensive than I thought it was going to be, or, I've figured out what I can use in the cloud, I thought it was going to be everything, but it's not going to be everything, so hybrid's where we're landing, but I want to be out of the data center business, and I don't want to have a team of 20 storage people to administer my storage. So there's this very tangible value around, hey, if I could manage multiple petabytes with one full-time engineer, because the system, to your and Coz's point, was radically simpler to administer, and didn't require someone to be running around swapping drives all the time, would that be of value? The answer is yes, 100% of the time. And then you start to look at, okay, well, on the UFFO side, from a product perspective, hey, if I have to manage a bespoke environment for this application, and a bespoke environment for that application, and another bespoke environment for this application, I'm managing four different things, and can I actually share data across those four different things? There are ways to share data, but for most customers it just gets too complex. How do you even know what your gold master copy of data is if you have it in four different places, or you try to have it in four different places, and it's four different siloed infrastructures? So when you get to the question of how you measure value in UFFO, it's actually being able to have all of that data concentrated in one place, so that you can share it from application to application. >> Got it. We've only got a couple minutes left, but I'm interested in the update on FlashBlade generally, and I also have a specific question. I mean, look, getting file right is hard enough. You just announced SMB support for FlashBlade. I'm interested in how that fits in. I think it's kind of obvious, with file and object converging, but give us the update on FlashBlade, and maybe you could address that specific question. >> Yeah, so look, we're tremendously excited about the growth of FlashBlade. We found workloads we never expected to find. The rapid restore workload was one that was actually brought to us from a customer, and it has become one of our top two, three, four workloads.
So we're really happy with the trend we've seen in it. And mapping back to thinking about HDDs and SSDs, we're well on a path to building a billion-dollar business here, so we're very excited about that. But to your point, you don't just snap your fingers and get there, right? We've learned that doing file and object is harder than block, because there are more things that you have to go do. For one, you're basically focused on three protocols, SMB, NFS, and S3, not necessarily in that order. But to your point about SMB, we are on the path to releasing full native SMB support in the system, which will allow us to service customers. We have a limitation with some customers today where they'll have an SMB portion of their NFS workflow, and we do great on the NFS side, but we didn't have the ability to plug into the SMB component of their workflow. So that's going to open up a lot of opportunity for us on that front. And we continue to invest significantly across the board in areas like security, which has become more than just a hot button. Security's always been there, but it feels like it's blazing hot today, and so going through the next couple of years we'll be looking at developing some pretty material security elements of the product as well. So, well on a path to a billion dollars is the net on that, and we're fortunate to have SMB here, and we're looking forward to introducing that to those customers that have NFS workloads today with an SMB component. >> Yeah, nice tailwind, good TAM expansion strategy. Matt, thanks so much. We're out of time, but really appreciate you coming on the program. >> We appreciate you having us, and thanks so much, Dave. Good to see you. >> All right, good to see you. And you're watching the convergence of file and object. Keep it right there, we'll be back with more right after this short break. (Music)
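The Wright's Law dynamic discussed in the interview above can be sketched numerically. This is a minimal illustration with made-up learning rates, volumes, and starting costs, not Wikibon's actual forecast data: each doubling of cumulative manufacturing volume cuts unit cost by a constant percentage, so the faster-growing technology closes the cost gap.

```python
# Wright's Law sketch: each doubling of cumulative manufacturing volume
# reduces unit cost by a constant percentage (the "learning rate").
# All costs, volumes, and rates below are illustrative assumptions,
# not Wikibon's actual data.
import math

def wrights_law_cost(initial_cost, learning_rate, cumulative_volume, initial_volume=1.0):
    """Unit cost after cumulative volume grows from initial_volume."""
    doublings = math.log2(cumulative_volume / initial_volume)
    return initial_cost * (1.0 - learning_rate) ** doublings

# Hypothetical: flash volume doubles 4 times while HDD volume doubles once,
# with 28% vs. 8% learning rates.
flash_cost = wrights_law_cost(initial_cost=100.0, learning_rate=0.28, cumulative_volume=16)
hdd_cost = wrights_law_cost(initial_cost=10.0, learning_rate=0.08, cumulative_volume=2)

print(f"flash $/unit: {flash_cost:.2f}")   # 100 * 0.72**4 ≈ 26.87
print(f"hdd   $/unit: {hdd_cost:.2f}")     # 10 * 0.92 ≈ 9.20
print(f"flash/hdd cost ratio: {flash_cost / hdd_cost:.2f}x")
```

Under these invented numbers the ratio lands near the 3x window the interview describes as the tipping point; the mechanism, not the specific figures, is the point.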
Matt Burr, Pure Storage & Rob Ober, NVIDIA | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE! Covering Pure Storage Accelerate 2018 brought to you by Pure Storage. >> Welcome back to theCUBE's continuing coverage of Pure Storage Accelerate 2018, I'm Lisa Martin, sporting the clong, and apparently this symbol actually has a name, the clong, I learned that in the last half an hour. I know, who knew? >> Really? >> Yes! Is that a C or a K? >> Is that a Prince orientation or, what is that? >> Yes, I'm formerly known as. >> Nice. >> Who of course played at this venue, as did Roger Daltrey, and The Who. >> And I might have been staff for one of those shows. >> You could have been, yeah, could I show you to your seat? >> Maybe you're performing later. You might not even know this. We have a couple of guests joining us. We've got Matt Burr, the GM of FlashBlade, and Rob Ober, the Chief Platform Architect at NVIDIA. Guys, welcome to theCUBE. >> Hi. >> Thank you. >> Dave: Thanks for coming on. >> So, lots of excitement going on this morning. You guys announced Pure and NVIDIA just a couple of months ago, a partnership with AIRI. Talk to us about AIRI, what is it? How is it going to help organizations in any industry really democratize AI? >> Well, AIRI, so AIRI is something that we announced, the AIRI Mini, today here at Accelerate 2018. AIRI was originally announced at GTC, NVIDIA's GPU Technology Conference, back in March, and what it is is, it essentially brings NVIDIA's DGX servers, connected with either Arista or Cisco switches, down to the Pure Storage FlashBlade, so this is something that sits in less than half a rack in the data center, that replaces something that was probably 25 or 50 racks of compute and storage, so, I think Rob and I like to talk about it as kind of a great leap forward in terms of compute potential. >> Absolutely, yeah. It's an AI supercomputer in a half rack.
>> So one of the things that this morning, that we saw during the general session that Charlie talked about, and I think Matt (mumbles) kind of a really brief history of the last 10 to 20 years in storage, why is modern external storage essential for AI? >> Well, Rob, you want that one, or you want me to take it? Coming from the non-storage guy, maybe? (both laugh) >> Go ahead. >> So, when you look at the structure of GPUs, and servers in general, we're talking about massively parallel compute, right? We're now taking not just tens of thousands of cores but even more cores, and we're actually finding a path for them to communicate with storage that is also massively parallel. Storage has traditionally been something that's been kind of serial in nature. Legacy storage has always waited for the next operation to happen. You actually want things that are parallel so that you can have parallel processing, both at the compute tier, and parallel processing at the storage tier. But you need to have big network bandwidth, which was what Charlie was alluding to, when Charlie said-- >> Lisa: You like his stool? >> When Charlie was, one of the legs of his stool, was talking about, 20 years ago we were still, or 10 years ago, we were at 10 gig networks, and the emergence of 100 gig networks has really made the data flow possible. >> So I wonder if we can unpack that. We talked a little bit to Rob Lee about this, the infrastructure for AI, and wonder if we can go deeper. So take the three legs of the stool, and you can imagine this massively parallel compute-storage-networking grid, if you will, one of our guys calls it uni-grid, not crazy about the name, but this idea of alternative processing, which is your business, really spanning this scaled-out architecture, not trying to stuff as much function on a die as possible, really is taking hold, but what is the, how does that infrastructure for AI evolve from an architect's perspective?
>> The overall infrastructure? I mean, it is incredibly data intensive. I mean a typical training set is terabytes, in the extreme it's petabytes, for a single run, and you will typically go through that data set again and again and again, in a training run, (mumbles) and so you have one massive set that needs to go to multiple compute engines, and the reason it's multiple compute engines is people are discovering that as they scale up the infrastructure, you actually get pretty much linear improvements, and you get a time to solution benefit. Some of the large data centers will run a training run for literally a month, and if you start scaling it out, even on these incredibly powerful things, you can bring time to solution down, you can have meaningful results much more quickly. >> Can you give us a sort of practical application of that? >> Yeah, there's a large hedge fund based in the U.K. called Man AHL. They're a systems-based quantitative trading firm, and what that means is, humans really aren't doing a lot of the trading, machines are doing the vast majority if not all of the trading. What the humans are doing is they're essentially quantitative analysts. The number of simulations that they can run is directly correlative to the number of trades that their machines can make. And so the more simulations you can make, the more trades you can make. The shorter your simulation time is, the more simulations that you can run. So we're talking about, in a sort of a meta context, that concept applies to everything from retail and understanding, if you're a grocery store, what products are not on my shelves at a given time. In healthcare, discovering new forms of pathologies for cancer treatments. Financial services we touched on, but even broader, right down into manufacturing, right?
Looking at, what are my defect rates on my lines, and if it used to take me a week to understand the efficiency of my assembly line, if I can get that down to four hours, and make adjustments in real time, that's more than just productivity, it's progress. >> Okay so, I wonder if we can talk about how you guys see AI emerging in the marketplace. You just gave an example. We were talking earlier again to Rob Lee about, it seems today to be applied in narrow use cases, and maybe that's going to be the norm, whether it's autonomous vehicles or facial recognition, natural language processing, how do you guys see that playing out? Will it be this kind of ubiquitous horizontal layer, or do you think the adoption is going to remain along those sort of individual lines, if you will? >> At the extreme, like when you really look out at the future, let me start by saying that my background is processor architecture. I've worked in computer science, and the whole thing is to understand problems, and create the platforms for those things. What really excited me and motivated me about AI and deep learning is that it is changing computer science. It's just turning it on its head. And instead of explicitly programming, it's now implicitly programming, based on the data you feed it. And this changes everything, and it can be applied to almost any use case. So I think that eventually it's going to be applied in almost any area that we use computing today. >> Dave: So another way of asking that question is how far can we take machine intelligence, and your answer is pretty far, pretty far.
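Rob's distinction between explicit and implicit programming can be illustrated with a deliberately tiny sketch. Nothing here comes from NVIDIA; it's just a single perceptron that learns the OR function from labeled examples rather than from a hand-written rule, with invented data and hyperparameters:

```python
# Tiny illustration of "implicitly programming" from data: a single perceptron
# learns the OR function from labeled examples instead of hand-coded rules.
# The training data and hyperparameters are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - pred          # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# The "program" is the data: these examples define OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

The same loop, fed different examples, yields different behavior; the "program" lives in the data, which is the point Rob is making at vastly larger scale.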
So as a processor architect, obviously this is very memory intensive. I was at the Micron financial analyst meeting earlier this week and listening to what they were saying about these emerging memories, you got T-RAM, and obviously you have flash, people are excited about 3D XPoint, I heard it, somebody mentioned 3D XPoint on the stage today, what do you see there in terms of memory architectures and how they're evolving, and what do you need as a systems architect? >> I need it all. (all talking at once) No, if I could build a GPU with more than a terabyte per second of bandwidth and more than a terabyte of capacity, I could use it today. I can't build that, I can't build that yet. But I need, it's a different stool, I need teraflops, I need memory bandwidth, and I need memory capacity. And really we just push to the limit. Different types of neural nets, different types of problems, will stress different things. They'll stress the capacity, the bandwidth, or the actual compute. >> This makes the data warehousing problem seem trivial, but do you see, you know what I mean? Data warehousing, it was always a chase, chasing the chips, a snake swallowing a basketball I called it, but do you see a day that these problems are going to be solved architecturally? Everybody talks about Moore's Law moderating, or is this going to be this perpetual race that we're never going to get to the end of? >> So let me put things in perspective first. It's easy to forget that the big bang moment for AI and deep learning was the summer of 2012, so slightly less than six years ago. That's when AlexNet hit the scene and people went wow, this is a whole new approach, this is amazing. So a little less than six years in. I mean it is a very young area, it is in incredible growth, the change in state of the art is literally month by month right now. So it's going to continue on for a while, and we're just going to keep growing and evolving.
Maybe five years, maybe 10 years, things will stabilize, but it's an exciting time right now. >> Very hard to predict, isn't it? >> It is. >> I mean who would've thought that Alexa would be such a dominant factor in voice recognition, or that a bunch of cats on the internet would lead to facial recognition. I wonder if you guys can comment, right? I mean. >> Strange beginnings. (all laughing) >> I wonder if I can ask you guys about the black box challenge. I've heard some companies talk about how we're going to white box everything, make it open, but the black box problem meaning, if I have to describe, and we may have talked about this, how I know that it's a dog, I struggle to do that, but a machine can do that. I don't know how it does it, it probably can't tell me how it does it, but it knows, with a high degree of accuracy. Is that black box phenomenon a problem, or do we just have to get over it? >> Up to you. >> I don't think it's a problem. I know mathematicians, who are friends, and it drives them crazy, because they can't tell you why it's working. So it's an intellectual problem that people just need to get over. But it's the way our brains work, right? And our brains work pretty well. There are certain areas I think where for a while there will be certain laws in place where, if you can't prove the exact algorithm, you can't use it, but by and large, I think the industry's going to get over it pretty fast. >> I would totally agree, yeah. >> You guys are optimists about the future. I mean you're not up there talking about how jobs are going to go away, that's not something that you guys are worried about, and generally, we're not either. However, machine intelligence, AI, whatever you want to call it, it is very disruptive. There's no question about it. So I got to ask you guys a few fun questions.
Do you think large retail stores are going to, I mean nothing's in the extreme, but do you think they'll generally go away? >> Do I think large retail stores will generally go away? When I think about retail, I think about grocery stores, and the things that are going to go away, I'd like to see standing in line go away. I would like my customer experience to get better. I don't believe that 10 years from now we're all going to live inside our houses and communicate over the internet and text, and half of that will be with chat bots, I just don't believe that's going to happen. I think the Amazon effect has a long way to go. I just ordered a pool thermometer from Amazon the other day, right? I'm getting old, I ordered readers from Amazon the other day, right? So I kind of think it's that spur-of-the-moment item that you're going to buy. Because even in my own personal habits, I'm not buying shoes and returning them, and waiting five to ten times through that cycle, to get there. You still want that experience of going to the store. Where I think retail will improve is understanding that I'm on my way to their store, and improving the experience once I get there. So, I think you'll see, they need to see the Amazon effect that's going to happen, but what you'll see is technology being employed to reach a place where my end-user experience improves such that I want to continue to go there. >> Do you think owning your own vehicle, and driving your own vehicle, will be the exception, rather than the norm? >> It pains me to say this, 'cause I love driving, but I think you're right. I think it's a long, I mean it's going to take a while, it's going to take a long time, but I think inevitably it's just too convenient, things are too congested, by freeing up autonomous cars, things that'll go park themselves, whatever, I think it's inevitable. >> Will machines make better diagnoses than doctors? >> Matt: Oh I mean, that's not even a question. Absolutely.
>> Do you think banks, traditional banks, will control of the payment systems? >> That's a good one, I haven't thought about-- >> Yeah, I'm not sure that's an AI related thing, maybe more of a block chain thing, but, it's possible. >> Block chain and AI, kind of cousins. >> Yeah, they are, they are actually. >> I fear a world though where we actually end up like WALLE in the movie and everybody's on these like floating chez lounges. >> Yeah lets not go there. >> Eating and drinking. No but I'm just wondering, you talked about, Matt, in terms of the number of, the different types of industries that really can verge in here. Do you see maybe the consumer world with our expectation that we can order anything on Amazon from a thermometer to a pair of glasses to shoes, as driving other industries to kind of follow what we as consumers have come to expect? >> Absolutely no question. I mean that is, consumer drives everything, right? All flash arrays were driven by you have your phone there, right? The consumerization of that device was what drove Toshiba and all the other fad manufacturers to build more NAM flash, which is what commoditized NAM flash, which what brought us faster systems, these things all build on each other, and from a consumer perspective, there are so many things that are inefficient in our world today, right? Like lets just think about your last call center experience. If you're the normal human being-- >> I prefer not to, but okay. >> Yeah you said it, you prefer not to, right? My next comment was going to be, most people's call center experiences aren't that good. 
But what if the call center technology had the ability to analyze your voice and understand your intonation, and your inflection, and that call center employee was being given information to react to what you were saying on the call, such that they either immediately escalated that call without you asking, or they were sent down a decision path, which brought you to a resolution that said that we know that 62% of the time if we offer this person a free month of this, that person is going to view, is going to go away a happy customer, and rate this call 10 out of 10. That is the type of things that's going to improve with voice recognition, and all of the voice analysis, and all this. >> And that really get into how far we can take machine intelligence, the things that machines, or the humans can do, that machines can't, and that list changes every year. The gap gets narrower and narrower, and that's a great example. >> And I think one of the things, going back to your, whether stores'll continue being there or not but, one of the biggest benefits of AI is recommendation, right? So you can consider it userous maybe, or on the other hand it's great service, where a lot of, something like an Amazon is able to say, I've learned about you, I've learned about what people are looking for, and you're asking for this, but I would suggest something else, and you look at that and you go, "Yeah, that's exactly what I'm looking for". I think that's really where, in the sales cycle, that's really where it gets up there. >> Can machines stop fake news? That's what I want to know. >> Probably. >> Lisa: To be continued. >> People are working on that. >> They are. There's a lot, I mean-- >> That's a big use case. >> It is not a solved problem, but there's a lot of energy going into that. >> I'd take that before I take the floating WALLE chez lounges, right? Deal. >> What if it was just for you? 
What if it was just one floating chaise lounge, and it wasn't everybody? Then it would be all right, right? >> Not for me. (both laughing) >> Matt and Rob, thanks so much for stopping by and sharing some of your insights. We should have a great rest of the day at the conference. >> Great, thank you very much. Thanks for having us. >> For Dave Vellante, I'm Lisa Martin. We're live at Pure Storage Accelerate 2018 at the Bill Graham Civic Auditorium. Stick around, we'll be right back after a break with our next guest. (electronic music)
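The call-center scenario Matt described, where voice analysis scores a caller's intonation and the agent is steered toward an escalation or a retention offer, can be sketched in a few lines. This is a toy illustration, not any vendor's product; the frustration score, tenure threshold, and action names are all invented for the example, and a real system would get the score from a trained voice-analysis model.

```python
# Toy routing rule for the call-center example above.
# Assumes an upstream voice-analysis model has already produced a
# frustration score in [0, 1]; thresholds here are purely illustrative.

def route_call(frustration: float, months_as_customer: int) -> str:
    """Pick a next action for a live call based on analyzed voice signals."""
    if frustration > 0.8:
        return "escalate"          # hand off to a senior agent immediately
    if frustration > 0.5 and months_as_customer >= 12:
        return "offer_free_month"  # retention offer resolves most such calls
    return "standard_script"

print(route_call(0.9, 3))    # escalate
print(route_call(0.6, 24))   # offer_free_month
print(route_call(0.2, 24))   # standard_script
```

The interesting part in practice is not the rule itself but learning the thresholds, such as the 62% figure Matt cites, from historical call outcomes.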
Pure Storage Convergence File Object promo
>> Welcome to the Convergence of File and Object, a special program made possible by Pure Storage and co-created with theCUBE. We're running what I would call a little miniseries, and we're exploring the convergence of file and object storage. What are the key trends? Why would you want to converge file and object? What are the use cases and architectural considerations, and importantly, what are the business drivers of UFFO, so-called unified fast file and object? In this program, you'll hear from Matt Burr, who is the GM of Pure's FlashBlade business. And then we'll bring in the perspectives of a solutions architect, Garrett Belsner, who's from CDW, and then the analyst angle with Scott Sinclair of the Enterprise Strategy Group, ESG. And then we'll wrap with a really interesting technical conversation with Chris Bond, CB Bond, who is a lead data architect at Micro Focus. And he's got a really cool use case to share with us. So sit back and enjoy the program.
Pure Storage Convergence of File and Object FULL SHOW V1
>> We're running what I would call a little miniseries, and we're exploring the convergence of file and object storage. What are the key trends? Why would you want to converge file and object? What are the use cases and architectural considerations, and importantly, what are the business drivers of UFFO, so-called unified fast file and object? In this program, you'll hear from Matt Burr, who is the GM of Pure's FlashBlade business. Then we'll bring in the perspectives of a solutions architect, Garrett Belsner, who's from CDW, and the analyst angle with Scott Sinclair of the Enterprise Strategy Group, ESG; he'll share some cool data on our power panel. And then we'll wrap with a really interesting technical conversation with Chris Bond, CB Bond, who is a lead data architect at Micro Focus, and he's got a really cool use case to share with us. So sit back and enjoy the program. >> From around the globe, it's theCUBE, presenting the Convergence of File and Object, brought to you by Pure Storage. >> We're back with the Convergence of File and Object, a special program made possible by Pure Storage and co-created with theCUBE. So in this series, we're exploring that convergence between file and object storage. We're digging into the trends, the architectures, and some of the use cases for unified fast file and object storage, UFFO. With me is Matt Burr, who's the vice president and general manager of FlashBlade at Pure Storage. Hello, Matt, how you doing? >> I'm doing great. Morning, Dave, how are you? >> Good, thank you. Hey, let's start with a little 101, kind of the basics. What is unified fast file and object? >> Yeah, so look, I think you've got to start with first principles, talking about the rise of unstructured data. When we think about unstructured data, you think about the projections: 80% of data by 2025 is going to be unstructured, whether that's machine-generated data or AI- and ML-type workloads. You start to see this, I don't want to say it's a boom, but it's sort of a renaissance for unstructured data, if you will. We move away from what we've traditionally thought of as general-purpose NAS and file shares to things that focus on fast object, taking advantage of S3, cloud-native applications that need to integrate with applications on site. AI workloads and ML workloads tend to share data across multiple data sets, and you really need to have a platform that can deliver both highly performant and scalable fast file and object from one system. >> So talk a little bit more about some of the drivers that bring forth that need to unify file and object. >> Yeah, there's a real challenge in managing bespoke infrastructures, or architectures, around general-purpose NAS and DAS, et cetera. If you think about how an architect looks at an application, they might say, well, okay, I need to have fast DAS storage proximal to the application, but that's going to require a tremendous amount of DAS, which is a tremendous amount of drives, right? Hard drives are historically pretty unwieldy to manage, because you're replacing them relatively consistently at multi-petabyte scale. So you start to look at things like the complexity of DAS, you start to look at the complexity of general-purpose NAS, and you start to look at, quite frankly, something a lot of people don't really want to talk about anymore, but actual data center space. Consolidation matters: the ability to take something that's the size of a microwave, like a modern FlashBlade or a modern UFFO device, and replace something that might be the size of three or four or five refrigerators. >> So, Matt, why is now the right time for this? I mean, for years nobody really paid much attention to object; S3 obviously changed that, of course. Most of the world's data is still stored in file formats, and you get there with NFS or SMB. Why is now the time to think about unifying object and file? >> Well, because we're moving to things like a contactless society. The things that we're going to do are going to require a tremendous amount more compute power, network, and, quite frankly, storage throughput. I can give you two real primary examples here. Warehouses are being taken over by robots, if you will. It's not a war; it's a sort of friendly advancement in how do I store a box in a warehouse. We have a customer who focuses on large big-box distribution warehousing, and a box that carried an object two weeks ago might have a different box size two weeks later. Well, that robot needs to know where the space is in the warehouse in order to put it, but it also needs to be able to process, hey, I don't want to put the thing that I'm going to access the most in the back of the warehouse; I'm going to put that thing in the front of the warehouse. All of those types of data, in real time, and you can think of the robot as almost an edge device, it's processing unstructured data in real time, as object. So it's the emergence of these new types of workloads. And I'll give you the opposite example; the other end of the spectrum is ransomware. Today we'll talk to customers, and they'll say quite commonly, hey, if anybody can sell me a backup device, I need something that can restore quickly. If you had the ability to restore something at 270 terabytes an hour, or 250 terabytes an hour, that's much faster when you're dealing with a ransomware attack; you want to get your data back quickly. >> So I was going to ask you about that later, but since you brought it up, what is the right, I guess call it architecture, for ransomware? Explain how unified object and
file fit in. I get the fast recovery, but how would you recommend a customer go about architecting a ransomware-proof system? >> Yeah, well, with FlashBlade and with FlashArray there's an actual feature called SafeMode, and SafeMode actually protects the snapshots and the data from being a part of the ransomware event. So if you're in a ransomware situation like this, you're able to leverage SafeMode. What happens in a ransomware attack is you can't get access to your data, and so the bad guy, the perpetrator, is basically saying, hey, I'm not going to give you access to your data until you pay me X in Bitcoin, or whatever it might be. With SafeMode, those snapshots are actually protected outside of the ransomware blast zone, and you can bring back those snapshots. Because what's your alternative? If you're not doing something like that, your alternative is either to pay and unlock your data, or to start restoring from tape or slow disk, which could take you days or weeks to get your data back. So leveraging SafeMode, in either the FlashArray or the FlashBlade product, is a great way to go about architecting against ransomware. >> I've got to put on my, I'm thinking like a customer now. So SafeMode, that's an immutable mode, right? Can't change the data. Can an administrator go in and change that mode? Can you turn it off? Do I still need an air gap, for example? What would you recommend there? >> Yeah, so there are still RBAC, role-based access control, policies around who can access that SafeMode and who can't. >> Right, okay. Anyway, subject for a different day. I want to actually bring up, if you don't object, a topic that I think used to be really front and center and is now becoming front and center again. I mean, Wikibon just produced a research note forecasting the future of flash and hard drives, and those of you who follow us know we've done this for quite some time. If you could bring up the chart here, you see this happening again. We originally forecast the death of, quote unquote, high-spin-speed disk drives, which is kind of an oxymoron, but you can see here on this chart: the hard disk has had a magnificent journey, but it peaked in manufacturing volume in 2010, and the reason that is so important is that volumes now are steadily dropping, you can see that. And we use Wright's Law to explain why this is a problem. Wright's Law essentially says that as your cumulative manufacturing volume doubles, your cost to manufacture declines by a constant percentage. Now, I won't go into too much detail on that, but suffice it to say that flash volumes are growing very rapidly and HDD volumes aren't, and so flash, because of consumer volumes, can take advantage of Wright's Law and that constant reduction, and that's what's really important for the next generation, which is always more expensive to build. And so this kind of marks the beginning of the end. Matt, what do you think? What does the future hold for spinning disk, in your view? >> Well, I can give you the answer on two levels. On a personal level, it's why I come to work every day: the eradication, or extinction, of an inefficient thing. I like to say that inefficiency is the bane of my existence, and I think hard drives are largely inefficient. I'm willing to accept the long-standing argument that we've seen this transition in block, and we're starting to see it repeat itself in unstructured data, and I'm going to accept the argument that cost is a vector here, and it most certainly is. HDDs have been considerably cheaper than flash storage, even to this day, up to this point. But we're starting to approach the point where you reach about a 3x differentiator between the cost of an HDD and an SSD, and that really is the point in time when you begin to pick up a lot of volume and velocity. So that tends to map directly to what you're seeing here, which is a slow decline, which I think is going to become even more rapid, probably starting around next year, where you start to see SSDs really replacing HDDs at a much more rapid clip, particularly on the unstructured data side, and it's largely around cost. The workloads that we talked about, robots in warehouses, or other types of advanced machine learning and artificial intelligence applications and workflows, require a degree of performance that a hard drive just can't deliver. We are seeing the creative, innovative disruption of an entire industry right before our eyes. It's a fun thing to live through. >> Yeah, and we would agree. The premise there is that it doesn't even have to be less expensive. We think it will be, by the second half, or early second half, of this decade, but even at around a 3x delta, the value of SSD relative to spinning disk is going to overwhelm. Just like with your laptop: it got to the point where you said, why would I ever have a spinning disk in my laptop? We see the same thing happening here. And we're talking about raw capacity; put in compression and dedupe and everything else that you really can't do with spinning disk, because of the performance issues, but can do with flash. Okay, let's come back to UFFO. Can we dig into the challenges, specifically, that this solves for customers? Give us some examples. >> Yeah, so if we think about the examples, the robotic one, I think, is the marker for kind of the modern side of what we see here. But what we're seeing from a trend perspective, and not everybody's deploying robots, right? There are many companies that aren't going to be in the robotics business, or even thinking about future-oriented things. But what they are doing is, greenfield applications are being built on object, generally not on file and not on block. And so the rise of object as, let's call it, the next great protocol for modern workloads: this is that modern application coming to the forefront, and that could be anything from financial institutions right down through, we've even seen it in oil and gas, and we're also seeing it across healthcare. So as companies, as industries, take this opportunity to modernize, they're modernizing not on things that leverage archaic disk technology; they're really focusing on object. But they still have file workflows that they need to be able to support. So having the ability to deliver those things from one device, in a capacity orientation or a performance orientation, while at the same time dramatically simplifying the overall administration of your environment, both physically and non-physically, is a key driver. >> So the great thing about object is it's simple. It's kind of a get-put metaphor, it scales out, because it's got metadata associated with the data, and it's cheap. The drawback is you don't necessarily associate it with high performance, and, as well, most applications don't speak in that language; they speak in the language of file, or, as you mentioned, block. So I see real opportunities here. If I have some data that's not necessarily frequently accessed every day, but, whether end of quarter or for machine learning, I want to apply some AI to that data, I want to bring it in and then apply a file format for performance reasons. Is that right? Maybe you could unpack that a little bit. >> Yeah, so I think you described it well, but I don't think object necessarily has to be slow. You brought up a good point with metadata: being able to scale to billions of objects is of value. People do traditionally associate object with slow, but it's not necessarily slow anymore. We did a sort of unofficial survey of our customers and our employee base, and when people described object, they thought of it as, like, law firms storing a Word doc. I think there's a lack of understanding, or a misnomer, around what modern object has become. And performant object, particularly at scale, when we're talking about billions of objects, that's the next frontier. Is it at pace, performance-wise, with the other protocols? No, but it's making leaps and bounds. >> So talk a little bit more about some of the verticals that you see. When I think of financial services, I think transaction processing, but of course they have tons of unstructured data. Are there any patterns you're seeing by vertical market? >> We're not, and that's the interesting thing. As a company with a block heritage, or a block DNA, those patterns were pretty easy to spot. There were a certain number of databases that you really needed to
support: Oracle, SQL, some Postgres work, et cetera, then kind of the modern databases around Cassandra and things like that. You knew that there were going to be VMware environments. You could sort of see the trends and where things were going. Unstructured data is such a broader, horizontal thing. Inside of oil and gas, for example, you have specific applications and bespoke infrastructures for those applications; inside of media and entertainment, the same thing. The trend we're seeing, the commonality we're seeing, is the modernization of object as a starting point for all the net-new workloads within those industry verticals. That's the most common request we see: what's your object roadmap, what's your object strategy, where do you think object is going? So there's no single path; it's really just a wide-open field in front of us, with common requests across all industries. >> So the amazing thing about Pure, just as a kind of quasi-armchair historian of the industry: Pure was really the only company in many, many years to be able to achieve escape velocity, break through a billion dollars. I mean, 3PAR couldn't do it, Isilon couldn't do it, Compellent couldn't do it, I could go on. But Pure was able to achieve that as an independent company. And so you become a leader; you look at the Gartner Magic Quadrant, you're a leader in there. I mean, if you've made it this far, you've got to have some chops. And so, of course, it's very competitive; there are a number of other storage suppliers that have announced products that unify object and file. So I'm interested in how Pure differentiates. Why Pure? >> It's a great question, and it's one that, having been a longtime Puritan, I take pride in answering. And it's actually a really simple answer: it's business model, innovation, and technology. The technology that goes behind how we do what we do, and I don't mean just the product; innovation is product, but it's also having a better support model, for example. Or, on the business model side, Evergreen Storage, where we look at your relationship to us as a subscription. We're going to take the thing that you've had, and we're going to modernize that thing in place over time, such that you're not rebuying that same terabyte or petabyte of storage that you've paid for over time. So, sort of three legs of the stool that have made Pure clearly differentiated, and I think the market has recognized that. You're right, it's hard to break through to a billion dollars, but I look forward to the day that we have two billion-dollar products, and I think with that rise in unstructured data, growing to 80% by 2025, and the massive transition that you guys have noted in your HDD slide, it's a huge opportunity for us on the unstructured data side of the house. >> You know, the other thing I'd add, Matt, and I've talked to Coz about this, is it's simplicity first. I've asked, why don't you do this, why don't you do that, and the answer is always the same: that adds complexity, and we put simplicity for the customer ahead of everything else. I think that's served you very, very well. What about the economics of unified file and object? If you bring in additional value, presumably there's a cost to that, but there's got to be a business case behind it. What kind of impact have you seen with customers? >> Yeah, I'll go back to something I mentioned earlier, which is just the reclamation of floor space and power and cooling. People want to search for the sexier element, if you will, when it comes to looking at how you derive value from something, but the reality is, if you're reducing your power consumption by a material percentage, power bills matter in big data centers. Customers are typically facing a paradigm of, well, I want to go to the cloud, but the cloud's ending up more expensive than I thought it was going to be; or, I figured out what I can use in the cloud, I thought it was going to be everything, but it's not going to be everything, so hybrid's where we're landing, but I want to be out of the data center business, and I don't want a team of 20 storage people to administer my storage. So there's this very tangible value around, hey, if I could manage multiple petabytes with one full-time engineer, because the system, to your and Coz's point, was radically simpler to administer and didn't require someone to be running around swapping drives all the time, would that be a value? The answer is yes, 100% of the time. And then you start to look at the UFFO side from a product perspective: if I have to manage a bespoke environment for this application, and a bespoke environment for that application, and another bespoke environment for another application, I'm managing four different things. And can I actually share data across those four different things? There are ways to share data, but for most customers it just gets too complex. How do you even know what your gold master copy of the data is if you have it in four different places, or you try to have it in four different places, and it's four different siloed infrastructures? So when you get to how you measure value in UFFO, it's actually being able to have all of that data concentrated in one place, so that you can share it from application to application. >> Got it. We've got a couple minutes left; I'm interested in the update on FlashBlade generally, but I also have a specific question. Look, getting file right is hard enough; you just announced SMB support for FlashBlade. I'm interested in how that fits in. I think it's kind of obvious, with file and object converging, but give us the update on FlashBlade, and maybe you could address that specific question. >> Yeah, so we're tremendously excited about the growth of FlashBlade. We found workloads we never expected to find; the rapid-restore workload was one that was actually brought to us by a customer, and it has become one of our top two, three, four workloads. So we're really happy with the trend we've seen, and, mapping back to thinking about HDDs and SSDs, we're well on a path to building a billion-dollar business here, so we're very excited about that. But to your point, you don't just snap your fingers and get there. We've learned that doing file and object is harder than block, because there are more things you have to go do. For one, you're basically focused on three protocols: SMB, NFS, and S3, not necessarily in that order. To your point about SMB, we are on the path to releasing full native SMB support in the system. That will allow us to service customers where we have a limitation today: they'll have an SMB portion of their NFS workflow, and we do great on the NFS side, but we didn't have the ability to plug into the SMB component of their workflow. So that's going to open up a lot of opportunity for us on that front.
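The "three protocols, one system" idea, the same data served over SMB, NFS, and S3, can be modeled in miniature. The following is a toy sketch, not FlashBlade's implementation: a single backing namespace exposed through both an object-style get/put interface and a file-path-style interface, so a value written one way can be read back the other. All names here are invented for illustration.

```python
# Toy model of one backing store serving both object (get/put) and
# file-path (read/write) access. Purely illustrative; a real UFFO
# system implements NFS, SMB, and S3 natively against shared storage.

class UnifiedStore:
    def __init__(self):
        self._blobs = {}  # single namespace shared by both interfaces

    # --- object-style interface (think S3 get/put) ---
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

    # --- file-style interface (think NFS/SMB path semantics) ---
    def write_file(self, path: str, data: bytes) -> None:
        self.put(path.lstrip("/"), data)  # map path to object key

    def read_file(self, path: str) -> bytes:
        return self.get(path.lstrip("/"))

store = UnifiedStore()
store.put("datasets/train.csv", b"a,b\n1,2\n")   # written as an object...
print(store.read_file("/datasets/train.csv"))    # ...read back as a file
```

The point of the sketch is the single namespace: there is no copy step between the "object" view and the "file" view, which is what removes the gold-master-copy problem Matt describes.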
um and you know we continue to you know invest significantly across the board in in areas like security which is you know become more than just a hot button you know today security's always been there but it feels like it's blazing hot today um and so you know going through the next couple years we'll be looking at uh you know developing some some um you know pretty material security elements of the product as well so uh well on a path to a billion dollars is the net on that and uh you know we're we're fortunate to have have smb here and we're looking forward to introducing that to to those customers that have you know nfs workloads today with an s p component yeah nice tailwind good tam expansion strategy matt thanks so much really appreciate you coming on the program we appreciate you having us and uh thanks much dave good to see you [Music] okay we're back with the convergence of file and object in a power panel this is a special content program made possible by pure storage and co-created with the cube now in this series what we're doing is we're exploring the coming together of file and object storage trying to understand the trends that are driving this convergence the architectural considerations that users should be aware of and which use cases make the most sense for so-called unified fast file in object storage and with me are three great guests to unpack these issues garrett belsner is the data center solutions architect he's with cdw scott sinclair is a senior analyst at enterprise strategy group he's got deep experience on enterprise storage and brings that independent analyst perspective and matt burr is back with us gentlemen welcome to the program thank you hey scott let me let me start with you uh and get your perspective on what's going on the market with with object the cloud a huge amount of unstructured data out there that lives in files give us your independent view of the trends that you're seeing out there well dave you know where to start i 
mean, surprise, surprise, data is growing. But one of the big things that we've seen is, we've been talking about data growth for, what, decades now, but what's really fascinating, or what's changed, is because of the digital economy, digital business, digital transformation, whatever you call it, now people are not just storing data, they actually have to use it. And so we see this in trends like analytics and artificial intelligence, and what that does is it's just increasing the demand for not only consolidation of massive amounts of storage, which we've seen for a while, but also the demand for incredibly low-latency access to that storage. And I think that's one of the things that we're seeing that's driving this need for convergence, as you put it, of having multiple protocols consolidated onto one platform, but also the need for high-performance access to that data. >> Thank you for that, a great setup. I wrote down, like, three topics that we're going to unpack as a result of that. So, Garrett, let me go to you. Maybe you can give us the perspective of what you see with customers. Is this like a push, where customers are saying, "Hey, listen, I need to converge my file and object"? Or is it more a story where they're saying, "Garrett, I have this problem," and then you see unified file and object as a solution? >> Yeah, I think for us it's taking that consultative approach with our customers and really hearing pain around some of the pipelines, the way that they're going to market with data today, and what are the problems that they're seeing. We're also seeing a lot of the change driven by the software vendors as well. So really being able to support a disaggregated design, where you're not having to upgrade and maintain everything as a single block, has really been a place where we've seen a lot of customers pivot to, where they have more flexibility as they need to maintain larger volumes of data and higher-performance data. Having the ability to do that
separate from compute and cache and those other layers is really critical. >> So, Matt, I wonder if you could follow up on that. Garrett was talking about this disaggregated design, so I like it, you know, distributed cloud, etc., but then we're talking about bringing things together in one place, right? So square that circle. How does this fit in with this hyper-distributed cloud edge that's getting built out? >> Yeah, you know, I could give you the easy answer on that, but I could also pass it back to Garrett, in the sense that, Garrett, maybe it's important to talk about Elastic and Splunk and some of the things that you're seeing in that world. I think you can give a pretty qualified answer to Dave's question, relative to what your customers are seeing. >> Oh, that'd be great, please. >> Yeah, absolutely, no problem at all. So, you know, I think with Splunk kind of moving from its traditional design, its classic design, whatever you want to call it, up into SmartStore, that was one of the first that we saw make that move towards separating object out. And I think a lot of that comes from their own move to the cloud, and updating their code to basically take advantage of object in the cloud. But we're starting to see, with, like, Vertica Eon, for example, Elastic, other folks, taking that same type of approach, where in the past we were building out many 2U servers, and we were jamming them full of SSDs and NVMe drives. That was great, but it doesn't really scale, and it kind of gets into that same problem that we see with hyperconvergence a little bit, where you're always adding something, maybe, that you didn't want to add. So I think, again, being driven by software is really where we're seeing the world open up there. But that whole idea of just having that as a hub and a central place where you
can then leverage that out to other applications, whether that's out to the edge for machine learning or AI applications to take advantage of it, I think that's where that convergence really comes back in. But I think, like Scott mentioned earlier, it's really that folks are now doing things with the data, where before, I think, they were really storing it, trying to figure out, "What are we going to actually do with it when we need to do something with it?" So this is making it possible. >> Yeah, and Dave, if I could just tack on to the end of Garrett's answer there: in particular, Vertica with Eon mode, the ability to leverage sharded subclusters, gives you sort of an advantage in terms of being able to isolate performance hot spots, and an advantage to that is being able to do that on a FlashBlade, for example. So sharded subclusters allow you to say, "I'm going to give prioritization to this particular element of my application and my data set, but I can still share that data across those subclusters." So as you see Vertica advance with Eon mode, or you see Splunk advance with SmartStore, these are all advancements that are, you know, it's a chicken-and-egg thing. They need faster storage, they need a sort of consolidated data set, and that's what allows these things to drive forward. >> Yeah, so Vertica Eon mode, for those who don't know, is the ability to separate compute and storage and scale independently. I think Vertica, if they're not the only one, they're one of the only ones, I think they might even be the only one, that does that in the cloud and on-prem, and that sort of plays into this distributed nature of this hyper-distributed cloud, as I sometimes call it. And I'm interested in the data pipeline, and I wonder, Scott, if we could talk a little bit about that. Maybe, with unified object and file, I
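As an editorial aside, the subcluster isolation Matt describes can be approximated on the client side with a simple routing layer: each workload class connects only to the nodes of its own subcluster, while every subcluster reads the same communal store. This is a minimal sketch of the idea, not Vertica's API; the hostnames, the subcluster layout, and the commented connection call are all assumptions for illustration.

```python
# Sketch: route workload classes to separate Vertica Eon subclusters so
# that ETL and BI queries don't contend for the same compute, while both
# read the same communal (S3) storage. Hostnames are hypothetical.

SUBCLUSTER_HOSTS = {
    "etl": ["etl-node1.example.com", "etl-node2.example.com"],
    "bi": ["bi-node1.example.com", "bi-node2.example.com"],
}

def hosts_for(workload):
    """Return the nodes a given workload class should connect to."""
    try:
        return SUBCLUSTER_HOSTS[workload]
    except KeyError:
        raise ValueError("unknown workload class: %r" % workload)

# Connecting would then use a Vertica client, e.g. (not executed here):
#   import vertica_python
#   conn = vertica_python.connect(host=hosts_for("bi")[0], port=5433,
#                                 user="dbadmin", database="analytics")
```

The point of the pattern is that "isolating a hot spot" becomes a client-side routing decision rather than a data migration, because all subclusters see the same data.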
mean, I'm envisioning this distributed mesh, and then UFFO is sort of a node on that that I can tap when I need it. But, Scott, what are you seeing as the state of infrastructure as it relates to the data pipeline, and the trends there? >> Yeah, absolutely, Dave. So when I think data pipeline, I immediately gravitate to analytics or machine learning initiatives, right? And so one of the big things we see, and it's an interesting trend, is we continue to see increased investment in AI, increased interest. And as companies get started, they think, "Okay, well, what does that mean? Well, I've got to go hire a data scientist. Okay, well, that data scientist probably needs some infrastructure." And what often happens in these environments is it ends up being a bespoke environment or a one-off environment. And then, over time, organizations run into challenges, and one of the big challenges is the data science team, or people whose jobs are outside of IT, spend way too much time trying to get the infrastructure to keep up with their demands, predominantly around data performance. So one of the ways organizations, especially those that have artificial intelligence workloads in production, and we found this in our research, have started mitigating that is by deploying flash all across the data pipeline. >> We have data on this. Sorry to interrupt, but if you could bring up that chart, that would be great. So take us through this, Scott, and share with us what we're looking at here. >> Yeah, absolutely. So, Dave, I'm glad you brought this up. We did this study, I want to say, late last year. One of the things we looked at was across artificial intelligence environments. Now, one thing that you're not seeing on this slide is we went through and asked all around the data pipeline, and we saw flash everywhere. But I thought this was really telling, because this is around data lakes, and when many people think
about the idea of a data lake, they think about it as a repository; it's a place where you keep maybe cold data. And what we see here is, especially within production environments, a pervasive use of flash storage. I think 69% of organizations are saying their data lake is mostly flash or all-flash, and I think we have zero percent that don't have any flash in that environment. So organizations are finding out that flash is an essential technology to allow them to harness the value of their data. >> So, Garrett, and then Matt, I wonder if you could chime in as well. We talk about digital transformation, and I sometimes call it, you know, the COVID forced march to digital transformation, and I'm curious as to your perspective on things like machine learning and the adoption. And Scott, you may have a perspective on this as well. You know, we had to pivot, we had to get laptops, we had to secure the endpoints, you know, and VDI, those became super-high priorities. What happened to, you know, injecting AI into my applications, and machine learning? Did that go on the back burner? Was that accelerated along with the need to digitally transform? Garrett, I wonder if you could share with us what you saw with customers last year. >> Yeah, I mean, I think we definitely saw an acceleration. I think folks in my market are still kind of figuring out how they inject that into more of a widely distributed business use case. But again, this data hub, and allowing folks to now take advantage of this data that they've had in these data lakes for a long time, I agree with Scott. I mean, many of the data lakes that we have were somewhat flash-accelerated, but they were typically really made up of, you know, large-capacity, slower, spinning nearline drives, accelerated with some flash. But I'm really starting to see folks now look at some of those older Hadoop implementations and really leverage new ways to look at how they consume data, and many of those redesign customers are coming to us
wanting to look at all-flash solutions. So we're definitely seeing it. We're seeing an acceleration towards folks trying to figure out how to actually use it in more of a business sense now, where before I feel it was a little bit more skunkworks, kind of people dealing with it in a much smaller situation, maybe in the executive offices, trying to do some testing and things. >> Scott, you're nodding away. Anything you can add in here? >> Yeah, so first off, it's great to get that confirmation that the stuff we're seeing in our research, Garrett's seeing out in the field and in the real world. But as it relates to really the past year, it's been really fascinating. So one of the things we study at ESG is IT buying intentions. What are the initiatives that companies plan to invest in? And at the beginning of 2020, we saw a heavy interest in machine learning initiatives. Then you transition to the middle of 2020, in the midst of COVID: some organizations continued on that path, but a lot of them had to pivot, right? How do we get laptops to everyone? How do we continue business in this new world? Well, now, as we enter into 2021, and hopefully we're coming out of this, you know, the pandemic era, we're getting into a world where organizations are pivoting back towards these strategic investments around "How do I maximize the usage of data?", and actually accelerating those, because they've seen the importance of digital business initiatives over the past year. >> Yeah, Matt, I mean, when we exited 2019, we saw a narrowing of experimentation, and our premise was, you know, that organizations were going to start operationalizing all their digital transformation experiments, and then we had a 10-month petri dish on digital. So what are you seeing in this regard? >> A 10-month petri dish is an interesting way to describe it. You know, there's another candidate for pivot in there, around
ransomware as well, right? You know, security entered into the mix, which took people's attention away from some of this as well. But look, I'd like to bring this up just a level or two, because what we're actually talking about here is progress, right? And progress is an inevitability. Whether you believe that it's by 2025, or you think it's 2035, or 2050, it doesn't matter: we're on a forced march to the eradication of disk. And that is happening in many ways due to some of the things that Garrett was referring to, and what Scott was referring to, in terms of what customers' demands are for how they're going to actually leverage the data that they have. And that brings me to kind of my final point on this, which is: we see customers in three phases. There's the first phase, where they say, "Hey, I have this large data store, and I know there's value in there; I don't know how to get to it." Or, "I have this large data store, and I've started a project to get value out of it, and we failed." Those could be customers that marched down the Hadoop path early on, and they got some value out of it, but they realized that, you know, HDFS wasn't going to be a modern protocol going forward, for any number of reasons. You know, the first being: hey, if I have gold.master, how do I know that gold.4 is consistent with my gold.master? So data consistency matters. And then you have the sort of third group that says, "I have these large data sets, I know how to extract value from them, and I'm already on to the Verticas, the Elastics, you know, the Splunks, etc." I think that latter group are the folks that kept their projects going, because they were already extracting value from them. The first two groups, we're seeing sort of saying the second half of this year is when we're going to begin really picking up on these types of initiatives again. >> Well, thank you,
Matt, by the way, for hitting the escape key, because I think value from data really is what this is all about, and there are some real blockers there that I kind of want to talk about. You mentioned HDFS. I mean, we were very excited, of course, in the early days of Hadoop; many of the concepts were profound, but at the end of the day, it was too complicated. We've got these hyper-specialized roles that are, you know, serving the business, but it still takes too long; it's too hard to get value from data. And one of the blockers is infrastructure. The complexity of that infrastructure really needs to be abstracted, taken up a level. We're starting to see this in cloud, where you're seeing some of those abstraction layers being built by some of the cloud vendors. But more importantly, a lot of the vendors, like Pure, are saying, "Hey, we can do that heavy lifting for you, and we have expertise in engineering to do cloud-native." So I'm wondering what you guys see, maybe Garrett, you could start us off, and others chime in, as some of the blockers to getting value from data, and how we're going to address those in the coming decade. >> Yeah, I mean, I think part of it we're solving here, obviously, with Pure bringing flash to a market that traditionally was utilizing much slower media. You know, the other thing that I see that's very nice with FlashBlade, for example, is the ability to kind of grow things, once you get it set up, a blade at a time. I mean, from just kind of a more simplistic approach to this: a lot of these teams don't have big budgets, and being able to break them down into almost a blade-type chunk, I think, has really allowed folks to get more projects and things off the ground, because they don't have to buy a full, expensive system to run these projects. So that's helped a lot. I think the wider use cases have helped a lot. So Matt mentioned
ransomware. You know, using SafeMode as a place to help with ransomware has been a really big growth spot for us; we've got a lot of customers very interested and excited about that. And the other thing that I would say is bringing DevOps into data is another thing that we're seeing. So kind of that push towards DataOps, and really using automation and infrastructure-as-code as a way to now drive things through the system, the way that we've seen with automation through DevOps, is really an area where we're seeing a ton of growth, from a services perspective. >> Guys, any other thoughts on that? I mean, I'll tee it up: we are seeing some bleeding edge, which is somewhat counterintuitive, especially from a cost standpoint, organizational changes at some companies. Think of some of the internet companies that do music, for instance, and are adding podcasts, etc.; those are different data products. We're seeing them actually reorganize their data architectures to make them more distributed, and actually put the domain heads, the business heads, in charge of the data and the data pipeline. And that is maybe less efficient, but it's, again, some of this bleeding edge. What else are you guys seeing out there that might be some harbingers of the next decade? >> I'll go first. You know, I think, specific to the construct that you threw out, Dave, one of the things that we're seeing is, you know, the application owner, maybe it's the DevOps person, but maybe it's the application owner through the DevOps person, they're becoming more technical in their understanding of how infrastructure interfaces with their application. I think what we're seeing on the FlashBlade side is we're having a lot more conversations with application people than just IT people. It doesn't mean that the IT people aren't there; the IT people are still there, for sure. They have to deliver the service,
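As an aside, Garrett's point about DataOps and infrastructure-as-code can be sketched as a declarative plan step: state the storage a pipeline needs, then compute the actions required to converge on that state. The bucket names and the commented boto3 call are hypothetical; this illustrates the pattern, not a Pure-specific tool.

```python
# Sketch: treating storage provisioning as code. Declare the buckets a
# data pipeline needs, then diff against actual state to get a plan.
# Bucket names are hypothetical; applying the plan against a real
# S3-compatible endpoint is shown only as a comment.

DESIRED_BUCKETS = {"raw-landing", "curated", "ml-features"}

def plan(existing, desired=frozenset(DESIRED_BUCKETS)):
    """Diff actual state against desired state, DataOps-style."""
    return {
        "create": sorted(set(desired) - set(existing)),
        "leave": sorted(set(desired) & set(existing)),
    }

# Applying the plan might use boto3 against the array's S3 endpoint:
#   s3 = boto3.client("s3", endpoint_url="https://flashblade.example.com")
#   for name in plan(existing_buckets)["create"]:
#       s3.create_bucket(Bucket=name)
```

The design choice worth noting is idempotence: running the same plan twice creates nothing new, which is what makes storage changes safe to drive through a CI pipeline.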
etc. But, you know, the days of IT building up a catalog of services, and a business owner subscribing to one of those services, picking, you know, whatever sort of fits their need, I think that's the construct that changes going forward. The application owner is becoming much more prescriptive about what they want the infrastructure to fit, how they want the infrastructure to fit into their application, and that's a big change. And for folks like Garrett and CDW, you know, they do a good job with this, being able to get to the application owner and bring those two sides together; there's a tremendous amount of value there. For us, it's been a little bit of a retooling. We've traditionally sold to the IT side of the house, and we've had to teach ourselves how to go talk the language of applications. So I think you pointed out a good construct there, and that application owner playing a much bigger role in what they're expecting from the performance of IT infrastructure, I think, is a key change. >> Interesting. I mean, that definitely is a trend that's put you guys closer to the business, where the infrastructure team is serving the business, as opposed to, sometimes I talk to data experts and they're frustrated, especially data owners or data product builders, who feel like they have to beg the data pipeline team to get, you know, new data sources or get data out. How about the edge? You know, maybe Scott, you can kick us off. I mean, we're seeing the emergence of edge use cases, AI inferencing at the edge, a lot of data at the edge. What are you seeing there, and how does this unified object, I'll bring us back to that, and file fit? >> Wow, Dave, how much time do we have? >> Two minutes. (laughing) First of all, Scott, why don't you just tell everybody what the edge is? >> Yeah,
you got it figured out, all right? How much time do you have, Matt? But at the end of the day, that's a great question, right? If you take a step back, I think it comes back to something you mentioned: it's about extracting value from data. And what that means is, when you extract value from data, as Matt pointed out, the influencers or the users of data, the application owners, have more power, because they're driving revenue now. And so what that means is, from an IT standpoint, it's not just, "Hey, here are the services you get; use them or lose them, and don't throw a fit." It is, "No, I have to adapt. I have to follow what my application owners need." Now, when you bring that back to the edge, what it means is that data is not localized to the data center. I mean, we just went through a nearly 12-month period where the entire workforce for most of the companies in this country went distributed, and business continued. So if business is distributed, data is distributed: that means in the data center, that means at the edge, that means in the cloud, that means in tons of places. And what it also means is you have to be able to extract and utilize data anywhere it may be. And I think that's something that we're going to continue to see. And I think it comes back to, you know, if you think about key characteristics: we've talked about things like performance and scale for years, but we need to start rethinking them, because on one hand, we need to get performance everywhere, but also, in terms of scale, and this ties back to some of the other initiatives in getting value from data, it's something I call the massive success problem. One of the things we see, especially with workloads like machine learning, is businesses find success with them, and as soon as they do, they say, "Well, I need about 20 of these projects." Now, all of a sudden, that overburdens IT organizations, especially across
core and edge and cloud environments. And so, when you look at environments, the ability to meet performance and scale demands, wherever the data needs to be, is something that's really important. >> You know, Dave, I'd like to just tie together two things that I heard from Scott and Garrett that I think are important, and it's around this concept of scale. Some of us are old enough to remember the day when a 10-terabyte blast radius was too big a blast radius for people to take on, or a terabyte of storage was considered to be an exemplary budget environment, right? Now we sort of think of terabytes kind of like we used to think of gigabytes, in some ways. Petabyte, you don't have to explain to anybody what a petabyte is anymore. And what's on the horizon, and it's not far, are exabyte-type data set workloads. And you start to think about what could be in that exabyte of data. We've talked about how you extract that value; we've talked about sort of how you start. But if the scale is big, not everybody's going to start at a petabyte or an exabyte. To Garrett's point, the ability to start small and grow into these projects, I think, is a really fundamental concept here, because you're not going to just go, "I'm going to kick off a five-petabyte project." Whether you do that on disk or flash, it's going to be expensive, right? But if you could start at a couple hundred terabytes, not just as a proof of concept, but as something that you know you could get predictable value out of, then you could say, "Hey, this either scales linearly or non-linearly in a way that I can then map my investments to how I can dig deeper into this." That's how these successful projects are going to start, because for the people that are starting with these very large, expansive greenfield projects at multi-petabyte scale,
it's going to be hard to realize near-term value. >> Excellent. We've got to wrap, but, Garrett, I wonder if you could close. When you look forward and talk to customers, do you see this unification of file and object? Is it an evolutionary trend? Is it something that is going to be a lever that customers use? How do you see it evolving over the next two, three years and beyond? >> Yeah, I mean, I think, from our perspective, just from what we're seeing from the numbers within the market, the amount of growth that's happening with unstructured data is really just starting to finally hit this data deluge, or whatever you want to call it, that we've been talking about for so many years. It really does seem to now be becoming true, as we start to see things scale out and folks settle into, "Okay, I'm going to use the cloud to start, and maybe train my models, but now I'm going to get it back on-prem," because of latency or security or whatever the decision points are there. This is something that is not going to slow down, and I think, you know, folks like Pure having the tools that they give us to use and bring to market with our customers is really key and critical for us. So I see it as a huge growth area and a big focus for us moving forward. >> Guys, great job unpacking a topic that, you know, it's covered a little bit, but I think we covered some ground that is new. And so thank you so much for those insights and that data. Really appreciate your time. >> Thanks, Dave. >> Thanks. >> Yeah, thanks, Dave. >> Okay, and thank you for watching The Convergence of File and Object. Keep it right there; we're right back after this short break.
(Music) >> Okay, now we're going to get the customer perspective on object, and we'll talk about the convergence of file and object, but really focusing on the object piece. This is a content program that's being made possible by Pure Storage, and it's co-created with theCUBE. Christopher "CB" Bohn is here. He's a lead architect for the Micro Focus enterprise data warehouse and principal data engineer at Micro Focus. CB, welcome. Good to see you. >> Thanks, Dave. Good to be here. >> So tell us more about your role at Micro Focus. It's a pan-Micro Focus role. Of course, we know the company is a multinational software firm, and it acquired the software assets of HP, of course, including Vertica. Tell us where you fit. >> Yeah, so Micro Focus, like I said, is a worldwide company that sells a lot of software products all over the place, to governments and so forth. And it also grows often by acquiring other companies, so there is the problem of integrating new companies and their data. And so what's happened over the years is that they've had a number of different discrete data systems, so you've got this data spread all over the place, and they've never been able to get a full, complete introspection on the entire business because of that. So my role was: come in, design a central data repository, an enterprise data warehouse, that all reporting could be generated against. And so that's what we're doing, and we selected Vertica as the EDW system and Pure Storage FlashBlade as the communal repository. >> Okay, so you obviously had experience with Vertica in your previous role, so it's not like you were starting from scratch. But paint a picture of what life was like before you embarked on this sort of consolidated approach to your data warehouse. Was it just disparate data all over the place, a lot of M&A going on? Where did the data live? >> Right, so again, the data was all over the place, including under people's desks, in just dedicated, you know, their own
private SQL Servers. A lot of Micro Focus runs on SQL Server, which has pros and cons, because that's a great transactional database, but it's not really good for analytics, in my opinion. But a lot of stuff was running on that. They had one Vertica instance that was doing some select reporting; it wasn't a very powerful system, and it was what they call Vertica Enterprise mode, where you had dedicated nodes, which had the compute and storage in the same locus on each server. >> Okay, so Vertica Eon mode is a whole new world, because it separates compute from storage. You mentioned Eon mode, and the ability to scale storage and compute independently. >> We wanted to have the analytics OLAP stuff close to the OLTP stuff, right? So that's why they're co-located very close to each other. And what's nice about this situation is that these S3 objects, it's an S3 object store on the Pure FlashBlade, we could copy those over, if we needed to, to AWS, and we could spin up a version of Vertica there and keep going. It's like a tertiary DR strategy, because we're actually setting up a second FlashBlade-Vertica system, geo-located elsewhere, for backup. And we can get into it, if you want, to talk about how the latest version of the Pure software for the FlashBlade allows synchronization of those FlashBlades across network boundaries, which is really nice, because if, you know, a giant sinkhole opens up under our colo facility and we lose that thing, then we just have to switch the DNS, and we're back in business off the DR. And then, if that one were to go, we could copy those objects over to AWS and be up and running there. So we're feeling pretty confident about being able to weather whatever comes along. >> So you're using the Pure FlashBlade as an object store. Most people think, "Oh, object: simple, but slow." Not the case for you. Is that right? >> Not the case at all. It's ripping. Well, you have to understand about Vertica
and the way it stores data. It stores data in what they call storage containers, and those are immutable on disk, whether it's on AWS or if you had an Enterprise mode Vertica. If you do an update or delete, it actually has to go and retrieve that storage container from disk, and it destroys it and rebuilds it, okay? Which is why you want to avoid updates and deletes with Vertica, because the way it gets its speed is by sorting and ordering and encoding the data on disk, so it can read it really fast. But if you do an operation where you're deleting or updating a record in the middle of that, then you've got to rebuild that entire thing. So that actually matches up really well with S3 object storage, because it's kind of the same way: it gets destroyed and rebuilt, too. So that matches up very well with Vertica, and we were able to design this system so that it's append-only. Now, we had some reports that were running in SQL Server, okay, which were taking seven days. So we moved that to Vertica from SQL Server, and we rewrote the queries, which had been written in T-SQL with a bunch of loops and so forth. And, this is amazing: it went from seven days to two seconds to generate this report, which has tremendous value to the company, because it would have this long cycle of seven days to get a new introspection into what they call their knowledge base, and now, all of a sudden, it's almost on demand: two seconds to generate it. That's great, and that's because of the way the data is stored. And the S3, you asked about, you know, is it slow? Well, not in that context, because what happens with Vertica Eon mode is that, when you set up your compute nodes, they have local storage also, which is called the depot. It's kind of a cache, okay? So the data will be drawn from the flash and cached locally, and it was thought, when they designed that, "Oh, you know, that'll cut down on the latency." But it turns
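As an aside, the append-only design CB describes can be illustrated with a toy version of the pattern: never update a row in place, append a new version and resolve the current one at read time. This sketch uses stdlib sqlite3 purely for demonstration, and the table and keys are made up; in his system the same idea is what keeps Vertica's immutable storage containers from being rewritten.

```python
# Sketch: the append-only pattern. An "update" is just another appended
# row, so nothing on disk is ever destroyed and rebuilt; readers resolve
# the latest version at query time.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ledger (key TEXT, value TEXT, version INTEGER)")

def append(key, value, version):
    # Appending instead of updating leaves existing rows untouched.
    db.execute("INSERT INTO ledger VALUES (?, ?, ?)", (key, value, version))

def current(key):
    # Resolve the newest version on read instead of mutating in place.
    row = db.execute(
        "SELECT value FROM ledger WHERE key = ? "
        "ORDER BY version DESC LIMIT 1", (key,)).fetchone()
    return row[0] if row else None

append("order-42", "pending", 1)
append("order-42", "shipped", 2)   # supersedes version 1, never overwrites
```

The trade-off is that history accumulates, which is why CB later describes the staging layer as "essentially a log of all the transactions"; that log is a feature, not a bug, of the design.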
out that if you have your compute nodes close, meaning minimal hops to the FlashBlade, you can actually tell Vertica, "Don't even bother caching that stuff; just read it directly, on the fly, from the FlashBlade," and the performance is still really good. It depends on your situation, but I know, for example, a major telecom company that uses the same topology as we're talking about here; they did the same thing. They just dropped the cache, because the FlashBlade was able to deliver the data fast enough. >> So you're talking about speed-of-light issues, and just the overhead of switching infrastructure, that gets eliminated, and so, as a result, you can go directly to the storage array? >> That's correct, yeah. It's fast enough that it's almost as if it's local to the compute node. But every situation is different, depending on your needs. If you've got, like, a few tables that are heavily used, then, yeah, put them in the cache, because that'll probably be a little bit faster. But if you have a lot of ad hoc queries going on, you may exceed the storage of the local cache, and then you're better off having it just read directly from the FlashBlade. >> Got it. Look, Pure's a fit. I mean, I sound like a fanboy, but Pure is all about simplicity. So is object. So that means you don't have to, you know, worry about wrangling storage and worrying about LUNs and all that other, you know, nonsense. And file. >> I've been burned by hardware in the past, you know, where, "Oh, okay, they're building to a price," and so they cheap out on stuff like fans or other things, and these components fail and the whole thing goes down. But this hardware is super good quality, and so I'm happy with the quality that we're getting. >> So, CB, last question: what's next for you? Where do you want to take this initiative? >> Well, we are in the process now of, well, I
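CB's depot observation reduces to simple arithmetic: a cache only pays off when its hit rate and its latency advantage are both large relative to the backend. The numbers below are illustrative assumptions, not measurements from his system.

```python
# Back-of-the-envelope sketch of the depot-cache tradeoff discussed above.
# All latency figures are assumed for illustration.

def effective_latency(hit_rate, cache_ms, backend_ms):
    """Average read latency with a cache in front of backend storage."""
    return hit_rate * cache_ms + (1 - hit_rate) * backend_ms

# If the flash backend is one switch hop away (~1 ms, assumed) and local
# cache reads take ~0.5 ms, even a 60% hit rate buys very little:
cached = effective_latency(0.6, 0.5, 1.0)   # about 0.7 ms on average
direct = effective_latency(0.0, 0.5, 1.0)   # skip the cache: 1.0 ms

# With ad hoc queries whose working set overflows the depot, the hit rate
# drops further and the cache can stop paying for itself, which is why
# some deployments read straight from the FlashBlade.
```

The same arithmetic cuts the other way for a few heavily reused tables: a hit rate near 1.0 makes the cache worth keeping, exactly the rule of thumb CB gives.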
designed this system to combine the best of the kimball approach to data warehousing and the inland approach okay and what we do is we bring over all the data we've got and we put it into a pristine staging layer okay like i said it's uh because it's append only it's essentially a log of all the transactions that are happening in this company just they appear okay and then from the the kimball side of things we're designing the data marts now so that that's what the end users actually interact with and so we're we're taking uh the we're examining the transactional systems to say how are these business objects created what's what's the logic there and we're recreating those logical models in uh in vertica so we've done a handful of them so far and it's working out really well so going forward we've got a lot of work to do to uh create just about every object that that the company needs cb you're an awesome guest to really always a pleasure talking to you and uh thank you congratulations and and good luck going forward stay safe thank you [Music] okay let's summarize the convergence of file and object first i want to thank our guests matt burr scott sinclair garrett belsener and c.b bohn i'm your host dave vellante and please allow me to briefly share some of the key takeaways from today's program so first as scott sinclair of esg stated surprise surprise data's growing and matt burr he helped us understand the growth of unstructured data i mean estimates indicate that the vast majority of data will be considered unstructured by mid-decade 80 or so and obviously unstructured data is growing very very rapidly now of course your definition of unstructured data and that may vary across across a wide spectrum i mean there's video there's audio there's documents there's spreadsheets there's chat i mean these are generally considered unstructured data but of course they all have some type of structure to them you know perhaps it's not as strict as a relational database but 
there's certainly metadata and certain structure to these types of use cases that i just mentioned now the key to what pure is promoting is this idea of unified fast file and object uffo look object is great it's inexpensive it's simple but historically it's been less performant so good for archiving or cheap and deep types of examples organizations often use file for higher performance workloads and let's face it most of the world's data lives in file formats what pure is doing is bringing together file and object by for example supporting multiple protocols ie nfs smb and s3 s3 of course has really given new life to object over the past decade now the key here is to essentially enable customers to have the best of both worlds not having to trade off performance for object simplicity and a key discussion point that we've had on the program has been the impact of flash on the long slow death of spinning disk look hard disk drives they had a great run but hdd volumes they peaked in 2010 and flash as you well know has seen tremendous volume growth thanks to the consumption of flash in mobile devices and then of course its application into the enterprise and that's volume is just going to keep growing and growing and growing the price declines of flash are coming down faster than those of hdd so it's the writing's on the wall it's just a matter of time so flash is riding down that cost curve very very aggressively and hdd has essentially become you know a managed decline business now by bringing flash to object as part of the flashblade portfolio and allowing for multiple protocols pure hopes to eliminate the dissonance between file and object and simplify the choice in other words let the workload decide if you have data in a file format no problem pure can still bring the benefits of simplicity of object at scale to the table so again let the workload inform what the right strategy is not the technical infrastructure now pure course is not alone there are others 
supporting this multi-protocol strategy and so we asked matt burr why pure or what's so special about you and not surprisingly in addition to the product innovation he went right to pure's business model advantages i mean for example with its evergreen support model which was very disruptive in the marketplace you know frankly pure's entire business disrupted the traditional disk array model which was fundamentally was flawed pure forced the industry to respond and when it achieved escape velocity velocity and pure went public the entire industry had to react and a big part of the pure value prop in addition to this business model innovation that we just discussed is simplicity pure's keep its simple approach coincided perfectly with the ascendancy of cloud where technology organizations needed cloud-like simplicity for certain workloads that were never going to move into the cloud they're going to stay on-prem now i'm going to come back to this but allow me to bring in another concept that garrett and cb really highlighted and that is the complexity of the data pipeline and what do you mean what do i mean by that and why is this important so scott sinclair articulated he implied that the big challenge is organizations their data full but insights are scarce scarce a lot of data not as much insights it takes time too much time to get to those insights so we heard from our guests that the complexity of the data pipeline was a barrier to getting to faster insights now cb bonds shared how he streamlined his data architecture using vertica's eon mode which allowed him to scale compute independently of storage so that brought critical flexibility and improved economics at scale and flashblade of course was the back-end storage for his data warehouse efforts now the reason i think this is so important is that organizations are struggling to get insights from data and the complexity associated with the data pipeline and data life cycles let's face it it's overwhelming 
organizations and there the answer to this problem is a much longer and different discussion than unifying object and file that's you know i can spend all day talking about that but let's focus narrowly on the part of the issue that is related to file and object so the situation here is that technology has not been serving the business the way it should rather the formula is twisted in the world of data and big data and data architectures the data team is mired in complex technical issues that impact the time to insights now part of the answer is to abstract the underlying infrastructure complexity and create a layer with which the business can interact that accelerates instead of impedes innovation and unifying file and object is a simple example of this where the business team is not blocked by infrastructure nuance like does this data reside in a file or object format can i get to it quickly and inexpensively in a logical way or is the infrastructure in a stovepipe and blocking me so if you think about the prevailing sentiment of how the cloud is evolving to incorporate on premises workloads that are hybrid and configurations that are working across clouds and now out to the edge this idea of an abstraction layer that essentially hides the underlying infrastructure is a trend we're going to see evolve this decade now is uffo the be all end-all answer to solving all of our data pipeline challenges no no of course not but by bringing the simplicity and economics of object together with the ubiquity and performance of file uffo makes it a lot easier it simplifies life organizations that are evolving into digital businesses which by the way is every business so we see this as an evolutionary trend that further simplifies the underlying technology infrastructure and does a better job supporting the data flows for organizations so they don't have to spend so much time worrying about the technology details that add a little value to the business okay so thanks for 
watching the convergence of file and object and thanks to pure storage for making this program possible this is dave vellante for the cube we'll see you next time [Music] you
>> Okay, let's summarize the convergence of file and object. First, I want to thank our guests, Matt Burr, Scott Sinclair, Garrett Belsner, and CB Bohn. I'm your host, Dave Vellante, and please allow me to briefly share some of the key takeaways from today's program. So first, as Scott Sinclair of ESG stated: surprise, surprise, data's growing. And Matt Burr helped us understand the growth of unstructured data. I mean, estimates indicate that the vast majority of data, 80% or so, will be considered unstructured by mid-decade. And obviously, unstructured data is growing very, very rapidly. Now, of course, your definition of unstructured data may vary across a wide spectrum. I mean, there's video, there's audio, there's documents, there's spreadsheets, there's chat. These are generally considered unstructured data, but of course they all have some type of structure to them. You know, perhaps it's not as strict as a relational database, but there's certainly metadata and certain structure to these types of use cases that I just mentioned. Now, the key to what Pure is promoting is this idea of unified fast file and object, U-F-F-O. Look, object is great: it's inexpensive, it's simple, but historically it's been less performant, so it's been good for archiving or cheap-and-deep types of use cases. Organizations often use file for higher-performance workloads, and let's face it, most of the world's data lives in file formats. What Pure is doing is bringing together file and object by, for example, supporting multiple protocols, i.e., NFS, SMB, and S3. S3, of course, has really given new life to object over the past decade. Now, the key here is to essentially enable customers to have the best of both worlds, not having to trade off performance for object simplicity.
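To make the "best of both worlds" point concrete, here is a minimal sketch, in Python, of what a unified file-and-object namespace implies: one stored blob addressable either by a file-style path or by an S3-style bucket and key. Everything here is hypothetical; the class and its methods are invented for illustration and are not Pure's implementation or API. It shows only the addressing idea, not a real NFS/SMB/S3 protocol stack.

```python
# Hedged illustration: a toy unified file/object namespace.
# Not Pure's implementation; just the addressing idea behind UFFO.

class UnifiedStore:
    def __init__(self):
        self._blobs = {}  # canonical key -> bytes

    # Object-style access: bucket + key, as in S3.
    def put_object(self, bucket, key, data):
        self._blobs[f"{bucket}/{key}"] = data

    def get_object(self, bucket, key):
        return self._blobs[f"{bucket}/{key}"]

    # File-style access: an NFS/SMB-like path resolves to the same blob.
    def read_file(self, path):
        return self._blobs[path.lstrip("/")]

store = UnifiedStore()
store.put_object("analytics", "2018/q2/report.csv", b"region,revenue\n")

# The same bytes are reachable through either "protocol" view:
assert store.read_file("/analytics/2018/q2/report.csv") == b"region,revenue\n"
assert store.get_object("analytics", "2018/q2/report.csv") == b"region,revenue\n"
```

The workload, not the infrastructure, then decides which interface to use; the data itself is never copied between a "file tier" and an "object tier."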
And a key discussion point that we've had in the program has been the impact of Flash on the long, slow death of spinning disk. Look, hard disk drives had a great run, but HDD volumes peaked in 2010, and Flash, as you well know, has seen tremendous volume growth thanks to the consumption of Flash in mobile devices and then, of course, its application into the enterprise. And that volume is just going to keep growing and growing and growing. The price declines of Flash are coming down faster than those of HDD. So the writing's on the wall; it's just a matter of time. So Flash is riding down that cost curve very, very aggressively, and HDD has essentially become a managed-decline business. Now, by bringing Flash to object as part of the FlashBlade portfolio and allowing for multiple protocols, Pure hopes to eliminate the dissonance between file and object and simplify the choice. In other words, let the workload decide. If you have data in a file format, no problem. Pure can still bring the benefits of the simplicity of object at scale to the table. So again, let the workload inform what the right strategy is, not the technical infrastructure. Now Pure, of course, is not alone. There are others supporting this multi-protocol strategy. And so we asked Matt Burr: why Pure, what's so special about you? And not surprisingly, in addition to the product innovation, he went right to Pure's business model advantages. I mean, for example, with its Evergreen support model, which was very disruptive in the marketplace. You know, frankly, Pure's entire business disrupted the traditional disk array model, which was fundamentally flawed. Pure forced the industry to respond. And when it achieved escape velocity and went public, the entire industry had to react. And a big part of the Pure value prop, in addition to the business model innovation that we just discussed, is simplicity.
Pure's keep-it-simple approach coincided perfectly with the ascendancy of cloud, where technology organizations needed cloud-like simplicity for certain workloads that were never going to move into the cloud; they were going to stay on-prem. Now, I'm going to come back to this, but allow me to bring in another concept that Garrett and CB really highlighted, and that is the complexity of the data pipeline. And what do I mean by that, and why is this important? So Scott Sinclair articulated, or he implied, that the big challenge is that organizations are data-full, but insights are scarce: a lot of data, not as much insight, and it takes time, too much time, to get to those insights. So we heard from our guests that the complexity of the data pipeline was a barrier to getting to faster insights. Now, CB Bohn shared how he streamlined his data architecture using Vertica's Eon Mode, which allowed him to scale compute independently of storage, so that brought critical flexibility and improved economics at scale. And FlashBlade, of course, was the backend storage for his data warehouse efforts. Now, the reason I think this is so important is that organizations are struggling to get insights from data, and the complexity associated with the data pipeline and data lifecycles, let's face it, is overwhelming organizations. And the answer to this problem is a much longer and different discussion than unifying object and file. You know, I could spend all day talking about that, but let's focus narrowly on the part of the issue that is related to file and object. So the situation here is that technology has not been serving the business the way it should. Rather, the formula is twisted in the world of data, big data, and data architectures. The data team is mired in complex technical issues that impact the time to insights.
Now, part of the answer is to abstract the underlying infrastructure complexity and create a layer with which the business can interact, one that accelerates instead of impedes innovation. And unifying file and object is a simple example of this, where the business team is not blocked by infrastructure nuance, like: does this data reside in a file or object format? Can I get to it quickly and inexpensively in a logical way, or is the infrastructure in a stovepipe and blocking me? So if you think about the prevailing sentiment of how the cloud is evolving to incorporate on-premises workloads that are hybrid, configurations that are working across clouds, and now out to the edge, this idea of an abstraction layer that essentially hides the underlying infrastructure is a trend we're going to see evolve this decade. Now, is UFFO the be-all end-all answer to solving all of our data pipeline challenges? No, of course not. But by bringing the simplicity and economics of object together with the ubiquity and performance of file, UFFO makes it a lot easier. It simplifies life for organizations that are evolving into digital businesses, which, by the way, is every business. So we see this as an evolutionary trend that further simplifies the underlying technology infrastructure and does a better job supporting the data flows for organizations, so they don't have to spend so much time worrying about the technology details that add little value to the business. Okay, so thanks for watching the convergence of file and object, and thanks to Pure Storage for making this program possible. This is Dave Vellante for theCUBE. We'll see you next time.
Siva Sivakumar, Cisco and Rajiev Rajavasireddy, Pure Storage | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's The Cube, covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. (upbeat techno music) >> Welcome back to The Cube, we are live at Pure Accelerate 2018 at the Bill Graham Civic Auditorium in San Francisco. I'm Lisa Martin, moonlighting as Prince today, joined by Dave Vellante, moonlighting as The Who. Should we call you Roger? >> Yeah, Roger. Keith. (all chuckling) I have a moon bat. (laughing) >> It's a very cool concert venue, in case you don't know that. We are joined by a couple of guests, Cube alumni, welcoming them back to The Cube: Rajiev Rajavasireddy, the VP of Product Management and Solutions at Pure Storage, and Siva Sivakumar, the Senior Director of Data Center Solutions at Cisco. Gentlemen, welcome back. >> Thank you. >> Thank you. >> Rajiev: Happy to be here. >> So talk to us about, you know, lots of announcements this morning, Cisco and Pure have been partners for a long time. What's the current status of the Cisco-Pure partnership? What are some of the things that excite you about where you are in this partnership today? >> You want to take that, Siva, or you want me to take it? >> Sure, sure. I think if you look back at what brought us together, obviously both of us are looking at the market transitions and some of the ways that customers were adopting technologies from our side. Converged infrastructure is truly how the partnership started. We literally saw that the customers wanted simplification, wanted much more of a cloud-like experience. They wanted to see infrastructure come together in a much easier fashion, that we simplify IT and make it easier for them. And we started with, of course, the best-of-breed technology on both sides, being a Flash leader on their side, and a networking and compute leader on our side; we truly felt the partnership brought the best value out of both of us.
So it's a journey that started that way, and we look back now and we say that this is absolutely going great, and the best is yet to come. >> So from my side, basically, Pure had started what we now call FlashStack, a converged infrastructure offering, roughly about four years ago. And about two and a half years ago, Cisco started investing a lot in this partnership. We're very thankful to them, because they kind of believed in us. We were growing, obviously, but we were not quite as big as we are right now. But they saw the potential early. So, about two and a half years ago, as I said, they started investing in us. I'm not sure how many people know what a Cisco Validated Design is. It's a pretty exhaustive document. It takes a lot of work on Cisco's side to come up with one of those. And usually a single CVD takes two or three of their TMEs, highly technical resources, and roughly three to six months to build. >> Per CVD? >> Per CVD. >> Wow. >> Like I said, it's very exhaustive. I mean, you get your bill of materials, your versions, your interoperability, and the actual commands that you use to stand up that infrastructure and the applications, so on and so forth. So in a nine-month span, they did seven CVDs for us. That was phenomenal. We were very, very thankful that they did that. And over time, that investment paid off. There was a lot of good market investment that Cisco and Pure jointly made, and all those investments paid off really well in terms of the customer adoption and acquisition. And essentially we are at a really good point right now. When we came out with our FlashArray X70 last April, Cisco, at about the same time, was coming out with the M5 servers. And so they invested again and gave us five more CVDs. And just recently they've added FlashBlade to that portfolio. As you know, FlashBlade is a new product offering.
Well, not so new, but a relatively new product offering from Pure, so we have a new CVD that just got released that includes FlashArray and FlashBlade for Oracle. So FlashArray does the online transaction processing, FlashBlade does data warehousing, and obviously Cisco networking and Cisco servers do everything, OLTP and data warehouse; it's an end-to-end architecture. So that was what Matt Burr had talked about on stage today. We had introduced AIRI, AI-Ready Infrastructure, along with Nvidia at their expo recently, and we are excited to say that Cisco is now part of that AIRI infrastructure that Matt Burr talked about on stage as well. So as you can tell, in a two-and-a-half-year period, we've come a really long way. We have a lot of customer adoption every quarter. We keep adding a ton of customers, and we are mutually benefiting from this partnership.
They look for something that's a overall platform for IT. "I want to do some virtualization. "I want to run desktop virtualization. "I want to do Oracle. "I want to do SAP." So the typical IT operates as more of "I want to manage my infrastructure as a whole. "I want to manage my database and data as its own. "I want its own way of looking." So while there are way to make very appliancey behaviors, that actually operates one better, the approach we took is truly delivering a architecture for data center. The fact that the network as well as the computer is so programmable it makes it easy to expand. Really brings a value from a complete perspective. But if you look at Pure again, their FlashArrays truly have world-class performance. So the customer also looks at, "Well I can get everything from one vendor. "Am I getting the best of breed? "Am I getting the world-class technology from "every one of those aspects and perspectives?" So we certainly think there are a good class of customers who value what we bring to the table and who certainly choose us for what we are. >> And to add to what Siva has just said, right? So if you looked at pre-Flash, you're mostly right in the sense that, hey, if you built an application, especially if it was mission-vertical application, you wanted it siloed, you didn't want another application jumping in and kind of messing up the performance and response times and all that good stuff, right? So in those kind of cases, yeah, appliances made sense. But now, when you have all Flash, and then you have servers and networking that can actually elaborates the performance of Flash, you don't really have to worry about mixing different applications and messing up performance for one at the expense of the other. That's basically, it's a win-win for the customers to have much more of a consolidated platform for multiple applications as opposed to silos. 'Cause silos are always hard to manage, right? 
>> Siva, I want to ask you, you know, Pure has been very bullish, really, for many years now. Obviously Cisco works with a lot of other vendors. What was it a couple years ago? 'Cause you talked about the significant resource investment that Cisco has been making for a couple of years now in Pure Storage. What is it that makes this so, maybe this Flash tech, I'm kind of thinking of the three-legged stool that Charlie talked about this morning. But what were some of the things that you guys saw a few years ago, even before Pure was a public company, that really drove Cisco to make such a big investment in this? >> I think they, when you look at how Cisco has evolved our data center portfolio, I mean, we are a very significant part of the enterprise today powered by Cisco, Cisco networking, and then we grew into the computer business. But when you looked at the way we walked into this computer business, the traditional storage as we know today is something we actually led through a variety of partnerships in the industry. And our approach to the partnership is, first of all, technology. Technology choice was very very critical, that we bring the best of breed for the customers. But also, again, the customer themself, speaking to us, and then our channel partners, who are very critical for our enablement of the business, is very very critical. So the way we, and when Pure really launched and forayed into all Flash, and they created this whole notion that storage means Flash and that was never the patterning before. That was a game-changing, sort of a model of offering storage, not just capacity but also Flash as my capacity as well as the performance point. We really realized that was going to be a good set of customers will absorb that. Some select workloads will absorb that. But as Flash in itself evolved to be much more mainstream, every day's data storage can be in a Flash medium. 
They realize, customers realized, this technology, this partner, has something very unique. They've thought about a future that was coming, which we realized was very critical for us. When we evolved network from 10-gig fabric to 40-gig to 100-gig, the workloads that are the slowest part of any system is the data movement. So when Flash became faster and easier for data to be moved, the fabric became a very critical element for the eventual success of our customer. We realized a partnership with Pure, with all Flash and the faster network, and faster compute, we realized there is something unique that we can bring to bear for the customer. So our partnership minds had really said, "This is the next big one that we are going to "invest time and energy." And so we clearly did that and we continue to do that. I mean we continue to see huge success in the customer base with the joint solutions. >> This issue of "best of breed" versus a kind of integrated stacks, it's been around forever, it's not going to go away. I mean obviously Cisco, in the early days of converged infrastructure, put a lot of emphasis on integrating, and obviously partnerships. Since that time, I dunno what it was, 2009 or whatever it was, things have changed a lot. Y'know, cloud was barely a thought back then. And the cloud has pushed this sort of API economy. Pure talks about platforms and integrating through APIs. How has that changed your ability to integrate "best of breed" more seamlessly? >> Actually, you know, I've been working with UCS since it started, right? And it's perhaps, it was a first server system that was built on an API-first philosophy. So everything in the Cisco UCS system can be basically, anything you can do to it GUI or the command line, you can do it their XML API, right? It's an open API that they provide. And they kind of emphasized the openness of it. 
When they built the initial converged infrastructure stacks, right, the challenge was the legacy storage arrays didn't really have the same API-first programmability mentality, right? If you had to do an operation, you had a bunch of, a ton of CLI commands that you had to go through to get to one operation, right? So Pure, having the advantage of being built from scratch, when APIs are what people want to work with, does everything through rest APIs. All function features, right? So the huge advantage we have is with both Pure, Pure actually unlocks the potential that UCS always had. To actually be a programmable infrastructure. That was somewhat held back, I don't know if Siva agrees or not, but I will say it. That kind of was held back by legacy hardware that didn't have rest space APIs or XML or whatever. So for example, they have Python, and PowerShell-based toolkits, based on their XML APIs that they built around that. We have Python PowerShell toolkits that we built around our own rest APIs. We have puppet integration installed, and all the other stuff that you saw on the stage today. And they have the same things. So if you're a customer, and you've standardized, you've built your automation around any of these things, right, If you have the Intuit infrastructure that is completely programmable, that cloud paradigms that you're talking about is mainly because of programmability, right, that people like that stuff. So we offer something very similar, the joint-value proposition. >> You're being that dev-ops kind of infrastructure-as-code mentality to systems design and architecture. >> Rajiev: Yeah. >> And it does allow you to bring the cloud operating model to your business. >> An aspect of the cloud operating model, right. There's multiple different things that people, >> Yeah maybe not every single feature, >> Rajiev: Right. >> But the ones that are necessary to be cloud-like. >> Yeah, absolutely. >> Dave: That's kind of what the goal is. 
>> Let's talk about some customer examples. I think Domino's was on stage last year. >> Right. >> And they were mentioned again this morning about how they're leveraging AI. Are they a customer of FlashStack? Is that maybe something you can kind of dig into? Let's see how the companies that are using this are really benefiting at the business level with this technology. >> I think, absolutely, Domino's is one of our top examples of a FlashStack customer. They obviously took a journey to actually modernize, consolidate many applications. In fact, interestingly, if you look at many of the customer journeys, the place where we find it much, much more valuable in this space is where the customer has got a variety of workloads and they're also looking to say, "I need to be cloud ready. I need to have a cloud-like concept, whether I have a hybrid cloud strategy today or it'll be tomorrow. I need to be ready to take them and put them on cloud." And the customer also has the mindset that "While I certainly will keep my traditional applications, such as Oracle and others, I also have a very strong interest in the new and modern workloads." Whether it is analytics, or whether it is even things like containers and micro-services, things like that which bring agility. So while they think, "I need to have a variety of things going," then they start asking the question, "How can I standardize on a platform, on an architecture, on something that I can reuse, repeat, and simplify IT with." It may sound like a you-got-everything kind of thing, but that is by far the single biggest strength of the architecture. We are versatile, we are multi-workload, and when you really build and deploy and manage, everything from an architecture, from a platform perspective, looks the same. So they only worry about the applications they are bringing onboard and about managing the lifecycle of the apps.
And so a variety of customers, so what has happened because of that is, we started with commercial or mid-size customers, to larger commercial. But now we are much more in enterprise. Many large IT shops are starting to standardize on FlashStack, and with many of our customers, success is really measured by the number of repeat purchases; they will come back and buy. Because once they've bought it and they like it, they really love it and they come back and buy a lot more. And this is the place where it gets very exciting for all of us, that these customers come back and tell us what they want. Whether we build automation or build management architecture, our customer speaks to us and says, "You guys better get together and do this." That's where we want to see our partners come to us and say, "We love this architecture but we want these features in there." So our feedback and our evolution really continues to be a journey driven by the demand and the market. Driven by the customers who we have. And that's hugely successful. When you are building and launching something into the marketplace, your best reward is when customers treat you like that. >> So to basically dovetail into what Siva was talking about, in terms of customers, he brought up a very valid point. What customers are really looking for is an entire stack, an infrastructure, that is near invisible. It's programmable, right? And you can kind of cookie-cutter that as you scale. So we have an example of that. I'm not going to use the name of the customer, 'cause I'm sure they'd be okay with it, but I just don't want to do it without asking their permission. It's a healthcare service provider that has basically, literally dozens of these FlashStacks that they've standardized on. Basically, they have vertical applications but they also offer VM as a service.
So they have cookie-cuttered this with full automation, integration, they roll these out in a very standard way because of a lot of automation that they've done. And they love the FlashStack just because of the programmability and everything else that Siva was talking about. >> With new workloads coming on, do you see any, you know, architectural limitations? When I say new workloads, data-driven, machine intelligence, AI workloads, do we see any architectural limitations to scale, and how do you see that being addressed in the near future? >> Rajiev: Yeah, that's actually a really good question. So basically, let's start with the, so if you look at bare metal, VMs, and containers, that is one factor. On that factor, we're good because, you know, we support bare metal and so does the entire stack, and when I say we, I'm talking about the entire FlashStack: servers and storage and network, right. VMs and then also containers. Because you know, most of the containers in the early days were ephemeral, right? >> Yeah. >> Rajiev: Then persistent storage started happening. And a lot of the containers would deploy in the public cloud. Now we are getting to a point where large enterprises are basically experimenting with containers on prem. And so, the persistent storage that connects to containers is kind of nascent but it's picking up. So Kubernetes and Docker are the primary components in there, right? And Docker, we already have Docker native volume plug-ins, and Cisco has done a lot of work with Docker for the networking and server pieces. And Kubernetes has flex volumes, and we have Kubernetes flex volume integration, and Cisco works really well with Kubernetes. So there are no issues on that factor. Now if you're talking about machine learning and artificial intelligence, right? So it depends. So for example, Cisco's servers today are primarily driven by Intel-based CPUs, right? And if you look at the Nvidia DGXs, these are mostly GPUs.
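The flex volume integration Rajiev mentions worked by naming a vendor driver inside the volume spec, so the kubelet could shell out to that driver for attach and mount. A toy sketch of what such a manifest looked like in that era, built as a Python dict (the driver string and options are made up for illustration, and flexVolume itself has since been superseded by CSI drivers):

```python
def flex_volume_manifest(volume_name: str, driver: str, array_host: str) -> dict:
    """Return a PersistentVolume-style manifest pointing at a flexVolume driver.

    The shape follows the Kubernetes v1 PersistentVolume schema of the
    flexVolume era; the driver name and options here are hypothetical.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": volume_name},
        "spec": {
            "capacity": {"storage": "100Gi"},
            "accessModes": ["ReadWriteOnce"],
            "flexVolume": {
                "driver": driver,                      # e.g. "vendor.example/flash"
                "options": {"arrayHost": array_host},  # passed to the driver binary
            },
        },
    }


manifest = flex_volume_manifest("pv-demo", "vendor.example/flash", "array.example.com")
print(manifest["spec"]["flexVolume"]["driver"])
```

The appeal at the time was exactly what the conversation describes: the storage vendor ships one driver, and the rest of the stack consumes volumes through a standard Kubernetes object.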
Cisco has a great relationship with Nvidia. And I will let Siva speak to the machine learning and artificial intelligence pieces of it, but on the networking piece for sure, we've already announced today that we are working with Cisco in our AIRI stack, right? >> Dave: Right. >> Yeah, no, I think that next generation workloads, or any newer workloads, always come with a different set of requirements; some are just software-level workloads. Typically, with software-type innovation, given that the platform architecture is built with programmability and flexibility, adapting our platforms to a newer software paradigm, such as containers and micro-services, is something we certainly can extend the architecture to do, and we have done that several times. So that's an area this covers well. But when there are new hardware innovations, whether that is interconnect technologies, or new types of Flash, or machine-learning GPU-style models, what we look at from a platform perspective is what we can bring from an integrated perspective. That, of course, allows IT to take advantage of the new technology, but keeps the operational and IT costs of doing business the same. That's where our biggest strength is. Of course Nvidia innovates on the GPU factor, but IT doesn't just do GPUs. They have to integrate it into a data center, flow the data into the GPU, run compute and applications along with that to really get the most out of this information. And then, of course, for processing any kind of real-time, or any decision making for that matter, now you're really talking about bringing it in-house and integrating it into the data center. >> Dave: Right. >> Any time you start in on that conversation, that's really where we are. I mean, we welcome more innovation, but we know when you get into that space, we certainly shine quite well. >> Yeah, it's secured, it's protected, you can move it, it's all of those kinds of things.
>> So we love these innovations, but then our charter and what we are doing is all in making this experience of whatever the new thing may be as seamless as possible for IT to take advantage of. >> Wow, guys, you shared a wealth of information with us. We thank you so much for talking about the Cisco-Pure partnership, what you guys have done with FlashStack, you're helping customers from pizza delivery with Domino's to healthcare services to really modernize their infrastructures. Thanks for your time. >> Thank you. >> Thank you very much. >> For Dave Vellante and Lisa Martin, you're watching theCUBE live from Pure Accelerate 2018. Stick around, we'll be right back.
Harry Mower, Red Hat | Red Hat Summit 2018
>> Narrator: Live from San Francisco, it's theCUBE! Covering Red Hat Summit 2018. Brought to you by Red Hat. >> Hello and welcome back to theCUBE's exclusive coverage, live here in San Francisco, California, for Red Hat Summit 2018. I'm John Furrier, with John Troyer, my co-host analyst this week; he's the co-founder of TechReckoning, an advisory and community development firm. Of course I'm the co-host of theCUBE, and this is Harry Mower, Senior Director of the Red Hat Developer Group within Red Hat. He handles all the outward community work, also making sure everyone's up to speed, educated, has all the tools. Of course, thanks for coming and joining on theCUBE today. Appreciate you coming on.
And then we also build tools that are tailored to our platform, so that developers can be successful writing the code once they download-- >> John F: And the goal is ultimately, get more people coding, with Linux, with Red Hat, with Open Source. >> Harry: Yep, it's driving more of, I mean from inwardly facing it's driving more adoption of our products, but you know, outward, with the developer being our customer, it's really to make them successful, and when I took over this role, one of the things we needed to do was really focus on who the developer was. You know, there's a lot of different types of developers, and we really do focus on the nine to five developer that works within all of our customers' organizations, right? And predominantly those that are doing enterprise jobs, for the most part, but we're starting to branch out with that, but it's really those nine to five developers that we're targeting. >> Got to be exciting for you now because we were just in Copenhagen last week for KubeCon with Kubernetes, you know, front and center, we're super excited about the de facto formation around Kubernetes, the role of containers that's going on there, really kind of giving a fresh view, and a clear view, for the developer, your customer, of where things are sitting. So how do you guys take that momentum and drive that home, because that's getting a lot of people excited, and also clarifying kind of what's going on. If you're under the hood, you got some OpenStack, if you're a developer, an app developer, you've got this, and then you've got orchestration here and you got containers. Kind of the perfect storm, for you guys. >> Harry: Yeah, and what we've been trying to do in the container space, so one of the things we do is we have these kind of 10 big bets that we put on a wall that really drive our product decisions, right? And one of the first, maybe the second one we put on the wall was, everything will be in containers, right?
And so we knew that it was important for developers to be able to use containers really easily, but we also knew that it's an implementation detail for them. It's not something that they really need to learn a lot about, but they need to be able to use it, so we made an acquisition last year. Codenvy was the company, the driving force behind Eclipse Che. One of the great features of Eclipse Che, a lot of people see it as a web-based IDE, but it's also a workspace management system that allows developers' development environments to be automatically containerized, hosted, and run on OpenShift at scale, right? And when we show the demo it's really interesting because people see us coding in a browser and "Oh that's pretty neat", and then at the end of it everyone starts to ask questions about the browser part, and I say, "Yeah, but did you notice we never typed a Docker command, never had to learn about a Kubernetes file, it was always containerized right from the very beginning, and now your developers are in that world without having to really learn it". And so that's really a big, big thing that we're trying to do with our tools, as we move from classic Eclipse on the desktop to these new web-based tools. >> So simplifying but also reducing things that they normally had to do before. >> Yeah. >> Using steps to kind of. >> Yeah, we want to, people don't like when I say it, I don't want to try to make them disappear into the background, but what I mean is it's simple and easy to use. We take care of the creative room. >> Now is that, that's OpenShift.io? Is that where people get started with that? >> Actually Eclipse Che. >> Okay, Eclipse Che, okay. >> So it starts in Eclipse Che, and then we take that technology and bring it into io as well. >> Gotcha gotcha, can you talk a little bit about io then? You know, the experience there, and what people are doing.
It's really our vision of what an end-to-end cloud tooling platform is going to look like. Our bet is that many of our customers today take a lot of time to customize their integrated tool chains, out of necessity, because someone doesn't offer the fully integrated seamless one today. Many of our customers like their little snowflakes that they built, but I believe over time the cost of maintaining that will become something that they're not going to like, and that's one of the reasons why we built something like io. It's hosted, managed by us, and integrated. >> And what are people using it for? Is this for prototyping, is this, what are people doing on the system? >> Today it's mostly for prototyping. One of the things we did here at this week's Summit is we announced kind of a general availability for Java developers using public repos. Up until this point it's always kind of been experimental. You weren't sure if your data was going to be gone, if it was up or down; there's much more stability and kind of a more reliable SLA right now for those types of projects. >> John T: Gotcha, gotcha. Well, I mean, pivoting maybe to the overall developer program, so developers.redhat.com, big announcement yesterday, you reached a million members, congratulations. >> Harry: Thank you very, yeah, thanks a million is what I put in my tweet. It's been a really great journey, I started it three years ago, we consolidated a number of the smaller programs together, so we had a base of about two, 300 ish developers, and we've accelerated that adoption, now we're over a million and growing fast, so it's great. >> What's the priorities as you go on? I mean all of these new tools out there and I was just talking with someone, one of your partners here, we were out at a beer thing last night, got talking, and like waterfall's dying in software development but Open Source ethos is going into other areas.
Marketing, and so the DevOps concepts are actually being applied to other things. So how are you taking that outreach to the community, so as you take the new Gospel, what techniques do you use? I mean, you're tweeting away, you're going in with blogging, content marketing, how are you engaging the content, how are you getting it out in digital? >> Our key thing is the demo, right? So you saw a lot of great demos on stage this week, Burr Sutter on our team did a phenomenal job every day with a set of demos, and we take those demos, those are part of the things we bring to all the other conferences as well, they become the center stage for that, because it's kind of the proof of concept, right? It's the proof of what can be possible, and then we start to build around that. And it helps us show what's possible, it actually helps get our product teams to coalesce around our idea, they start to build better products, we bring that to customers, and then customer engagement starts early, but that's the key of it. >> I mean the demo's the ultimate content piece, right? >> It forces everybody to, on the scene-- >> Real demo, not a fake demo. >> And those were all real, that's the thing, the demos are so good I think, for some of them, people thought they were fake. I'm like, Burr, you didn't do a good enough job of like pulling the plug faster, and showing it was real, right? But they're, yes, they're absolutely real demos, real technology working, and that creates a lot of momentum around it. >> You guys see any demographic shifts in the developers, obviously there's a new wave of developers coming in, younger certainly, right? You get the older developers that know systems, so you're seeing coexistence of different demographics. Old and young, kind of playing together. >> Yeah, so there's a full spectrum of ages, a full spectrum of diversity, and geography, I mean, it's obvious to everybody that our growing markets are Asia, it's India and China right now.
You'll see, you know, Chinese New Year we see a dip in usage in our tools, you know, it's very much, that's where the growth is. Our base right now is still predominantly North America and EMEA, but all the growth is obviously Asian and-- >> John T: (mumbles). Harry I wanted to talk about the role of the developer advocate a little bit. It's a relatively new role in the ecosystem, not everybody understands it, I think some companies use a title like that in very different ways, can you talk, it's so important, this peer to peer learning, you know, putting a human face on the company, especially for a company like Red Hat, right? Built from Open Source communities from the ground up. Can you talk a little bit about what is a developer advocate, and am I even getting the title right? But what do they do here at Red Hat? >> Yeah so it's funny, so an evangelist is an advocate, and how do you distinguish the difference? So I spent a lot of time at Microsoft, you know, I think they pioneered a lot of that a long time ago, 10 or 12 years ago, really started doing that, and those ideas have matured, many different philosophies of how you do it. I bring a philosophy here to work with Burr, that, you know, it's one thing to preach the Gospel, but the end goal is to get them into Church, right? And eventually get them to, you know, donate, right? So, our evangelists are really out there to convince and, you know, get them to adopt. Other models where you're an advocate, it's about funneling, it's almost like a marketing, inbound marketing kind of role, where you're taking feedback from the developers and helping to reshape the product. We do a little bit of that, but it's mostly about understanding what Red Hat has, 'cause when people look at Red Hat they think that's the Linux I used to use, I started in college, right? And for us we're trying to transform that view. >> John F: Huge scope now. >> And that's why we're more of an evangelistic organization.
>> I mean Linux falls into the background, I mean with cloud. Linux, isn't that what the old people used to, like, install? Like, it's native now. So again, new opportunities. And OpenShift is a big part of that. >> Yeah and we work hand in hand, there's actually an OpenShift evangelism team that we work hand in hand with, and their job is really more of a workshop-style engagement, and get the excitement, bring them to that, and then do the engagements and bring it in. >> John F: What's the bumper sticker to developers? I mean obviously developer mind share is critical. So they got to see the pitch; Linux helps a lot, it's all about the OS. What's the main value proposition to the developers that you guys are trying to have front and center the whole time? >> Harry: For Red Hat specific? >> Yeah yeah. >> It's funny, we just redid all of our marketing about the program, and specifically it's build here, go anywhere. And for two levels, right? With using Red Hat technologies, being part of the Open Source community, you can take those skills and knowledge and go anywhere in your career, right? But also with our technology, you can take that, and you can run it anywhere as well. You can take that technology and run it on prem, run it on someone else's cloud, and it really is just, we, you know, we really give the developers a lot of options and possibilities, and when you learn our products and use our products, you can really go anywhere. >> So Harry there's a, I loved how you distinguished at the very beginning of the conversation who the program is for, and that particular role, right? I sit down and I code enterprise products and glue stuff together and build new things, bring new functionality to the market, shit, excuse me, this week has been all about speed to market, right? And that's the developers out there, right? See I get so excited about it. >> That's okay, you can swear.
>> (mumbles) >> But you know, there's a lot of shifting roles in IT, and the tech industry, over the last, say, decade or so, you know, do we expect the people who we used to call sysadmins, do they have to become developers? Open Source contributors also are developers. But it sounds like maybe the roles are clarifying a little bit; other than, you know, an OpenShift operator, you know, doesn't have to be a developer, but does have to, you know, know about APIs and things. How are you looking at it? >> I don't have too strong an opinion on this, but when I talk to other people and we kind of talk about it, you know, the role of the, so we made operations easy enough that developers can do a lot of it, but they can't do all of it, right? And there's still a need for operations people out there, and those roles are a lot around being almost automation developers. Things that you do like an (mumbles) playbook or, you know, whatever other technology you might use, so there is an element of operations people having to start to learn how to do some sort of coding, but it's not the same type of coding that a normal developer will do. So somehow we're meeting in the middle a little bit. But, I'm so focused on the developer part that I really don't have too strong an opinion. >> Well let us know how we can help, we love your mission, theCUBE is an open community brand, we love to get any kind of content, let us know when your big events are, I certainly want to promote it sir. Open Source is one, it's winning, it's changing, and you're starting to see commercialization happen in a nice way, where projects are preserved upstream, people are making great products out of it, so a great opportunity for careers. And building great stuff, I mean new application start-ups, it's all over the place, so it's great stuff, so congratulations and thanks for coming on theCUBE. It's theCUBE, out in the open here in the middle of the floor at Moscone West, bringing all the coverage from Red Hat Summit 2018.
We'll be right back with more after this short break, I'm John Furrier, with John Troyer, we'll be right back. (electronic music)
Day One Morning Keynote | Red Hat Summit 2018
[Music] [Applause] Welcome to Red Hat Summit 2018. [Applause] [Music] Wow, that is truly the coolest introduction I've ever had. Thank you. Wow, I don't think I feel cool enough to follow an introduction like that. Wow. Well, welcome to the Red Hat Summit. This is our 14th annual event, and I have to say, looking out over this audience, wow, it's great to see so many people here joining us. This is by far our largest Summit to date. Not only did we blow through the numbers we've had in the past, we blew through our own expectations this year. So I know we have a pretty packed house, and I know people are still coming in, so it's great to see so many people here. It's great to see so many familiar faces; when I had a chance to walk around earlier, it was great to see so many new people here joining us for the first time. I think the record attendance is an indication that more and more enterprises around the world are seeing the power of open source to help them with the challenges that they're facing due to the digital transformation that all enterprises around the world are going through. The theme for the Summit this year is "ideas worth exploring," and we intentionally chose that because, as much as we are all going through this digital disruption and the challenges associated with it, one
thing, I think, is becoming clear: no one person, and certainly no one company, has the answers to these challenges. This isn't a problem where you can go buy a solution; this is a set of capabilities that we all need to build, and a set of cultural changes that we all need to go through, and that's going to require the best ideas coming from so many different places. So we're not here saying we have the answers; we're trying to convene the conversation. We want to serve as a catalyst, bringing great minds together to share ideas, so we all walk out of here at the end of the week a little wiser than when we first came. We do have an amazing agenda for you. We have over 7,000 attendees, and we may be pushing 8,000 by the time we get through this morning. We have 36 keynote speakers, and we have 325 breakout sessions. And I have to throw in one plug: scheduling 325 breakout sessions is actually pretty difficult, so we used the Red Hat Business Optimizer, an AI constraint solver that's new in Red Hat Decision Manager, to help us plan the Summit. Because we have individuals who have a clustered set of interests, we want to make sure that when we schedule two breakout sessions, we do it in a way that we don't have overlapping sessions that are really important to the same individual. So we tried to use this tool, and what we understand about people's interests and history of what they wanted to do, to try to make sure that we spaced out different times for things of similar interest to similar people, as well as for people who have stood in the back of breakouts before, and I know I've done that too. We've also used it to try to optimize room size, so hopefully we will do our best to make sure that we've appropriately sized the spaces as well. It's really a phenomenal tool, and I know it's helped us a lot this year. In addition to the 325 breakouts, we have a lot of our customers on stage during the main sessions, so you'll see demos, and you'll
hear from partners, and you'll hear stories from so many of our customers: not our point of view on how to use these technologies, but their points of view on how they actually are using these technologies to solve their problems. And you'll hear over and over again from those keynotes that it's not just about the technology; it's about how people are changing how they work to innovate to solve those problems. While we're on the subject of people, I'd like to take a moment to recognize the Red Hat Certified Professional of the Year. This is an award we do every year. I love this award because it truly recognizes an individual for outstanding innovation, for outstanding ideas, for truly standing out in how they're able to help their organization with Red Hat technologies. Red Hat certifications help system administrators, application developers, and IT architects further their careers and help their organizations by being able to advance their skills and knowledge of Red Hat products, and this year's winner truly is a great example of how curiosity can help push the limits of what's possible with technology. Let's hear a little more about this year's winner.
>> When I was studying at the university, I had computer science as one of my subjects, and that's what created the passion from the very beginning. There were quite a few institutions around my university who were offering Red Hat Enterprise Linux as a course and a certification path to become an administrator. The Red Hat Learning Subscription has offered me a lot more than any other trainings I have done so far. It gave me exposure to so many products under Red Hat technologies that I wasn't even aware of. I started to think about better ways these learnings could be put into real-life use cases, and we started off with a discussion with my manager, saying, I have to try this product and I really want to see how it fits in our environment. And that product was Red Hat
Virtualization. We went from deploying RHV, and then OpenStack, and then the OpenShift environment.
>> We wanted to overcome some of the things that we saw as challenges to the speed and rapidity of release of code, etc., so it made perfect sense, and we were able to do it in a really short space of time. So, you know, we truly did use it as an innovation lab.
>> I think an idea is everything; ideas can change the way you see things. An innovation lab was such an idea that popped into my mind one fine day, and it has transformed the way we think as a team. It's given that playpen to pretty much everyone to go and test their things, investigate, evaluate, do whatever they like in a non-critical, non-production environment.
>> I recruited Neha almost ten years ago now. I could see there was a spark, a potential there, and you know, she had a real drive, a real passion, and here we are, nearly ten years later.
>> I'm Neha Sandow. I am a Red Hat Certified Engineer.
>> All right, well, everyone, please welcome Neha to the stage. (applause) Congratulations.
>> Thank you. (applause)
>> Well, welcome to the Red Hat Summit. This is your first Summit?
>> Yes, it is.
>> Fantastic. Well, it's great to have you here. I hope you have a chance to engage and share some of your ideas and enjoy the week.
>> Thank you.
>> Thank you. Congratulations. (applause)
Neha mentioned that she first got interested in open source at university, and it made me think: Red Hat recently started our Red Hat Academy program, which looks to programmatically infuse Red Hat technologies into universities around the world. It's exploded in a way we had no idea; it's grown just incredibly rapidly, which I think shows the interest there really is in open source and working in an open way at university. It's really a phenomenal program. I'm also excited to announce that we're launching our newest open source story this year at Summit. It's called The Science of Collective Discovery, and it looks at what happens when
communities use open hardware to monitor the environment around them, and really how they can make impactful change based on those technologies. The world premiere will be at 5:15 on Wednesday at Moscone West, so please join us for a drink. We'll also have a number of the experts featured in it there, so you can have a conversation with them as well. So with that, let's officially start the show. Please welcome Red Hat President of Products and Technology, Paul Cormier. (music)
>> Wow. Morning. You know, I say it every year, and I'm going to say it again. I know I repeat myself: it's just amazing. We are so proud to be here today, and with you all week, on how far we've come with open source and with the products that we provide at Red Hat. So welcome, and I hope the pride shows through. You know, I told you seven Summits ago on this stage that the future would be open, and here we are, just seven years later (this is the 14th Summit, but just seven years after that), and much has happened. I think you'll see today and this week that the prediction that the world would be open was a pretty safe prediction. But I want to take you back a little bit to see how we started here, and it's not just how Red Hat started here: open source and Linux-based computing is now an industry norm, and I think that's what you'll see here this week. You know, we talked back then, seven years ago, when we made our prediction, about the UNIX era, and how hardware innovation with x86 was really the first step in a new era of open innovation. Companies like Sun, DEC, IBM, and HP really changed the computing industry with their UNIX models; that was really the rise of computing. But I think what we really saw then was that single-company innovation could only scale so far. These companies were very, very innovative, but they coupled hardware innovation with software innovation, and
as one company they could only solve so many problems. And, which complicated things even more, they could only hire so many people into each of their companies. Intel came on the scene back then as the new independent hardware player, and that was really the beginning of the drive for horizontal computing power. This opened up a brand new vehicle for hardware innovation: a new hardware ecosystem was built around this common hardware base. Shortly after that, Stallman and Linus had a vision of an open model, and they created Linux, but it was built around Intel. This was really the beginning of having a software-based platform that could also drive innovation. This was the beginning of the changing of the world here, of system-level innovation: now, having a hardware platform that was ubiquitous and a software platform that was open and ubiquitous really changed system-level innovation, and that continues to thrive today. It was only possible because it was open. This could not have happened in a closed environment. It allowed the best ideas from anywhere, from all over, to come in and win, only because it was the best idea. That's what drove the rate of innovation to the pace you're seeing today, which has never been seen before. We at Red Hat saw the need to bring this innovation to solve real-world problems in the enterprise, and I think that's going to be the theme of the show today: you're going to see us, with our customers and partners, talking about and showing you some of those real-world problems that we are solving with this open innovation. We created RHEL back then for the enterprise. It was successful because it scaled, it was secure, and it was enterprise-ready. It once again changed the industry, but this time through open innovation. This open software platform gave the hardware ecosystem
a software platform to build around. It unleashed the hardware side to compete and thrive. It enabled innovation from the OEMs: new players building cheaper, faster servers, and even new architectures, from Arm to POWER, sprung up. With this change we have seen an incredible amount of hardware innovation over the last 15 years. That same innovation happened on the software side. We saw powerful implementations of bare-metal Linux distributions out in the market; in fact, at one point there were over 300 distributions out in the market. On the foundation of Linux, powerful open-source equivalents were developed in every area of technology: databases, middleware, messaging, containers, anything you could imagine. Innovation just exploded around the Linux platform. Linux at the core also drove virtualization, and both Linux and virtualization led to another area of innovation which you're hearing a lot about now: public cloud innovation. This innovation started to proceed at a rate that we had never seen before; we had never experienced this unprecedented speed of innovation in software, and it was now possible because you didn't need a chip foundry in order to innovate. You just needed great ideas and the open platform that was out there. Customers seeing this innovation in the public cloud sparked their desire to build their own Linux-based cloud platforms, and customers are now bringing that cloud efficiency on-premise, in their own data centers. Public clouds demonstrated so much efficiency that data center architects wanted to take advantage of it on-premise, within their own controlled environments. This really allowed companies to make the most of existing investments, from data centers to hardware, and they also gained many new advantages, from data sovereignty to new flexible, agile approaches. I want to bring Burr and his team up here to take a look at what building out an on-premise
cloud can look like today. Burr, take it away.
>> I am super excited to be with all of you here at Red Hat Summit. I know we have some amazing things to show you throughout the week, but before we dive into this demonstration, I want you to take just a few seconds, just a quick moment, to think about that really important event in your life: the moment you turned on your first computer. Maybe it was a TRS-80, a Sinclair, an Atari. In my specific case, I was sitting in a classroom in Hawaii, and I could see all the way from Diamond Head to Pearl Harbor (just keep that in mind), and I turned on an IBM PC with dual floppies. I remember issuing my first commands, writing my first lines of code, and I was totally hooked. It was like a magical moment, and I've been hooked on computers for the last 30 years. So I want you to hold that image in your mind for just a moment, just a second, while we show you the computers we have here on stage. Let me turn this over to Jay, our worldwide DevOps manager, and he is going to show us his hardware. What do you got, Jay?
>> Thank you, Burr. Good morning, everyone, and welcome to Red Hat Summit. We have so many cool things to show you this week. I am so happy to be here, and you know, my favorite thing about Red Hat Summit is that we get to share all of our stories, much like Burr just did. We also love to talk about the hardware and the technology that we brought with us; in fact, it's become a bit of a competition. So this year we said, you know, let's win this thing, and I think we might have won: we brought a cloud with us. Right now, this is a private cloud. Throughout the course of the week, we're going to turn this into a very, very interesting open hybrid cloud, right before your eyes. Everything you see here will be real and happening right on this thing right behind me. So thanks to our four incredible partners, IBM, Dell, HP, and Supermicro, we've built a very vendor-
heterogeneous cloud here. Extra special thanks to IBM, because they loaned us a POWER9 machine, so now we actually have multiple architectures in this cloud. As you know, one of the greatest benefits of running Red Hat technology is that we run on just about everything, and I can't stress enough how powerful that is, how cost-effective that is, and it just makes my life easier, to be honest. If you're interested, the people that built this actual rack right here are going to be hanging out in the Customer Success Zone this whole week; it's on the second floor of the lobby there, and they'd be glad to show you exactly how they built this thing. So let me show you what we actually have in this rack. Contained in this rack, we have 1,056 physical cores right here. We have five and a half terabytes of RAM, and just in case, we threw 50 terabytes of storage in this thing. So Burr, that's about two million times more powerful than that first machine you booted up. Thanks to APC, we're actually capable of putting all the power needs and cooling right in this rack, so there's your data center right there. You know, it occurred to me last night that I could actually pull the power cord on this thing and kick it up a notch: we could have the world's first mobile, portable hybrid cloud. So I'm going to go ahead and unplug...
>> No, no, no, no, no. Seriously, do not unplug the thing. We got it working now.
>> Well, Burr gets a little nervous, but next year we're rolling this thing around. Okay, okay. So, to recap: multiple vendors, check. Multiple architectures, check. Multiple public clouds plugged right into this thing, check. And everything everywhere is running the same software from Red Hat, so that is a giant check. So Burr and Angus, why don't we get the demos rolling?
>> Awesome. So we have some amazing hardware, amazing computers, on this stage, but now we need to light it up, and we have Angus Thomas, who represents our OpenStack engineering team, and he's going to show us what we can do with this awesome hardware,
Angus.
>> Thank you, Burr. So this was an impressive rack of hardware that Jay has brought up on stage. What I want to talk about today is putting it to work with OpenStack Platform director. We're going to turn it from a lot of potential into a flexible, scalable private cloud. We've been using director for a while now to take care of managing hardware and orchestrating the deployment of OpenStack. What's new is that we're bringing the same capabilities to managing the deployment of OpenShift on-premise. Deploying OpenShift in this way is the best of both worlds: it's bare-metal performance, but with an underlying infrastructure-as-a-service that can take care of deploying new instances, scaling out, and a lot of the things that we expect from a cloud provider. Director is running on a virtual machine on Red Hat Virtualization at the top of the rack, and it's going to bring everything else under control. What you can see on the screen right now is the director UI, and as you see, some of the hardware in the rack is already being managed. At the top level we have information about the number of cores, the amount of RAM, and the disks that each machine has. If we dig in a bit, there's information about MAC addresses and IPs and the management interface, the BIOS, the kernel version; dig a little deeper, and there is information about the hard disks. All of this is important because we want to be able to make sure that we put workloads exactly where we want them. Jay, could you please power on the two new machines at the top of the rack?
>> Sure.
>> All right, thank you. So when those two machines come up on the network, director is going to see them, see that they're new and not already under management, and it's immediately going to go into the hardware inspection that populates this database and gets them ready for use. We also have profiles, as you can see here. Profiles are the way that we match the hardware in a machine to the kind of workload that it's suited to. This is how we make
sure that machines that have all the disks run Ceph, and machines that have all the RAM run our application workloads, for example. There are two ways these can be set. When you're dealing with a rack like this, you could go in and individually tag each machine, but director scales up to data centers, so we have a rules-matching engine which will automatically take the hardware profile of a new machine and make sure it gets tagged in exactly the right way. So we can automatically discover new machines on the network, and we can automatically match them to a profile; that's how we streamline and scale up operations. Now I want to talk about deploying the software. We have a set of validations. We've learned over time about the misconfigurations in the underlying infrastructure which can cause the deployment of a multi-node distributed application like OpenStack or OpenShift to fail. If you have the wrong VLAN tags on a switch port, or DHCP isn't running where it should be, for example, you can get into a situation which is really hard to debug. A lot of our validations actually run before the deployment: they look at what you're intending to deploy, they check that the environment is the way it should be, and they'll preempt problems. And obviously preemption is a lot better than debugging. Something new that you probably have not seen before is director managing multiple deployments of different things side by side. Before we came out on stage, we also deployed OpenStack on this rack, just to keep me honest. Let me jump over to OpenStack very quickly. A lot of our OpenStack customers will be familiar with this UI, and the bare-metal deployment of OpenStack on our rack is actually running a set of virtual machines which is running Gluster; you're going to see that put to work later on during the Summit. Jay's gone to an awful lot of effort to get this hardware up on the stage, so we're going to use it in as many different ways as we can. Okay, let's deploy OpenShift. If I switch over to the
deployment plan view, there are a few steps. The first thing you need to do is make sure we have the hardware. I already talked about how director manages hardware: it's smart enough to make sure that it's not going to attempt to deploy onto machines that are already in use; it's only going to deploy on machines that have the right profile. But I think with the rack that we have here, we've got enough. The next thing is the deployment configuration. This is where you get to customize exactly what's going to be deployed, to make sure that it really matches your environment. If there are external IPs for additional services, you can set them here; whatever it takes to make sure that the deployment is going to work for you. As you can see on the screen, we have a set of options around enabling TLS for encrypting network traffic; if I dig a little deeper, there are options around enabling IPv6 and network isolation, so that different classes of traffic travel over different physical NICs. Okay, then we have roles. Roles are essentially about the software that's going to be put on each machine. Director comes with a set of roles for a lot of the software that Red Hat supports, and you can just use those, or you can modify them a little bit if you need to add a monitoring agent or whatever it might be, or you can create your own custom roles. Director has quite a rich syntax for custom role definition and custom network topologies, whatever it is you need in order to make it work in your environment. So the roles that we have right now are going to give us a working instance of OpenShift. If I go ahead and click through, the validations are all looking green, so right now I can click the button to start the deploy, and you will see things lighting up on the rack. Director is going to use IPMI to reboot the machines, provision them with an RHEL image, put the containers on them, and start up the application stack. Okay, so one last thing: once the deployment is done, you're going to want to keep
director around. Director has a lot of capabilities around what we call day-two operational management: bringing in new hardware, scaling out deployments, dealing with updates, and, critically, doing upgrades as well. So, having said all of that, it is time for me to switch over to an instance of OpenShift deployed by director, running on bare metal on our rack, and I need to hand this over to our developer team so they can show what they can do with it. Thank you.
>> That is so awesome, Angus. So what you've seen now is going from bare metal to the ultimate private cloud, with OpenStack director making OpenShift ready for our developers to build their next-generation applications. Thank you so much, guys. That was totally awesome. I love what you guys showed there. Now I have the honor of introducing a very special guest: one of our earliest OpenShift customers, who understands the necessity of the private cloud inside their organization, and more importantly, they're fundamentally redefining their industry. Please extend a warm welcome to Dietmar Fauser from Amadeus.
>> Well, good morning, everyone. A big thank-you for having Amadeus here, and myself. So, as was just said, I'm from Amadeus. First of all, we are a large IT provider in the travel industry, serving essentially airlines, hotel chains, and distributors like Expedia and others. We indeed started very early with OpenShift, a bit more than three years ago, and we jumped on it when Red Hat teamed with Google to bring Kubernetes into this. So let me quickly share a few figures about Amadeus, to give you a sense of what we are doing and the scale of our operations. One of our key metrics is what we call passenger boardings: that's the number of customers that physically board a plane over the year through our systems. It's roughly 1.6 billion people checking in and taking aircraft on the Amadeus systems, close to 600 million travel agency bookings, and virtually all
airlines are on the system. And one figure I want to stress a little bit is this one trillion availability requests per day. When I read this figure, my mind boggles a little bit: this means, in continuous throughput, more than 10 million hits per second. Of course, these are not traditional database transactions; it's highly cached in memory, and these applications are running over more than 100,000 cores. So it's really big stuff. Today I want to give some concrete feedback on what we are doing, so I have chosen two application products of Amadeus that are currently running in production in different hosting environments, as the theme of this talk is hybrid cloud. I want to give some concrete feedback on how we architect the applications, and of course it stays relatively high-level. Here I have taken one of our applications that is used in the hospitality environment. We have built this for a very large US hotel chain, and it's currently in full swing, being brought into production: some 30 percent of the globe, or 5,000-plus hotels, are on this platform now. Here you can see that we use, as the PaaS, of course, OpenShift; that's the most central piece of our hybrid cloud strategy. On the database side we use Oracle and Couchbase. Couchbase is used for the heavy-duty, fast-access, more key-value-store workloads, but also to replicate data across two data centers. In this case it's running over two US-based data centers, in an east and west coast topology, that are run by Amadeus, fitted with VMware for the virtualization, OpenStack on top of it, and then OpenShift to host and run the applications. On the right-hand side you see the kind of tools, if you want to call them tools, that we use. These are the principal ones; of course, the real picture is much more complex, but in essence we use Terraform to map to the APIs of the underlying infrastructure, as there are obviously differences
when you run on OpenStack or Google Compute Engine or AWS or Azure, so some tweaking is needed. We use Red Hat Ansible a lot; we also use Puppet. So you can see these are really the big pieces of this installation. And if we look at the topology, again at a very high level, these two locations basically map to the data centers of our customers, so they are in close proximity, because the response time and the SLAs of this application are very tight. So that's an example of an application that is architected mostly with high availability in mind: not necessarily full global worldwide scaling (of course it could be scaled), but here the idea is that we can swing from one data center to the other in a matter of minutes. Both take traffic, data is fully synchronized across those data centers, and the switch back and forth is very fast. The second example I have taken is what we call the shopping box. This is when people go to Kayak or Expedia and are getting inspired about where they want to travel to. This is really the piece that shoots most of the transactions into Amadeus, so we architect here more for high scalability. Of course availability is also key, but here scaling and geographical spread are very important. In short, it runs partially on-premise in our Amadeus data center, again on OpenStack, and we deploy it mostly, in the first step, on Google Compute Engine, and currently, as we speak, on AWS, and we are also working together with Red Hat to qualify the whole show on Microsoft Azure. In this application it's the same building blocks. There is a large streaming aspect to it, so we bring Kafka into this, working with Red Hat and another partner to bring Kafka onto OpenShift, because at the end we want to use OpenShift to administrate the whole show, over time also the databases. The topology here, when you look at the physical deployment topology, while it's very classical, we
use the regions and availability zone concepts. This application is spread over three principal continental regions (again, it's a high-level view) with different availability zones, and in each of those availability zones we take a hit of several tens of thousands of transactions. So that was it, really, in very short, just to give you a glimpse of how we implement hybrid clouds. I think that's the way forward. It gives us a lot of freedom, and it allows us to discuss in a much more educated way with our customers, who sometimes already have deals in place with one cloud provider or another; for us it's a lot of value to leave them the choice, basically. So that was a very quick overview of what we are doing. We work together with Red Hat, based on OpenShift essentially here, with more and more OpenStack coming into the picture. I hope you found this interesting. Thanks a lot, and have a nice Summit. (applause)
>> Thank you so much, Dietmar. Great, great solution. We've worked with Dietmar and his team for a long time. Great solution. So I want to take us back a little bit; I want to circle back. I sort of ended talking a little bit about the public cloud, so let's circle back there. Even though some applications need to run in various footprints on-premise, there are still great gains to be had for running certain applications in the public cloud. The public cloud will be as impactful to the industry as the UNIX era of computing was, but by itself it'll have some of the same limitations and challenges that that model had. Today there's tremendous innovation happening in the public cloud. It's being driven by a handful of massive companies, and much like the innovation that Sun, DEC, HP, and others drove in the UNIX era of computing, many customers want to take advantage of the best innovation no matter where it comes from. But as they eventually saw in the UNIX era, they can't afford the best innovation
at the cost of a siloed operating environment. With the open community, we are building a hybrid application platform that can give you access to the best innovation, no matter which vendor or which cloud it comes from: letting public cloud providers innovate in services beyond what customers, or any one provider, can do on their own, such as large-scale machine learning or artificial intelligence built on the data that's unique to that one cloud, but consumed in a common way for the end customer, across all applications, in any environment, on any footprint in their overall IT infrastructure. This is exactly what RHEL brought to our customers in the UNIX era of computing: that consistency across any of those footprints. Obviously, enterprises will have applications for all different uses; some will live on-premise, some in the cloud. Hybrid cloud is the only practical way forward. I think you've been hearing that from us for a long time: it is the only practical way forward, and it'll be as impactful as anything we've ever seen before. I want to bring Burr and his team back to see a hybrid cloud deployment in action. Burr? (music)
>> All right. Earlier you saw what we did with taking bare metal and lighting it up with OpenStack director, and making it OpenShift-ready for developers to build their next-generation applications. Now we want to show you one of those next-generation applications. What we've done is we've taken OpenShift and spread it out and installed it across Azure and Amazon: a true hybrid cloud. With me on stage today is Ted, who's going to walk us through an application, and Brent Midwood, who's our DevOps engineer, who's going to be monitoring on the back side to make sure that we do a good job. So at this point, Ted, what have you got for us?
>> Thank you, Burr, and good morning, everybody. This morning we are running, on the stage in our private cloud, an application that's providing fraud detection
services for financial transactions. Our customer base is rather large, and we occasionally take extended bursts of heavy traffic load, so in order to keep our latency down and keep our customers happy, we've deployed extra service capacity in the public cloud. We have capacity with Microsoft Azure in Texas and with Amazon Web Services in Ohio. We use OpenShift Container Platform in all three locations, because OpenShift makes it easy for us to deploy our containerized services wherever we want to put them. But the question still remains: how do we establish seamless communication across our entire enterprise, and, more importantly, how do we balance the workload across these three locations in such a way that we efficiently use our resources and give our customers the best possible experience? This is where Red Hat AMQ Interconnect comes in. As you can see, we've deployed AMQ Interconnect alongside our fraud detection applications in all three locations, and if I switch to the AMQ console, we'll see the topology of the network that we've created here. The router on stage here has made connections outbound to the public routers in AWS and Azure. These connections are secured using mutual TLS authentication and encryption, and once these connections are established, AMQ figures out the best way, automatically, to route traffic to where it needs to get to. So what we have right now is a distributed, reliable, brokerless message bus that spans our entire enterprise. Now, if you want to learn more about this, make sure that you catch the AMQ breakout tomorrow at 11:45 with Jack Britton and David Ingham. Let's have a look at the message flow. We'll dive in and isolate the fraud detection API that we're interested in, and what we see is that all the traffic is being handled in the private cloud. That's what we expect, because our latencies are low and they're acceptable. But now, if we take a little bit of a burst of increased traffic, we're
going to see that AMQ is going to push a little bit of traffic out to the public cloud. So Azure is picking up some of the load now, to keep the latencies down. Now, when that subsides, Azure finishes up what it's doing and goes back offline. If we take a much bigger load increase, you'll see two things: first of all, Azure is going to take a bigger proportion than it did before, and Amazon Web Services is going to get thrown into the fray as well. Now, AWS is actually doing less work than I expected it to do; I expected a little bit of a bigger slice there, but this is an interesting illustration of what's going on with load balancing. AMQ load balancing is sending requests to the services that have the lowest backlog, in order to keep the latencies as steady as possible. So AWS is probably running slowly for some reason, and that's causing AMQ to push less traffic its way. Now, the other thing you're going to notice, if you look carefully, is that this graph fluctuates slightly, and those fluctuations are caused by all the variances in the network. We have the cloud on stage and we have clouds in various places across the country. There's a lot of equipment, multiple layers of virtualization and networking in between, and we're reacting in real time to the reality on the digital street. So, Burr, what's the story with AWS? I noticed there's a problem right here, right now; we seem to have a little bit of a performance issue. Guys, I noticed that as well, and a little bit ago I actually got an alert from Red Hat Insights letting us know that there might be some potential optimizations we could make to our environment. So let's take a look at Insights. Here's the Red Hat Insights interface. You can see our three OpenShift deployments: we have the setup here on stage in San Francisco, we have our Azure deployment in Texas, and we also have our AWS deployment in Ohio, and Insights is highlighting that that deployment in Ohio may have some issues that need some attention.
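Ted's description of AMQ Interconnect's balanced distribution, each request routed to the consumer with the lowest unsettled backlog, can be sketched as a toy simulation. The endpoint names are illustrative, and this is not the router's actual implementation:

```python
# Toy simulation of least-backlog routing: each request goes to the
# endpoint with the fewest unsettled (in-flight) deliveries, which is
# how the balanced distribution described above keeps latencies steady.
class Endpoint:
    def __init__(self, name):
        self.name = name
        self.backlog = 0  # unsettled deliveries

def route(endpoints):
    """Pick the endpoint with the lowest current backlog and hand it work."""
    target = min(endpoints, key=lambda e: e.backlog)
    target.backlog += 1
    return target.name

on_stage = Endpoint("private-cloud")   # the cluster on stage
azure = Endpoint("azure-texas")        # illustrative names
aws = Endpoint("aws-ohio")

# Light load: the private cloud keeps everything while it keeps up.
first = route([on_stage, azure, aws])

# Under a burst the private cloud falls behind...
on_stage.backlog = 10
# ...and the next request spills to the least-loaded public cloud.
spill = route([on_stage, azure, aws])
```

With ties broken by list order, the first request stays on the private cloud; after the simulated burst, the next one lands on Azure, which mirrors the spillover behavior shown on the graph.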
So, Red Hat Insights collects anonymized data from managed systems across our customer environment, and that gives us visibility into things like vulnerabilities, compliance, configuration assessment, and of course Red Hat subscription consumption. All of this is presented in a SaaS offering, so it's really easy to use, it requires minimal infrastructure up front, and it provides an immediate return on investment. What Insights is showing us here is that we have some potential issues on the configuration side that may need some attention. From this view I actually get a look at all the systems in our inventory, including instances and containers, and you can see here on the left that Insights is highlighting one of those instances as needing some potential attention; it might be a candidate for optimization. This might be related to the issues that you were seeing just a minute ago. Insights uses machine learning and AI techniques to analyze all collected data. We combine data collected from not only this system's configuration but also from other systems across the Red Hat customer base. This allows us to compare ourselves to how we're doing across the entire set of industries, including our own vertical, in this case the financial services industry, and we can compare ourselves to other customers. We also get access to tailored recommendations that let us know what we can do to optimize our systems. So in this particular case, we're actually detecting an issue where we are an outlier: our configuration has been compared to other configurations across the customer base, and in this particular instance this security group is misconfigured. Insights actually gives us the steps we need to remediate the situation, and the really neat thing here is that we actually get access to a custom Ansible playbook. So if we want to automate that type of remediation, we can use this inside of Red Hat Ansible Tower, Red Hat Satellite, or Red Hat CloudForms.
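The outlier comparison described above, a system's configuration measured against the wider customer base, can be sketched with a simple z-score test. This is a crude stand-in for the idea, not Red Hat Insights' actual analytics:

```python
from statistics import mean, stdev

def is_outlier(value, peer_values, threshold=2.0):
    """Crude peer comparison: flag a value more than `threshold`
    standard deviations away from the peer-base mean."""
    mu = mean(peer_values)
    sigma = stdev(peer_values)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Invented example: open ports in comparable security groups across peers.
peers = [3, 4, 3, 5, 4, 3, 4]
wide_open = is_outlier(22, peers)  # a misconfigured, wide-open group
typical = is_outlier(4, peers)     # in line with the peer base
```

The misconfigured group stands far outside the peer distribution and gets flagged; a typical configuration does not.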
It's really powerful. The other thing here is that we can actually apply these recommendations right from within the Red Hat Insights interface. With just a few clicks, I can select all the recommendations that Insights is making, and using that built-in Ansible automation, I can apply those recommendations really quickly across a variety of systems. This type of intelligent automation is really cool; it's really fast and powerful. So, really quickly here, we're going to see the impact of those changes, and we can tell that we're doing a little better than we were a few minutes ago, when compared across the customer base as well as within the financial industry. And if we go back and look at the map, we should see that our AWS deployment in Ohio is in a much better state than it was just a few minutes ago. So I'm wondering, Ted, if this had any effect and might be helping with some of the issues that you were seeing. Let's take a look. Looks like it went green now. Let's see what it looks like over here. Yeah, it doesn't look like the configuration has taken effect quite yet; maybe there's some delay. Awesome, fantastic. Yeah, so now we're load balancing across the three clouds. Fantastic. Ted, I truly love how we can route requests and dynamically load-balance transactions across these three clouds: a truly hybrid, cloud-native application that you guys saw here on stage for the first time. And it's a fully portable application: if you build your applications with OpenShift, you can move them from cloud to cloud to cloud, from the on-stage private cloud all the way out to the public. It's totally awesome. We also have the application being fully managed by Red Hat Insights. I love having that intelligence watching over us and ensuring that we're doing everything correctly. That is fundamentally awesome. Thank you so much for that. Well, we actually have more to show you, but you're going to have to wait a few minutes longer. Right now we'd like to welcome Paul back to the stage, and we have a very
special early Red Hat customer, an Innovation Award winner from 2010, who's been going boldly forward with their open hybrid cloud strategy. Please give a warm welcome to Monty Finkelstein from Citigroup. [Music] Hi, Monty. Hey, Paul, nice to see you. Thank you very much for coming. Thank you for having me. Oh, our pleasure. We wanted to pick your brain a little bit about your experiences and about leading the charge in computing here. So, we're all talking about hybrid cloud: how has the hybrid cloud strategy influenced where you are today in your computing environment? So, when we see the various types of workload that we run on cloud, we see the peaks, we see the valleys, we see the demand on the environment that we have, and we really determined that we have to have a much more elastic, more scalable capability, so we can burst and stretch our environments to multiple cloud providers. These capabilities have now been proven at Citi, and of course we consider what the data risk is, as well as any regulatory requirements. So how do you tackle the complexity of multiple cloud environments? Every cloud provider has its own unique set of capabilities; they have their own APIs, distributions, and value-added services. We wanted to make sure that we could arbitrate between the different cloud providers, and maintain all source code and orchestration capabilities on-prem, to drive those capabilities from within our platforms. This requires controlling the entitlements in a cohesive fashion across our on-prem and off-prem environments, for security, services, automation, and telemetry, as one seamless unit. Can you talk a bit about how you decide when to use your own on-premise infrastructure versus cloud resources? Sure. There are multiple dimensions that we take into account. The first dimension is risk, from low risk to high risk, and really that's about the data classification of the environment we're talking about: whether it's public or internal, which would be considered low, through confidential, PII, restricted, sensitive, and so on and above, which is really what would be considered high risk. The second dimension focuses on demand volatility and response-time sensitivity. This ranges from low response sensitivity and low variability of the workload, to high response sensitivity and high variability of the workload. The first combination that we focused on is low risk with high variability and high sensitivity for response-type workloads. Of course, for any of these workloads, we ensure that we're regulatory compliant, and that we achieve customer benefits within this environment. So how can we give developers greater control of their infrastructure environments and still help operations maintain consistency and compliance? The main driver to use the public cloud is really scale, speed, and increased developer efficiency, as well as reducing cost and risk. This means providing developer workspaces in multiple environments, for our developers to quickly create products for our customers. All this is done, of course, in a DevOps model, while maintaining the source and artifact registries on-prem. This allows our developers to test and select various middleware products, but also ensures that all the compliance activities happen in a centrally controlled repository. Well, we really appreciate you coming by and sharing that with us today, Monty. Thank you so much for coming to the Red Hat Summit. Thanks a lot. Thanks again, Monty. You know, these real-world insights into how our products and technologies are really running businesses today, that's just the most exciting part. So thanks, thanks again, Monty. Now, even with as much progress as you've seen demonstrated here, and as you're going to continue to see all week long, we're far from done.
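Monty's two placement dimensions, data risk and demand volatility with response sensitivity, can be sketched as a tiny decision rule. This is purely illustrative, not Citi's actual policy:

```python
def placement(risk, volatility):
    """Hypothetical sketch of the two-dimensional placement decision
    described above; risk and volatility are each "low" or "high"."""
    if risk == "high":
        return "on-prem"             # confidential/PII/restricted stays private
    if volatility == "high":
        return "public-cloud-burst"  # low risk plus bursty demand: burst out
    return "on-prem"                 # low risk, steady demand: no need to burst

# The combination Citi tackled first: low risk, high variability.
first_target = placement("low", "high")
```

The first combination Citi focused on, low risk with high variability, is exactly the quadrant where bursting to public cloud pays off.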
So I want to take us a little bit into the path forward and where we go. We've talked about this a lot: innovation today is driven by open source development. I don't think there's any question about that, certainly not in this room, nor across the industry as a whole. That's a long way from where we were when we started our first Summit 14 years ago. With over a million open source projects out there, this innovation aggregates into various community platforms, and it finally culminates in commercial, open-source-developed products. These products run many of the mission-critical applications in business today. You've heard just a couple of those today here on stage, but it's everywhere; it's running the world today. But to make customers successful with that innovation, to run their real-world business applications, these open source products have to be able to leverage increasingly complex infrastructure footprints. We must also ensure a common base for the developer, and ultimately the application, no matter which footprint they choose. As you heard Monty say, developers want choice here: no matter which footprint they are ultimately going to run their applications on, they want that flexibility, from the data center to possibly any public cloud out there, regardless of whether that application was built yesterday or has been running the business for the last 10 years on 10-year-old technology. This is the flexibility that developers require today. But different infrastructure may require different pieces of the technical stack in that deployment. One example of this that affects many things is KVM, which provides the foundation for many of the use cases that require virtualization. KVM offers a level of consistency from a technical perspective, but RHEL extends that consistency to add a level of commercial and ecosystem consistency for the application across all those footprints. This is very important in the enterprise. But while RHEL and KVM form the foundation, other technologies are needed to really satisfy the functions on these different footprints. Traditional virtualization has requirements that are satisfied by projects like oVirt and products like RHV. Traditional private cloud implementations have requirements that are satisfied by projects like OpenStack and products like Red Hat OpenStack Platform. And as applications become more container-based, we are seeing many requirements driven natively into containers. The same Linux, in different forms, provides this common base across these four footprints. This level of compatibility is critical to operators, who must better utilize, secure, and deploy the infrastructure they have and are responsible for. Developers, on the other hand, care most about having a platform that creates that consistency for their applications. They care about their services, and the services they need to consume within those applications, and they don't want limitations on where they run. They want services, but they want them anywhere, not necessarily just from Amazon. They want integration between applications no matter where they run. They still want to run their Java EE, now named Jakarta EE, apps, and bring those applications forward into containers and microservices. They need to be able to orchestrate these frameworks, and many more, across all these different footprints in a consistent, secure fashion. This creates natural tension between development and operations. Frankly, customers amplify this tension with organizational boundaries that are a holdover from the UNIX era of computing. It's really the job of our platforms to seamlessly remove these boundaries, and it's the goal of Red Hat to seamlessly get you from the old world to the new world. We're going to show you a really cool demonstration now, of how you can automate this transition. First, we're
going to take a Windows virtual machine from a traditional VMware deployment, and we're going to convert it into a KVM-based virtual machine running in a container, all under the Kubernetes umbrella. This makes virtual machines more accessible to the developer, and it will accelerate the transformation of those virtual machines into cloud-native, container-based form. We will work this capability into the product line over the coming releases, so we can strike the balance of enabling our developers to move in this direction while enabling mission-critical operations to still do their job. So let's bring Burr and his team back up to show you this in action one more time. Thanks. All right. At Red Hat, we recognized that large organizations, large enterprises, have a substantial investment in legacy virtualization technology, and this is holding you back: you have thousands of virtual machines that need to be modernized. So what you're about to see next is something very special. With me here on stage we have James Lebowski, who represents our operations folks and is going to be walking us through a mass migration, and also Itamar Hine, our lead developer of a very special application, who is going to be modernizing, containerizing, and optimizing our application. All right, so let's get started. James? Thanks, Burr. Yeah, so as you can see, I have a typical VMware environment here. I'm in the vSphere client, and I've got a number of virtual machines, a handful of which make up one of my applications, for my development environment in this case. What I want to do is migrate those over to a KVM-based Red Hat virtualization environment. So what I'm going to do is go to CloudForms, our cloud management platform; that's our first step. CloudForms has actually already discovered both my RHV environment and my vSphere environment, and understands the compute, network, and storage there. You'll notice one of the capabilities we built is this new capability called migrations, and underneath here there are two steps. The first thing I need to do is create my infrastructure mappings. This will allow me to map my compute, networking, and storage between vSphere and RHV, so CloudForms understands how those relate. Let's go ahead and create an infrastructure mapping; I'll call it "summit infrastructure mapping," and then I'm going to map my two environments. First the compute, so the clusters here. Next the datastores: those virtual machines happen to live on datastore2 in vSphere, and I'll target them at datastore data2 inside of my RHV environment. And finally my networks: those live on network 100, so I'll map those from vSphere to RHV. Once my infrastructure is mapped, the next step is to create a plan to migrate those virtual machines. I'll continue to the plan wizard here, I'll select the infrastructure mapping I just created, and I'll select migrating my development environment virtual machines to RHV. Then I need to import a CSV file, which contains a list of all the virtual machines that I want to migrate, and that's it. Once I hit create, CloudForms is going to begin, in an automated fashion, shutting down those virtual machines and converting them, taking care of all the minutiae that you'd otherwise have to do manually. It's going to do all of that automatically for me, so I don't have to worry about all those manual interactions, and no longer do I have to go shut them down by hand. You can see the migration's kicked off here; my VMs are migrating, and if I go back to the screen, you can see that we're going to start seeing those shut down. Okay, awesome. But if people want to know more about this, how would they dive deeper into this technology later this week?
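The plan James builds, an infrastructure mapping plus a CSV of VM names, can be sketched with plain data structures. The names below are invented for illustration; this is not the CloudForms API:

```python
import csv
import io

# Invented stand-in for the "summit infrastructure mapping": pair each
# vSphere resource with its RHV target so the two environments relate.
infra_mapping = {
    "cluster":   {"vsphere-dev-cluster": "rhv-dev-cluster"},
    "datastore": {"datastore2": "data2"},          # as mapped on stage
    "network":   {"network-100": "rhv-network-100"},
}

def target_for(kind, source):
    """Look up the RHV target for a vSphere resource."""
    return infra_mapping[kind].get(source)

# The migration plan's CSV is just a list of VM names to migrate.
plan_csv = "name\ndev-web-01\ndev-db-01\ndev-app-01\n"
vms = [row["name"] for row in csv.DictReader(io.StringIO(plan_csv))]
```

With the mapping and the VM list in hand, the automation has everything it needs to shut down, convert, and re-home each machine without manual steps.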
Yeah, it's a great question. We have a workload portability session in the hybrid cloud track on Wednesday, if you want to see a presentation that deep-dives into this topic and some of the methodologies to migrate, and then on Thursday we actually have a hands-on lab, the IT optimization VM migration lab, that you can check out. And as you can see, those are shutting down here. Yeah, we see them powering off right now; that's fantastic. Absolutely. So if I go back, now, that's going to take a while; you've got to convert all the disks and move them over. But what we'll notice is that previously I had already run one migration, of a single application that was a Windows virtual machine. If I browse over to Red Hat Virtualization, I can see on the dashboard here, browsing to virtual machines, that I have migrated that Windows virtual machine, and if I open up a tab, I can now browse to my Windows virtual machine, which is running our Wingtip Toys store application, our sample application here. Now my VM has been moved over from VMware to RHV and is available for Itamar. All right, great, available to our developers. All right, Itamar, what are you going to do for us here? Well, James, it's great that you can save cost by moving from VMware to Red Hat Virtualization, but I want to containerize our application, and with container-native virtualization I can run my virtual machine on OpenShift like any other container, using KubeVirt, a Kubernetes operator, to run and manage virtual machines. Let's look at the OpenShift service catalog. You can see we have a new virtualization section here. We can import KVM or VMware virtual machines, or, if they're already loaded, we can create new instances of them for the developer to work with. We just need to give a name, CPU, and memory, we can set other virtualization parameters, and we create our virtual machine. Now let's see how this looks in the OpenShift console. The cool thing about KVM is that virtual machines are just Linux processes, so they can act and
behave like other OpenShift applications. We've built in more than a decade of virtualization experience, with KVM, Red Hat Virtualization, and OpenStack, and can now benefit from Kubernetes and OpenShift to manage and orchestrate our virtual machines. Since we know this container is actually a virtual machine, we can do virtual-machine things with it, like shutdown, reboot, or opening a remote desktop session to it. But we can also see that this is just a container like any other container in OpenShift, and even though the web application is running inside a Windows virtual machine, the developer can still use OpenShift mechanisms like services and routes. Let's browse our web application using the OpenShift service. It's the same Wingtip Toys application, but this time the virtual machine is running on OpenShift. But we're not done; we want to containerize our application. Since it's a Windows virtual machine, we can open a remote desktop session to it. We see we have here Visual Studio and an ASP.NET application. Let's start containerizing by moving the Microsoft SQL Server database from running inside the Windows virtual machine to running on Red Hat Enterprise Linux as an OpenShift container. We'll go back to the OpenShift service catalog; this time we'll go to the database section, and just as easily we'll create a SQL Server container. We just need to accept the EULA, provide a password, and choose the edition we want, and we create a database. And again, we can see the SQL Server is just another container running on OpenShift. Now let's find the connection details for our database. To keep this simple, we'll take the IP address of our database service, go back to the web application in Visual Studio, update the IP address in the connection string, publish our application, and go back to browse it through OpenShift.
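The one application change in the demo above is repointing the connection string at the new SQL Server service IP. A minimal sketch of that edit, using an invented ADO.NET-style connection string:

```python
import re

def repoint(conn_str, new_host):
    """Swap the Server= host in an ADO.NET-style connection string,
    the single edit made in Visual Studio in the demo above."""
    return re.sub(r"(Server=)[^;]+", r"\g<1>" + new_host, conn_str)

# Invented values: the old in-VM database vs. the new service IP.
old = "Server=192.168.10.5;Database=WingtipToys;User Id=sa;"
new = repoint(old, "172.30.42.17")
```

Everything else about the application stays the same; only the host in the connection string moves from the Windows VM to the containerized database service.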
Fortunately for us, the user experience team heard we're modernizing our application, so they pitched in and pushed new icons to use with our containerized database, to also modernize the look and feel. It's still the same Wingtip Toys application, running in a virtual machine on OpenShift, but it's now using a containerized database. To recap: we saw that we can run virtual machines natively on OpenShift, like any other container-based application, and modernize and mesh them together. We containerized the database, but we can use the same approach to containerize any part of our application. So, some items here deserve repeating. One thing you saw is Red Hat Enterprise Linux running SQL Server in a container on OpenShift, and you also saw a Windows VM, where the .NET native application is also running inside of OpenShift. So tell us what's special about that; that seems pretty crazy, what you did there. Exactly, Burr. If we take a look under the hood, we can use Kubernetes commands to see the list of our containers, in this case the SQL Server and the virtual machine containers. But since KubeVirt is a Kubernetes operator, we can actually use Kubernetes commands like kubectl to list our virtual machines and manage them like any other entity in Kubernetes. I love that. So there's your Kubernetes object: we can see the kind says VirtualMachine. That is totally awesome. Now, people here are going to be very excited about what they just saw. Where can they get more information, and when will this be coming? Well, this will be available as part of Red Hat Cloud Suite in tech preview later this year, but we are looking for early adopters now, so give us a call. Also, come check out our deep-dive session introducing container-native virtualization, Thursday at 2:00 p.m. Awesome, that is so incredible. So we went from the old to the new, from the closed to the open, the Red Hat way. You're going to be seeing more from our demonstration team; that's coming Thursday at 8 a.m.
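The reason kubectl can list and manage these virtual machines is that KubeVirt exposes them as typed Kubernetes resources. A minimal, abbreviated sketch of such an object (field values are illustrative, not a complete manifest):

```python
# Abbreviated sketch of a KubeVirt VirtualMachine object, showing why
# `kubectl` treats a VM like any other typed Kubernetes resource: it
# has an apiVersion, a kind, metadata, and a spec, just like a Pod.
vm = {
    "apiVersion": "kubevirt.io/v1alpha1",  # illustrative API version
    "kind": "VirtualMachine",
    "metadata": {"name": "wingtip-windows-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {"domain": {"resources": {"requests": {"memory": "4Gi"}}}}
        },
    },
}

def kind_of(obj):
    """Stand-in for the KIND column of `kubectl get`."""
    return obj["kind"]
```

Because the object carries its own kind, the generic Kubernetes machinery, listing, watching, deleting, can operate on virtual machines with no special casing, which is exactly what the demo showed.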
Do not be late. If you liked what you saw today, you're going to see a lot more of that going forward; we've got some really special things in store for you. So at this point, thank you so much; you guys are awesome. Now we have one more special guest, a very early adopter of Red Hat Enterprise Linux. We've had over a 12-year partnership and relationship with this organization; they've been a steadfast Linux and middleware customer for many, many years. Please extend a warm welcome to Raj China from the Royal Bank of Canada. Thank you. Thank you, it's great to be here. RBC is a large, global, full-service bank. We're the largest bank in Canada, top 10 globally, operate in 30 countries, and run five key business segments: personal and commercial banking, investor and treasury services, capital markets, wealth management, and insurance. But honestly, unless you're in the banking segment, those five business segments may not mean a lot to you. What you might appreciate is the fact that we've been in business for over 150 years. We started our digital transformation journey about four years ago, and we are focused on new and innovative technologies that will help deliver the capabilities and lifestyle our clients are looking for. We have a very simple vision, and we often refer to it as the digitally enabled bank of the future. But as you can appreciate, transforming a 150-year-old bank is not easy; it certainly does not happen overnight. To that end, we had a clear, unwavering vision, a very strong innovation agenda, and, most importantly, a focus on flawless execution. Today in banking, business strategy and IT strategy are one and the same; they are not two separate things. We believe that in order to be the number one bank, we have to have the number one technology. There is no question that most of today's innovation happens in the open source community. RBC relies on Red Hat as a key partner to help us consume these open source innovations in a manner that meets our enterprise needs. RBC was an early adopter of Linux; we operate one of the largest footprints of RHEL in Canada, and the same on the middleware side. We had tremendous success in driving cost out of infrastructure by partnering with Red Hat, while at the same time delivering a world-class hosting service to our business. Over our 12-year partnership, Red Hat has proven that they have mastered the art of working closely with the upstream open source community, understanding the needs of an enterprise like us, and delivering these open source innovations in a manner that we can consume and build upon. We are working with Red Hat to help increase our agility and better leverage public and private cloud offerings. We adopted virtualization, Ansible, and containers, and are excited about continuing our partnership with Red Hat on this journey. Throughout this journey, we simply cannot replace everything we've had from the past. We have to bring forward these investments of the past and improve upon them with new and emerging technologies. It is about utilizing emerging technologies while focusing on the business outcome, and the business outcome for us is serving our clients and delivering the information they are looking for, whenever they need it, in whatever form factor they're looking for. But technology improvements alone are not sufficient for a digital transformation; creating the right culture of change and adopting new methodologies is key. We introduced agile and DevOps, which has boosted the number of agile projects at RBC and increased the frequency at which we release to our mobile app. As a matter of fact, these methodologies have enabled us to deliver apps over 20x faster than before. The other point around culture that I wanted to mention is that we wanted to build an engineering culture: one which rewards curiosity, trying new things, investing in new technologies, and being a leader, not necessarily a follower. Red Hat has been a critical partner in our journey to date, as we adopt elements of open source culture into our engineering culture. What you've seen today about Red Hat's focus on new technology innovations, while never losing sight of helping you bring forward the investments you've already made in the past, is something that makes Red Hat unique. We are excited to see Red Hat's investment and leadership in open source technologies, to help bring the potential of these amazing things together. Thank you. That's great. You know, going from the old world to the new with automation: the things you've seen demonstrated today are more sophisticated than anything one company could ever have done on its own, certainly not using a proprietary development model. Because of this, it's really easy to see why open source has become the center of gravity for enterprise computing today. With all the progress open source has made, we're constantly looking for new ways of accelerating that into our products, so we can take it into the enterprise with customers like the ones you've met today. Now, we recently made an addition to the Red Hat family: we brought CoreOS into the Red Hat family, and adding CoreOS has been our latest move to accelerate that innovation into our products. This will help drive the adoption of OpenShift Container Platform even deeper into the enterprise, just as we did with the core Linux platform in 2002. Today we're announcing some exciting new technology directions. First, we'll integrate the benefits of automated operations, so, for example, you'll see dramatic improvements in the automated intelligence about the state of your clusters in OpenShift with the CoreOS additions. Also, as part of OpenShift, we'll include a new variant of RHEL called Red Hat CoreOS, maintaining the consistency of RHEL for the operations side of the house while
allowing for consumption of over-the-air updates, from the kernel to Kubernetes. Later today you'll hear how we are extending automated operations beyond customers, and even out to partners, all of this starting with the next release of OpenShift in July. Now, all of this, of course, will continue in an upstream, open source innovation model that includes continuing Container Linux for the community users today, while also evolving the commercial products to bring that innovation out to the enterprise. This combination is really defining the platform of the future. Everything we've done for the last 16 years, since we first brought RHEL to the commercial market, has been to get us to just this point. Hybrid cloud computing is now being deployed multiple times in enterprises every single day, all powered by the open source model, and powered by the open source model we will continue to redefine the software industry forever. In 2002, with all of you, we made Linux the choice for enterprise computing, and this changed the innovation model forever. I started the session today talking about our prediction of seven years ago, that the future would be open, and we've all seen so much happen in those seven years. We at Red Hat have celebrated our 25th anniversary, including 16 years of RHEL in the enterprise. It's now 2018; open hybrid cloud is not only a reality, but it is the driving model in enterprise computing today, and this hybrid cloud world would not even be possible without Linux as a platform and the open source development model built around it. And while we may think we have accomplished a lot in that time, and we may think we have changed the world a lot, and we have, I'm telling you: the best is yet to come. Now that Linux and open source software are firmly driving innovation in the enterprise, what we've accomplished up till now has just set the stage for us, together, to change the world once again. And just as we did with RHEL more than 15 years ago, with our partners we will make hybrid cloud the default in the enterprise, and I will take that bet every single day. Have a great show, and have fun watching the future of computing unfold right in front of your eyes. See you later. [Applause] [Music]
SUMMARY :
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
James Lebowski | PERSON | 0.99+ |
Brent Midwood | PERSON | 0.99+ |
Ohio | LOCATION | 0.99+ |
Monty Finkelstein | PERSON | 0.99+ |
Ted | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
2002 | DATE | 0.99+ |
Canada | LOCATION | 0.99+ |
five and a half terabytes | QUANTITY | 0.99+ |
Marty | PERSON | 0.99+ |
Itamar Hine | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
David Ingham | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
RBC | ORGANIZATION | 0.99+ |
two machines | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
Jay | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Hawaii | LOCATION | 0.99+ |
50 terabytes | QUANTITY | 0.99+ |
Byrne | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
HP | ORGANIZATION | 0.99+ |
second floor | QUANTITY | 0.99+ |
Red Hat Enterprise Linux | TITLE | 0.99+ |
Asia | LOCATION | 0.99+ |
Raj China | PERSON | 0.99+ |
Dini | PERSON | 0.99+ |
Pearl Harbor | LOCATION | 0.99+ |
Thursday | DATE | 0.99+ |
Jack Britton | PERSON | 0.99+ |
8,000 | QUANTITY | 0.99+ |
Java EE | TITLE | 0.99+ |
Wednesday | DATE | 0.99+ |
Angus | PERSON | 0.99+ |
James | PERSON | 0.99+ |
Linux | TITLE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Joe | PERSON | 0.99+ |
today | DATE | 0.99+ |
two applications | QUANTITY | 0.99+ |
two new machines | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Burr | PERSON | 0.99+ |
Windows | TITLE | 0.99+ |
2018 | DATE | 0.99+ |
Citigroup | ORGANIZATION | 0.99+ |
2010 | DATE | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
each machine | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Visual Studio | TITLE | 0.99+ |
July | DATE | 0.99+ |
Red Hat | TITLE | 0.99+ |
Paul Cormier | PERSON | 0.99+ |
Diamond Head | LOCATION | 0.99+ |
first step | QUANTITY | 0.99+ |
Neha Sandow | PERSON | 0.99+ |
two steps | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
UNIX | TITLE | 0.99+ |
second dimension | QUANTITY | 0.99+ |
seven years later | DATE | 0.99+ |
seven years ago | DATE | 0.99+ |
this week | DATE | 0.99+ |
36 keynote speakers | QUANTITY | 0.99+ |
first level | QUANTITY | 0.99+ |
OpenShift | TITLE | 0.99+ |
first step | QUANTITY | 0.99+ |
16 years | QUANTITY | 0.99+ |
30 countries | QUANTITY | 0.99+ |
vSphere | TITLE | 0.99+ |
Colin Gallagher, Dell EMC & Josh Holst, Hills Bank & Trust | VMworld 2017
>> Announcer: Live from Las Vegas, it's The Cube, covering VMworld 2017. Brought to you by VMware and its ecosystem partners. >> Welcome back to Las Vegas, everybody. This is VMworld 2017, and this is The Cube, the leader in live tech coverage. My name is Dave Vellante. I'm with my co-host, Peter Burr. Colin Gallagher is back. He's the senior director of hyper-converged infrastructure marketing at Dell EMC, and he's joined by Josh Holst, who's the vice president of information services at Hills Bank and Trust. Gentlemen, welcome to The Cube. Good to see you. >> Thanks for having me back. >> So Colin, give us the update from when we last talked. What's happening at the show, a bunch of parties last night. How's the vibe? >> Colin: Huh, were there? >> Responses from customers to your announcements, give us the update. >> Nah, I couldn't go to any parties because I knew I had to be with you guys today. Had to keep my voice. Shame. >> Dave: I went, I just didn't talk. >> Smart man. No, I mean, I've been talking to a lot of customers, talking to customers about what they think of the show, and what the messages are and how they're resonating with them. I think so far, you know, most of the keynotes and topics have been really on point with what customers' concerns are. Also been talking to a lot of people about hyper-converged, because that's what I do for a living. You know, and I brought Josh along to talk about his experiences with hyper-converged. But I've been having a really great time at the show, hearing what people are concerned about, and hearing how a lot of what we're delivering at the show is really resonating with them. >> So Josh, tell us about Hills Bank and Trust. What are they all about, what's your role? >> Sure. Hills Bank and Trust was founded in 1904. We're still headquartered in Hills, Iowa, if anybody's familiar with that. We're a full-service bank. We provide all the services we can to our customers.
And we primarily serve them out of eastern Iowa, but we have customers throughout the U.S. as well. >> And your role? >> I'm the VP of information systems, so I oversee IT infrastructure. >> Okay. So maybe paint a picture, well, let me start here. What do you think about the business challenges and the drivers of your business, and how they ripple through to IT? What are those drivers and how are you responding? >> Yeah, what we're seeing a lot is a big shift within the financial services world, with the FinTechs, the brick-and-mortarless banking, robo-advisories, digital currencies, and just an increased demand from what our customers want. So what we're trying to do from an IT infrastructure standpoint is build that solid foundation, where we can quickly adapt and move where our industry's taking us. >> Yeah, so things like blockchain and crypto, and you guys launching your own currency any time soon? >> Josh: Nope. We are monitoring it, but nothing like that. >> So how do those, I mean somebody said to me one time, it was a banking executive, you know, we think about, we know our customers need banking, but do they need banks? I was like wow, that's a pretty radical statement. And everybody talks about digital transformation. How does that affect your decisions in IT? Is it requiring you to speed things up, change your skill profile, maybe paint a picture there. >> Yeah, what we're seeing from the digital space within banking is that we definitely have to speed things up. We need to be more nimble and quicker within the IT infrastructure side, and be able to, again, address those customer demands and needs as they arise. And plus we've also got increased government regulations and compliance to deal with, so staying on top of that, and then cybersecurity is huge within the banking field. >> So maybe paint a picture of your infrastructure for us if you could. >> Sure.
You know, prior to VxRail, we were a traditional IT stack: server, storage, dedicated networking specific for that. As we were going through a refresh review, hyper-converged came out and it just really made a lot of sense. The simplified infrastructure allows us to run our business and be able to operate in the way we need to. >> So can you talk a little bit more about that? Maybe the before and the after. What did things look like before in terms of maybe the complexity, and how many of these and those, or whatever detail you're comfortable with. >> Josh: Sure. >> And what happened afterwards? >> Yeah, before the VxRail platform, I mean, we just had racks of servers and storage. We co-located our data center facilities, so that was becoming a pretty hefty expense as we continued to grow within that traditional environment. By moving to the VxRail platform, we've been able to reduce rack space. I think at my last calculation, we went from about 34 to 40 U of rack space down to four, and we're running the exact same workload at a higher performance. >> How hard was it to get the business to buy into what you wanted to do? >> It was a lengthy process to kind of go through the review, the discussions, the expense associated with it. But I think being able to sell the concept of a simpler IT infrastructure, meaning that IT can provide quicker services and not always be the in-the-weeds, break-fix type group. We want to be able to provide more services back to our business. >> So you went to somebody, CFO, business, whoever, to ask for money, because you had a new project. But you would have had to do that anyway, correct? >> Josh: Yes, yes. >> Okay, so... >> Was it easier? >> Was it easier with the business case or were you nervous about that, because you were sticking your neck out? >> No, I think it was easier from the business line.
That executive team does trust kind of my judgment with it, so what I brought forward was well-vetted, definitely had our partners involved, the relationship we have with Dell EMC, and they just really were there the entire step of the way. >> And what was the business impact? Or the IT impact, from your standpoint? >> Well, the IT impact is we are performing at a faster pace right now. You know, we're getting things done quicker within that environment. Our data protection has gotten a lot better with the addition of data domain, and the data protection software. >> Peter: Is that important in banking? >> (laughs) You want to make sure that people check your data, right? >> If it's my bank, yeah. >> So it's very important to how we operate and how we do things. >> So one of the things we've heard from our other CIO clients who like the idea of hyper-converged or converged, is that, yeah, I can see how the technology can be converged, but how do I converge the people? That it's not easy for them that they launch little range wars inside. Who's going to win? How did that play out at Hills Bank and Trust? >> You know, it wasn't that big of a shift within our environment. We're a very small IT team. I've got a systems group, a networking group, and a security group, so transforming or doing things differently within that IT space with the help of VxRail just wasn't a large impact. The knowledge transfer and the ramp-up time to get VxRail up and running was very minimal. >> You still have a systems group, a network group and a security group? >> At this point, we're still kind of evaluating that, and what's the right approach, right structure for IT within the bank? But at this point we're still operating within that. >> Did the move to VxRail affect in any way your allocation of labor? Whether it's FTE's, or how they spent their time? 
>> We're spending a little less time actually managing that infrastructure, and more focusing in on our critical line of business applications. And that's kind of been my whole goal with this, is to be able to introduce an infrastructure set that allows IT to become more of a service provider, and not just an operational group that fixes servers and storage. >> So you're saying a little less? >> A little less. >> It wasn't a dramatic change? >> We're still transforming though, so we still have this traditional IT structure within our group, so I do expect as we start to transform IT more, we'll get there, but I had to start with that hardware layer first. >> What do you think is achievable and what do you want to do in terms of freeing up resource, and what do you want to do with that resource? >> Again, I just want to be able to provide those services back to the bank. We have a lot of applications owned within the line of businesses. I'd like to be able to free up resources on my team to bring those back into IT. Again, more for the control and the structure around it, change management, compliance, making sure we're patching systems appropriately, things along those lines. >> And any desire to get more of your weekends back, or spend more time with your family, or maybe golf a little bit more? >> Exactly. Golf is always good. You know, we've actually seen a reduction in the amount of time we do have to spend managing these platforms, or at least the hardware standpoint, firmware upgrades, and doing the VxRail platform upgrades have gone really well with this, compared to upgrading our server firmware, making sure it matches the storage firmware, and then we've got to appropriately match the storage side or the networking side of it. >> And the backup comment. Easier to back up, more integrated? >> It's definitely more integrated and a lot easier. 
We've seen tremendous improvements in backup performance by implementing data domain with the data protection software, and it's just really simplified it, so backup is just a service that runs. It's not something we really manage anymore. >> Are you guys getting excited about being able to target their talents and attentions to some other problems that might serve the business? >> Exactly. You know, one of the themes I've picked up here at VMworld has been the digital workspace transformation. That's huge within our realm. We're very traditional banking, but there is a lot of demand internally and from our customers to be more mobile and provide more services in a channel they prefer. >> We're out of time, but two quick questions. Why Dell EMC? Why that choice? >> You know, we had an existing relationship with EMC pre-merger, and it was a solid relationship. They'd been there the entire way during the merger, every question was answered. It wasn't anything that was, oh, let me go check on this. They had everything down. We felt very comfortable with it. And again, it's the entire ecosystem within our data center. >> So trust, really. >> Josh: Absolutely. >> And then if you had to do it over again, anything you'd do differently, any advice you'd give your fellow peers? >> You know, I don't think so. Again, it's just the entire relationship, the process we went through was very well done. The engagement we had from the management team with Dell EMC was just spot on. >> Why do you think that was, sorry, third question. Why do you think that was so successful, then? What did you do up front that led to that success? >> You know, it was just a lot of relationship-building. In Iowa, we're all about building relationships and trust. We do that with our customers at the bank as well. We want to build long-lasting, trusting relationships, and Dell EMC does that exact same thing. >> All right, gents. Thanks very much for coming back to The Cube. >> Josh: Thanks, guys. 
Good to be here. >> Thanks, Josh, take care. >> Thank you. >> Thank you. >> All right, you're welcome. Keep it right there, buddy. We'll be right back with our next guest at The Cube. We're live from Vmworld 2017. Be right back.
SUMMARY :
Brought to you by VMware and its ecosystem partners. Good to see you. What's happening at the show, Responses from customers to your announcements, because I knew I had to be with you guys today. and hearing how a lot of what we're delivering at the show What are they all about, what's your role? We provide all the services we can to our customers. I'm a VP of information systems, and how they ripple through to IT? and just an increased demand of what our customers want. We are monitoring it, but nothing like that. So how do those, I mean somebody said to me one time, banking is that we definitely have to speed things up. for us if you could. You know, prior to VxRail, we were traditional IT stack, and how many of these and those, as we continued to grow within that type of and not always be the in the weeds, to ask for money, because you had a new project. the relationship we have with Dell EMC, and the data protection software. and how we do things. So one of the things we've heard to get VxRail up and running was very minimal. and what's the right approach, right structure that allows IT to become more of a service provider, so we still have this traditional IT structure I'd like to be able to free up resources in the amount of time we do have to spend And the backup comment. and it's just really simplified it, and from our customers to be more mobile Why that choice? And again, it's the entire ecosystem the process we went through was very well done. Why do you think that was, sorry, third question. We do that with our customers at the bank as well. Thanks very much for coming back to The Cube. Good to be here. We'll be right back with our next guest at The Cube.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Josh | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Josh Holst | PERSON | 0.99+ |
Peter Burr | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Peter | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Iowa | LOCATION | 0.99+ |
Colin Gallagher | PERSON | 0.99+ |
Hills Bank | ORGANIZATION | 0.99+ |
Colin | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
FTE | ORGANIZATION | 0.99+ |
U.S. | LOCATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Hills, Iowa | LOCATION | 0.99+ |
Hills Bank and Trust | ORGANIZATION | 0.99+ |
third question | QUANTITY | 0.99+ |
1904 | DATE | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
two quick questions | QUANTITY | 0.99+ |
Trust | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
VMworld 2017 | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
Trust Gentlemen | ORGANIZATION | 0.98+ |
four | QUANTITY | 0.98+ |
40 U | QUANTITY | 0.96+ |
last night | DATE | 0.96+ |
VxRail | TITLE | 0.95+ |
Hills Bank & Trust | ORGANIZATION | 0.95+ |
The Cube | ORGANIZATION | 0.93+ |
eastern Iowa | LOCATION | 0.92+ |
about 34 | QUANTITY | 0.9+ |
one time | QUANTITY | 0.83+ |
first | QUANTITY | 0.71+ |
Vmworld 2017 | EVENT | 0.7+ |
The Cube | COMMERCIAL_ITEM | 0.42+ |
Jay Baer | Oracle Modern Customer Experience
>> Narrator: Live from Las Vegas, it's theCUBE, covering Oracle Modern Customer Experience 2017. Brought to you by Oracle. >> Okay, welcome back here. We're here live in Las Vegas. This is SiliconANGLE Media's theCUBE. It's our flagship program. We go out to the events and extract the signal from the noise, talk to the influencers, the experts, thought leaders, CEOs, entrepreneurs, anyone we can that has data we can share with you. I'm John Furrier, Peter Burr is my co-host for the two days here. Our next guest is Jay Baer from Convince and Convert, a CUBE alumni, great guy, super influential, knows his marketing stuff, the perfect guest to summarize and kind of package up what the hell Modern CX means here at the Oracle show. Welcome back, good to see you. >> Jay: Good to see you guys, welcome. >> So you were hosting the CMO Summit that was going on in parallel; they had the Marquise Awards, which is their awards dinner. >> 11th annual Marquise Awards, it's like a thing. >> It's amazing, it looked like the Golden Globes. >> It was beautiful this year, it was like, legit. >> Peter: Is that the one with the O on the top? >> And they delivered an award with a drone. It was a great night. >> Awesome stuff. So give us the package, what's going on, tease out the story here. >> Yeah, I think the story is two-fold. One, Oracle's got an interesting take on the marketing software space, because they really are trying to connect it between the overall customer service experience initiative, and then marketing as a piece of that. This event in particular, the Modern Customer Experience event, has tracks, almost full conferences, for marketing, for customer service, for sales and for commerce. So all four of those are the verticals underneath this umbrella, and that's a really unusual conference setup, but I think it reflects where Oracle's head is at from a thought leadership standpoint.
That, like, look, maybe we're going to get to a point where marketing and customer service really are kind of the same. Maybe we're going to get to the point where sales and marketing really are kind of the same. We're not there yet, by any stretch of the imagination. But I think we all feel that convergence coming. And in my world, the marketing side, CMO's are starting to get more and more responsibility inside organizations, and so if that happens, maybe we do need to start to align the software as well. It's an interesting take on the market, and I think it's sort of prescient for where we're going to head. >> It's interesting you mention all those different silos, or different departments or different functions. In a digital end-to-end fabric, experiences are all about the customer. It's one person, and they're going to have different experiences at any given time on that life cycle, or product spectrum or solution spectrum. So the CMO has to take responsibility for that. >> Well, I feel like somebody has to be responsible for it. Mark Hurd said this in one of his remarks over the course of the show, the CEO of Oracle said look, there is no data department, everybody has to be responsible for data, but somebody has to figure out what the ins and the outs are, and maybe that's the CMO, maybe it's the CXO, I don't think we've fully baked that cake yet. But we're going to have to get to the point where the single record of truth about the customer and their customer journey has to exist, and somebody's got to figure out how to wire all those together. We're gettin' there. >> It's so funny, I was joking, not here on theCUBE, but in the hallways, about the United Airlines snafu, and I'm like, to me, with kind of a developer mindset, software should have solved that problem. They never should have been overbooked to begin with. So if you think about just these things, the reality of a consumer at any given time is based upon their situation.
I need customer support, I need this, I need that. So everyone's got to be customer-ready with data. >> Talk about relevance, relevancy is the killer app, that's it, right. Relevancy is created by technology, and with people, people who actually know how to put that technology into practice in a way that the customers actually care about. So, one of the things that Mark said, he said look, here's the issue, it's not about data, nor is it about clout, it's not about any of that. It's about taking that data and creating understanding out of it. But he said a really interesting thing, he said what we have to do is push those understandings out to the front lines, where somebody on the front lines can do something with it that actually benefits the customer. I think that's a really smart point, because so often right now we're talking about, oh, we've got these data stores, and we've got DMP's, and we've got all these things. That's great, but until that gets manifested at the front lines, who cares, you've just got a big pile of numbers. >> We had Katrina on from the commerce side. It's funny, she was making a retail comment: look, they don't care about the tech, they don't care about blockchain and all the speeds and feeds, they have to do a transaction in the language of the consumer. And the language of the customer is not technology. >> No, they don't care, solve my problem, right. Just solve my problem, and I don't care how you solve it, what sort of magic you have behind the scenes. If I want a sweater, I want this sweater, and I want it right now. >> OK, Jay, share with the audience watching right now, and with us, the hallway conversations you've had. That's always the best, because you had a chance, I'll see ya on the big stage doing your hosting thing, but also you get approached a lot, people bend your ear a lot. What's happening?
>> You know, what's been an interesting theme this week is we've made such great advances on the technology side, and I think we're starting to bump up against, okay, well now we've got to make some organizational changes for that technology to actually flourish. Had a lot of conversations this week with influencers, with CMO's, with attendees about, I really want to do this, I really want to sort of bring sales and marketing together, or commerce and sales, et cetera. But our org chart doesn't support that. The way our company thinks, the way our people are aligned, does not support this convergence. So I think we're at an inflection point where we're going to have to, like, break apart some silos, and not data silos, but operational: what is your job, who manages you, and what is your bonus based on? There are a lot of legacy structures, especially at the enterprise, that do not really facilitate... >> John: Agile. >> Cross-departmental circumstances that we're looking for. So a lot of people are like, oh wow, we're going to have to do some robust organizational change, and that ain't easy. Somebody's going to have to drive that. Your marketing practitioners, which is my world, they can't drive that. That's got to come from up here somewhere. >> And also people have got to be ready for the change. No one likes change. But we were talking about this yesterday, taking the Agile process from development and applying it to marketing, really smart. >> Oh, all the time. So many marketing teams now are using Agile and daily Scrums and stand-ups and all those kinds of things, as opposed to Waterfall, which everybody's used forever. I think it's fantastic. >> Yeah, and that's something that we're seeing, and Roland Smart had a point, he had a book, Peter and I got signed copies, but this is interesting: with Agile, to your point, you just can't read the book, you've got to commit to it, because Agile has organizational impact.
One of the things at the CMO Summit, we had 125, 150 CMO's from all around the world, and one of the things we talked about in that session yesterday was, jeeze, we need to start taking people, or hiring people, out of the software development world, people who have Agile experience, and put them as PM's on a marketing team. Which is going to put that group of people who have the Agile background in even greater demand. Because they won't just be doing tech roles for project management, but also marketing project management, and sort of teaching everybody how Agile works. I think it's really interesting. >> But they've been doing that for a while. I mean the Agile, Agile started in software development but moved broader than that when it went to the web. >> No question, but a lot of these CMO's do not have those types of skills on their team today. They're still using Waterfall. >> Or they don't recognize that they have the skills. Because most of them will have responsibility for website, website development, so it's that they don't, again, it goes back to... >> Web versus marketing. >> Yeah, they probably have it somewhere, they just don't appreciate it and elevate it. >> It's silo'd within the marketing team. >> It's silo'd within the marketing team. So there's going to be, these are the consequences of changes. We'll see the degree to which it really requires a whole bunch of organizational stuff. But at the end of the day, you're right, it's a very, very important thing. What are some of the other things you see, as long as we're talking about it, other than just organizational? >> Actual other sorts of baseline skills. It wasn't that long ago that your social media teams and content marketing teams, it was manifestly a written job, you made things that were rooted in copy.
Now, we talked a lot about, you have to have, like, a full video team on your marketing org chart, because the coin of the realm now is video content, and while companies are getting there, it's still a struggle for a lot of them. Should we have our agency do this, should we get somebody else to do it, they're like, now I've got to have all these people, I've got to have video editors and a camera crew. >> It's expensive. >> Of course it is, yeah. Not everybody can be theCUBE. >> Well, they're tryin'. No, but I think video's been coming down to the camera level, you see Facebook with VR and AR, certainly the glam and the sex appeal to that. Then you've got Docker containers and software development apps, so I call that the app culture; you've got the glam, apps, and then you've got cloud. So those things are going on, so are the marketing departments looking to fully integrate agency-like stuff in-house, or is the agency picking that up? What's your take on the landscape of video and some of these services? >> It depends on how real-time they're thinking about video. We're starting to see Facebook Live used in a public relations circumstance. You saw when Crayola announced the death of the blue crayon, or whatever it was, a few weeks ago. They did a press release on that, but the real impetus for that announcement was a Facebook Live video. Which puts Facebook and live video as your new PR apparatus. That's really interesting. So in those circumstances the question is, do we do that with the agency, or is it easier to do it in-house? I think ultimately my advice would be you have to have it in both places. You have to be able to do at least some things in-house, you have to be able to turn it quickly, and then maybe for things where you have more of a lead time, you bring in your agency. >> One of the things we're seeing, and just commenting while we're on this great subject, it's our business as well, is content is hard.
Good, original content is what we strive for. As SiliconANGLE, Wikibon and theCUBE, serving the audience is something that we're committed to; at the same time, we collaborate with marketers in this new, native way. So the challenge that I see, and I see it in this marketing cloud, is content is a great piece of data. >> Content is data. >> Content is data. >> And it also helps you get more data, because there is a lot of data exchanged. >> So a lot of companies I see that fail on the content marketing side, they don't punch it in from the red zone. The ball's on the one yard line, all they've got to do is get it over the goal line, and that's good content, and they try to fake it. They don't have authentic content. >> Another way of saying that, John. >> John: They blew it on the one yard line. >> Yeah, another way of saying that is that historically, agencies have driven the notion of production value. They have driven the notion of production value to make the content as expensive as possible, because that's how they make their money. What we're talking about is, when we introduce a CX orientation into this mix, now we're talking about what does the customer need in context, how can video serve that need? It's going to lead to, potentially, a very, very different set of production values. >> You bring up a good point, I want to get Jay's reaction on it because he sees a lot too. Context is everything, so at the end of the day, what is engaging? You can't buy engagement, it's got to be good. >> What serves the customer. >> John: And that is defined by the customer. There is no silver bullet, there's no engagement bullet. >> Sometimes you can argue that the customer values a lower-fidelity content execution because it has a greater perceived authenticity. >> You may not know this Jay, I'm going to promote us for a second. A piece of video that's highly produced in the technology industry generates attention for a minute and a half to a minute and 45 seconds.
theCUBE can keep attention for 12 or 13 minutes, why? >> John: We have interesting people on. >> If we were a digital agency... >> I would say the hosts, obviously. >> The hosts, the conversation. >> It's back to relevancy. >> It informs the customer. And that's what, increasingly, these guys have to think about. So in many respects, we'll go back to your organization, and I want to test you on this: in many respects, the CMO must heal thyself first. By starting to acknowledge that we have to focus on the customer, and not creative and not the agency, and rejigger things so that we can in fact focus on the customer, and not the agency's need for us to spend more. >> There was one of the great conversations in the CMO Summit around this point that, look, with all this technology we have all these opportunities, and darnit, all we're doing is finding other ways to send people a coupon. Like, isn't there something else that we could use this technology for? And what if we just flipped the script and said, what do customers genuinely want? Which is knowable, and certainly inferable, today in a way that it has never been historically. Why don't we use that data to give them what they want, when they want it, how they want it, instead of constantly trying to push them harder? >> Focus on value and not being annoying. >> I mean, I wrote a whole book about it. >> Well, your key point there is that you're going to infer and actually get signals that we've never had before. Chatter signals. >> But let's use them for good, not evil, I think is the subtext there. >> Yeah, don't jam a coupon down their throat. >> But as Mark says, it's hard, because CEO's are under tremendous pressure to raise top line in an environment that is not conducive to that. You're going to have to take share. The economy is not growing so fast that you can just show up and grow your company.
CEO's have tons of pressure, they're then dropping that pressure on the CMO, who then says you need to grow top line revenue. So the CMO says we've got all this technology, I guess we'll just send out more offers, we'll have a stronger call to action, as opposed to using this information, the inferences, the data, to be more customer focused. I think in some cases we're being less customer focused, which at best is short-sighted and at worst is a cryin' shame. >> So the solution there is to use the data to craft relevant things at the right time to the right people. >> And it will work, but it requires two things that a lot of organizations simply don't have. Time and courage, right. It requires time and courage to purposely push less hard. Because you know it will pay off eventually, you've got to buy into that, and that ain't easy always. Sometimes it's not even your decision. >> What we don't want is to automate and accelerate bad practices. At the end of the day what CMO's are learning, and this conversation came out yesterday, is jeeze, maybe marketing really isn't that good. Maybe we have to learn from what this technology is telling us, what the data is telling us, and start dramatically altering the way we think about marketing, the role that marketing plays. The techniques we use, the tactics we use, that will lead to organizational changes. I'm wondering, did you get a sense out of the session that they are in fact stepping back and saying we got to look in the mirror about some of this stuff. >> Absolutely, absolutely. 
I thought it was remarkable, considering who runs this company. Mark Hurd came in and did a little Q&A at the CMO Summit, and this is the guy who runs Oracle, who's puttin' this whole thing together and is sellin' tons of marketing software, and he says look guys, I'm not even sure if what we're doing here is right. Because we've got all this technology, we have been doing this for a long time, we've got all these smart people, and still, what's our conversion rate, 1%? If we've got the greatest technology in the history of the world, we supposedly know all this about customer service and customer journey mapping, and our conversion rate is still 1%. Maybe something is fundamentally broken with how we think about marketing. I thought for somebody in that role to come in and just drop that on a group of CMO's, I was like whoa. >> I think he's right. >> Totally right. >> But to have a CEO of a company like this just walk in and say here's what I think. >> This is a question for you and I'll ask it by saying we try to observe progressive CMO's as a leading indicator to the comment you mentioned earlier, which is flip things upside down and see what happens. What are you seeing from those progressive CMO's that have the courage to say ya know what, we're going to flip things upside down and apply the technology and rethink it in a way that's different. What are they doing? >> One of the markers that we see on the consulting side of my business is CMO's who are thinking about retention first. Not only from a practical execution layer, but even from a strategic layer. Like, what if we just pulled back on the string here a little bit and just said how can we make sure that everyone who's already given us money continues to give us money, and moreso. And essentially really turn the marketing focus from a new customer model to a customer retention and customer growth model, start there. 
Start with your current customers and then use those insights gained to do a better job with customer acquisition. As customer service and marketing start to converge, mostly because of online. Online customer service is very brand driven and more like marketing. As these two things are converging, we're seeing smart CMO's say well what if we changed the way we look at this and took care of our own first. Learn those lessons and then apply them outwardly. I think that's a real strong marker. >> It's a great starting point and it's almost risk free from a progressive standpoint. >> It's not always risk-free inside the organization. >> I mean it's harder to get new guinea pig customers to see what works, but go to your existing customers and you have data to work with. >> But wouldn't you also say that the very nature of digital, which is moving the value proposition from an intrinsic statement of the value's in the product and caveat emptor, towards a utility orientation where the value's in the use of it, and we want to sustain use of it. We're moving more to a service to do that, and digital helps us to do that. That the risk of taking your approach goes down, because at the end of the day, when you're doing a service orientation you have to retain the customer, because the customer has constantly got the opportunity to abandon you. >> Yes, the ability to bail out is very very easy these days, I completely agree. But what I find is that it makes sense to us. It makes sense to us on theCUBE, but in the real world it's not happening. Not everybody's drinkin' that punch yet. >> John: And why? >> I don't know. >> Sounds like courage. >> It is, definitely courage is one of 'em, because you're essentially saying look, I've been taught to do marketing one way for 40 years or 20 years. >> Yeah, I'm going to lean on my email marketing all day long. >> Yeah, I'm going to keep pressing send. It's easy, there's almost no net cost. So there's that. 
And also just the pressure from above, I think. From the CEO to grow top line, net new customer revenue, I think that's certainly part of it. And some of it, I think, goes back to what we said earlier about org charting and skills and resources. There's a heck of a lot more people out there at every level of the marketing organization who are trained in customer acquisition than customer retention. How many MBA's are there in customer retention? Zero. How many MBA's are there in marketing and sales? >> Lot of 'em at Amazon. >> A thousand? >> A lot of 'em at Apple. >> Yeah, but they were trained there. They didn't come in like that, so they trained them up. >> Jay, great to have you on theCUBE. Great insight as usual, and I think you're right on the money. I think the theme that I would just say for this show, and I agree with you, is that if you look at Oracle, you look at IBM, you look at what Amazon is doing, Microsoft in some ways maybe a little bit, but for those three, data's at the center of the value proposition. Oracle is clearly saying to the marketers, at least the way we would say it, digital is end to end; if you use data, it's good for you. This is the new direction. If you think data-driven CMO, that seems to be the right strategy in my mind. >> The best quote in the CMO Summit, you guys need a CUBE bumper sticker that you can manufacture with this. Data is the new bacon. I was like, oh I love that, that's the best, right. >> Who doesn't love bacon. Jay, great to see you. Real quick, what's up with you, give us a quick update on your opportunities, what you're doing these days. >> Things are great, running around the country doing fantastic events just like you guys are. Working on a new content marketing master class for advanced marketers on how to take their content marketing strategy to the next level. That launches in a couple of weeks. 
I continue to do four or five podcasts a week, and a new video show called Jay Today where I do little short snippets, three minutes a day. JayToday.tv if you want to subscribe to that. >> Beautiful. Jay Baer, great on theCUBE, great thought leader, great practitioner, and just a great sharer on the net, check him out. I'm John Furrier with Peter Burr here at Oracle Marketing CX, more live coverage after this short break.