Roger Dombrowski, dcVAST | Veritas Vision Solution Day 2018
>> Announcer: From Chicago, it's theCUBE. Covering Veritas Vision Solution Day 2018. Brought to you by Veritas. >> Welcome back to Chicago, everybody. We're here covering the Veritas Solution Days. Veritas used to have a big tent event last year. This year they're going out to, I think, seven cities around the globe. Probably touching more people than they would've with the single event, but they're road warriors, and we're here with them. theCUBE is the leader in live tech coverage. My name is Dave Vellante, Roger Dombrowski is here. He's a data protection specialist at dcVAST, one of Veritas' big solution partners based here in Chicago. Roger, thanks for coming on theCUBE. >> Thanks for having me, Dave. >> You're very welcome. So, data protection specialist, so you're into it. Data protection is changing quite dramatically. There's cloud, there's the edge... We just talked to Jyothi about AI, and so lots is changing. From your perspective, how are customers responding to those changes? What are some of the key drivers? >> A lot of the key drivers... You used to be able to differentiate with backups and things like that. Now it's table stakes, it's an insurance policy. And that's kind of the old classic way of looking at it, but I think today what we're finding, and what I think Veritas is doing such a great job of, is mining value out of stuff that's even been around a while. So while the workloads have changed, our best practices haven't changed, our strategies haven't changed. It's where things are going, but it's also mining that metadata to get more value out of the backups than to just be an insurance policy. >> So I mean one of the obvious things, as I've talked about, is DR, but DR is still insurance.
It's just more insurance, and maybe you're killing two birds with one stone, but when you talk about mining data and analytics, and getting more out of the metadata, give us some other examples of how customers are exploiting and leveraging that investment in what used to be just backup, pure insurance. >> Yeah, and in fact it's kind of interesting, 'cause Info Map's been out for a little while, and I think we've been going around to the customer base with a slide stack, maybe a couple of slides, and really underselling the value. And what I've had a great opportunity to do with a couple of customers here very recently is get into some deep use cases, and it's been an eye-opening experience. And what's so amazing is the data and the information we're gathering has been in their backups for years, right? It's like the data has been there. It's been on tap, and we're tapping into it with Info Map. Finding stale data, ransomware, aged data, all kinds of better ways to tier. You know, some of the discussions were around cloud. And hey, do you really want to put cat videos in the cloud? Well, we can find those things with the backups. And we've been looking at that data for years. We're finally now pulling the value out of that data. >> And one of the speakers earlier today talked about, he took us all the way back to the Federal Rules of Civil Procedure, and bringing together IT and legal. So those discussions, now with GDPR, et cetera, are coming back to the fore. And it's important, you don't want data that could be a legal risk hanging around. Everybody says, oh, big data, keep all the data. And General Counsels go, I don't want to keep all the data. So the backup corpus, you're saying, is a way to investigate that and reduce risks, and also potentially identify diamonds in the rough. That you can-- >> Absolutely. >> You can mine. >> Absolutely. >> Okay. Let's talk about--
I want to ask you about... there was a little company called Network Appliance, I think they were founded back in the '80s. They changed their name in the 2000s to NetApp, got out of that appliance, but appliances are still strong in the marketplace. Everybody's talking about software-defined. I think even Veritas uses it as part of its description of who they are, and yet they continue to announce appliances, as do others. Why appliances? From your practitioner perspective, what's going on there? >> Well, actually there's a customer who's actually here at the event today, and one of the things that really sold them on that whole form factor was that the larger the company gets, the more siloed different aspects of the business are. You know, if you wanted to make a change or implement something, you'd have the network team, you'd have change control, you'd have the OS team, the application teams. The appliance form factor's allowing the backup admins to wrangle in a lot of that crazy, hey, I've got to have 20 groups involved in something. Purpose-built and performance-tuned. I mean, I see it all the time. Customers, they still look at us and go, well, I think I can do it cheaper, and I've seen them try to do it, and maybe they'll save a few bucks, but then there are the soft costs in terms of headaches, and problems, and tuning, and just the limitations of building your own versus the appliance form factor. >> It's still going to run on hardware, so you're saying let the vendor do the integration, and that's sort of the appeal of the appliance. There are use cases for pure software-based solutions, but if you just want to set it and (laughs) forget it... >> Roger: It really is that, yeah. >> The appliance comes into play. What are some of the other big things and trends that you see? Well, let's talk cloud. You know the whole... I've often said renting is always more expensive than owning.
You don't necessarily want... if you want to rent a car for a day, well, go for it, but if you want to drive it 100,000 miles, it probably makes sense to buy it or even lease it. We heard today about cloud repatriation, I mean, that's certainly a narrative that a lot of the on-prem guys want to talk about. What are you seeing in the marketplace? >> I'm seeing, I mean, even before... I mean, we'll go back four or five years. Everyone's asking me, Roger, I want to get off of tape. Let's go to the cloud. What's been so interesting is to do those calculations, and I think that some people fly over that at 100 miles an hour, and Veritas was one of the first ones to actually preserve deduplication all the way through the process. So it really changed, I call it that own versus rent ratio, where depending on how long you're keeping data, how well the data dedupes, things like that, that's going to affect your cost model. And that's really, in my role at dcVAST, that's a big part of what I do, is to take the feature sets that Veritas brings to the table and apply them, and say, hey, does this make sense to put this in the cloud? Should this be on-prem? And the great thing again is, this isn't your dad's backup anymore. I mean, the Access Appliance, the Flex Appliance, some of these things we're bringing to the table, Info Map, these other tools, we're not just doing backups, we're doing ancillary things on top of all that. >> Just geeking out a little bit, you're talking about dedupe through the whole process. You mean without having to rehydrate the data? >> Roger: Exactly, exactly. >> Which is just a time-consuming and complicated process. >> Roger: Absolutely. >> That's a technology they're pretty proud of, they talk about it a lot. >> Very, very, very much so. And I mean, if you look at it, we've always been able to do it, but it's the cost, right?
If I have to virtualize an appliance in the cloud, it's a very expensive proposition, but if I can dedupe, and all I'm doing is storing small fragments in a cheap storage target in the cloud, that's all the better for the customer's economics. >> All right, Roger, I'll give you the last word. Takeaways from today, and any other thoughts? >> Oh, I loved hearing about the telemetry. There are some new features coming in. I've heard some of this material before, but again, to hear the different perspectives, customers talking about the technology and where we're going, I'm glad we got to come and participate. >> All right, Roger Dombrowski. Thanks very much for sharing your perspective. >> Thanks a lot, Dave. >> Great to see you. >> Take care. >> All right, keep it right there, everybody. This is theCUBE. We'll be back at Veritas Vision in Chicago right after this short break. I'm Dave Vellante. (upbeat electronic music)
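The own-versus-rent calculation Roger describes can be sketched as a back-of-the-envelope model. All prices, ratios, and names below are hypothetical, made up purely for illustration; this is not a Veritas tool or formula:

```python
# Back-of-the-envelope own-vs-rent model for backup storage, illustrating
# why preserving deduplication end-to-end changes the cloud economics.
# All numbers and names are hypothetical, for illustration only.

def monthly_cloud_cost(logical_tb, dedupe_ratio, price_per_tb_month):
    """Cost of keeping backups in cloud object storage. With end-to-end
    dedupe, only logical_tb / dedupe_ratio actually lands in the cloud."""
    stored_tb = logical_tb / dedupe_ratio
    return stored_tb * price_per_tb_month

logical_tb = 100   # total backup data before dedupe (hypothetical)
price = 23.0       # $/TB-month, a rough illustrative object-storage figure

no_dedupe = monthly_cloud_cost(logical_tb, 1, price)     # rehydrated copies
with_dedupe = monthly_cloud_cost(logical_tb, 10, price)  # 10:1 dedupe

print(f"${no_dedupe:.0f}/mo without dedupe vs ${with_dedupe:.0f}/mo with")
# prints: $2300/mo without dedupe vs $230/mo with
```

The point of the sketch is only the shape of the calculation: retention length and dedupe ratio dominate the cost model, which is why losing dedupe at the cloud boundary (rehydration) can flip the own-versus-rent decision.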
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Roger Dombrowski | PERSON | 0.99+ |
Roger | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Chicago | LOCATION | 0.99+ |
20 groups | QUANTITY | 0.99+ |
100,000 miles | QUANTITY | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
2000s | DATE | 0.99+ |
Network Appliance | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
Veritas' | ORGANIZATION | 0.99+ |
four | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
This year | DATE | 0.99+ |
100 miles an hour | QUANTITY | 0.99+ |
one stone | QUANTITY | 0.99+ |
dcVAST | ORGANIZATION | 0.98+ |
Veritas Vision | ORGANIZATION | 0.98+ |
two birds | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
a day | QUANTITY | 0.98+ |
theCUBE | ORGANIZATION | 0.97+ |
Veritas Vision Solution Day 2018 | EVENT | 0.94+ |
single event | QUANTITY | 0.94+ |
seven cities | QUANTITY | 0.93+ |
NetApp | ORGANIZATION | 0.92+ |
GDPR | TITLE | 0.91+ |
earlier today | DATE | 0.87+ |
Veritas Solution Days | EVENT | 0.83+ |
first ones | QUANTITY | 0.77+ |
80s | DATE | 0.75+ |
Jyothi | PERSON | 0.67+ |
Flex Appliance | TITLE | 0.61+ |
PureSoftware | ORGANIZATION | 0.6+ |
DR | ORGANIZATION | 0.57+ |
couple | QUANTITY | 0.56+ |
years | QUANTITY | 0.51+ |
Access Appliance | TITLE | 0.42+ |
Brian Pawlowski, DriveScale | CUBEConversation, Sept 2018
(intense orchestral music) >> Hey, welcome back everybody, Jeff Frick here with theCUBE. We're having a CUBE Conversation in our Palo Alto studios, getting a short little break between the madness of the conference season, which is fully upon us, and we're excited to have a long-time industry veteran, Brian Pawlowski, the CTO of DriveScale, joining us to talk about some of the crazy developments that continue to happen in this world that just advances and advances. Brian, great to see you. >> Good morning, Jeff, it's great to be here. I'm a bit-- still trying to get used to the time zone after a long, long trip in Europe, but I'm glad to be here, I'm glad we finally were able to schedule this. >> Yes, it's never easy, (laughs) one of the secrets of our business is everyone is actually all together at conferences, it's hard to get 'em together when there's not that catalyst of a conference to bring everybody together. So give us the 101 on DriveScale. >> So, DriveScale. Let me start with, what is composable infrastructure? DriveScale provides a product for orchestrating disaggregated components on a high-performance fabric to allow you to spin up essentially your own private cloud, your own clusters for these modern applications, scale-out applications. And I just said a bunch of gobbledygook, what does that mean? The DriveScale software is essentially an orchestration package that provides the ability to take compute nodes and storage nodes on a high-performance fabric and securely form multi-tenant architectures, much like you would in a cloud. When we think of application deployment, we think of a hundred nodes or 500 nodes. The applications we're looking at are things that people are using for big data, machine learning, or AI, or these scale-out databases.
Things like Vertica, Aerospike, these kinds of databases. And this is an alternative to the standard way of deploying applications in a very static nature onto fixed physical resources, or onto network storage coming from the likes of Network Appliance, sorry, NetApp, and Dell EMC. It's the modern applications we're after, the big data applications for analytics. >> Right. So it's software that basically manages the orchestration of hardware, I mean of compute, storage, and network, so you can deploy big data analytics applications? >> Yes. >> Ah, at scale. >> It's absolutely focused on the orchestration part. The typical way the applications we're in pursuit of right now are deployed is on 500 physical bare metal nodes from, pick your vendor, of compute and storage that is all bundled together and then laid out into a physical deployment on the network. What we do is, you essentially disaggregate: separate compute, pure compute, no disks at all, and storage into another layer, have the fabric, and we inventory it all. And, much like vCenter for virtualization, for doing software deployment of applications, we do software deployment of scale-out applications onto a scale-out cluster, so. >> Right. So you talked about using industry standard servers, industry standard storage. Does the system accommodate different types of compute and CPUs, different types of storage? Whether it's high-performance disk, or it's flash, how does it accommodate those things? And if I'm trying to set up my big stack of hardware to then deploy your software to get it configured, what're some of the things I should be thinkin' about? >> That's actually a great question, I'm going to try to hit three points. (clears throat) Absolutely. In fact, a core part of our orchestration layer is to essentially generalize the compute, storage, and networking components of your data center, and do rule-based, constraint-based selection when creating a cluster.
From your perspective, when creating a cluster (coughs) you say, "I want a hundred nodes, and I'm going to run this application on it, and I need this environment for the application." And this application is running on local-- it thinks it's running on local, bare metal, so. You say, "A hundred nodes, eight cores each minimum, and I want 64 gig of memory minimum." It'll go out and look at the inventory and do a best match of the components there. You could have different products out there, we are compute agnostic, storage agnostic, you could have mix and match, we will basically do a best-fit match of all of your available resources and then propose back to you, in a couple of seconds, the cluster you want, and then you just hit go, and it forms a cluster in a couple of seconds. >> A virtual cluster within that inventory of assets that I-- >> A virtual cluster that-- Yes, out of the inventory of assets, except from the perspective of the application it looks like a physical cluster. This is the critical part of what we do. Somebody told me, "It's like we have an extension cord between the storage and the compute nodes." They used this analogy yesterday and I said I was going to reuse it, so if they listen to this: hey, I stole your analogy! We basically provide a long extension cord to the direct-attached storage, except we've separated out the storage from the compute. What's really cool about that-- and this was the second point of what you said-- is that you can mix and match. The mix and match occurs because one of the things you're doing with your compute and storage is refreshing them on three-to-five-year cycles, separately. When you have the old-style model of combining compute and storage, in what I'd call a captured DAS scenario...
You are forced to do refreshes of both compute and persistent storage at the same time, and it just becomes an unmanageable position to be in. Separating out the components provides you a lot of flexibility for mixing and matching different types of components, doing rolling upgrades of the compute separate from the storage, and then also having different storage tiers. You can combine SSD storage-- the biggest tiers today are SSD storage and spinning disk-- being able to provide spinning disk, SSDs, solid-state storage, or a mixture of both for a hybrid deployment for an application, without having to configure your box that way at purchase time; we just basically do it on the fly. >> Right. So then, obviously, I can run multiple applications against that big stack of assets, and it's going to go ahead and parse out the pieces that I need for each application. >> We didn't even practice this beforehand, that was a great one too! (laughs) A key part of this is actually providing a secure multi-tenant environment-- is the phrase I use, because it's a common phrase. Our target customer is running multiple applications. In 2010, when somebody was deploying big data, they were deploying Hadoop. Quickly, (snaps) think, what were the other things then? Nothing. It was Hadoop. Today it's 10 applications, all scale out, all having different requirements for the reference architecture, for the amount of compute and storage. So, our orchestration layer basically allows you to provision separate virtual physical clusters in a secure, multi-tenant way, cryptographically secure, and you could encrypt the data too if you wanted-- you could turn on over-the-wire encryption along with data-at-rest encryption, think GDPR and stuff like that. But the different clusters cannot interfere with each other's workloads, and because you're on a fully switched Ethernet fabric, they don't interfere with performance either.
But that secure multi-tenant part is critical for the orchestration and management of multiple scale-out clusters. >> So then, (light laugh) so in theory, if I'm doing this well, I can continually add capacity, I can upgrade my drives to SSDs, I can put in new CPUs as new great things come out into my big cloud-- not my cloud, but my big bucket of resources-- and then using your software continue to deploy those against applications as is most appropriate? >> Could we switch seats? (both laugh) Let me ask the questions. (laughing) No, because it's-- >> It sounds great, I just keep adding capacity, and then it redeploys based on the optimum, right? >> That's a great summary, because the thing that we're-- the basic problem we're trying to solve is that... This is like the lesson from VMware, right? One lesson from VMware was, first, we had unused CPU resources, let's get those unused CPU cycles back. No CPU cycle shall go unused! Right? >> I thought that they needed to keep 50% overhead, just to make sure they didn't bump against the roof. But that's a different conversation. >> That's a little detail, (both laugh) that's a little detail. But anyway. The secondary effect was way more important. Once people decoupled their applications from physical purchase decisions and rolling out physical hardware, they stopped caring about any critical piece of hardware, and they then found that the simplified management, the one-button-push software application deployment, was a critical enabler for business operations and business agility. So, we're trying to do what VMware did for those kinds of captured legacy application deployments; we're trying to do that for what has historically been bare metal, big data application deployment. Where people were... Seriously, in 2010, 2012, after virtualization took over the data center, the IT manager had his cup of coffee and he's layin' back goin' "Man, this is great, I have nothing else to worry about."
Then there's a (knocks) and the guy comes into his office, or his cube, and goes, "Whaddya want?!" and he goes, "Well, I'd like you to deploy 500 bare metal nodes to run this thing called Hadoop." and he goes, "Well, I'll just give you 500 virtualized instances." and he goes, "Nope, not good enough! I want to start going back to bare metal." And since then it's gotten worse. So what we're trying to do is restore the balance in the universe, and do for the scale-out clusters what virtualization did for the legacy applications. Does that make a little bit of sense? >> Yeah! And is it heading towards the other direction, right, towards the atomic? So if you're trying to break the units of compute and storage down to the base, you've got a unified baseline where you can apply volume rather than maybe a particular feature set in a particular CPU, or a particular characteristic of a particular type of storage? >> Right. >> This way you're doing it in software, and leveraging a whole bunch of it to satisfy, as you said, kind of the meets-min for that particular application. >> Yeah, absolutely. And I think what's kind of critical about the timing of all this is that virtualization drove, very much, a model of commoditization of CPUs. Once VMware hit there, people weren't deploying applications on particular platforms, they were deploying applications on a virtualized hardware model, and that was how applications were always thought about from then on. A lot of these scale-out applications-- not a lot of them, all of them-- are designed to be hardware agnostic. They want to run on bare metal 'cause they're designed to run-- when you deploy a bare metal application for scale out, Apache Spark, it uses all of the CPU on the machine. You don't need virtualization, because it will use all the CPU, it will use all the bandwidth and the disks underneath it.
What we're doing is separating it out to provide lifecycle management between the two of them, but also allowing you to change the configurations dynamically over time. But this word "atomic" kinda-- the disaggregation part is the first step for composability. You want to break it out, and I'll go here and say that the enterprise storage vendors got it right at one point. I mean, they did something good. When they broke out captured storage to the network and provided a separation of compute and storage, before virtualization, that was a step towards gaining control and a sane management approach to what are essentially very different technologies evolving at very different speeds. And then your comment about "So what if you want to basically replace spinning disks with SSDs?" That's easily done in a composable infrastructure because it's a virtual function, you're just using software-- software-defined data center, you're using software-- except for the set of applications that just slipped past what was being done in the virtualized infrastructure and the network storage infrastructure. >> Right. And this really supports kind of the trend that we see, which is the new age, which is "No, don't tell me what infrastructure I have, and then I'll build an app and try and make it fit." It's really app first, and the infrastructure has to support the app, and I don't really care as a developer, and as a competitive business trying to get apps to satisfy my marketplace, the infrastructure, I'm just now assuming, is going to support whatever I build. This is how you enable that. >> Right. And very importantly, the people that are writing all of these apps, the tons of these apps, Apache-- by the way, there's so many Apache things, Apache Kafka, (laughing) Apache Spark, the Hadoops of the world, the NoSQL databases, >> Flink, and Oracle, >> Cassandra, Vertica, things that we consider-- >> MongoDB, you got 'em all.
Let's just keep rolling these things off our tongues. >> They're all CUBE alumni, so we've talked to 'em all. >> Oh, this is great. >> It's awesome. (laughs) >> And they're all brilliant technologists, right? And they have defined applications that are so, so good at what they do, but they didn't all get together beforehand and say, "Hey, by the way, how can we work together to make sure that when this is all deployed, and operating in pipelines, and in parallel, that from an IT management perspective it all just plays well together?" They solved their particular problems, and when it was just one application being deployed, no harm, no foul, right? When it's 10 applications being deployed, and all of a sudden the line item for big data applications starts creeping past five, six, approaching 10%, people start to get a little bit nervous about the operational cost, the management cost, deployability-- I talked about lifecycle management, refreshes, tech refreshes, expansion-- all these things that, when it's a small thing over there in the corner, okay, I'll just ignore it for a while. Yeah. Do you remember the old adventure games? (Jeff laughs) I'm dating myself. >> What's an adventure game? I don't know. (laughs) >> Yeah, when you watered a plant: "Water, please! Water, please!" The plant in there looked pitiful, you gave it water and then it goes, "Water! Water! Give me water!" Then it starts to attack, but... >> I'll have to look that one up. (both laugh) Alright so, before I let you go-- you've been at this for a while, you've seen a lot of iterations. As you kind of look forward over the next little while, what do you see as some of the big movements or big developments as the IT evolution, where every company's now an IT company, or a software company, continues? >> So, let's just say that this is a great time-- why I joined DriveScale, actually, a couple reasons. This is a great time for composable infrastructure.
It's like, "Why is composable infrastructure important now?" It does solve a lot of problems-- you could deploy legacy applications over it and stuff-- but they don't have any pain points per se; they're running in their virtualization infrastructure over here, the enterprise storage over here. >> And IBM still sells mainframes, right? So there's still stuff-- >> IBM still sells mainframes. >> There's still stuff runnin' on those boxes. >> Yes there is. (laughs) >> Just let it be, let it run. >> This came up in Europe. (laughs) >> And just let it run, but there's no pain point there. Whereas with these increasingly deployed scale-out applications-- in 2004, when the clock speed ceiling was hit, everything went multi-core, and then parallel applications became the norm, and then it became scale-out applications for the Facebooks of the world, the Googles of the world, whatever. >> Amazon, et cetera. >> For their applications, that scale out is becoming the norm moving forward for application architecture and application deployment. The more data that you process, the more scale out you need, and composable infrastructure is becoming a-- is a critical part of getting that under control, and getting you the flexibility and manageability to allow you to actually make sense of that deployment in the IT center, in the large. And the second thing I want to mention is that Flash has emerged, and that's driven something called NVMe over Fabrics, essentially a high-performance fabric interconnect providing essentially local latency to remote resources; that is part of the composable infrastructure story today. You're basically accessing remote resources over the fabric at the speed of local access to solid-state memory, and all these things are coming together, driving a set of applications that are becoming both increasingly important and increasingly expensive to deploy.
And composable infrastructure allows you to get a handle on controlling those costs, and making it all a lot more manageable. >> That's a great summary. And clearly, the amount of data that's going to be coming into these things is only going up, up, up, so. Great conversation, Brian. Again, we still got to go meet at Terún later, so. >> Yeah, we have to go, yes. >> We will make that happen with ya. >> Great restaurant in Palo Alto. >> Thanks for stoppin' by, and, really appreciate the conversation. >> Yeah, and if you need to buy DriveScale, I'm your guy. (both laughing) >> Alright, he's Brian, I'm Jeff, you're watching the CUBE Conversation from our Palo Alto studios. Thanks for watchin', we'll see you at a conference soon, I'm sure. See ya next time. (intense orchestral music)
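The rule-based, best-fit cluster composition Brian describes ("a hundred nodes, eight cores each minimum, and I want 64 gig of memory minimum") can be sketched in a few lines. Everything here -- function and field names, the inventory data -- is hypothetical and simplified for illustration; it is not DriveScale's actual implementation:

```python
# Simplified sketch of constraint-based node selection for composing a
# cluster out of a disaggregated inventory. All names and data are
# hypothetical; this is not DriveScale's implementation.

def compose_cluster(inventory, node_count, min_cores, min_mem_gb):
    """Pick node_count compute nodes that satisfy the minimum constraints,
    preferring the smallest machines that fit (best fit), so the larger
    nodes stay available for bigger requests."""
    candidates = [n for n in inventory
                  if n["cores"] >= min_cores and n["mem_gb"] >= min_mem_gb]
    if len(candidates) < node_count:
        raise RuntimeError("not enough matching nodes in inventory")
    # Best fit: smallest nodes that still satisfy the request come first.
    candidates.sort(key=lambda n: (n["cores"], n["mem_gb"]))
    return candidates[:node_count]

inventory = [
    {"name": "n1", "cores": 8,  "mem_gb": 64},
    {"name": "n2", "cores": 16, "mem_gb": 128},
    {"name": "n3", "cores": 8,  "mem_gb": 96},
    {"name": "n4", "cores": 4,  "mem_gb": 32},
]

cluster = compose_cluster(inventory, node_count=2, min_cores=8, min_mem_gb=64)
print([n["name"] for n in cluster])  # prints: ['n1', 'n3']
```

A production orchestrator would, of course, track far more than this (fabric topology, storage tiers, tenancy, failure domains), but the filter-by-constraints-then-best-fit shape is the core of the idea.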
ENTITIES
Entity | Category | Confidence |
---|---|---|
Bill Miller, NetApp | SAP SAPPHIRE NOW 2018
>> From Orlando, Florida, it's theCUBE, covering SAP Sapphire Now 2018, brought to you by NetApp. >> Welcome to theCUBE, I'm Lisa Martin, we are with Keith Townsend, we are in Orlando, in the NetApp booth, at SAP Sapphire 2018, joined by the CIO of NetApp, Bill Miller. Bill, welcome to theCUBE. >> Thank you, great to be here, I really appreciate it. >> So, NetApp, a 26-year-old company, you guys have been on a big transformation journey. Give us some nuggets of NetApp's transformation story. >> Yeah, it's really a fascinating story, and it all centered around the customer. Going back a couple of years, we realized this story was evolving from a storage story and a storage history to a data-centric story going forward. We spent a lot of time listening to our customers. We listened to them in briefing center meetings, we listened to them through strategic customer account sessions, and we really were drawn to this notion of providing outcomes for our customers rather than providing storage long-term. Storage, like all other appliances, is ironically right there in the name of the company, Network Appliance, a very well-established, respected company. But it was not going to be about appliances in the future, it was going to be about data management and leveraging the value of the data for our customers. So our transformation was about bringing that journey to life and giving our customers choice: choice around where their data resides, how they utilize that data, and how they leverage that data for their customers. So as we listened and kind of absorbed the impact of this, it became clear that for the foreseeable future we were going to live in a hybrid-cloud world. And really what I mean by that is our large established customers were going to have very consequential private cloud data centers for a long time to come. They ran very large, complex applications that served their customer communities.
They weren't going to be able to pick up those large applications and move them quickly to the cloud, so those were going to run in high-intensity, very efficient private cloud data centers. But at the same time, they were looking to transform digitally, to go on this digital transformation journey, and the vast majority of them wanted to lean into the hyperscaler clouds, the cloud suppliers, and build their future strategic applications in the cloud. And it became clear to us that their data was now going to be bifurcated: it was going to reside in their own on-prem facilities, but critical, mission-critical, and advantageous data was also going to sit out there in the hyperscaler cloud, and a company like NetApp could build this data fabric to connect them seamlessly so that the customers had choice. I mean, that's really what was behind the initiative to transform NetApp. >> So as we talk about that transformation, NetApp identified the opportunity. >> Yes. >> Looked at the product portfolio, looked at the gaps. Identified where they needed to go. >> Right. >> NetApp the company needed to go through a digital transformation itself. >> Yeah. >> So as an SAP customer, as a NetApp customer, as the person responsible for enabling developers, application teams, and product teams to execute on that digital transformation, what were some of the challenges and lessons learned that you experienced as the CIO of NetApp? >> It's an awesome question. You kind of went from how we're going to transform for our customers to what I, or my teams and I, did to enable that. There's a middle step, which is all of our business partners in the company, whether that's finance or sales or marketing, having to realign their business processes to this new need. So let me give you an example on the sales side, the go-to-market function. We call this a go-to-market motion, you know, how you sell.
Well, if you're selling an appliance, a piece of hardware with some software with it, that's one very well-defined and familiar motion. If you're going to sell software solutions, if you're going to sell advanced professional services that advise our customers on how to leverage data, those are very different motions that you have to enable to be successful. So what that means is taking on a set of business processes that are unfamiliar to us. When a customer wants to buy our products on a pay-as-you-go, consumption model, rather than a capital acquisition, that's a whole different set of processes we have to put in place behind the scenes: financial processes, legal processes, and of course IT systems. So it started with the business functions figuring out how they were going to transform their workflows, and then IT had to come in underneath and say, do we have the systems, the tools, the platforms, like SAP and other partner-provided platforms, to enable that and make those workflows come to life. So it was really a partnership across the whole enterprise, and if you really listen to our CEO, George Kurian, George will tell you this transformation affected every single employee and every single leader in the corporation. It was a major change for us to figure out how you're going to take a business steaming in this direction, turn them 45 degrees on a dime, and quickly embrace those new processes and mobilize them through new systems, tools, and platforms. So this was a wholesale change to the corporation, I mean it was a burn-the-ships model, we're never going back, (Keith laughs) this is the new way of doing business for NetApp. Very exciting, and at the beginning a daunting journey.
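The pay-as-you-go versus capital-acquisition distinction Bill describes comes down to two different cost curves. As a purely illustrative sketch (the dollar figures and the 20% annual support rate are made-up assumptions for the example, not NetApp pricing), the comparison looks like this:

```python
def capex_cost(purchase_price: float, years: int,
               annual_support_rate: float = 0.20) -> float:
    """Capital acquisition: up-front purchase plus yearly support over the asset's life."""
    return purchase_price + purchase_price * annual_support_rate * years


def consumption_cost(monthly_rate: float, months: int) -> float:
    """Pay-as-you-go: no up-front spend, just a recurring metered charge."""
    return monthly_rate * months


# Hypothetical figures: a $100k array kept 5 years vs. $2.5k/month as a service.
capex = capex_cost(100_000, years=5)           # 100k up front + 20k/yr support
opex = consumption_cost(2_500, months=60)

print(f"capex over 5 years: ${capex:,.0f}")        # capex over 5 years: $200,000
print(f"consumption over 5 years: ${opex:,.0f}")   # consumption over 5 years: $150,000
```

The formulas are trivial; the point is that consumption billing implies monthly metering, invoicing, and revenue-recognition workflows, the new back-office processes Bill says finance, legal, and IT had to build, that a one-time capital sale never needed.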
>> We had Dave Hitz on theCUBE at NetApp Insight last year, and one of the things he said was that he had to come in and tell the ONTAP engineers that ONTAP in the cloud is okay: we're NetApp, and we can burn down what we've done before and do it again, and we'll make that journey. So, it's enlightening to hear that NetApp was willing to burn down the old stuff to build the new. So as we talk about that new, what are the major drivers as you're talking to other CIOs? You know, I'm sure the sales team wants more of your time than you can give. >> Very perceptive, very perceptive Keith. (laughs) >> As you're talking to CIOs, what is that conversation, what jewels are they trying to get out of you? >> So, we spent a lot of time with our customers. One of the enjoyable parts of my job is that my customers are my peers, our customers are my peers, so I did spend a lot of time looking at what's on their agenda. They're driven by two passions, almost globally and consistently across the industry. They're driven by a desire to move to the cloud, to move to the cloud aggressively for flexibility, to take advantage of these new marketplaces that the hyperscalers are offering, hyperscalers and their partners. But if you come out to our home base in Silicon Valley, what you see is all the start-up companies being designed with cloud functionality, so that's where a lot of the new R&D and the new IP is being created. So, my peers want to invest more heavily in the cloud. And the second thing they want to do is enable digital transformation, real digital transformation: how do they monetize the wealth of the data that they've acquired through their relationships with their customers, and then how do they leverage that for their customers' benefit. That's what digital transformation really means to CIOs, and how do I engage in the cloud to do that.
So when we looked at that we said, okay, the story's about data, it's digital transformation around data, and it's enabling that cloud journey for our customers at a rate of consumption that is acceptable and digestible to them, right? Because every customer has a different rate of motion to the cloud, and depending on their industry type and their degree of risk and enthusiasm to embrace change, they're in different places. So, we had to be very flexible in guiding different customers in different industries through that cloud data journey, and that's why we have to spend an awful lot of time listening to our customers to help them do that. >> Did you find during this time, where not only are you having to burn some ships down and transform yourself, while still transacting business in a competitive way... >> That's exactly right. >> Did you find yourselves going, alright, so NetApp's talking about data is key, data fabric, are you going away from storage? Did you find that was a question that was commonly asked, and if so, how are they responding now to NetApp's transformation? >> That's a great question. Let me get back to that, you know, NetApp going away from storage, and hit something both of you said. This journey of transformation, you can do transformation a number of ways, but the two common ways are: I do it and I'm done. In other words, I get through the fiery pit and I'm on the other side, I'm like, wow, I'm glad that's over, okay? That's not the nature of our company. It is, as George would call it, a culture of transformation, right? It's about being willing to change directions if you need to change direction and go, in this dynamic world. >> Based on the customers, what they think, not what NetApp as a company would like. >> And we're in one of the most dynamic areas of high-tech, when you look at data and you look at the cloud and the solutions. So we realize it's not over, it's not that we've transformed and we're done.
We're in transformation 2.0, which is the whole next generation, and most of our leadership team is very comfortable with the discomfort associated with continually transforming. >> Comfortably uncomfortable. >> Yeah and I think it takes a certain kind of person to lead in our company and you have to be bold. You have to be bold and want to do that, okay? >> So George gave some emotional examples last year of data-driven capability. In order to make these transformations, NetApp itself has to be driven by data. >> That's right. >> What are some of the key capabilities as a CIO that you've given the business to be data-driven? George can't make these decisions unless he has data. What new capability has NetApp provided George? >> Well, I'll give you an example sitting here at this wonderful SAP conference, you know? We rolled out SAP C4C Hybris this past year. A big journey for us, we were on a separate platform, we knew we needed to build these new work flows into our day-to-day processes and as we thought about what potential solutions would be to kind of break the mold from where we were and move forward, we really liked the SAP HANA platform. We think the HANA platform, very dynamic you know in memory, a high-performance computing platform that's built on the NetApp framework, right? It's a NetApp high-performance infrastructure with an in-memory processing capability that's second to none in my opinion. So we looked at data availability, reporting, insights that we could get, and the commitment from our partners to continue to evolve in insights. So you know, you hear about Leonardo here, and some of the AI and machine learning platforms that are being developed, we felt like that HANA platform would give us a lot of flexibility in the future to be data-driven, to pull data and to do it fast and dynamically to help our business make the right decisions going forward. >> I'm curious, as we finish up here, how influential is NetApp's transformation? 
And you're right, it's a journey, right? You're never going to get to a destination of, oh, now we're an intelligent enterprise, if only. How impactful and influential has NetApp's transformation been on really continuing to establish NetApp's relevance in your customer base? Have you seen that, like, make deals happen because of what they've done? >> Yeah, a couple things I'll say to that. First of all, customers admire companies that are bold and that really want to lean into technology and make change, so our journey of transformation is absolutely a fascinating one for our customers. They feel like, if you're willing to do that, if you're willing to change dynamically on behalf of your customers, we've got a lot more confidence that you're serious about what you're doing and you're committed to the future. So number one, they love it. Number two, they just want to know how to transform themselves, so any nuggets they can take away from our journey, and reuse and position in their business for future success, is much appreciated. And then the third thing I would say, and it gets back to an earlier question you asked: as we give them more choice, as we give them a choice to either advance their current data center with high-performing flash, or build a really cost-effective, high-performing private cloud with converged infrastructure, or really venture out into that digital transformative space of the hyperscalers, we're giving them choice every day. So, we're not afraid to offer them data management solutions in all three of those environments, and not only choice by going out to a hyperscaler, an AWS or an Azure or a Google Cloud platform, but to be able to choose multiple cloud supplier platforms, so they can put some workloads in Azure, some workloads in GCP, and get a confident feeling that NetApp's going to be there for them in any of those platforms in any of those configurations.
They really feel more confident when they hear that story, and I would argue, to some degree, they're more likely to buy our traditional storage if they feel confident in our future vision and in our ability to enable them to succeed with that future vision, so it's been well received at that level. >> NetApp, bold. I love it, Bill. >> I think we are. >> Thanks so much for stopping by, and now you're a Cube Alumni, so congratulations. >> Well, thank you, and I hope to come back some time. >> Absolutely, we'd love to have you back. Thank you for watching theCUBE, I'm Lisa Martin with Keith Townsend in the NetApp booth at SAP Sapphire 2018. Thanks for watching.
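The data fabric Bill described earlier, one seamless layer connecting on-prem facilities and the hyperscaler clouds so that customers keep choice over where data lives, can be made concrete with a toy sketch. This is purely illustrative: every class and method name below is invented for the example and is not NetApp's actual API.

```python
class Backend:
    """One place data can live: an on-prem array or a hyperscaler cloud."""

    def __init__(self, name: str):
        self.name = name
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


class DataFabric:
    """Routes datasets to backends by policy; reads are location-transparent."""

    def __init__(self, backends: dict[str, Backend]):
        self.backends = backends
        self.placement: dict[str, str] = {}  # dataset key -> backend name

    def put(self, key: str, data: bytes, tier: str) -> None:
        self.backends[tier].put(key, data)
        self.placement[key] = tier

    def get(self, key: str) -> bytes:
        # The caller never specifies (or knows) where the data resides.
        return self.backends[self.placement[key]].get(key)


fabric = DataFabric({
    "on-prem": Backend("on-prem"),
    "azure": Backend("azure"),
    "gcp": Backend("gcp"),
})
fabric.put("erp-archive", b"mission-critical records", tier="on-prem")
fabric.put("analytics-raw", b"clickstream sample", tier="gcp")
print(fabric.get("analytics-raw").decode())  # clickstream sample
```

The point of the sketch is the `get` call: once placement is decided by policy, the consumer of the data never needs to know which backend holds it, which is the "seamless" property the interview keeps returning to.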
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
George Kurian | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Bill Miller | PERSON | 0.99+ |
Dave Hitz | PERSON | 0.99+ |
Orlando | LOCATION | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
45 degrees | QUANTITY | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Bill | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
Orlando, Florida | LOCATION | 0.99+ |
HANA | TITLE | 0.99+ |
NetApp | TITLE | 0.99+ |
One | QUANTITY | 0.98+ |
two common ways | QUANTITY | 0.98+ |
two passions | QUANTITY | 0.98+ |
Cube | ORGANIZATION | 0.97+ |
First | QUANTITY | 0.97+ |
SAP HANA | TITLE | 0.96+ |
second thing | QUANTITY | 0.96+ |
SAP | ORGANIZATION | 0.96+ |
third thing | QUANTITY | 0.92+ |
ORGANIZATION | 0.91+ | |
Azure | TITLE | 0.9+ |
26-year-old | QUANTITY | 0.9+ |
theCUBE | ORGANIZATION | 0.89+ |
second | QUANTITY | 0.89+ |
2018 | DATE | 0.84+ |
one | QUANTITY | 0.84+ |
Network Appliance | ORGANIZATION | 0.83+ |
single employee | QUANTITY | 0.71+ |
this past year | DATE | 0.7+ |
couple | QUANTITY | 0.69+ |
theCube | ORGANIZATION | 0.68+ |
Leonardo | ORGANIZATION | 0.67+ |
Number two | QUANTITY | 0.67+ |
SAPPHIRE | TITLE | 0.63+ |
three | QUANTITY | 0.62+ |
SAP Sapphire 2018 | EVENT | 0.61+ |
SAP | EVENT | 0.54+ |
single leader | QUANTITY | 0.52+ |
SAP C4C Hybris | TITLE | 0.49+ |
GPC | ORGANIZATION | 0.42+ |
years | QUANTITY | 0.41+ |