Eric Herzog & Sam Werner, IBM | CUBEconversation


 

(upbeat music) >> Hello everyone, and welcome to this "Cube Conversation." My name is Dave Vellante, and you know, containers, they used to be stateless and ephemeral, but they're maturing very rapidly. As cloud native workloads become more functional and go mainstream, persisting and protecting the data that lives inside of containers is becoming more important to organizations. Enterprise capabilities such as high availability, reliability, scalability and other features are now more fundamental and important, and containers are the linchpin of hybrid cloud, cross-cloud and edge strategies. Now, fusing these capabilities together across these regions in an abstraction layer that hides the underlying complexity of the infrastructure is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions, not to mention the complexities and costs of doing so? And with me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who's the Chief Marketing Officer and VP of Global Storage Channels for the IBM Storage Division, and Sam Werner, who is the Vice President of Offering Management and the Business Line Executive for IBM Storage. Guys, great to see you again. Wish we were face to face, but thanks for coming on "theCUBE." >> Great to be here. >> Thanks Dave, as always. >> All right guys, you heard my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point? >> Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually, of course, it became the mainstay. Containers are going through exactly that right now, brought in by the dev ops people, the software teams.
And now it's becoming, again, persistent, real use: clients that want to deploy a million of them, just the way they historically have deployed a million virtual machines. Now they want a million containers, or 2 million. So now it's going mainstream, and the feature functions that you need once you take it out of the test, sort of play-with stage, to the real production phase really change the ball game: the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully container world. >> So Sam, how'd we get here? I mean, containers have been around forever. You look inside of Linux, right? But then they did, as Eric said, go mainstream. But it started out kind of little, experimental. As I said, they're ephemeral, you didn't really need to persist them, but it's changed very quickly. Maybe you could talk to that evolution and how we got here. >> I mean, well, look, this is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers, and developers are constantly out running ahead. They've got to go faster and they have to deliver new applications. Business lines need to figure out new ways to engage with their customers. Especially now, the past year has even further accelerated this need to engage with customers in new ways. So it's about being agile. Containers promise, or provide, a lot of the capabilities you need to be agile. What enterprises are discovering is a lot of these initiatives are starting within the business lines, and they're building these applications, or making these architectural decisions, building dev ops environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them.
And they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment to do things like dev ops, but they need to bring the infrastructure teams along. And that's what we're focused on now: how do you make that agile infrastructure to support these new container worlds? >> Got it. So Eric, you guys made an announcement to directly address these issues. It's kind of a fire hose of innovation. Maybe you could take us through it, and then we can unpack that a little bit.
And then of course we've imbued it with the HA, disaster recovery, and backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, makes them container native and brings them together into a single piece of software. And we'll provide that both ways: as a software defined storage technology early in 2022, and, as our first pass, as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That of course means it'll come with compute, it'll come with storage, it'll come with a rack even, it'll come with networking. And because we can preload everything for the end users or for our business partners, it will also include Kubernetes, Red Hat OpenShift and Red Hat's virtualization technology, all in one simple package, all ease of use, and a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system level technology. >> So maybe you can help us understand the architecture, and maybe the prevailing ways in which people approach container storage. What's the stack look like? And how have you guys approached it? >> Yeah, that's a great question. Really, there's three layers that we look at when we talk about container native storage. It starts with the storage foundation, which is the layer that actually lays the data out onto media, does it in an efficient way, and makes that data available where it's needed. So that's the core of it. And the quality of your storage services above that depends on the quality of the foundation that you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take this for granted, I think, as they move to containers. We're talking about moving mission critical applications now into a container and hybrid cloud world.
How do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site or four-site replication of their data with HyperSwap, and they can ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talk about how only 20% of applications have really moved into a hybrid cloud world. The thing that's inhibiting the other 80% is these types of challenges, okay? So the storage services include HA/DR, data protection, data governance, data discovery. You talked about how making multiple copies of data creates complexity; it also creates risk and security exposures. If you have multiple copies of data, if you needed data to be available in the cloud, you're making a copy there. How do you keep track of that? How do you destroy the copy when you're done with it? How do you keep track of governance and GDPR, right? So if I have to delete data about a person, how do I delete it everywhere? So there's a lot of these different challenges. These are the storage services. So we talk about a storage services layer. So layer one, the data foundation; layer two, storage services; and then there needs to be a connection into the application runtime. There has to be application awareness to do things like high availability and application consistent backup and recovery. So then you have to create the connection. And so in our case, we're focused on OpenShift, right? When we talk about Kubernetes, how do you create the knowledge between layer two, the storage services, and layer three, the application services? >> And so this is your three layer cake. And then as far as the policies that I want to inject, you've got an API out and entries in, I can use whatever policy engine I want. How does that work? >> So we're creating consistent sets of APIs to bring those storage services up into the application runtime.
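Sam's governance point, that every copy of a record scattered across environments has to be found and destroyed when, say, a GDPR erasure request comes in, can be made concrete with a toy sketch. Nothing below is an IBM API; the catalog and names are hypothetical, purely to illustrate why a single source of data collapses the delete-it-everywhere problem:

```python
# Toy illustration (not an IBM API): tracking copies of a data subject's
# records across environments so an erasure request can reach all of them.

class CopyCatalog:
    """Records where each data set has been copied."""

    def __init__(self):
        self._locations = {}  # data_set_id -> set of locations

    def register_copy(self, data_set_id, location):
        self._locations.setdefault(data_set_id, set()).add(location)

    def erase_everywhere(self, data_set_id):
        """Return every location that must delete the data set."""
        return sorted(self._locations.pop(data_set_id, set()))


catalog = CopyCatalog()
catalog.register_copy("customer-42", "on-prem-array")
catalog.register_copy("customer-42", "aws-s3-bucket")  # cloud burst copy
catalog.register_copy("customer-42", "edge-site-7")

# Every copy has to be found and destroyed; miss one catalog update and a
# copy silently outlives the erasure request.
print(catalog.erase_everywhere("customer-42"))
# -> ['aws-s3-bucket', 'edge-site-7', 'on-prem-array']
```

With a single global namespace there is one copy, so the same request becomes a single delete and the catalog bookkeeping disappears.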
We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within IBM Cloud Satellite. We're also working with Red Hat on their Advanced Cluster Management, also known as ACM, to create multi-cluster management of your Kubernetes environment and give that consistent experience. Again, one common set of APIs. >> So the appliance comes first, is that right? Okay, so is that just time to market, or is there a sort of enduring demand for appliances? Some customers, you know, they want that. Maybe you could explain that strategy. >> Yeah, so first let me take it back a second and look at our existing portfolio. Our award-winning products are both software defined and system-based. So for example, Spectrum Virtualize comes on our FlashSystem. Spectrum Scale comes on our Elastic Storage System. And we've had this model where we provide the exact same software both on an array or as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager, if you will, that's not what they'll try to sell you as software defined storage. And of course, many of them don't offer software defined storage in any way, shape or form. So we've done both. So with Spectrum Fusion, we'll have a hyperconverged configuration, which will be available in Q3, and we'll have a software defined configuration, which will be available at the very beginning of 2022. We wanted to get out ahead of this market, with feedback from our clients and feedback from our business partners. By doing a container native HCI technology, we're way ahead. We're skating to where the puck is going. We're throwing the ball ahead of the wide receiver.
If you're a soccer fan, we're making sure that the midfielder gets it to the forward ahead of time so you can kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Containers are where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal; guess what, we work fine with that. We work fine with virtual, as we have a tight integration with both Hyper-V and VMware. So some customers will still do that. And containers is the new wave. So with Spectrum Fusion, we are riding the wave, not fighting the wave, and that way we can meet all the needs, right? Bare metal, virtual environments, and container environments, in a way that is all based on the end users' applications, workloads, and use cases. What goes where, and IBM Storage can provide all of it. So we'll give them two methods of consumption by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization. We're leading with OpenShift and containers. We're the first full container-native, OpenShift, ground-up based hyperconverged of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been. >> So just to follow up on that. You've got the sort of Switzerland DNA, and it's not just OpenShift and Red Hat and the open source ethos. I mean, it goes all the way back to SAN Volume Controller back in the day, where you could virtualize anybody's storage. How is that carrying through to this announcement? >> So Spectrum Fusion is doing the same thing.
Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports non-IBM storage, for example, EMC Isilon NFS. Fusion will support Spectrum Scale, Fusion will support our Elastic Storage System, Fusion will support NetApp filers as well. Fusion will support IBM Cloud Object Storage, both as software defined storage or as an array technology, and Amazon S3 object stores and any other object storage vendor who's compliant with S3. All of those can be part of the global namespace, scalable file system. We can bring in, for example, object data without making a duplicate copy. The normal way to do that is you make a duplicate copy: you have a copy in the object store, and you make a copy to bring it into the file system. Well, guess what, we don't have to do that. So again, cutting CapEx and OpEx, and ease of management. But just as we do with our FlashSystem product and our Spectrum Virtualize and the SAN Volume Controller, where we support over 550 storage arrays that are not ours, that are our competitors', with Spectrum Fusion we've done the same thing: Fusion, Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores, as well as other S3-compliant object stores, EMC Isilon NFS, and NFS from NetApp. And by the way, we can do the discovery model as well, not just integration in the system. So we've made sure that we really do protect existing investments. And we try to eliminate that work, particularly with the discovery capability: you've got AI or analytics software connecting with the API into the discovery technology. You don't have to traverse and try to find things, because the discovery will create real-time metadata cataloging and indexing, not just of our storage but of the other storage I mentioned, which is the competition. So talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the global Fortune 1500, for sure.
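The single-global-namespace idea Eric keeps returning to, many heterogeneous backends presented as one file tree, with reads delegated rather than copied, can be sketched in a few lines. This is emphatically not how Spectrum Fusion or Spectrum Scale is implemented; the classes and mount names are invented for illustration:

```python
# Toy sketch of a "single global namespace over heterogeneous backends."
# Reads are delegated to the owning system; no duplicate copy is made.

class Backend:
    """A stand-in for any storage system (filer, object store, array)."""

    def __init__(self, name, objects):
        self.name = name
        self.objects = objects  # path -> bytes

class GlobalNamespace:
    """Maps mount prefixes onto backends and serves one unified tree."""

    def __init__(self):
        self._mounts = {}  # prefix -> Backend

    def mount(self, prefix, backend):
        self._mounts[prefix] = backend

    def read(self, path):
        for prefix, backend in self._mounts.items():
            if path.startswith(prefix):
                return backend.objects[path[len(prefix):]]
        raise FileNotFoundError(path)


ns = GlobalNamespace()
ns.mount("/s3/", Backend("amazon-s3", {"logs/day1": b"edge data"}))
ns.mount("/nfs/", Backend("netapp-filer", {"models/v1": b"trained model"}))

print(ns.read("/s3/logs/day1"))   # served from the object store, no copy made
print(ns.read("/nfs/models/v1"))  # served from the filer through the same tree
```

The point of the sketch is the access path: the application sees one namespace, while the data stays where it lives, which is the CapEx/OpEx saving Eric describes.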
And so we're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products but for the other products as well that aren't ours. >> So Sam, we understand the downside of copies, but then, so you're not doing multiple copies. How do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here? >> Yeah, that's a great question, and I'll build a little bit off of what Eric said. But look, one of the really great and unique things about Spectrum Scale is its ability to consume any storage. And we can actually allow you to bring in data sets from where they are. It could have originated in object storage; we'll cache it into the file system. It can be on any block storage. It can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of the file system, so it naturally fits into your application stack. Spectrum Scale uniquely is a globally parallel file system. There's not very many of them in the world, and there's none that can achieve what Spectrum Scale can do. We have customers running in the exabytes of data, and the performance improves with scale. So you can actually deploy Spectrum Scale on-prem, build out an environment of it, consuming whatever storage you have. Then you can go into AWS or IBM Cloud or Azure, deploy an instance of it, and it will now extend your file system into that cloud. Or you can deploy it at the edge, and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, and we'll cache in only what's needed. Normally you would have to make a copy of data into the other environment, and then you'd have to deal with that copy later. Let's say you were doing a cloud bursting use case.
Let's look at that as an example, to make this real. You're running an application on-prem. You want to spin up more compute in the cloud for your AI. Normally you'd have to make a copy of the data. You'd run your AI, then have to figure out what to do with that data. Do you copy some of it back? Do you sync them? Do you delete it? What do you do? With Spectrum Scale, it'll just automatically cache in whatever you need. It'll run there, and when you're done you spin it down. Your copy is still on-prem. You know, no data is lost. We can actually deal with all of those scenarios for you. And then if you look at what's happening at the edge, there's a lot of, say, video surveillance data pouring in, looking at the manufacturing floor, looking for defects. You can run AI right at the edge, make it available in the cloud, make that data available in your data center. Again, one file system going across all of it. And that's something unique in our data foundation built on Spectrum Scale. >> So there's some metadata magic in there as well, and that intelligence based on location. And okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on, or is it across the board? >> Sure, so first let's talk about the industries. We see certain industries going more container, quicker than other industries. So first is financial services; we see it happening there. Manufacturing: Sam already talked about AI-based manufacturing platforms. We actually have a couple clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see public sector, of course, and healthcare, and in healthcare, don't just think care delivery; at IBM that includes the research guys, so the genomic companies, the biotech companies, the drug companies are all included in that. And then of course, retail, both on-prem and off-prem.
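The cloud-bursting flow Sam walked through, the burst site caches only the data it touches, the master copy never moves, and spinning the cache down loses nothing, amounts to a read-through cache. The sketch below is purely illustrative (the real mechanism in Spectrum Scale is far more involved), with invented names throughout:

```python
# Toy read-through cache modeling the cloud-bursting pattern: only touched
# data crosses the wire; the home (on-prem) copy stays authoritative.

class BurstCache:
    """Read-through cache in front of a home (on-prem) data set."""

    def __init__(self, home):
        self.home = home      # authoritative data: path -> bytes
        self.cache = {}       # only what the burst workload touched
        self.fetches = 0      # trips back to the home site

    def read(self, path):
        if path not in self.cache:          # cache miss: pull from home
            self.cache[path] = self.home[path]
            self.fetches += 1
        return self.cache[path]

    def spin_down(self):
        """End of the burst: discard the cache; home still has everything."""
        self.cache.clear()


home = {"frame1": b"...", "frame2": b"...", "frame3": b"..."}
burst = BurstCache(home)

burst.read("frame1")
burst.read("frame1")          # second read is served locally
print(burst.fetches)          # 1: only the touched file crossed the wire
burst.spin_down()
print(sorted(home))           # master copy intact: ['frame1', 'frame2', 'frame3']
```

Contrast this with the copy-based approach Sam describes, where you would replicate the whole data set up front and then have to decide whether to sync, copy back, or delete it afterward.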
So those are sort of the industries. Then from an application workload perspective, basically AI, analytics and big data applications or workloads are the key things that Spectrum Fusion helps, because of its file system. It's high performance. And those applications tend to spread across core, edge and cloud. So those applications are spreading out; they're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine. A perfect example: we have a giant global auto manufacturer. They've got factories all over. And if you think there aren't compute resources in every factory, there are, because those factories, I just saw an article actually, those factories cost about a billion dollars to build, a billion. So they've got their own IT, and it's connected to their core data center as well. So that's a perfect example of that enterprise edge where Spectrum Fusion would be an ideal solution, whether they do it as software defined only or, of course, when you've got a billion-dollar factory, just to build it, let alone produce the autos or whatever you're producing. Silicon, for example: those fabs all cost a billion. That's where the enterprise edge fits in very well with Spectrum Fusion. >> So for those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing some of those workloads that you mentioned, or is it the edge? Like you mentioned manufacturing; I could see the edge potentially being the driver. >> Well, it's a little bit of all of those, Dave. For example, virtualization came out, and virtualization offered advantages over bare metal, okay? Now containerization has come out, and containerization offers advantages over virtualization. The good thing at IBM is we know we can support all three.
And we know, again, in the global Fortune 2000, 1500, they're probably going to run all three, based on the application, workload or use case. And our storage is really good at bare metal, very good in virtualization environments, and now, with Spectrum Fusion being container native, outstanding for container-based environments. So we see these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those various workload types. So that's why we see this as a huge advantage. And again, the market is going to containers. I'm a native Californian: you don't fight the wave, you ride the wave, and the wave is containers, and we're riding that wave. >> If you don't ride the wave you become driftwood, as Pat Gelsinger would say. >> And that is true, another native Californian. >> So okay, I wonder, Sam, I sort of hinted at this upfront in my little narrative there, but the way we see this: you've got on-prem, hybrid, you've got public clouds, cross-cloud, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, hides the underlying complexity of the infrastructure, which becomes kind of an implementation detail. Eric talked about skating to where the puck is going, or whatever sports analogy you want to use. Is that where the puck is headed? >> Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is agility. You asked why these industries are implementing containers: it's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers, delivering what they want and improving their experience. So if it's all about agility, developers don't want to wait around for infrastructure. You need to automate it as much as possible.
So it's about building infrastructure that's automated, which requires consistent APIs, and it requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement that. You want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do, as you bring more and more of your data, of your information about your customers, into these container worlds. You've got to have security rock solid. You can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware. You don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these dev ops environments. And that's what we're doing with Spectrum Fusion. We're taking an extremely unique, one of a kind storage foundation in Spectrum Scale, which gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise class container applications. >> So what's the bottom line business impact? I mean, how does this change things? Sam, I think you articulated very well that this is all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there? >> I'll mention one other point that we talk about at IBM a lot, which is the AI ladder. It's about how you take all of this information you have and use it to build new insights, to give your company an advantage.
An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and be able to build it into the fabric of your business operations, so that all the decisions you're making in your company, and all the services you deliver to your customers, are built on that data foundation. And the only way to do that, and infuse it into your culture, is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real-time data. The ultimate outcome, sorry, I know you asked for business results, is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute business outcome is that you will continue to gain market share in your industry and grow revenue. I mean, that's the outcome every business wants. >> Yeah, it's all about speed. Everybody was forced into digital transformation last year. It was sort of rushed and compressed, and now they get some time to do it right. And so modernizing apps, containers, dev ops, developer-led sorts of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom line summary. Actually, we didn't talk about the 3200. Maybe you could give us a little insight on that before we close. >> Sure. So in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200, and it's all flash. It gets 80 gigabytes a second sustained at the node level, and we can cluster them infinitely. So for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course AI, big data and analytic workloads are extremely, extremely susceptible to bandwidth and/or data transfer rate.
That's what they need to deliver their application base properly. It comes with Spectrum Scale built in, so you get the advantage of Spectrum Scale. We talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. It's ideal with its highly parallel file system. It's used all over in high performance computing and supercomputing, in drug research, in healthcare, in finance; probably about 80% of the largest banks in the world use Spectrum Scale already for AI, big data and analytics. So the new 3200 is an all-flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you can also add the 3200 to it if you want, because of the capability of our global namespace and our single file system across edge, core and cloud. So that's the 3200 in a nutshell, Dave. >> All right, give us the bottom line, Eric, and then we've got to go. What's the bumper sticker? >> Yeah, the bumper sticker is: you've got to ride the wave of containers, and IBM Storage is the company that can take you there, so that you win the big surfing contest and get the big prize. >> Eric and Sam, thanks so much, guys. It's great to see you, and I miss you guys. Hopefully we'll get together soon. So get your jabs and we'll have a beer. >> All right. >> All right, thanks, Dave. >> Nice talking to you. >> All right, thank you for watching everybody. This is Dave Vellante for "theCUBE." We'll see you next time. (upbeat music)

Published Date: Apr 28 2021



Eric Herzog, IBM & Sam Werner, IBM | CUBE Conversation, October 2020


 

(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today for a CUBE Conversation. We've got a couple of CUBE alumni veterans who've been on a lot of times. They've got some exciting announcements to tell us about today, so we're excited to jump into it. So let's go. First we're joined by Eric Herzog. He's the CMO and VP of worldwide storage channels for IBM Storage, and has made many appearances on theCUBE. Eric, great to see you. >> Great, thanks very much for having us today. >> Jeff: Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, the VP of offering management and business line executive for storage at IBM. Sam, great to see you as well. >> Great to be here, thank you. >> Absolutely. So let's jump into it. So Sam, you're in North Carolina; I think that's where the Red Hat people are. You guys have Red Hat, and there are a lot of conversations about containers; containers are going nuts. We know containers are going nuts, and it was Docker and then Kubernetes, and really a lot of traction. I wonder if you can reflect on what you see from your point of view and how that impacts what you guys are working on. >> Yeah, you know, it's interesting. Everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it, though, is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling, to be honest with you.
These initiatives are coming at them from application developers, and they're being asked to figure out how to deliver the same level of SLAs, the same level of performance, governance, security, recovery times, availability. And it's a scramble for them, to be quite honest. They're trying to figure out how to automate their storage. They're trying to figure out how to leverage the investments they've made as they go through a digital transformation. And keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy's necessarily changed, but there's been an acceleration. So all of a sudden these storage people are kind of trying to get up to speed, or being thrown right into the mix. So we're working directly with them. You'll see, in some of our announcements, we're helping them, you know, get on that journey and provide the infrastructure their teams need. >> And a lot of this is driven by multicloud and hybrid cloud, which we're seeing, you know, a really aggressive move to. Before, it was kind of this rush to public cloud, and then everybody figured out, "Well, maybe public cloud isn't necessarily right for everything." And it's kind of this horses for courses, if you will, with multicloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with. >> Yeah, and that's another big challenge. Now in the early days of cloud, people were lifting and shifting applications trying to get lower capex. And they were also starting to deploy DevOps in the public cloud in order to improve agility. And what they found is there were a lot of challenges with that. Where they thought lifting and shifting an application would lower their capital costs, the TCO actually went up significantly. Then they started building new applications in the cloud. 
They found they were becoming trapped there and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really transform the rest of it, and they're using containers to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment. What we found is enterprises get two and a half X more value out of their IT when they use a hybrid multicloud infrastructure model versus an all public cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility to deploy in a common way and automate in a common way, both in a public cloud and on premises, and gives you that flexibility. And that's what we're working on at IBM and with our colleagues at Red Hat. >> So Eric, you've been in the business a long time, and you know, it's amazing as it just continues to evolve, continues to evolve, this kind of unsexy thing under the covers called storage, which is so foundational. Data maybe used to be, you know, a liability, 'cause I have to buy a bunch of storage. Now it is the core asset of the company. And in fact the valuation of a lot of companies is based on the value of their data and what they can do with it. So clearly you've got a couple of aces in the hole, you always do. So tell us what you guys are up to at IBM to take advantage of the opportunity. >> Well, what we're doing is we are launching a number of solutions for various workloads and applications built with a strong container element. For example, a number of solutions about modern data protection and cyber resiliency. In fact, we announced last year, almost a year ago, actually only a year ago last week, Sam and I were on stage, and one of our developers did a demo of us protecting data in a container environment. 
So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI, big data and analytic applications that are in a container environment. What if I told you, instead of having to replicate and duplicate and have another set of storage right with the OpenShift Container configuration, that you could connect to an existing external exabyte-class data lake? So that not only could your container apps get to it, but the existing apps, whether they be bare-metal or virtualized, all of them could get to the same data lake. Wow, that's a concept: saving time, saving money. One pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board, most of which are container and some of which are not. For example, LTO-9, the latest high performance and high capacity tape; we're announcing some solutions around there. But the bulk of what we're announcing today is really on what IBM is doing to continue to be the leader in container storage support. >> And it's great, 'cause you talked about a couple of very specific applications that we hear about all the time. One obviously on the big data and analytics side, you know, as that continues to chase that ultimate goal of getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, to bring people back. And one of the hot topics we've talked to a lot of people about now is kind of this shift in the security threat around ransomware, and the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage. 
So these are two really important market areas where we see continued activity from all the people that we talk to every day. You must be seeing the same thing. >> Absolutely we are indeed. You know, containers are the wave. I'm a native Californian and I'm coming to you from Silicon Valley, and you don't fight the wave, you ride it. So at IBM we're doing that. We've been the leader in container storage. We, as you know, way back when, invented the hard drive, which is the foundation of almost this entire storage industry, and we were responsible for that. So we're making sure that as containers are the coming wave, we are riding that in and doing the right things for our customers, and for the channel partners that support those customers, whether they be existing customers, and obviously, with this move to containers, there are going to be some people searching for probably a new vendor. And that's something that's going to go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example, with our FlashSystems, with our Spectrum Virtualize, we're actually going to be able to support CSI snapshots not only for IBM Storage, but our Spectrum Virtualize product supports over 500 different arrays, most of which aren't ours. So if you've got that old EMC VNX2, or that HPE 3PAR or Nimble, or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM, with our Spectrum Virtualize software that runs on our FlashSystems, which of course cuts capex and opex in a heterogeneous environment, but gives them that advanced container support that they don't get because they're on an older product from, you know, another vendor. We're making sure that we can pull our storage, and even our competitors' storage, into the world of containers and do it in the right way for the end user. >> That's great. Sam, I want to go back to you and talk about the relationship with Red Hat. 
I think it was about a year ago, I don't have my notes in front of me, when IBM purchased Red Hat. Clearly you guys have been working very closely together. What does that mean for you? You've been in the business for a long time. You've been at IBM for a long time. To have a partner, you know, kind of embed with you, with Red Hat, and bring some of their capabilities into your portfolio? >> It's been an incredible experience, and I always say my friends at Red Hat, because we spend so much time together. We're looking now at leveraging a community that's really on the front edge of this movement to containers. They bring that, along with their experience around storage and containers, together with the years and years of enterprise-class storage delivery that we have in the IBM Storage portfolio. And we're bringing those pieces together. And this is a case of truly one plus one equals three. And you know, an example you'll see in this announcement is the integration of our data protection portfolio with their container native storage. We allow you, in any environment, to take a snapshot of that data. You know, this move towards modern data protection is all about a movement to doing data protection in a different way, which is about leveraging snapshots, taking instant copies of data that are application aware, allowing you to reuse and mount that data for different purposes, and being able to protect yourself from ransomware. Our data protection portfolio has industry-leading ransomware protection and detection in it, so we'll actually detect it before it becomes a problem. We're taking that industry-leading data protection software and we are integrating it into Red Hat Container Native Storage, giving you the ability to solve one of the biggest challenges in this digital transformation, which is backing up your data, now that you're moving towards stateful containers and persistent storage. So that's one area we're collaborating. 
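To make the snapshot discussion concrete: in Kubernetes, CSI snapshots like the ones Sam and Eric describe are requested through the VolumeSnapshot API (v1beta1 at the time of this conversation). Here is a minimal sketch, in Python, of building such a manifest programmatically; the snapshot class and PVC names are hypothetical, not IBM defaults:

```python
def make_volume_snapshot(name, pvc_name, snapshot_class, namespace="default"):
    """Build a Kubernetes VolumeSnapshot manifest asking the CSI driver
    to snapshot the given PersistentVolumeClaim.
    (snapshot.storage.k8s.io/v1beta1 was current in late 2020;
    newer clusters use v1.)"""
    return {
        "apiVersion": "snapshot.storage.k8s.io/v1beta1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "volumeSnapshotClassName": snapshot_class,
            "source": {"persistentVolumeClaimName": pvc_name},
        },
    }

# All names below are invented for illustration.
snap = make_volume_snapshot("sales-db-snap", "sales-db-pvc",
                            "spectrum-virtualize-snapclass")
print(snap["metadata"]["name"])  # → sales-db-snap
```

Applied with kubectl or a Kubernetes client library, a manifest like this asks the CSI driver in use to take the array-side snapshot, which is the mechanism a Spectrum Virtualize-managed array would plug into.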
We're working on ensuring that our storage arrays that Eric was talking about integrate tightly with OpenShift, and that they also work with OpenShift Container Storage, the cloud native storage portfolio from Red Hat. So we're bringing these pieces together. And on top of that, we're doing some really interesting things with licensing. We allow you to consume the Red Hat storage portfolio along with the IBM software-defined storage portfolio under a single license. And you can deploy the different pieces you need under one single license. So you get this ultimate investment protection and ability to deploy anywhere. So I think we're adding a lot of value for our customers and helping them on this journey. >> Yeah, Eric, I wonder if you could share your perspective on multicloud management. I know that's a big piece of what you guys are behind, and it's a big piece of kind of the real world as we've kind of gotten through the hype and now we're into production. It is a multicloud world, and you've got to manage this stuff, it's all over the place. I wonder if you could speak to kind of how that challenge, you know, factors into your design decisions and how you guys think about, you know, kind of the future. >> Well, we've done this in a couple of ways in things that are coming out in this launch. First of all, IBM has produced, with a container-centric model, what they call the Multicloud Manager. It's the IBM Cloud Pak for multicloud management. That product is designed to manage multiple clouds, not just the IBM Cloud, but Amazon, Azure, et cetera. What we've done is taken our Spectrum Protect Plus and we've integrated it into the Multicloud Manager. So what that means, to save time, to save money and make it easier to use: when the customer is in the Multicloud Manager, they can actually select Spectrum Protect Plus, launch it and then start to protect data. So that's one thing we've done in this launch. 
The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running in a FlashSystem, to support OCP, the OpenShift Container Platform, in a clustered environment. So what we can do there is on-premise. If there really was an earthquake in Silicon Valley right now, and that OpenShift is sitting on a server, the server just got crushed by the roof when it caved in. So you want to make sure you've got disaster recovery. So what we can do is take that OpenShift Container Platform cluster, and we can support it with our Spectrum Virtualize software running on our FlashSystem, just like we can do with heterogeneous storage that's not ours; in this case, we're doing it with Red Hat. And then what we can do is provide disaster recovery and business continuity to different cloud vendors, not just to IBM Cloud, but to several cloud vendors. We can give them the capability of replicating and protecting that cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, recover that Red Hat cluster to a different data center, and run it on-prem. So we're not only doing the integration with the Multicloud Manager, which is multicloud-centric, allowing ease of use with our Spectrum Protect Plus, but in case of a really tough situation, a fire in a data center, an earthquake, a hurricane, whatever, the Red Hat OpenShift cluster can be replicated out to a cloud with our Spectrum Virtualize software. So both are multicloud examples: the Multicloud Manager is of course designed to and does support multiple clouds, and in the second example, we support multiple clouds with our Spectrum Virtualize for Public Cloud software, so you can take that OpenShift cluster, replicate it, and not just deal with one cloud vendor but with several. 
So showing that multicloud management is important, and then leveraging that in this launch with a very strong element of container centricity. >> Right. >> Yeah, I just want to add, you know, and I'm glad you brought that up Eric, this whole multicloud capability with Spectrum Virtualize. And I could see the same for our Spectrum Scale family, which is our storage infrastructure for AI and big data. We actually, in this announcement, have containerized the client, making it very simple to deploy in a Kubernetes cluster. But one of the really special things about Spectrum Scale is its active file management. This allows you to build out a file system not only on-premises for your Kubernetes cluster, but you can actually extend that to a public cloud, and it automatically will extend the file system. If you were to go into a public cloud marketplace, and it's available in more than one, you can go in there and click deploy. For example, in AWS Marketplace, click deploy and it will deploy your Spectrum Scale cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it and it will automatically cache it locally, and we'll manage all the file access for you. >> Yeah, it's an interesting kind of paradox between, you know, kind of the complexity of what's going on in the back end, but really trying to deliver simplicity on the front end. Again, this ultimate goal of getting the right data to the right person at the right time. You just had a blog post recently, Eric, where you talked about how every piece of data isn't equal. And I think it's really highlighted in this conversation we just had about recovery, and how you prioritize and how you, you know, think about your data, because you know, the relative value of any particular piece might be highly variable, which should drive the way that you treat it in your system. 
So I wonder if you can speak a little bit, you know, to helping people think about data in the right way. As you know, they both have all their operational data, which they've always had, but now they've got all this unstructured data that's coming in like crazy, and all data isn't created equal, as you said. And if there is an earthquake or there is a ransomware attack, you need to be smart about what you have available to bring back quickly, and maybe what's not quite so important. >> Well, I think the key thing, let me go to, you know, a couple of modern data protection terms. These are two very technical terms: one is the recovery time. How long does it take you to get that data back? And the second one is the recovery point: at what point in time are you recovering the data from? And the reason those are critical is, when you look at your datasets, whether you replicate, you snap, you do a backup, the key thing you've got to figure out is: what is my recovery time? How long is it going to take me? What's my recovery point? Obviously in certain industries you want to recover as rapidly as possible, and you also want to have the absolute most recent data. So then once you know what it takes you to do that, okay, from an RPO and an RTO perspective, recovery point objective, recovery time objective. Once you know that, then you need to look at your datasets and look at what it takes to run the company if there really was a fire and your data center was destroyed. So you take a look at those datasets, and you see which are the ones I need to recover first to keep the company up and rolling. So let's take an example: the sales database or the support database. I would say those are pretty critical to almost any company, whether you be a high-tech company, a furniture company, or a delivery company. However, there also is probably a database of assets. For example, IBM is a big company. We have buildings all over. Well, guess what? 
We don't lease a chair or a table or a whiteboard. We buy them. Those are physical assets that the company has to pay for, you know, do write-downs on and all this other stuff; they need to track them. If we close a building, we need to move the desks to another building. Even if we're leasing a building, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not. So we should focus on other things first. So let's take a bank. Banks are both online and brick and mortar. I happen to be a Wells Fargo person, so guess what? There are Wells Fargo banks, two of them, in the city I'm in, okay? So the assets here are the money. In this case, I don't mean the brick and mortar of the Wells Fargo building or the desks in there; now you're talking financial assets, or their high-velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at datasets: figure out what's critical to the business to keep it up and rolling, then what's the next most critical. And you do it in basically the way you would tier anything. What's the most important thing, what's the next most important thing. It doesn't matter how you approach your job, or how you used to approach school: what are the classes I have to get an A in, and what classes can I not get an A in, depending on what your major was, all that sort of stuff. You're setting priorities, right? And since data is the most critical asset of any company, whether it's a Global Fortune 500 or whether it's Herzog Cigar Store, all of those assets, that data, is the most valuable. So you've got to make sure to recover what you need as rapidly as you need it. But you can't recover all of it instantly; there's just no way to do that. So that's why you really rank the importance of the data. It's the same with malware and ransomware. 
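Eric's tiering of datasets by criticality maps directly onto RPO/RTO values. As a toy sketch (the dataset names and objectives are invented for illustration), you can express the ranking as a sort on recovery objectives, tightest first:

```python
def recovery_order(datasets):
    """Sort datasets by recovery priority: tightest recovery time
    objective (RTO) first, ties broken by recovery point objective (RPO)."""
    return sorted(datasets, key=lambda d: (d["rto_min"], d["rpo_min"]))

# Hypothetical datasets; RTO/RPO in minutes.
datasets = [
    {"name": "asset-db",   "rto_min": 1440, "rpo_min": 1440},  # desks and chairs: recover last
    {"name": "sales-db",   "rto_min": 15,   "rpo_min": 5},     # mission critical
    {"name": "support-db", "rto_min": 60,   "rpo_min": 15},
]
print([d["name"] for d in recovery_order(datasets)])
# → ['sales-db', 'support-db', 'asset-db']
```

The point of the exercise is the ordering itself: once every dataset carries an explicit RTO and RPO, "what do we restore first after the fire" stops being a judgment call made mid-crisis.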
If you have a malware or ransomware attack, certain data you need to recover as soon as you can. In fact, there was an example, Jeff, here in Silicon Valley as well. You've probably read about the University of California, San Francisco ending up having to pay over a million dollars of ransom because some of the data related to COVID research was held. UCSF is the health care center for the University of California in Northern California. They are working on COVID, and guess what? The stuff was held for ransom. They had no choice but to pay, and they really did pay; this was around the end of June of this year. So, okay, you don't really want to do that. >> Jeff: Right. >> So you need to look at everything from malware and ransomware to the importance of the data. And that's how you figure this stuff out, whether it be in a container environment, a traditional environment or a virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but now taking it to the heart of the new wave, which is the wave of containers. >> Yeah, let me add just quickly on that, Eric. So think about those different cases you talked about. For your mission critical data, you're probably going to want snapshots that can be recovered near instantaneously. And then, for some of your data, you might decide you want to store it out in the cloud. And with Spectrum Protect, we just announced our ability to now store data out in Google Cloud, in addition to the AWS, Azure, IBM Cloud and various on-prem object stores we already supported. So we already provided that capability. And then in this announcement we're talking about LTO-9. And you've got to also be smart about which data you need to keep according to regulation for long periods of time, or which is just important to archive. 
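On the ransomware detection Sam mentioned a few turns back: IBM doesn't spell out its algorithm in this conversation, but one common, purely illustrative approach is to flag a backup run whose changed-data rate is a statistical outlier against its own history, since mass encryption rewrites far more blocks than a normal incremental does. A crude z-score version of that idea:

```python
from statistics import mean, stdev

def flag_anomalous_backups(change_rates, threshold=3.0):
    """Flag backup runs whose changed-data rate is a statistical outlier
    versus the history before them (a crude z-score test; not IBM's
    actual detection logic, just the shape of the idea)."""
    flags = []
    for i, rate in enumerate(change_rates):
        history = change_rates[:i]
        if len(history) < 3:          # not enough history to judge
            flags.append(False)
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and (rate - mu) / sigma > threshold)
    return flags

# Nightly incremental change rates (fraction of blocks changed);
# the spike on the last night looks like mass encryption.
rates = [0.02, 0.03, 0.02, 0.04, 0.03, 0.85]
print(flag_anomalous_backups(rates))
# → [False, False, False, False, False, True]
```

Catching the spike at backup time, before the anomalous copy ages out the last clean one, is what "detect it before it becomes a problem" amounts to in practice.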
You're not going to beat the economics nor the safety of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore it quickly enough, at least the mission critical things. And so those are the things that need to be in snapshots. And that's one of the main things we're announcing here for Kubernetes environments: the ability to quickly take application-aware snapshot backups of your mission critical data in your Kubernetes environments, which can very quickly be recovered. >> That's good. So I'll give you the last word, then we're going to sign off. We are out of time, but I do want to get this in. It's 2020; if I didn't ask the COVID question, I would be in big trouble. So, you know, you've all seen the memes and the jokes about COVID really being an accelerant to digital transformation, not necessarily change, but certainly a huge accelerant. I mean, I'm sure you guys have a product roadmap that's baked pretty far in advance, but I wonder if you can speak to, you know, from your perspective, as COVID has accelerated digital transformation, and you guys are so foundational to executing that, you know, kind of what has it done in terms of what you're seeing with your customers, you know, kind of the demand, and how you're seeing this kind of validation as an accelerant to move to these better types of architectures? Let's start with you, Sam. >> Yeah, you know, I think I said this, but I mean, the strategy really hasn't changed for the enterprises, but of course it is accelerating. And I see storage teams more quickly getting into trouble trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have fewer people in the data center on-premises. They're looking to do more automation and simplify the management of the environment. We're doing a lot around Ansible to help them with that. 
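The trade-off Sam lays out above, snapshots for near-instant recovery, cloud object storage for ordinary restores, tape for long-retention archives, can be sketched as a simple placement policy. The tier names and thresholds here are invented for illustration, not IBM defaults:

```python
def placement(rto_minutes, retention_years):
    """Pick a protection tier: snapshots when recovery must be fast,
    tape when data must be kept cheaply and safely for years,
    cloud object storage for everything in between."""
    if rto_minutes <= 60:
        return "snapshot"
    if retention_years >= 7:
        return "tape"
    return "cloud-object-storage"

print(placement(15, 1))     # mission critical → snapshot
print(placement(1440, 10))  # regulatory archive → tape
print(placement(480, 2))    # ordinary backup → cloud-object-storage
```

A real policy engine would weigh cost per terabyte and restore bandwidth as well, but even this toy version captures why "everything on tape" fails the mission critical case and "everything on snapshots" fails the economics.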
We're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments. So we've made a lot of investments around our Storage Insights SaaS platform, which allows them to get complete visibility into their data center, and not just their data center. We also give them visibility into the storage they're deploying in the cloud. So we're making it easier for them to monitor, manage and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes initiatives. That way, as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities, they're able to deliver the same SLAs and the same level of security and the same level of governance that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across the sets of capabilities that we're delivering here. >> Eric, we'll give you the last word, and then we're going to go to Eric's cigar shop, as soon as this is over. (laughs) >> So it's clearly all about storage made simple, in a Kubernetes environment, in a container environment, whether it's block storage, file storage, or object storage. And IBM's goal is to offer increasingly sophisticated services for the enterprise while at the same time making them easier and easier to use and to consume. If you go back to the old days, a storage admin managed X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real across all environments: container environments, even old bare-metal, and of course the not-quite-so-new-anymore virtualized environments. The admins need to manage that more and more easily, with automated point and click. 
Use AI-based automated tiering, for example. We have that with our Easy Tier technology, which automatically moves data to the fastest tier when it's hot. And when it's not as hot, when it's cool, it pushes it down to a slower tier, but it's all automated. You point and you click. Or take our migration capabilities. We built them into our software. I buy a new array, I need to migrate the data. You point, you click, and we do automatic transparent migration in the background, on the fly, without taking the servers or the storage down. And we always favor the application workload. So if the application workload is heavy at certain times of day, we slow the migration. And at night, for the sake of argument, if it's a company that's not truly 24 by seven, you know, heavily 24 by seven, and things slow down at night, we accelerate the migration. All about automation. We've done it with Ansible, and here in this launch we've done it with additional integration with other platforms. So our Spectrum Scale, for example, can use the OpenShift management framework to configure and grow our Spectrum Scale or Elastic Storage System clusters. We've done it, in this case, with our Spectrum Protect Plus, as you saw, with integration into the Multicloud Manager. So for us, it's storage made simple: incredible new features all the time, but at the same time making sure it's easier and easier to use. And in some cases, like with Ansible, it's not even the real storage people; God forbid that DevOps guy messes with the storage and loses that data, wow. So if you're using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy basically doesn't lose the data and screw up the storage. And that's a big, big issue. So it's all about storage made simple, in the right way, with incredible enterprise features that we make easy to use. We're trying to make everything essentially like your iPhone, that easy to use. 
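The heat-based tiering Eric describes with Easy Tier can be illustrated with a toy model. This is not IBM's actual heat-map algorithm, just the shape of the idea: rank extents by recent I/O and keep the hottest ones on the fast tier.

```python
def retier(extents, fast_slots):
    """Assign the hottest extents (by recent I/O count) to the fast tier
    and everything else to the capacity tier - a toy version of
    heat-based tiering in the spirit of Easy Tier."""
    ranked = sorted(extents, key=lambda e: e["io_count"], reverse=True)
    return {e["id"]: ("flash" if i < fast_slots else "nearline")
            for i, e in enumerate(ranked)}

extents = [
    {"id": "e1", "io_count": 9000},
    {"id": "e2", "io_count": 12},
    {"id": "e3", "io_count": 4800},
    {"id": "e4", "io_count": 300},
]
print(retier(extents, fast_slots=2))
# → {'e1': 'flash', 'e3': 'flash', 'e4': 'nearline', 'e2': 'nearline'}
```

Run periodically as access patterns shift, a loop like this is what "point and click" tiering automates away from the admin: the hot extents migrate up, the cool ones migrate down, and nobody places data by hand.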
That's the goal. And with a lot fewer storage admins in the world than there used to be, and incredible storage growth every single year, you'd better make it easy for the same person to manage all that storage, 'cause it's not shrinking. Someone who's sitting at 50 petabytes today is at 150 petabytes the next year, and five years from now they'll be sitting on an exabyte of production data, and they're not going to hire tons of admins. It's going to be the same two or four people that were doing the work; now they've got to manage an exabyte. Which is why this storage made simple is such a strong effort for us, with integration with the open Kubernetes frameworks, with what we've done with OpenShift, heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools. We made sure of tight integration: easy to use, easy to manage, but with sophisticated features to go with that. Simplicity is really about how you manage storage. It's not about making your storage dumb. People want smarter and smarter storage. You make it smarter, but you make it just as easy to use at the same time. >> Right. >> Well, great summary, and I don't think I could do a better job, so I think we'll just leave it right there. So congratulations to both of you and the teams for these announcements; a whole lot of hard work and sweat went in over the last little while. Continued success, and thanks for the check-in, always great to see you. >> Thank you. We love being on theCUBE as always. >> All right, thanks again. All right, he's Eric, he's Sam, I'm Jeff, you're watching theCUBE. We'll see you next time, thanks for watching. (upbeat music)

Published Date : Nov 2 2020



Eric Herzog, IBM | VMworld 2020


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stu Miniman. This is theCUBE's coverage of VMworld 2020, of course happening virtually. And there are certain people that we talk to every year at theCUBE, and this guest, I believe, has been on theCUBE at VMworld more than any other. It's actually not Pat Gelsinger, it's Eric Herzog. He is the chief marketing officer and vice president of global storage channels at IBM. Eric, Mr. Zoginstor, welcome back to theCUBE, nice to see you. >> Thank you very much, Stu. IBM always enjoys hanging with you, John, and Dave. And again, glad to be here, although not in person this time, at VMworld 2020 virtual. Thanks again for having IBM. >> Alright, so, you know, some things are the same, others very different. Of course, Eric, IBM is a long, long partner of VMware's. Why don't you set up for us a little bit, you know, 2020, the major engagements, what's new with IBM and VMware? >> So, a couple of things. First of all, we have made our Spectrum Virtualize software, software-defined block storage, work in virtual machines, both in AWS and IBM Cloud. So we started with IBM Cloud and then, earlier this year, AWS. So now we have two different cloud platforms where our Spectrum Virtualize software sits in a VM at the cloud provider. The other thing we've done, of course, is V7 support. In fact, I've done several VMUGs, and my session at VMworld is going to talk about both our support for V7 and also what we're doing with containers, CSI, Kubernetes overall, and how we can support that in a virtual VMware environment, and also what we're doing with traditional ESX and VMware configurations as well. And of course, out to the cloud, as I just talked about. >> Yeah, that discussion of hybrid cloud, Eric, is one that we've been hearing from IBM for a long time. 
And VMware has had that message, but their cloud solutions have really matured. They've got a whole group going deep on cloud native. They've been partnering on the Amazon solutions, making sure that, you know, data protection can span between, you know, the traditional data center environment where VMware is so dominant, and the public clouds. You're giving a session on some of those hybrid cloud solutions, so share with us a little bit, you know, where do the visions completely agree? What are some of the differences between what IBM is doing and maybe what people are hearing from VMware? >> Well, first of all, our solutions don't always require VMware to be installed. So for example, if you're doing it in a container environment, for example, with Red Hat OpenShift, that works slightly differently. Not that you can't run Red Hat products inside of a virtual machine, which you can, but in this case, I'm talking Red Hat native. We also of course do VMware native and support what VMware has announced with their Kubernetes-based solutions that they've been talking about since VMworld last year, obviously when Pat made some big announcements onstage about what they were doing in the container space. So we've been following that along as well. So from that perspective, we have agreement on a virtual machine perspective and of course, what VMware is doing with the container space. But then also a slightly different one when we're doing Red Hat OpenShift as a native configuration, without having a virtual machine involved in that configuration. So those are both the commonalities and the differences that we're doing with VMware in a hybrid cloud configuration.
Containers, it's been about five years, and it feels like we've made faster progress to make sure that we can have stateful environments, we can tie up with storage, but give us a little bit of a look back as to what we've learned and how we've made sure that containerized, Kubernetes environments, you know, work well with storage for customers today. >> Well, I think there's a couple of things. First of all, I think all the storage vendors learned from VMware. And then the expansion of virtual environments beyond VMware to other virtual environments as well. So I think all the storage vendors, including IBM, learned through that process, okay, when the next thing comes, which of course in this case happens to be containers, both in a VMware environment, but in an open environment with the Kubernetes management framework, that you need to be able to support it. So for example, we have done several different things. We support persistent volumes in file, block and object store. And we started with that almost three years ago on the block side, then we added the file side and now the object storage side. We also can back up data that's in those containers, which is an important feature, right? I am sitting there and I've got data now in a persistent volume, but I've got to back it up as well. So we've announced support for container-based backup either with Red Hat OpenShift or in a generic Kubernetes environment, because we're realistic at IBM. We know that you have to exist in the software infrastructure milieu, and that includes VMware and competitors of VMware. It includes Red Hat OpenShift, but also competitors to Red Hat. And we've made sure that we support whatever the end user needs. So if they're going with Red Hat, great. If they're going with a generic container environment, great. If they're going to use VMware's container solutions, great. And on the virtualization engines, the same thing. We started with VMware, but also have added other virtualization engines.
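The persistent-volume support Eric describes is what lets a container ask for storage declaratively: the application submits a PersistentVolumeClaim naming a StorageClass, and the CSI driver behind that class provisions the volume. A minimal sketch of that claim, built as a plain manifest; the storage-class name below is a made-up placeholder, not a real IBM driver or class name:

```python
import json

def make_pvc(name, storage_class, size_gi, access_mode="ReadWriteOnce"):
    """Build a Kubernetes PersistentVolumeClaim manifest as a plain dict.

    The claim asks the named StorageClass (served by a CSI driver) to
    provision `size_gi` GiB of persistent storage for a container.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "storageClassName": storage_class,
            "accessModes": [access_mode],
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# "block-flash" is a placeholder; a real cluster would use whatever
# class name its CSI driver registered.
pvc = make_pvc("app-data", "block-flash", 100)
print(json.dumps(pvc, indent=2))
```

Applying the same manifest against OpenShift or a generic Kubernetes cluster is what makes the storage layer interchangeable underneath the container platform.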
So I think the storage community as a whole, and IBM in particular, has learned we need to be ready day one. And like I said, three years ago, we already had persistent volume support for block store. It's still the dominant storage and we had that three years ago. So for us, that would be really, I guess, two years from what you've talked about when containers started to take off. And within two years we had something going that was working at the end user level. Our sales team could sell it, our business partners could sell it. As you know, many of the business partners are really rallying around containers, whether it be Red Hat or in what I'll call a more generic environment as well. They're seeing the forest through the trees. I do think when you look at it from an end user perspective, though, you're going to see all three. So, particularly in the Global Fortune 1000, you're going to see Red Hat environments, generic Kubernetes environments, VMware environments, just like you often see in some instances, heterogeneous virtualization environments, and you're still going to see bare metal. So I think it's going to vary by application workload and use case. And I think all, I'd say midsize enterprise up, let's say, $5 billion company and up, probably will have at least two, if not all three of those environments, container, virtual machine, and bare metal. So we need to make sure that at IBM we support all those environments to keep those customers happy.
So what are you seeing from some of your customers today? >> Well, I think the key thing is data reuse. So, in this case, think of a backup, a snap or replica dataset, which is real-world data, and being able to use that and reuse that. And now the storage guys want to make sure they know who's, if you will, checked it out. We do that with our Spectrum Copy Data Management. You also have, of course, integration with the Ansible framework, which IBM supports, in fact, we'll be announcing some additional support for more features in Ansible coming at the end of October. We'll be doing a large launch, very heavily on containers. Containers and primary storage, containers in hybrid cloud environments, containers in big data and AI environments, and containers in the modern data protection and cyber resiliency space as well. So we'll be talking about some additional support in this case about Ansible as well. So you want to make sure, one of the key things, I think, if you're a storage guy, if I'm the VP of infrastructure, or I'm the CIO, even if I'm not a storage person, in fact, if you think about it, I'm almost 70 now. I have never, ever, ever, ever met a CIO who used to be a storage guy, ever. I've been with big companies, I was at EMC, I was at Seagate Maxtor, I've been at IBM actually twice. I've also done seven startups, as you guys know at theCUBE. I have never, ever met a CIO who used to be a storage person. Ever, in all those years. So, what appeals to them is, how do I let the dev guys and the test guys use that storage? At the same time, they're smart enough to know that the software guys and the test guys could actually screw up the storage, lose the data, or if they don't lose the data, cost them hundreds of thousands to millions of dollars because they did something wrong and they have to reconfigure all the storage solutions.
So you want to make sure that the CIO is comfortable, that the dev and the test teams can use that storage properly. It's part of what Ansible's about. You want to make sure that you've got tight integration. So for example, we announced a container-native version of our Spectrum Discover software, which gives you comprehensive metadata cataloging and indexing. Not only for IBM's scale-out file, Spectrum Scale, not only for IBM object storage, IBM cloud object storage, but also for Amazon S3 and also for NetApp filers and also for EMC Isilon. And it's container native. So you want to make sure in that case, we have an API. So the AI software guys or the big data software guys can interface with that API to Spectrum Discover and let it do all the work. And we're talking about a piece of software that can traverse billions of objects in two seconds, billions of them. And it's ideal to use in solutions that are hundreds of petabytes, up into multiple exabytes. So it's a great way that, by having that API, the CIO is confident that the software guys can use the API and not mess up the storage, because, you know, the storage guys and the data scientists can configure Spectrum Discover and then save it as templates and run an AI workload every Monday, and then run a big data workload every Tuesday, and then Wednesday run a different AI workload and Thursday run a different big data workload. And so once they've set that up, everything is automated. And CIOs love automation, and they really are sensitive. Although they're all software guys, they are sensitive to software guys messing up the storage 'cause it could cost them money, right? So that's their concern. We make it easy.
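The Spectrum Discover workflow Eric outlines, where data scientists query a metadata catalog through an API rather than walking the storage itself, can be pictured with a toy in-memory index. Every record and field name below is invented for illustration; the product's real REST API is not shown here, only the access pattern:

```python
# A toy metadata catalog spanning heterogeneous sources. A real
# cataloging product indexes billions of objects server-side; this
# sketch only shows the pattern: query by metadata tags, get back
# object locations, never touch the storage arrays directly.
CATALOG = [
    {"path": "s3://lake/scans/img001.tif", "source": "s3",
     "tags": {"project": "drive", "type": "lidar"}},
    {"path": "gpfs:/data/run42/frame.bin", "source": "scale",
     "tags": {"project": "drive", "type": "camera"}},
    {"path": "s3://lake/docs/readme.txt", "source": "s3",
     "tags": {"project": "docs", "type": "text"}},
]

def search(catalog, **wanted_tags):
    """Return object records whose tags match every key=value given."""
    return [rec for rec in catalog
            if all(rec["tags"].get(k) == v for k, v in wanted_tags.items())]

hits = search(CATALOG, project="drive")
print([h["path"] for h in hits])
# → ['s3://lake/scans/img001.tif', 'gpfs:/data/run42/frame.bin']
```

Note that one query matched objects on two different backends, which is the point of cataloging across IBM and non-IBM storage behind a single API.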
You know, you think back, for years vCenter would be the place that everything could plug in. You could have more generalists using it. The HCI wave was people kind of getting away from being storage specialists. Today VMware has, of course, vCenter as their main estate, but they have Tanzu. On the IBM and Red Hat side, you know, this year you announced the Advanced Cluster Management. What's that management landscape look like? How does the storage get away from managing some of the bits and bytes and, you know, just embrace more of that automation that you talked about? >> So in the case of IBM, we make sure we can support both. We need to appeal to the storage nerd, the storage geek if you will. At the same time, to a more generalist environment, whether it be an infrastructure manager, whether it be some of the software guys. So for example, we support, obviously, vCenter. We're going to be supporting all of the elements that are going to happen in a container environment that VMware is doing. We have hot integration and big time integration with Red Hat's management framework, both with Ansible, but also in the container space as well. We're announcing some things that are coming again at the end of October in the container space about how we interface with the Red Hat management schema. And so you don't always have to have the storage expert manage the storage. You can have the Red Hat administrator, or in some cases, the DevOps guys do it. So we're making sure that we can cover both sides of the fence. Some companies, and this is just my personal belief, as containers become commonplace, while the software guys are going to want to still control it, there eventually will be a Red Hat/container admin, just like all the big companies today have VMware admins. They all do. Or virtualization admins that cover VMware and VMware's competitors such as Hyper-V. They have specialized admins to run that.
And you would argue, VMware is very easy to use, so why aren't the software guys playing with it? 'Cause guess what? Those VMs are sitting on servers containing both apps and data. And if the software guy comes in to do something and messes it up, so what have the big entities done? They've created basically a virtualization admin layer. I think that over time, either the virtualization admins become virtualization/container admins, or if it's big enough for both estates, there'll be container admins at the Global Fortune 500, and they'll also be virtualization admins. And then the software guys, the devOps guys will interface with that. There will always be a level of management framework. Which is why we integrate, for example, with vCenter, what we're doing with Red Hat, what we do with generic Kubernetes, to make sure that we can integrate there. So we'll make sure that we cover all areas because a number of our customers are very large, but some of our customers are very small. In fact, we have a company that's in the software development space for autonomous driving. They have over a hundred petabytes of IBM Spectrum Scale in a container environment. So that's a small company that's gone all containers, at the same time, we have a bunch of course, Global Fortune 1000s where IBM plays exceedingly well that have our products. And they've got some stuff sitting in VMware, some stuff sitting in generic Kubernetes, some stuff sitting in Red Hat OpenShift and some stuff still in bare metal. And in some cases they don't want their software people to touch it, in other cases, these big accounts, they want their software people empowered. So we're going to make sure we can support both, and both management frameworks. Traditional storage management frameworks with each one of our products, and also management frameworks for virtualization, which we've already been doing. And now management frameworks for containers as well.
We'll make sure we can cover all three of those bases 'cause that's what the big entities will want. And then in the smaller names, you'll have to see who wins out. I mean, they may still use three in a small company, you really don't know, so you want to make sure you've got everything covered. And it's very easy for us to do this integration because of things we've already historically done, particularly with the virtualization environment. So yes, the interstices of the integration are different, but we know here's kind of the process to do the interconnectivity between a storage management framework and a generic management framework, originally, of course, with vCenter, and now doing it for the container world as well. So at least we've learned best practices, and now we're just tweaking those best practices for the differences between a container world and a virtualization world. >> Eric, VMworld is one of the biggest times of the year, where we all get together. I know how busy you are going to the show, meeting with customers, meeting with partners, you know, walking the hallways. You're one of the people that traveled more than I did pre-COVID. You know, you're always at the partner shows and meeting with people. Give us a little insight as to how you're making sure that, partners and customers, those conversations are still happening. We understand everything over video can be a little bit challenging, but, what are you seeing here in 2020? How's everybody doing? >> Well, so, a couple of things. First of all, I already did two partner meetings today. (laughs) And I have an end user meeting, two end user meetings tomorrow. So what we've done at IBM is make sure we do a couple things. One, short and to the point, okay? We have automated tools to actually show, drawing, just like the infamous walk up to the whiteboard in a face to face meeting, we've got that. We've also now tried to make sure everybody is not being overly inundated with WebEx.
And by the way, there's already a lot of WebEx anyway. I can think of a meeting I had with a telco, one of the Fortune 300, and this was actually right before Thanksgiving. I was in their office in San Jose, but they had guys in Texas and guys on the East Coast all on. So we were still over WebEx, but it also was a two and a half hour meeting, actually almost a three hour meeting. And both myself and our Flash CTO went up to the whiteboard, which you could then see over WebEx 'cause they had a camera showing up onto the whiteboard. So now you have to take that and use integrated tools. But people are now, I would argue, over WebEx. There is a different feel to doing the WebEx than when you're doing it face to face. We have to fly somewhere, or they have to fly somewhere. We have to even drive somewhere, so in between meetings, if you're going to do four customer calls, Stu, as you know, I travel all over the world. So I was in Sweden actually right before COVID. And in one day, the day after we had a launch, we launched our new FlashSystem products in February on the 11th, on February 12th, I was still in Stockholm and I had two partner meetings and two end user meetings. But the sales guy was driving me around. So in between the meetings, you'd be in the car for 20 minutes or half an hour. It feels different when you do WebEx after WebEx after WebEx with basically no break. So you have to be sensitive to that when you're talking to your partners, sensitive to that when you're talking to the customers, sensitive when you're talking to the analysts, such as you guys, sensitive when you're talking to the press and all your various constituents. So we've been doing that at IBM, really, since the COVID thing got started, coming up with some best practices so we don't overtax the end users and overtax our channel partners.
>> Yeah, Eric, the joke I had on that is we're all following the Bill Belichick model now, no days off, just meeting, meeting, meeting every day, you can stack them up, right? You used to enjoy those downtimes in between, where you could catch up on a call, do some things. I had to carve out some time to make sure I get to that stack of books that normally I would read in the airports or on flights, you know. I do enjoy reading a book every now and again, so. Final thing, I guess, Eric. Here at VMworld 2020, you know, give us final takeaways that you want your customers to have when it comes to IBM and VMware. >> So a couple of things. A, we are tightly integrated, and have been tightly integrated, with what they've been doing in their traditional virtualization environment. As they move to containers we'll be tightly integrated with them as well, as well as other container platforms, not just from IBM with Red Hat, but again, generic Kubernetes environments with open source container configurations that don't use IBM Red Hat and don't use VMware. So we want to make sure that we span that. In traditional VMware environments, like with Version 7 that came out, we make sure we support it. In fact, VMware just announced support for NVMe over Fibre Channel. Well, we've been shipping NVMe over Fibre Channel for just under two years now. It'll be almost two years, well, it will be two years in October. So we're sitting here in September, it's almost been two years since we've been shipping that. But they haven't supported it, so now of course, and let me pre-announce something here: as part of our launch the last week of October at IBM's TechU, it'll be on October 27th, you can join for free. You don't need to attend TechU, we'll have a free registration page.
So just follow Zoginstor or look at my LinkedIn 'cause I'll be posting shortly when we have the link, but we'll be talking about things that we're doing around V7, with support for VMware's announcement of NVMe over Fibre Channel, even though we've had it for two years coming next month. But they're announcing support, so we're doing that as well. So all of those sort of checkbox items, we'll continue to do as they push forward into the container world. IBM will be there right with them as well because we know it's a very large world and we need to support everybody. We support VMware. We support their competitors in the virtualization space 'cause some customers have, in fact, some customers have both. They've got VMware and maybe one other of the virtualization elements. Usually VMware is the dominant one of course, but if they've got even a little bit of it, we need to make sure our storage works with it. We're going to do the same thing in the container world. So we will continue to push forward with VMware. It's a tight relationship, not just with IBM Storage, but with the server group, clearly with the cloud team. So we need to make sure that IBM as a company stays very close to VMware, as well as, obviously, what we're doing with Red Hat. And IBM Storage makes sure we will do both. I like to say that IBM Storage is the Switzerland of the storage industry. We work with everyone. We work with all these infrastructure players from the software world. And even with our competitors: our Spectrum Virtualize software that comes on our FlashSystem arrays supports over 550 different storage arrays that are not IBM's. Delivering enterprise-class data services, such as snapshots, replication, data-at-rest encryption, migration, all those features, but you can buy the software and use it with our competitors' storage arrays.
So at IBM we've made a practice of making sure that we're very inclusive with our software business across the whole company, and in storage in particular with things like Spectrum Virtualize, with what we've done with our backup products; of course we back up everybody's stuff, not just ours. We're making sure we do the same thing in the virtualization environment. Particularly with VMware and where they're going into the container world, and what we're doing with our own, obviously, sister division, Red Hat, but even in a generic Kubernetes environment. Not everyone is going to buy Red Hat or VMware. There are people who are going to do industry-standard Kubernetes: they're going to use that, if you will, open-source container environment with Kubernetes on top, and not use VMware and not use Red Hat. We're going to make sure that if they do it what I'll call generically, or if they use Red Hat, if they use VMware or some combo, we will support all of it. And that's very important for us at VMworld, to make sure everyone is aware that while we may own Red Hat, we have a very strong, powerful connection to VMware, and we're going to continue that in the future as well. >> Eric Herzog, thanks so much for joining us. Always a pleasure catching up with you. >> Thank you very much. We love being with theCUBE, you guys do great work at every show and one of these days I'll see you again and we'll have a beer. In person. >> Absolutely. So, definitely, Dave Vellante and John Furrier send their best, I'm Stu Miniman, and thank you as always for watching theCUBE. (relaxed electronic music)

Published Date : Sep 29 2020



Eric Herzog, IBM Storage | CUBE Conversation February 2020


 

(upbeat funk jazz music) >> Hello, and welcome to theCUBE Studios in Palo Alto, California for another CUBE Conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. What does every CIO want to do? They want to support the business as it evolves and transforms, using data as that catalyst for better customer experience, improved operations, and more profitable options. But to do that we have to come up with a way of improving the underlying infrastructure that makes all this possible. We can't have a situation where we introduce more complex applications in response to richer business needs and have that translated into non-scalable underlying technology. CIOs in 2020 and beyond have to increasingly push their suppliers to make things simpler. And that's true in all domains, but perhaps especially storage, where the explosion of data is driving so many of these changes. So what does it mean to say that storage can be made more simple? Well, to have that conversation we're going to be speaking with Eric Herzog, CMO and VP of Global Channels at IBM Storage, about, quite frankly, an announcement that IBM's making to specifically address that question: making storage simpler. Eric, thanks very much for coming back to theCUBE. >> Great, thank you. We love to be here. >> All right, I know you've got an announcement to talk about, but give us the update. What's going on with IBM Storage? >> Well, I think the big thing is, clients have told us, storage is too complex. We have a multitude of different platforms, an entry product, a mid-range product, a high-end product, then we have to traverse to the cloud. Why can't we get a simple, easy to use, but very robust feature set?
So at IBM Storage with this FlashSystem announcement, we have a family that traverses entry, mid-range, enterprise and automatically can go out to a hybrid multicloud environment, all driven across a common platform, common API, common software, our award-winning Spectrum Virtualize, and innovative technologies around, whether it be cyber-resiliency, performance, incredible performance, ease of use, easier and easier to use. For example, we can do AI-based automated tiering from one flash array to another, or from storage class memory to flash. Innovation, at the same time driving better value out of the storage but not charging a lot of extra money for these features. In fact, our FlashSystems announcement, the platforms, depending on the configuration, can be as much as 50% lower than our previous generation. Now that's delivering value, but at the same time we added enhanced features, for example, the capability of even better container support than we already had in our older platform. Or our new FlashCore Modules that can deliver performance in a cluster of up to 17.2 million IOPS, up from our previous performance of 15. Yet, as I said before, delivering that enterprise value and those enterprise data services, in this case I think you said, depending on the config, up to as much as 50% less expensive than some of our previous generation products. >> So let me unpack that a little bit. So, historically, when you look at, or even today, when you look at how storage product lines are set up, they're typically set up for one footprint for the low end, one or more footprints in the mid-range, and then one or more footprints at the high-end. And those are differentiated by the characteristics of the technologies being employed, the function and services that are being offered, and the prices and financial arrangements that are part of it. 
Are you talking about, essentially, a common product line that is differentiated only by the configuration needs of the volume and workloads? >> Exactly. The FlashSystem traverses entry, mid-range, enterprise, and can automatically get you out to a hybrid multicloud environment, same APIs, same software, same management infrastructure. Our Storage Insights product, which is a cloud-based storage manager and predictive analytics tool, works on the entry product at no charge, the mid-range product at no charge, the enterprise product at no charge, and we've even added, in that solution, support for non-IBM platforms, again. So, delivering more value across a standard platform with a common API, a common software. Remember, today's storage is growing exponentially. Are the enterprise customers getting exponentially more storage admins? No. In fact, many of the big enterprises, after the downturn of '08 and '09, had to cut back on storage resources. They haven't hired back to how many storage resources they had in 2007 or '08. They've gotten back to full IT, but a lot of those guys are DevOps people or other functions, so the storage admins and the IT infrastructure admins have to manage extra petabytes, extra exabytes depending on the type of company. So one platform that can do that and traverse out to the cloud automatically gives you that innovation and that value. In fact, two of our competitors, just as an example, do the same thing with four platforms. Two others have three. We can do it with one. Simple platform, common API, common storage management, common interface, incredible performance, cyber-resiliency, but all built in something that's a common data management infrastructure with common data software, yet continuing to innovate as we've done with this release of the FlashSystem family. >> OK, so talk about the things that, common API, common software, also, I presume, common, the core module, that FlashCore Module that you have, common across the family as well?
>> Almost all the family. At the very entry space we still do use industry-standard SSDs, but we can get as low as a street price of $16,000 for an all-flash array. Two, three years ago that would've been unheard of. And, by the way, it has six nines of availability, same software interface and API as a system that could go up to millions of dollars at the way high end, right? And anything in between. So common ease of use, common management, simple to manage, simple to deploy, simple to use, but not simple in the value proposition. Reduce the TCO, improve the ROI, reduce the operational manpower, they're overtaxed as it is. So we're making this available across the FlashSystem portfolio, going out to the hybrid multicloud, but bringing in all this high technology such as our FlashCore Modules and, as I said, at a reduced price compared to the previous generation. What more could you ask for? >> OK, so you've got some promises that you made in 2019 that you're also actually realizing. One of my favorite ones, something I think is pretty important, is storage class memory. Talk about how some of those 2019 promises are being realized in this announcement. >> So what we did is, when we announced our first FlashSystem family in 2018 using our new NVMe FlashCore Modules, we had an older FlashSystem family for several years that used, you know, the standard SAS interface. But our first NVMe product was announced in the summer of 2018. At that time we said, all the way back then, that in early '20 we would start shipping storage class memory. Now, by the way, those FlashSystem NVMe products that we announced back then actually can still use storage class memory, so we're protecting the investment of our installed base. Again, innovation with value on the installed base. >> A very IBM thing to do.
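The automated tiering Eric has mentioned moves data between storage class memory and flash by access heat: the hottest data earns the fastest, scarcest tier. This is a deliberately simplified sketch of that placement idea; real tiering uses far richer heat statistics and migration hysteresis, and every name and number here is invented for illustration:

```python
def place_extents(extent_heat, scm_capacity):
    """Assign each extent to 'scm' or 'flash' by access heat.

    extent_heat: dict mapping extent id -> recent access count.
    scm_capacity: how many extents fit on the storage-class-memory tier.
    The hottest extents get SCM; everything else stays on flash.
    """
    ranked = sorted(extent_heat, key=extent_heat.get, reverse=True)
    hot = set(ranked[:scm_capacity])
    return {ext: ("scm" if ext in hot else "flash") for ext in extent_heat}

# Four extents with very different access counts; SCM holds only two.
heat = {"e1": 900, "e2": 15, "e3": 480, "e4": 3}
placement = place_extents(heat, scm_capacity=2)
print(placement)
# e1 and e3 are the two hottest, so they land on SCM.
```

Rerunning the placement as the heat map shifts is what makes the tiering "automated": the admin sets the policy once and the array keeps rebalancing.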
>> Yes, we want to take care of the installed base, we also want to have new modern technologies, like storage class memory, like improved performance and capacity in our FlashCore Modules, where we take off-the-shelf flash and create our own modules. Seven-year media warranty, up to 17.2 million IOPS, 70 microseconds of latency, which is 30% better than our next nearest competitor. By the way, we can create a 17 million IOPS config in only eight rack U. One of our competitors gets close, 15 million, but it takes them 40 rack U. Again, operational manpower, 40 rack U is harder to manage, simplicity of deployment, it's harder to deploy all that in 40 rack U, we can do it in eight. >> And pricing. >> Yes. And we've even brought out now a preconfigured rack. So what we call the FlashSystem 9200R, built into the rack with a switching infrastructure, with the storage you need, IBM services will deploy it for you, that's part of the deal, and you can create big solutions that can scale dramatically. >> Now R stands for hybrid? >> Rack. >> Rack. Well, talk to me about some of the hybrid packaging that you're bringing out for hybrid cloud. >> Sure, so, from a hybrid cloud perspective, our Spectrum Virtualize software, which sits on-prem, entry, mid-range and at the upper end, can traverse to a cloud version called Spectrum Virtualize for Cloud. Now, one of the key things of Spectrum Virtualize, both on-prem and our cloud version, is it supports not only IBM arrays, but, through a storage virtualization technology, over 450 arrays from multiple vendors, in short, our competition. So we can take our arrays, and automatically go out to the cloud. We can do a lot of things. Cloud air gapping, to help with malware and ransomware protection, DR, snapshots and replicas.
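The cloud air gapping just described, keeping snapshot copies at a cloud target and expiring the ones outside a retention window, can be pictured as a simple retention policy. This is an illustrative sketch only; the function and the 7-day policy below are hypothetical, not a Spectrum Virtualize API.

```python
from datetime import datetime, timedelta, timezone

def snapshots_to_expire(snapshot_times, keep_days, now):
    """Return the snapshot timestamps older than the retention window."""
    cutoff = now - timedelta(days=keep_days)
    return [t for t in snapshot_times if t < cutoff]

now = datetime(2020, 2, 12, tzinfo=timezone.utc)
snaps = [now - timedelta(days=d) for d in (1, 3, 10, 40)]  # daily cloud copies
old = snapshots_to_expire(snaps, keep_days=7, now=now)
print(len(old))  # the 10- and 40-day-old copies fall outside the window
```

In a real deployment the expired copies would be detached rather than deleted immediately, which is what makes the copy an "air gap" against ransomware rather than just a backup.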
Not only can the new FlashSystem family do that, through Spectrum Virtualize on-prem and then out, but Spectrum Virtualize running on our FlashSystem portfolio can actually virtualize non-IBM arrays and give them the same enterprise functionality, and in this case, hybrid cloud technology, not only for us, but for our competitors' products as well. One user interface. Now talk about simple. Our own products, again one family, entry, mid-range and enterprise, traversing the cloud. And by the way, for those of you who are heterogeneous, we can deliver those enterprise-class services, including going out to a hybrid multi-cloud configuration, for our competitors' products as well. One user interface, one throat to choke, one support infrastructure with our Storage Insights platform, so it's a great way to make things easier, cut the CAPEX and OPEX, but not cut the innovation. We believe in value and innovation, but in an easy-deploy methodology, so that you're not overly complex. And that is killing people, the complexity of their solutions. >> All right. So there's a couple of things about cloud, as we move forward, that are going to be especially interesting. One of them is going to be containers. Everybody's talking about, and IBM's been talking about, you've been talking about this, we've talked about this a number of times, about how containers and storage and data are going to come together. How do you see this announcement supporting that emerging and evolving need for container-based applications in the enterprise? >> So, first of all, it's often tied to hybrid multi-cloud. Many of the hybrid cloud configurations are built on a container-based environment. We support Red Hat OpenShift. We support Kubernetes environments. We can provide, on these systems at no charge, persistent storage for those configurations.
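For the persistent storage support just mentioned, a Kubernetes or OpenShift workload would typically request a volume through a PersistentVolumeClaim. The sketch below builds such a manifest as a plain Python dict and prints it as JSON (which kubectl also accepts); the storage class name "ibm-flashsystem" is a made-up placeholder, not a documented IBM class name.

```python
import json

# A PersistentVolumeClaim as a plain dict. The storageClassName is a
# hypothetical placeholder standing in for whatever class the array's
# provisioner exposes.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "ibm-flashsystem",
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
print(json.dumps(pvc, indent=2))
```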
We also offer, although it does require a backup package, Spectrum Protect, the capability of backing up that persistent storage in an OpenShift or Kubernetes environment. So really it's critical. Part of our simplicity is that this FlashSystem platform, with this technology, can support bare metal workloads, virtualized workloads, VMware, Hyper-V, KVM, OVM, and now container workloads. And we do see, for the coming years, think about bare metal. Bare metal is as old as I am. That's pretty old. Well, we've got tons of customers that still have bare metal applications, but everyone's also gone virtualized. So it's not, are we going to have one? It's, you're going to have all three. So with the FlashSystem family, what we have with Spectrum Virtualize software, what we have with our container support, what we have with bare metal support, incredible performance, whatever you need, VMware integration, Hyper-V integration, everything you need for a virtualized environment, and for a container environment, we have everything too. And we do think the, especially the mid to big accounts, are going to try to run all three, at least for the next couple of years. This gives you a platform that can do that, at the entry point, up to the high end, and then out to a hybrid multi-cloud environment. >> With that common software and APIs across. Now, every year that you and I have talked, you've been especially passionate about the need for turning the crank, and evolving and improving the nature of automation, which is another one of the absolute necessities, as we start thinking about cloud. How is this announcement helping to take that next step, turn the crank in automation? >> So a couple of things. One is our support now for Ansible, so offering that Ansible support integrates into the container management frameworks. Second thing is, we have a ton of AI-based technology built into the FlashSystem platform.
First is our cloud-based storage management and predictive analytics package, Storage Insights. The base version comes for free across our whole portfolio, whether it be entry, mid-range or high-end, across the whole FlashSystem family. It gives you predictive analytics. If you really do have a support problem, it eases the support issues. For example, instead of me saying, "Peter, send me those log files." Guess what? We can see the log files. And we can do it right there while you're on the phone. You've got a problem? Let's make it easier for you to get it solved. So Storage Insights covers AI-based predictive analytics, performance, configuration issues, all predictively done, so AI-based. Secondly, we've integrated AI into our Spectrum Virtualize product. As an example, our Easy Tier technology can allow you to tier data from storage class memory to flash, and guess what it does? It automatically knows, based on usage patterns, where the data should go. Should it be on the storage class memory? Should it be on FlashCore Modules? And in fact, we can create a configuration with FlashCore Modules and industry-standard SSDs, which are both flash, but our FlashCore Modules are substantially faster, much better latency, like I said, 30% better than the next nearest competition, up to 17.2 million IOPS. The next closest is 15. And in fact, it's interesting, one of our competitors has used storage class memory as a read cache. It dramatically helps them. But they go from 250 publicly stated microseconds of latency, to 125. With this product, the FlashSystem, anything that uses our FlashCore Modules, our FlashSystem 7200, our FlashSystem 9200 product, and the 9200R product, we can do 70 microseconds of latency, so almost twice as fast, without using storage class memory. So think what that storage class memory will offer.
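The usage-pattern-driven placement described here, deciding whether data belongs on storage class memory or on FlashCore Modules, can be sketched as a heat-based policy: count accesses per extent over a window, then promote the hottest. The tier names, the window, and the slot-count threshold are all assumptions for illustration; the actual Easy Tier algorithm is not public.

```python
from collections import Counter

def plan_placement(access_log, scm_slots):
    """Place the most-accessed extents on SCM, the rest on flash."""
    heat = Counter(access_log)                   # extent -> access count
    ranked = [e for e, _ in heat.most_common()]  # hottest first
    return {e: ("scm" if i < scm_slots else "flash")
            for i, e in enumerate(ranked)}

log = ["e1", "e2", "e1", "e3", "e1", "e2"]       # e1 is the hottest extent
print(plan_placement(log, scm_slots=1))
# → {'e1': 'scm', 'e2': 'flash', 'e3': 'flash'}
```

Re-running the plan as the access log changes is what makes the tiering "automatic": data migrates between tiers as its heat changes, with no administrator decision per volume.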
So we can create hybrid configurations with storage class memory and flash, you could have our FlashCore Modules and industry-standard SSDs if you want, but it's all AI-based. So we have AI in our Storage Insights predictive analytics, management and support infrastructure. And we have predictive analytics in things like our Easy Tier. So not only do we think storage is a critical foundation for the AI application workload and use case, which it is, but you need to imbue your storage itself with AI, which we've done across FlashSystems, including what we've done with our cloud edition, because Spectrum Virtualize has a cloud edition and an on-prem edition, seamless transparency, with AI across that entire platform, using Spectrum Virtualize. >> All right, so let me summarize. We've got an absolute requirement from enterprise to make storage simpler, which requires simple product families with more commonality, where that commonality delivers great value, and at the same time the option to innovate, where that innovation's going to create value. We have a lot simpler set of interfaces and technologies, as you said they're common, but they are more focused on the hybrid cloud, the multi-cloud world, that we're working in right now, that brings more automation and more high-quality storage services to bear wherever you are in the enterprise. So I've got to ask you one more question. I'm a storage administrator, or a person who is administering data, inside the infrastructure. I used to think of doing things this way, what is the one or two things that I'm going to do differently as a consequence of this kind of an announcement? >> So I think the first one, it's going to reduce your operational expenses and your operational manpower, because you have a common API, a common software platform, a common foundation for data management and data movement, it's not going to be as complex for you to pull together your storage configurations.
Second thing, you don't have to make as many choices between high-end workloads, mid-range workloads, and entry workloads. Six nines across the board. Enterprise-class data services across the board. So when you think simple, don't think simple as simplistic, low-end. This is a simple to use, simple to deploy, simple to manage product, with extensive innovation and a price that's- >> So simple to secure? >> And simple to secure. Data-at-rest encryption across the portfolio. And in fact, for those that use our FlashCore Modules, no performance hit on encryption, and no performance hit on data compression. So it can help you shrink the actual amount you need to buy from us, which sounds sort of crazy, that a storage company would do that, but with our data reduction technologies, compression being one of them, there's no performance hit, you can compress compressible workloads, and now, anything with a FlashCore Module, which, by the way, happens to be FIPS 140-2 certified, there's no excuse not to encrypt, because encryption, as you know, has had a performance hit in the past. Now, with our 7200, our 5100 FlashSystem, and our FlashSystem 9200 and 9200R, there's no performance hit on encrypting, so it gives you that extra resiliency that you need in a storage world, and you don't give up compression, which helps you shrink how much you end up buying from IBM. So that's the type of innovation we deliver, in a simple to use, easy to deploy, easy to manage, but incredibly innovative solution, across the board, not just let's innovate at the high end, you know what I mean? Trying to make that innovation spread, which, by the way, makes it easier for the storage guy.
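The economics claim above, that inline compression with no performance penalty shrinks what you have to buy, amounts to dividing price by usable capacity after data reduction. A back-of-envelope sketch, where the price and the 2:1 ratio are hypothetical inputs for illustration, not quoted IBM figures:

```python
def effective_cost_per_tb(price: float, raw_tb: float, reduction_ratio: float) -> float:
    """Price divided by usable capacity after data reduction."""
    return price / (raw_tb * reduction_ratio)

# Hypothetical: a $16,000 array with 10 TB raw and 2:1 compressible data.
print(effective_cost_per_tb(16_000, 10, 2.0))  # 800.0 $/TB, vs 1600.0 raw
```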
So the days of presuming that there's this great differentiation in the tier are slowly coming to an end, as everything becomes increasingly integrated. >> Well, as you've pointed out many times, data is the asset, if not the most valuable one. It is the asset of today's digital enterprise, and it doesn't matter whether you're a global Fortune 500, or you're a (mumble). Everybody is a digital enterprise these days, big, medium or small. So cyber resiliency is important, cutting costs is important, being able to modernize and optimize your infrastructure, simply and easily. The small guys don't have a storage guy, and a network guy and a server guy, they have the IT guy. And even the big guys, who used to have hundreds of storage admins in some cases, don't have hundreds anymore. They've got a lot of IT people, but they cut back, so these storage admins and infrastructure admins in these global enterprises, they're managing 10, 20 times the amount of storage they managed even two or three years ago. So, simple, across the board, and of course hybrid multicloud is critical to these configurations. >> Eric, it's a great announcement, congratulations to IBM for actually delivering on what your promises are. Once again, great to have you on theCUBE. >> Great, thank you very much, Peter. >> And thanks to you, again, for participating in this CUBE conversation, I'm Peter Burris, see you next time. (upbeat, jazz music)

Published Date : Feb 12 2020


Eric Herzog, IBM | Cisco Live EU Barcelona 2020


 

>> Announcer: Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome back to Barcelona, everybody, we're here at Cisco Live, and you're watching theCUBE, the leader in live tech coverage. We go to the events and extract the signal from the noise. This is day one, really, we started day zero yesterday. Eric Herzog is here, he's the CMO and Vice President of Storage Channels. Probably been on theCUBE more than anybody, with the possible exception of Pat Gelsinger, but you might surpass him this week, Eric. Great to see you. >> Great to see you guys, love being on theCUBE, and really appreciate the coverage you do of the entire industry. >> This is a big show for you guys. I was coming down the escalator, I saw up next Eric Herzog, so I sat down and caught the beginning of your presentation yesterday. You were talking about multicloud, which we're going to get into, you talked about cybersecurity, well let's sort of recap what you told the audience there and really let's dig in. >> Sure, well, first thing is, IBM is a strong partner of Cisco, I mean they're a strong partner of ours both ways. We do all kinds of joint activities with them on the storage side, but in other divisions as well. The security guys do stuff with Cisco, the services guys do a ton of stuff with Cisco. So Cisco's one of our valued partners, which is why we're here at the show, and obviously, as you guys know, with a lot of the coverage you do of the storage industry, this is considered one of the big storage shows, you know, in the industry, and has been a very strong show for IBM Storage and what we do. >> Yeah, and I feel like, you know, it brings together storage folks, whether it's data protection, or primary storage, and sort of is a collection point, because Cisco is a very partner-friendly organization.
So talk a little bit about how you go to market, how you guys see the multicloud world, and what each of you brings to the table. >> Well, so we see it in a couple of different facets. So first of all, the day of public cloud only or on-prem only is long gone. There are a few companies that use public cloud only, but yeah, when you're talking mid-size enterprise, and certainly into let's say the global 2500, that just doesn't work. So certain workloads reside well in the cloud, and certain workloads reside well on-prem, and there are certain ones that can go back and forth, right, developed in a cloud but then moved back on, for example, a highly transactional workload, once you get going on that, you're not going to run that on any cloud provider, but that doesn't mean you can't develop the app, test the app, out in the cloud and then bring it back on. We also see that the days of a single cloud provider for big enterprise, and again up to the 2500 of the global fortunes, are gone too, because just as with other infrastructure and other technologies, they often have multiple vendors, and in fact, you know, what I've seen from talking to CIOs is, if they have three cloud providers, that's low. Many of 'em talk about five or six, whether that be for legal reasons, whether that be for security reasons, or of course the easy one, which is, we need to get a good price, and if we just use one vendor, we're not going to get a good price. And cloud is mature, cloud's not new anymore, the cloud is pretty old, it's basically, sort of, version three of the internet, (laughs) and so, you know, I think some of the procurement guys are a little savvy about why would you only use Amazon or only use Azure or only use Google or only use IBM Cloud. Why not use a couple to keep them honest, you know, which is kind of normal when procurement gets involved, and, say, cloud is not new anymore, so that means procurement gets involved.
You got certain clouds that are better, you have Microsoft if you want collaboration, you have Amazon if you want infrastructure for devs, on-prem if you want, you know, family jewels. So I got a question for you. So if you look at, you know, it's early 2020, entering a new decade, if you look at the last decade, some of the big themes. You had the consumerization of IT, you had, you know, Web 2.0, you obviously had the big data meme, which came and went and has now morphed into AI. And of course you had cloud. So those are the things that brought us here over the last 10 years of innovation. How do you see the next 10 years? What are going to be those innovation drivers? >> Well I think one of the big innovations from a cloud perspective is like, truly deploying cloud. Not playing with the cloud, but really deploying the cloud. Obviously when I say cloud, I would include private cloud utilization. Basically, when you think on-prem in my world, on-prem is really a private cloud talking to a public cloud. That's how you get a multicloud, or, if you will, a hybrid cloud. Some people still think when you talk hybrid, like literally, bare metal servers talking to the cloud, and that just isn't true, because when you look at certainly the global 2500, I can't think of any of them that isn't essentially running a private cloud inside their own walls, and then, whether they're going out or not, most do, but the few that don't, they mimic a public cloud inside because of the value they see in moving workloads around, easy deployment, and scale up and scale down, whether that be storage or servers or whatever the infrastructure is, let alone the app.
So I think what you're going to see now is a recognition that it's not just private cloud, it's not just public cloud, things are going to go back and forth, and basically, it's going to be a true hybrid cloud world, and I also think, with the cloud maturity, this idea of a multicloud, 'cause some people think multicloud is basically private cloud talking to public cloud, and I see multicloud as not just that, but literally, I'm a big company, I'm going to use eight or nine cloud providers to keep everybody honest, or, as you just said, Dave, certain clouds are better for certain workloads, so just as certain storage or certain servers are better when it's on-prem, that doesn't surprise us, certain cloud vendors specialize in the apps. >> Right, so Eric, we know IBM and Cisco have had a very successful partnership with the VersaStack. In your data center, that's IBM Storage with Cisco networking and servers. When I hear both IBM and Cisco talking about the message for hybrid and multicloud, they talk about the software solutions you have, the management in various pieces and integration that Cisco's doing. Help me understand where VersaStack fits into that broader message that you were just talking about. >> So we have VersaStack solutions built around primarily our FlashSystems, which use our Spectrum Virtualize software. Spectrum Virtualize not only supports IBM arrays, but over 500 other arrays that are not ours. But we also have a version of Spectrum Virtualize that will work with AWS and IBM Cloud and sits in a virtual machine at the cloud providers. So whether it be test and dev, whether it be migration, whether it be business continuity and disaster recovery, or whether it be what I'll call logical cloud air gapping, we can do that for ourselves, when it's not a VersaStack, out to the cloud and back. And then we also have solutions in the VersaStack world that are built around our Spectrum Scale product for big data and AI.
So Spectrum Scale goes out and back to the cloud, as does Spectrum Virtualize, and those are embedded on the arrays that come in a VersaStack solution. >> I want to bring it back to cloud a little bit. We were talking about workloads and sort of what Furrier calls horses for courses. IBM has a public cloud, and I would put forth that your wheelhouse, IBM's wheelhouse for cloud workload, is the hybrid mission-critical work that's being done on-prem today in the large IBM customer base, and to the extent that some of that work's going to move into the cloud, the logical place to put that is the IBM Cloud. Here's why. You could argue speeds and feeds and features and function all day long. The migration costs of moving data and workloads from wherever, on-prem into a cloud or from on-prem into another platform, are onerous. Any CIO will tell you that. So to the extent that you can minimize those migration costs, the business case, in IBM's case, for staying within that blue blanket, is going to be overwhelmingly positive relative to having to migrate. That's my premise. So I wonder if you could comment on that, and talk about, you know, what's happening in that hybrid world specifically with your cloud? >> Well, yeah, the key thing from our perspective is we are basically running block data or file data, and we just see ourselves sitting in IBM Cloud. So when you've got a FlashSystem product or you've got our Elastic Storage System 3000, when you're talking to the IBM Cloud, you think you're talking to another one of our boxes sitting on-prem. So what we do is make that transition completely seamless, and moving data back and forth is seamless, and that's because we take a version of our software and stick it in a virtual machine running at the cloud provider, in this case IBM Cloud. So the movement of data back and forth, whether it be our FlashSystem product, or even our DS8000, which can do the same thing, is very easy for an IBM customer to move to the IBM Cloud.
That said, just to make sure that we're covering all the bases, in the year of multicloud, remember the IBM Cloud division just released the Multicloud Manager, you know, second half of last year, recognizing that while they want people to focus on the IBM Cloud, they're being realistic that they're going to have multiple cloud vendors. So we've followed that mantra too, and made sure that we've followed what they're doing. As they were going to multicloud, we made sure we were supporting other clouds besides them. But from IBM to IBM Cloud it's easy to do, it's easy to traverse, and basically, our software sits on the other side, and it basically is as if we're talking to an array on-prem, but we're really not, we're out in the cloud. We make it seamless. >> So testing my premise, I mean again, my argument is that the complexity of that migration is going to determine in part what cloud you should go to. If it's a simple migration, and it's better, and the customer decides okay it's better off on AWS, you as a storage supplier don't care. >> That is true. >> It's agnostic to you. IBM, as a supplier of multicloud management, doesn't care. I'm sure you'd rather have it run on the IBM Cloud, but if the customer says, "No, we're going to run it "over here on Azure", you say, "Great. "We're going to help you manage that experience across clouds". >> Absolutely. So, as an IBM shareholder, we want them to go to IBM Cloud. As a realist, with what CIOs say, which is I'm probably going to use multiple clouds, we want to make sure whatever cloud they pick, hopefully IBM first, but they're going to have a secondary cloud, we want to make sure we capture that footprint regardless, and that's what we've done. As I've said for years and years, a partial PO is better than no PO.
So if they use our storage and go to a competitor of IBM Cloud, while I don't like that as a shareholder, it's still good for IBM, 'cause we're still getting money from the storage division, even though we're not working with IBM Cloud. So we make it as flexible as possible for the customer. The Multicloud Manager is about customer choice, which is leading with IBM Cloud, but if they want to use another cloud, and again, I think it's a realization at IBM Corporate that no one's going to use just one cloud provider, and so we want to make sure we empower that. Leading with IBM Cloud first, always leading with IBM Cloud first, but we want to get all of their business, and that means other areas, for example, the Red Hat team. Red Hat works with every cloud, right? And they don't really necessarily lead with IBM Cloud, but they work with IBM Cloud all right, but guess what, IBM gets the revenue no matter what. So I don't see it like the old traditional component guy with an OEM deal, but it kind of sort of is, 'cause we can make money no matter what, and that's good for the IBM Corporation, but we do always lead with IBM Cloud first, but we work with everybody. >> Right, so Eric, we'd agree with your point that data is not just going to live in one place. One area where there's huge opportunity that I'd love to get your comment on is edge. So we talked about, you know, the data center, we talked about public cloud. Cisco's talking a lot about their edge strategy, and one of our questions is how will they enable their partners and help grow that ecosystem? So love to hear your thoughts on edge, and any synergies between what Cisco's doing and IBM from that standpoint. >> So the thing from an edge perspective for us is built around our new Elastic Storage System 3000, which we announced in Q4.
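The edge pattern being set up here, process data at the remote site and then move only what's needed back to the core data center or cloud, reduces in its simplest form to summarizing at the edge. A hypothetical sketch, not tied to any ESS 3000 or Spectrum Scale API:

```python
from statistics import mean

def summarize_window(readings):
    """Reduce a window of raw sensor readings to a compact record to ship."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

window = [71.2, 70.9, 71.5, 72.0]   # raw telemetry stays at the edge
print(summarize_window(window))     # only this small summary goes to core
```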
And while it's ideal for the typical big data and AI workloads, it runs Spectrum Scale, and we have many customers with Scale that have exabytes in production, so we can go big, but we also go small. It's a compact 2U all-flash array, up to 400 terabytes, that can easily be deployed at a remote location, an oil well, right, or I should say, a platform, an oil platform, and it could be deployed, obviously, if you think about what's going on in the building space, or I should say the skyscraper space, they're all computerized now. So you'd have that as an edge processing box, whether that be for the heating systems, the security systems, we can do that at the edge, but because of Spectrum Scale you could also send it back to whatever their core is, whether that be their core data center or whether they're working with a cloud provider. So for us, the ideal edge solution is built around the Elastic Storage System 3000. Self-contained, two rack U, all-flash, but with Spectrum Scale on it, versus what we normally sell with our all-flash arrays, which tends to be our Spectrum Virtualize for block. This is file-based, can do the analytics at the edge, and then move the data to whatever target they want. So the source would be the ESS 3000 as the edge box, doing processing at the edge, such as an oil platform, or in, I don't know what really you call it, but, you know, the guys that own all the buildings, right, who have all this stuff computerized. So that's at the edge, and then wherever their core data center is, or their cloud partner, they can go that way. So it's an ideal solution because you can go back and forth to the cloud or back to their core data center, but do it with a super-compact, very high performance analytics engine that can sit at the edge. >> You know, I want to talk a little bit about business.
I remember seven years ago, we covered, on theCUBE, the z13 announcement, and I was talking to a practitioner at a very large bank, and I said, "You going to buy this thing?", this is the z13, you know, a couple of generations ago. He says, "Yeah, absolutely, I'll buy it sight unseen". I said, "Really, sight unseen?" He goes, "Yeah, no question. "By going to the upgrade, I'm able to drive "more transactions through my system "in a certain amount of time. "That's dropping revenue right to my bottom line. "It's a no-brainer for me." So fast forward to the z15 announcement in September. In my breaking analysis, I said, "Look, IBM's going to have a great Q4 in systems", and the thing you did in storage is you synchronized, I don't know if it was by design or what, you synchronized the new DS8900 announcement with the z15, and I predicted at the time you're going to see an uptick in both the systems business, which we saw, huge, 63%, and the storage business, which grew I think three points as well. So I wonder if you can talk about that. Was that again by design, was there a little bit of luck involved, and you know, give us an update.
So A, we did that of course with this launch, but we also made sure that on day one of the launch, we were part of the launch and truly integrated. Why IBM hadn't been doing that for a while is kind of beyond me, especially with our market position. So it helped us with a great quarter, helped us in the field, now by the way, we did talk about other areas that grew publicly, so there were other areas, particularly all-flash. Now we do have an all-flash 8900 of course, and the high-end tape grew as well, but our overall all-flash, both at the high end, midrange, and entry, all grew. So all-flash for us was a home run. Yeah, I would argue that, you know, on the Z side, it was a grand slam home run, but it was a home run even for the entry flash, which did very, very well as well. So, you know, we're hitting the right wheelhouse on flash, we led with the DS8900 attached to the Z, but some of that also pulls through, you get the magic fairy dust stuff, well they have an all-flash array on the Z, 'cause last time we didn't have only all-flash, we had all-flash or hybrids, and before that it was hybrid and hard drive. This time we just said, "Forget that hybrid stuff. We're going all-flash." So this helps, if you will, the magic fairy dust across the entire portfolio, because of our power with the mainframe, and you know, even in fact the quarter before, our entry products, we announced six nines of availability on an array that could be as low cost as $US16,000 for a RAID 5 all-flash array, and most guys don't offer six nines of availability at the system level, and on top of that we have 100% availability guaranteed. We do charge extra for that, but most people won't even offer that on an entry product; we do. So that's helped overall, and then the Z was a great launch for us. >> Now you guys, you obviously can't give guidance, you have to be very careful about that, but I, as I say, predicted in September that you'd have a good quarter in systems and storage both.
I'm on the record now: I'm going to say that you're going to continue to see growth, particularly in the storage side, I would say systems as well. So I would look for that. The other thing I want to point out is, you guys, you sell a lot of storage, you sell a lot of storage that sometimes the analysts don't track. When you sell into cloud, for example, IBM Storage Cloud, I don't think you get credit for that, or maybe the services, the global services division. So there's a big chunk of revenue that you don't get credited for, that I just want to highlight. Is that accurate? >> Yeah, so think about it, IBM is a very diverse company, all kinds of acquisitions, tons of different divisions, which we document publicly, and, you know, we do it differently than if it was Zoggan Store. So if I were Zoggan Store, a standalone storage company, I'd get all credit for supporting services, there's all kinds of things I'd get credit for, but because of IBM's history of how the company grew and how the company acquired, stuff that is storage that Ed Walsh, our GM, does own, it's somewhat dispersed, and so we don't always get credit on it publicly, but the number we do in storage is substantially larger than what we report, 'cause all we really report is our storage systems business. Even our storage software, which one of the analysts that does numbers has as the number two storage software company, when we do our public stuff, we don't take credit for that. Now, luckily that analyst publishes a report on the numbers side, and we are shown to be the number two storage software company in the world, but when we do our financial reporting, that, because of just the history of IBM, is spread out over other parts of the company, even though our guys do the work on the sales side, the marketing side, the development side, all under Ed Walsh, but you know, part of that's just the history of the company, and all the acquisitions over years and years, remember it's a 100-year-old company.
So, you know, we just don't always get all the credit, but we do own it internally, and our teams take and manage most of what is storage in the minds of storage analysts like you guys; you know what storage is, and most of that is us. >> I wanted to point that out because a lot of times, practitioners will look at the data, and they'll say, oh wow, the sales person of the competitor will come in and say, "Look at this, we're number one!" But you really got to dig in, ask the questions, and obviously make the decisions for yourself. Eric, great to see you. We're going to see you later on this week as well, when we're going to dig into cyber. Thanks so much for coming back. >> Great, well thank you, you guys do a great job and theCUBE is literally the best at getting IT information out, particularly all the shows you do all over the world, you guys are top notch. >> Thank you. All right, and thank you for watching everybody, we'll be back with our next guest right after this break. We're here at Cisco Live in Barcelona, Dave Vellante, Stu Miniman, John Furrier. We'll be right back.

Published Date : Jan 28 2020


Eric Herzog, IBM Storage | CUBE Conversation December 2019


 

(funky music) >> Hello and welcome to theCUBE Studios in Palo Alto, California for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host Peter Burris. Well, as I sit here in our CUBE studios, 2020's fast approaching, and every year as we turn the corner on a new year, we bring in some of our leading thought leaders to ask them what they see the coming year holding in the particular technology domain in which they work. And this one is no different. We've got a great CUBE guest, a frequent CUBE guest, Eric Herzog, the CMO and VP of Global Channels, IBM Storage, and Eric's here to talk about storage in 2020. Eric? >> Peter, thank you. Love being here at theCUBE. Great solutions. You guys do a great job on educating everyone in the marketplace. >> Well, thanks very much. But let's start really quickly, quick update on IBM Storage. >> Well, been a very good year for us. Lots of innovation. We've brought out a new Storwize family in the entry space. Brought out some great solutions for big data and AI solutions with our Elastic Storage System 3000. Support for backup in container environments. We've had persistent storage for containers, but now we can back it up with our award-winning Spectrum Protect and Protect Plus. We've got a great set of solutions for the hybrid multicloud world for big data and AI and the things you need to get cyber resiliency across your enterprise in your storage estate. >> All right, so let's talk about how folks are going to apply those technologies. You've heard me say this a lot. The difference between business and digital business is the role that data plays in a digital business. So let's start with data and work our way down into some of the trends. >> Okay. >> How are, in your conversations with customers, 'cause you talk to a lot of customers, is that notion of data as an asset starting to take hold? 
>> Most of our clients, whether it be big, medium, or small, and it doesn't matter where they are in the world, realize that data is their most valuable asset. Their customer database, their product databases, what they do for service and support. It doesn't matter what the industry is. Retail, manufacturing. Obviously we support a number of other IT players in the industry that leverage IBM technologies across the board, but they really know that data is the thing that they need to grow, they need to nurture, and they always need to make sure that data's protected or they could be out of business. >> All right, so let's now, starting with that point, in the tech industry, storage has always kind of been the thing you did after you did your server, after you did your network. But there's evidence that as data starts taking more center stage, more enterprises are starting to think more about the data services they need, and that points more directly to storage hardware, storage software. Let's start with that notion of the ascension of storage within the enterprise. >> So with data as their most valuable asset, what that means is storage is the critical foundation. As you know, if the storage makes a mistake, that data's gone. >> Right. >> If you have a malware or ransomware attack, guess what? Storage can help you recover. In fact, we even got some technology in our Spectrum Protect product that can detect anomalous activity and help the backup admin or the storage admins realize they're having a ransomware or malware attack, and then they could take the right corrective action. So storage is that foundation across all their applications, workloads, and use cases that optimizes it, and with data as the end result of those applications, workloads, and use cases, if the storage has a problem, the data has a problem. >> So let's talk about what you see as in that foundation some of the storage services we're going to be talking most about in 2020. 
Eric: So I think one of the big things is-- >> Oh, I'm sorry, data services that we're going to be talking most about in 2020. >> So I think one of the big things is the critical nature of the storage to help protect their data. When people think of cyber security and resiliency, they think about keeping the bad guy out and, since it's not an issue of if, it's when, chasing the bad guy down. But I've talked to CIOs and other executives. Sometimes they get the bad guy right away. Other times it takes them weeks. So if you don't have storage with the right cyber resiliency, whether that be data-at-rest encryption, encrypting data when you send it out transparently to your hybrid multicloud environment, whether it be malware and ransomware detection, or things like air gap, whether it be air gap to tape or air gap to cloud. If you don't think about that as part of your overall security strategy, you're going to leave yourself vulnerable, and that data could be compromised and stolen. >> So I can almost say that in 2020, we're going to talk more about how the relationship between security and data and storage is going to evolve, almost to the point where we're actually going to start thinking about how security becomes almost a feature or an attribute of a storage or a data object. Have I got that right? >> Yeah, I mean, think of it as storage infused with cyber resiliency so that when it does happen, the storage helps you be protected until you get the bad guy and track him down. And until you do, you want that storage to resist all attacks. You need that storage to be encrypted so they can't steal it. So that's the thing: when you look at an overarching security strategy, yes, you want to keep the bad guy out. Yes, you want to track the bad guy down. But when they get in, you'd better make sure that what's there is bolted to the wall. You know, it's the jewelry in the floor safe underneath the carpet. They don't even know it's there.
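The anomalous-activity detection Herzog mentions, a backup product noticing that a client's change pattern suddenly looks wrong, boils down to comparing each run against the job's own history. The sketch below shows only the general technique, with an illustrative z-score threshold; it is not Spectrum Protect's actual algorithm:

```python
# Sketch: flag a backup job whose changed-file count deviates far
# above its own history -- a common ransomware signal, since mass
# encryption rewrites many more files than a normal nightly delta.
# Illustrative threshold; not IBM Spectrum Protect's real logic.
from statistics import mean, stdev

def is_anomalous(history, current, min_samples=5, threshold=3.0):
    """True if `current` is more than `threshold` standard
    deviations above the mean of `history`."""
    if len(history) < min_samples:
        return False  # not enough history to judge yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu  # flat history: any jump is suspicious
    return (current - mu) / sigma > threshold

# Six normal nights, then a spike that looks like mass encryption.
nights = [1200, 1350, 1100, 1280, 1330, 1190]
assert not is_anomalous(nights, 1400)    # ordinary variation
assert is_anomalous(nights, 250000)      # alert the backup admin
```

A real product would typically combine several signals at once (change rate, compressibility, entropy of the data), but the alert-the-admin workflow described in the interview is the same.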
So those are the types of things you need to rely on, and your storage can do almost all of that for you once the bad guy's there till you get him. >> So the second thing I want to talk about along this vein is we've talked about the difference between hardware and software, software-defined storage, but still it ends up looking like a silo for most of the players out there. And I've talked to a number of CIOs who say, you know, buying a lot of these software-defined storage systems is just like buying not a piece of hardware, but a piece of software as a separate thing to manage. At what point in time do you think we're going to start talking about a set of technologies that are capable of spanning multiple vendors and delivering a more broad, generalized, but nonetheless high function, highly secure storage infrastructure that brings with it software-defined, cloud-like capabilities. >> So what we see is the capability of A, transparently traversing from on-prem to your hybrid multicloud seamlessly. They can't, it can't be hard to do. It's got to happen very easily. The cloud is a target, and by the way, most mid-size enterprise and up don't use one cloud, they use many, so you've got to be able to traverse those many, move data back and forth transparently. Second thing we see coming this year is taking the overcomplexity of multiple storage platforms coupled with hybrid cloud and merging them across. So you could have an entry system, mid-range system, a high-end system, traversing the cloud with a single API, a single data management platform, performance and price points that vary depending on your application workload and use case. Obviously you use entry storage for certain things, high-end storage for other things. But if you could have one way to manage all that data, and by the way, for certain solutions, we've got this with one of our products called Spectrum Virtualize. 
We support enterprise-class data services, including moving the data out to cloud, not only on IBM storage but on over 450 other arrays which are not IBM-logoed. Now, that's taking that seamlessness of entry, mid-range, on-prem enterprise, traversing it to the cloud, doing it not only for IBM storage, but doing it for our competitors, quite honestly. >> Now, once you have that flexibility, it introduces a lot of conversations about how to match workloads to the right data technologies. How do you see workloads evolving, some of these data-first workloads, AI, ML, and how is that going to drive storage decisions in the next year, year and a half, do you think? >> Well, again, as we talked about already, storage is that critical foundation for all of your data needs. So depending on the data need, you've got multiple price points that we've talked about traversing out to the cloud. The second thing we see is that there are different parameters that you can leverage. For example, AI, big data, and analytic workloads are very dependent on bandwidth. So if you can take a scalable infrastructure that scales to exabytes of capacity, that can scale to terabytes per second of bandwidth, then that means across a giant global namespace, for example, we've got with our Spectrum Scale solutions and our Elastic Storage System 3000 the capability of racking and stacking two rack U at a time, growing the capacity seamlessly, growing the performance seamlessly, providing that high-performance bandwidth you need for AI, analytic, and big data workloads. And by the way, guess what, you could traverse it out to the cloud when you need to archive it.
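The scale-out model described here, racking two-U building blocks until both the capacity and bandwidth targets are met, reduces to simple arithmetic. The per-node figures below are purely illustrative assumptions, not product specifications:

```python
import math

# Sketch: sizing a scale-out cluster where every node added grows
# capacity and bandwidth together. Per-node numbers are assumed
# for illustration, not taken from any product datasheet.
NODE_CAPACITY_TB = 400.0    # assumed usable capacity per 2U node
NODE_BANDWIDTH_GBS = 40.0   # assumed streaming bandwidth per node

def nodes_needed(target_capacity_tb, target_bandwidth_gbs):
    """Smallest node count meeting BOTH targets."""
    by_capacity = math.ceil(target_capacity_tb / NODE_CAPACITY_TB)
    by_bandwidth = math.ceil(target_bandwidth_gbs / NODE_BANDWIDTH_GBS)
    return max(by_capacity, by_bandwidth)

# A 2 PB analytics cluster needing 1 TB/s is bandwidth-bound:
print(nodes_needed(2000, 1000))       # 25
# An exabyte-class archive is capacity-bound instead:
print(nodes_needed(1_000_000, 2000))  # 2500
```

Bandwidth-hungry AI and analytics clusters usually hit the bandwidth bound first, which is why "growing the performance seamlessly" matters as much as raw capacity.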
So looking at AI as a major force, not just next year but in the coming years, it's here to stay, and the characteristics that IBM sees, that we've had in our Spectrum Scale products for years, that have really come out of the supercomputing and the high-performance computing space, those are the similar characteristics to AI workloads, machine learning workloads, to the big data workloads and analytics. So we've got the right solution. In fact, the two largest supercomputers on this planet have almost an exabyte of IBM storage focused on AI, analytics, and big data. So that's what we see traversing everywhere. And by the way, we also see these AI workloads moving from just the big enterprise guys down into small shops, as well. So that's another trend you're going to see. The easier you make that storage foundation underneath your AI workloads, the easier it is for the big company, the mid-size company, the small company all to get into AI and get the value. The small companies have to compete with the big guys, so they need something, too, and we can provide that starting with a little simple two rack U unit and scaling up into exabyte-class capabilities. >> So all these new workloads and the simplicity of how you can apply them nonetheless are still driving questions about how the storage hierarchy's evolved. Now, this notion of the storage hierarchy's been around for, what, 40, 50 years, or something like that. >> Eric: Right. >> You know, tape and this and, but there's some new entrants here and there are some reasons why some of the old entrants are still going to be around. So I want to talk about two. How do you see tape evolving? Is that, is there still need for that? Let's start there. >> So we see tape as actually very valuable. We've had a real strong uptick the last couple years in tape consumption, and not just in the enterprise accounts. In fact, several of the largest cloud providers use IBM tape solutions.
So when you need to store incredible amounts of data, you need to support primary, secondary, and I'd say archive workloads, and you're looking at petabytes and petabytes and petabytes and exabytes and exabytes and exabytes and zettabytes and zettabytes, you've got to have a low-cost platform, and tape still provides by far the lowest-cost platform. So tape is here to stay as one of those key media choices to help you keep your costs down yet easily go out to the cloud or easily pull data back. >> So tape still is a reasonable, in fact, a necessary entrant in that overall storage hierarchy. One of the new ones that we're starting to hear more about is storage-class memory, the idea of filling in that performance gap between external devices and memory itself so that we can have a persistent store that can service all the new kinds of parallelism that we're introducing into these systems. How do you see storage-class memory playing out in the next couple years? >> Well, we already publicly announced in 2019 that in 2020, in the first half, we'd be shipping storage-class memory. It would not only work in some coming systems that we're going to be announcing in the first half of the year, but it would also work on some of our older products such as the FlashSystem 9100 family; the Storwize V7000 gen three will be able to use storage-class memory, as well. So it is a way to also leverage AI-based tiering. So in the old days, flash would tier to disk. You've created a hybrid array. With storage-class memory, it'll be a different type of hybrid array in the future, storage-class memory actually tiering to flash. Now, obviously the storage-class memory is incredibly fast and flash is incredibly fast compared to disk, but it's all relative. In the old days, a hybrid array was faster than an all hard drive array, and that was flash and disk.
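The hot/cool movement described here can be illustrated with a minimal heat-ranking policy: the most-accessed extents go to the small, fast storage-class-memory tier and everything else stays on flash. This is only a sketch of the general idea, not how IBM's Easy Tier actually decides placement:

```python
# Sketch: place the hottest extents on a small storage-class-memory
# (SCM) tier and the rest on flash, ranked by access count.
# Illustrative only -- a real tiering engine tracks heat over time
# windows and migrates data incrementally, not in one pass.

def place_extents(access_counts, scm_slots):
    """access_counts: {extent_id: access count}. scm_slots: how many
    extents fit on the SCM tier. Returns (scm_set, flash_set)."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return set(ranked[:scm_slots]), set(ranked[scm_slots:])

counts = {"e1": 900, "e2": 15, "e3": 450, "e4": 3, "e5": 610}
scm, flash = place_extents(counts, scm_slots=2)
assert scm == {"e1", "e5"}          # hottest two extents promoted
assert flash == {"e2", "e3", "e4"}  # cooler extents stay on flash
```

Re-running the placement as access counts change is, in essence, what "moving the data back and forth when it's hot and when it's cool" amounts to.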
Now you're going to see hybrid arrays that'll be storage-class memory, and with our Easy Tier function, which is part of our Spectrum Virtualize software, we use AI-based tiering to automatically move the data back and forth when it's hot and when it's cool. Now, obviously flash is still fast, but if flash is that secondary medium in a configuration like that, it's going to be incredibly fast, but it's still going to be lower cost. The other thing is that in the early years, storage-class memory will be an expensive option from all vendors. It will, of course, over time get cheap, just the way flash did. >> Sure. >> Flash was way more expensive than hard drives. Over time it, you know, now it's basically the same price as what were the old 15,000 RPM hard drives, which have basically gone away. Storage-class memory over several years will do that, of course, as well, and by the way, it's very traditional in storage, as you know, and I've been around so long I've worked at hard drive companies in the old days. I remember when the fast hard drive was a 5400 RPM drive, then a 7200 RPM drive, then a 10,000 RPM drive. And if you think about it in the hard drive world, there were almost always two to three different spin speeds at different price points. You can do the same thing now with storage-class memory as your fastest tier, and now a still incredibly fast tier with flash. So it'll allow you to do that. And that will grow over time. It's going to be slow to start, but it'll continue to grow. We're there at IBM already publicly announcing. We'll have products in the first half of 2020 that will support storage-class memory. >> All right, so let's hit flash, because there's always been this concern about are we going to have enough flash capacity, is enough product going to come online, but also this notion that, you know, since everybody's getting flash from the same place, there's not going to be a lot of innovation.
There's not going to be a lot of differentiation in the flash drives. Now, how do you see that playing out? Is there still room for innovation on the actual drive itself or the actual module itself? >> So when you look at flash, that's what IBM has funded. We have focused on taking raw flash and creating our own flash modules. Yes, we can use industry-standard solid state disks if you want to, but our flash core modules have been out since our FlashSystem product line, which is many years old. We just announced a new set in the middle of 2018 that delivered in a four-node cluster up to 15 million IOPS with under 100 microseconds of latency by creating our own custom flash. At the same time when we launched that product, the FlashSystem 9100, we were able to launch it with NVMe technology built right in. So we were one of the first players to ship NVMe in a storage subsystem. By the way, we're end-to-end, so you can go Fibre Channel over fabric, InfiniBand over fabric, or Ethernet over fabric to NVMe all the way on the back side at the media level. But not only do we get that performance and that latency, we've also been able to put up to two petabytes in only two rack U. Two petabytes in two rack U. So incredible rack density. So those are the things you can do by innovating in a flash environment. So flash can continue to have innovation, and in fact, you should watch for some of the things we're going to be announcing in the first half of 2020 around our flash core modules and our FlashSystem technology.
When Eric Herzog walks into their office, what's the good idea that you're bringing them, especially as it pertains to storage for the next year? >> So, actually, it's really a couple things. One, it's all about hybrid and multicloud. You need to seamlessly move data back and forth. It's got to be easy to do. Entry platform, mid-range, high-end, out to the cloud, back and forth, and you don't want to spend a lot of time doing it and you want it to be fully automated. >> So storage doesn't create any barriers. >> Storage is that foundation that goes on and off-prem and it supports multiple cloud vendors. >> Got it. >> Second thing is what we already talked about, which is because data is your most valuable asset, if you don't have cyber-resiliency on the storage side, you are leaving yourself exposed. Clearly big data and AI, and the other thing that's been a hot topic, which is related, by the way, to hybrid multiclouds, is the rise of the container space. For primary, for secondary, how do you integrate with Red Hat? What do you do to support containers in a Kubernetes environment? That's a critical thing. And we see the world in 2020 being trifold. You're still going to have applications that are bare metal, right on the server. You're going to have tons of applications that are virtualized, VMware, Hyper-V, KVM, OVM, all the virtualization layers. But you're going to start seeing the rise of the container admin. Containers are not just going to be the purview of the devops guy. We have customers that talk about doing 10,000, 20,000, 30,000 containers, just like they did when they first started going into the VM worlds, and now that they're going to do that, you're going to see customers that have bare metal, virtual machines, and containers, and guess what? 
They may start having to have container admins that focus on the administration of containers, because when you start doing 30, 40, 50,000, you can't have the devops guy manage that, 'cause you're deploying it all over the place. So we see containers. This is the year that containers start to go really big-time. And we're there already with our Red Hat support, what we do in Kubernetes environments. We provide primary storage support for persistency in containers, and we also, by the way, have the capability of backing that up. So we see containers really taking off in how it relates to your storage environment, which, by the way, often ties to how you configure hybrid multicloud configs. >> Excellent. Eric Herzog, CMO and vice president of partner strategies for IBM Storage. Once again, thanks for being on theCUBE. >> Thank you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (funky music)

Published Date : Dec 29 2019


Eric Herzog, IBM Storage | VMworld 2019


 

>> Voiceover: Live from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE. Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Welcome back, everyone, CUBE's live coverage for VMworld 2019 in Moscone North, in San Francisco, California. I'm John Furrier with Dave Vellante. Dave, our 10 years, we have Eric Herzog, the CMO and vice president of Global Storage Channels at IBM. CUBE alum, this is his 11th appearance on theCUBE at VMworld. That's the number one position. >> Dave: It's just at VMworld. >> Congratulations, welcome back. >> Well, thank you very much. Always love to come to theCUBE. >> John: Sporting the nice shirt and the IBM badge, well done. >> Thank you, thank you. >> What's going on with IBM in VMworld? First, get the news out. What's happening for you guys here? >> So for us, we just had a big launch actually in July. That was all about big data, storage for big data and AI, and also storage for cyber-resiliency. So we just had a big launch in July, so we're just sort of continuing that momentum. We have some exciting things coming out on September 12th in the high end of our storage product line, and then some additional things very heavily around containers at the end of October. >> So OpenShift is the first question I have that pops into my head. You know, I think of IBM, I think of IBM Storage, I think of Red Hat, the acquisition, OpenShift's been very successful. Pat Gelsinger was talking containers, Kubernetes-- >> OpenShift has been a big part of Red Hat's offering, now part of IBM. Has that Red Shift, I mean OpenShift, come into your world, and how do you guys view that? I mean, it's containers, obviously, is there any impact there at all? >> So from a storage perspective, no. IBM storage has been working with Red Hat for over 15 years, way before the company ever thought about buying them.
So we went to the old Red Hat Summits, when it was two guys, a dog, and a note, and IBM was there. So we've been supporting Red Hat for years, and years, and years. So for the storage division, it's probably one of the smallest changes in direction compared to the rest of IBM, 'cause we were already doing so much with Red Hat. >> You guys were present at the creation of the whole Red Hat movement. >> Yeah, I mean we were-- >> We've seen the summits, but I was kind of teeing up the question, but legitimately though, now that you have that relationship under your belt-- >> Eric: Right. >> And IBM's into creating OpenShift and all the services, you're starting to see Red Hat being an integral part across IBM-- >> Eric: Right. >> Does that impact you guys at all? >> So we've already talked about our support for Red Hat OpenShift. We do support it. We also support any sort of container environment. So we've made sure that if it's not OpenShift and someone's going to leverage something else, our storage will work with it. We've had support for containers now for two and a half years. We also support the CSI standard. We publicly announced that earlier in the year, and we'll have products at the end of this year and into next year around the CSI specification. So, we're working on that as well. And then, IBM also came out with things called Cloud Paks. These Cloud Paks are built around Red Hat. These are add-ons across multiple divisions, and from that perspective, we're positioned as, you know, really that ideal rock-solid foundation underneath any of those Cloud Paks, with our support for Red Hat and the container world. >> How about protecting containers? I mean, you guys obviously have a lot of history in data protection. Containers are more complicated. There's lots of them. You spin 'em up, spin 'em down. If they don't spin 'em down, they're an attack point. What are your thoughts on that?
>> Well, first thing I'd say is stay tuned for the 22nd of October, 'cause we will be doing a big announcement around what we're doing for modern data protection in the container space. We've already publicly stated we would be doing stuff there; we've already said we'd have offerings either at the end of this year, in Q4, or in Q1. So, we'll be doing our formal launch on the 22nd of October from Prague. And we'll be talking in much more detail about what we're doing for modern data protection in the container space. >> Now, why Prague? What's your thinking? >> Oh, IBM has a big event called TechU, it's a Technical University, and there'll be about 2,000 people there. So, we'll be doing our launch as part of the TechU process. So, Ed Walsh, who you both know well, and myself will be doing a joint keynote at that event on the 22nd. >> So, talk a little bit more about multi-cloud. You hear all kinds of stuff on multi-cloud here, and we've been talkin' about it on theCUBE for a while. It's like you got IBM Red Hat, you got Google, CISCO's throwin' a hat in the ring. Obviously, VMware has designs on it. You guys are an arms dealer, but of course, you're, at the same time, IBM. IBM just bought Red Hat, so what are your thoughts on multi-cloud? First, how real is it? Sizeable opportunity? And from the storage division's perspective, what's your strategy there? >> Well, from a strategy standpoint, we've already been talkin' hybrid multi-cloud for several years. In fact, we came to Wikibon, your sister entity, and actually, Ed and I did a presentation to you in July of 2017. I looked it up; the title says hybrid multi-cloud. (Dave laughs) Storage for hybrid multi-cloud. So, before IBM started talkin' about it as a company, which now is, of course, our official line, hybrid multi-cloud, the IBM storage division was supporting that. So, we've been supporting all sorts of clouds now for several years.
We have what we call transparent cloud tiering, where we basically just see cloud as a tier. Just the way Flash would see hard drive or tape as a tier, we now see cloud as a tier, and our Spectrum Virtualize for cloud sits in a VM either in Amazon or in IBM Cloud. And then several of our software products in the Spectrum line, Spectrum Protect, Spectrum Scale, are available on the AWS Marketplace as well as the IBM Cloud Marketplace. So, for us, we see multi-cloud from a software perspective, where the cloud providers offer our solutions on their marketplaces, and we have several; we've got some stuff with Google as well. So, we don't really care what cloud, and it's all about choice, and customers are going to make that choice. There's been surveys done. You know, you guys have talked about it, that certainly in the enterprise space, you're not going to use one cloud. You use multiple clouds, three, four, five, seven, so we're not going to care what cloud you use, whether it be the big four, right? Google, IBM, Amazon, or Azure. Could it be NTT in Japan? We have over 400 small and medium cloud providers that use our Spectrum Protect as the engine for their backup as a service. We love all 400 of them. By the way, there's another 400 we'd like to start selling Spectrum Protect as a service. So, from our perspective, we will work with any cloud provider, big, medium, and small, and believe that that's where the end users are going, to use not just one cloud provider but several. So, we want to be the storage that connects to all of them. >> That's a good bet, and again, you bring up a good point, which I'll just highlight for everyone watching: you guys have made really good bets early, kind of like we were just talking about with Pat Gelsinger. He was making some great bets. You guys have made the right calls on a lot of things.
Sometimes, you know, Dave's critical of things in there that I don't really have visibility into, storage analyst that he is, but generally speaking, you, Red Hat, software, the systems group going software. How would you describe the benefits of those bets paying off today for customers? You mentioned versatility, all these different partners. Why is IBM relevant now, and from those bets that you've made, what's the benefit to the customers? How would you talk about that? Because it's kind of a big message. You got a lot going on at IBM Storage, but you've made some good bets that turned out to be on the right side of tech history. What are those bets? And what are they materializing into? >> Sure, well, the key thing is, you know I always wear a Hawaiian shirt on theCUBE. I think once maybe I haven't. >> You were forced to wear a white shirt. You were forced to wear the-- >> Yes, an IBM white shirt, and once, I actually had a shirt from when I used to work for Pat at EMC, but in general, a Hawaiian shirt, and why? Because you don't fight the wave, you ride the wave, and we've been riding the wave of technology. First, it was all about AI and automation inside of storage. Our Easy Tier product automatically tiers. All you do is set it up once, and after that, it automatically moves data back and forth, not only on our arrays, but on over 450 arrays that aren't ours, and the data that's hottest goes to the fastest tier. If you have 15,000 RPM drives, that's your fastest; it automatically knows that and moves data back and forth between hot, fast, and cold. So, one was putting AI and automation in storage. The second wave we've been following was clearly Flash. It's all about Flash. We create our own Flash: we buy raw Flash and create our own modules. They are in the industry-standard form factor, but we do things, for example, like embed encryption with no performance hit into the Flash.
Latency as low as 20 microseconds, things that we can do because we take the Flash and customize it, although it is in an industry-standard form factor. The other one is clearly storage software and software-defined storage. All of our arrays come with software. We don't sell hardware. We sell a storage solution. They either come with Spectrum Virtualize or Spectrum Scale, but those packages are also available stand-alone. If you want to go to your reseller or your distributor and buy off-the-shelf white-box componentry, storage-rich servers, you can create your own array with Spectrum Virtualize for block, Spectrum Scale for file, IBM Object Storage for cloud. So, if someone wants to buy software only, just the way Pat was talking about software-defined networking, we'll sell 'em software for file, block, or object, and they don't buy any infrastructure from us. They only buy the software, so-- >> So, is that why you have a large customer base? Is that why there's such a diverse set of implementations? >> Well, we've got our customers that are system-oriented, right, so they have a FlashSystem. We've got other customers that say, "Look, I just want to buy Spectrum Scale. I don't want to buy your infrastructure. I'll just build my own," and we're fine with that. And the other aspect we have, of course, is the modern data protection with Spectrum Protect. So, you've got a lot of vendors out on the floor who only sell backup. That's all they sell. And you've got other people on the floor who only sell an array. They have nice little arrays, but they can't do an array and software-defined storage and modern data protection: one throat to choke, one tech support entity to deal with, one set of business partners to deal with. We can do that, which is why it's so diverse. We have people who don't have any IBM storage at all, but they back up everything with Spectrum Protect.
We have other customers who have FlashSystems, but they use backup from one of our competitors, and that's okay, 'cause we'll always get a PO one way or another, right? >> So, you want the choice factor. >> Right. >> Question on the ecosystem and your relationship with VMware. As John said, 10th year at VMworld. If you go back 10 years, VMware storage was limited. They had very few resources. They were throwin' out APIs to the storage industry and sayin', here, you guys, fix this problem, and you had this cartel, you know, it was EMC, IBM was certainly in there, and NetApp, a couple others, HPE, HP at the time, Dell, I don't know, I'm not sure if Dell was there. They probably were, but you had the big Cos that actually got the SDK early, and then you'd go off and try to solve all the storage problems. Of course, EMC at the time was sort of puttin' the brakes on VMware. Now, it's totally different. You've got a similar cartel, although you've got a different ownership structure with Dell, EMC, and you got (mumbles) VMware's doin' its own software finally. The cuffs are off. So, your thoughts on the changes that have gone on in the ecosystem, IBM's sort of position, and your relationship with VMware, how that's evolved.
VMware also has a strong relationship with the cloud division; as you know, they've now done all kinds of different things with IBM Cloud, so we're making sure that we stay there with them and are always up front and center. We are riding all the waves that they start. We're not fighting it. We ride it. >> You got the Hawaiian shirt. You're riding the waves. You're hanging 10, as you used to say. Toes on the nose, as the expression goes. As Pat Gelsinger says, if you don't ride the new wave, you're driftwood. Eric, great to see you, CMO of IBM Storage, great to have you all these years, interviewing you and gettin' the knowledge. You're a walking storage encyclopedia, Wikipedia, thanks for comin' on. >> Great, thank you. >> All right, it's more CUBE coverage here live in San Francisco. I'm John Furrier, for Dave Vellante, stay with us. I've got Sanjay Poonen coming up, and we have all the big executives who run the different divisions. We're going to dig into them. We're going to get the data and share it with you. We'll be right back. (upbeat music)
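The automated tiering Herzog describes above (the hottest data promoted to the fastest tier, cold data pushed toward cheaper media, with cloud treated as just another tier) can be sketched as a simple heat-based policy. The tier names, thresholds, and heat-decay scheme below are illustrative assumptions, not IBM Easy Tier's actual algorithm:

```python
# Hypothetical sketch of heat-based tiering: hot extents migrate toward
# the fastest tier, cold extents toward the cheapest. Tier names and
# thresholds are invented for illustration, not Easy Tier's real design.
from dataclasses import dataclass

TIERS = ["flash", "15k_disk", "7200rpm_disk", "cloud"]  # fastest -> cheapest

@dataclass
class Extent:
    name: str
    tier: str = "7200rpm_disk"
    heat: int = 0  # recent access count

    def touch(self) -> None:
        self.heat += 1

def retier(extents, hot_threshold=100, cold_threshold=10):
    """Promote hot extents one tier toward flash, demote cold extents one
    tier toward cloud, then decay each extent's heat so placement tracks
    recent activity rather than all-time totals."""
    for e in extents:
        i = TIERS.index(e.tier)
        if e.heat >= hot_threshold and i > 0:
            e.tier = TIERS[i - 1]   # promote toward the fastest tier
        elif e.heat <= cold_threshold and i < len(TIERS) - 1:
            e.tier = TIERS[i + 1]   # demote toward the cheapest tier
        e.heat //= 2                # decay, so only recent heat counts
```

Each round of `retier` moves an extent at most one tier, so placement adapts gradually; a real system would also weigh migration cost and tier capacity, which this sketch ignores.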

Published Date : Aug 27 2019



Eric Herzog, IBM | CUBEConversation, March 2019


 

(upbeat music) [Announcer] From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hi, I'm Peter Burris, and welcome to another CUBE conversation from our studios in beautiful Palo Alto, California. One of the biggest challenges that every user faces is how they're going to arrange the resources that are responsible for storing, managing, delivering, and protecting data. That's a significant challenge, but it gets even worse when we start talking about multi-cloud. So, today we've got Eric Herzog, who's the CMO and VP of Worldwide Storage Channels at IBM Storage, to talk a bit about the evolving relationship between what constitutes a modern, comprehensive storage portfolio and multi-cloud. Eric, welcome to theCUBE. >> Peter, thank you, thank you. >> So, start off, what's happening with IBM Storage these days, and let's get into how multi-cloud is affecting some of your decisions, and some of your customers' decisions. >> So, what we've done is we started talking about multi-cloud over two years ago. When Ed Walsh joined the company as general manager, we went on an analyst roadshow; in fact, we came here to theCUBE and shot a video, and we talked about how the IBM Storage Division is all about multi-cloud. And we look at that in three ways. First of all, if you are creating a private cloud, we work with you, from a container perspective, whether you're VMware-based, whether you are doing a more traditional private cloud. Now the modern private cloud is all container-based. Second is hybrid cloud: data on-prem, out to a public cloud provider. And the third aspect, and in fact, you guys have written about it in one of your studies, is that no one is going to use one public cloud provider; they're going to use multiple cloud providers.
So whether that be IBM Cloud, which of course we love because we're IBM shareholders, we also work with Amazon, we work with Google, and in fact we work with any cloud provider. Our Spectrum Protect backup product, which is one of the most awarded enterprise backup packages, can back up to any cloud. In fact, for over 350 small-to-medium cloud providers, the engine for their backup as a service is Spectrum Protect. Again, completely heterogeneous: we don't care what cloud you use, we support everyone. And we started that mantra two and a half years ago, when Ed first joined the company. >> Now, I remember when you came on, we talked a lot about this notion of data first, and the idea of being data-driven was what we talked about. >> Right, data driven. >> And increasingly, we made the observation that enterprises were going to take a look at the natural arrangement of their data, and that was going to influence a lot of their cloud, a lot of their architecture, and certainly a lot of their storage decisions. How is that playing out? Does that still obtain? Are you still seeing more enterprises taking this kind of data-driven approach to thinking about their overall cloud architectures? >> Well, the world is absolutely data-centric. Where does the data go? What are the security issues with that data? How close is it to the compute when I need it? How do I archive it, how do I back it up? How do I protect it? We're here in Silicon Valley. I'm a native Palo Altan, by the way, and we really do have earthquakes here, and they really do have earthquakes in Japan and China, and there are all kinds of natural disasters. And of course, as you guys have pointed out, as have almost all of the analysts, the number one cause of data loss besides humans is actually still fire. Even with fire-suppressant data centers.
So, you've got to make sure that you're backing up that data, you're archiving the data. Cloud could be part of that strategy. When does it need to be on-prem, when does it need to be off-prem? So, it's all about being data-driven, and companies look at the data and profile it: What sort of storage do I need? Can I go high-end, mid-range, or entry? They profile that data and figure out what they need to do. And then they do the same thing with on-prem and off-prem. Certain data sets, for security reasons or legal reasons, you probably are not going to put out to a public cloud provider. But other data sets are ideal for that, and so all of those decisions are being driven by: What's the security of the data? What's the legality of that data? What's the performance I need from that data? And how often do I need the data? If you're going to constantly go back and forth, pulling data back in from a public cloud provider, which charges both for data in and data out, that actually may cost more than buying an array on-prem. And so, everyone's using that data-centricity to figure out how they spend their money, and how they optimize the data to use it in their applications, workloads, and use cases. >> So, if you think about it, the reality is that by application, workload, location, and regulatory issues, we're seeing enterprises start to recognize an increasing specialization of their data assets. And that's going to lead to a degree of specialization in the classes of data management and storage technologies that they utilize. Now, what is the challenge of choosing a specific solution versus looking at more of a portfolio of solutions that perhaps provides a little bit more commonality? How is the IBM customer base dealing with that question? >> Well, for us the good thing was to have a broad portfolio. When you look at the base storage arrays, we have file, block, and object, and they're all award-winning.
We can go big, we can go medium, and we can go small. And because of what we do with our array family, we have products that tend to be expensive because of what they do, products that are mid-price, and products that are perfect for Herzog's Bar and Grill. Or maybe for 5,000 different bank branches, 'cause that bank is not going to buy expensive storage for every branch. They have a small array there in case the core goes down, of course. When you or I go in to get a check or transact, if the core data center is down, that Wells Fargo, BofA, Bank of Tokyo. >> Still has to do business. >> They are all transacting. There's a small array there. Well, you don't want to spend a lot of money for that; you need a good, reliable all-flash array with the right RAS capability, right? The availability capability, that's what you need, and we can do that. The other thing we do is we have very much cloud-ified everything we do. We can tier to the cloud, we can back up to the cloud. With object storage we can place it in the cloud. So we've made the cloud, if you will, a seamless tier in the storage infrastructure for our customers, whether that be backup data, archive data, or primary data, and made it so it's very easy to do. Remember, with that downturn in '08 and '09, a lot of storage people left their jobs. And while IT headcount is back up to where it used to be, in fact it's actually exceeded it, if there were 50 storage guys at Company X and they had to let go 25 of them, they didn't hire 25 storage guys back, but they've got 10 times the data. So they probably have 2 more storage guys, they're at 27 instead of 25, except they're managing 10 times the data. So automation, seamless integration with clouds, and being multi-cloud, supporting hybrid clouds, is a critical thing in today's storage world. >> So you've talked a little bit about how data format issues still impact storage decisions. You've talked about how disasters, or availability, still impact storage decisions, and certainly cost does.
But you've also talked about some of the innovative things that are happening: security, encryption, evolved backup and restore capabilities, AI and how that's going to play. What are some of the key things that your customer base is asking for that are really driving your portfolio decisions? >> Sure, well, when we look beyond making sure we integrate with every cloud and make it seamless, the other aspect is AI. AI has taken off: machine learning, big data, all of those. And there it's all about having the right platform from an array perspective, but then marrying it with the right software. So for example, our scale-out file system, Spectrum Scale, can go to exabyte class; in fact, the two fastest supercomputers on this planet have almost half an exabyte of IBM Spectrum Scale for big data, analytics, and machine learning workloads. At the same time you need to have object store. If you're generating that huge amount of data in the AI world, you want to be able to put it out. We also now have Spectrum Discover, which allows you to use metadata, which is the data about the data, and allows an AI app, a machine learning app, or an analytics app to actually access the metadata through an API. So that's one area: cloud, then AI, is a very important aspect. And of course, cyber resiliency and cyber security are critical. Everyone thinks, I've got to call a security company, so the IBM Security Division, RSA, Check Point, Symantec, McAfee, all of these things. But the reality is, as you guys have noted, 98% of all enterprises are going to get broken into. So while they're in your house, they can steal you blind. Before the cops show up, like the old movie, what are they doing? They're loading up the truck before the cops show up. Well, guess what, what if that happened, the cops didn't show up for 20 minutes, but they couldn't steal anything, because the TV was tied to your fingerprint?
So guess what, they couldn't use the TV, so they couldn't steal it. That's what we've done. So, whether it be encryption everywhere, we can encrypt backup sets, we can encrypt data at rest, we can even encrypt arrays that aren't ours with our Spectrum Virtualize family. Air gapping, so that if you have ransomware or malware you can air-gap to tape. We've also created air gapping out to the cloud with a snapshot. We have a product called Safeguarded Copy, which creates what I'll call a faux air gap in the mainframe space, but allows that protection, so it's almost as if it was air-gapped even though it's on an array. So that's ransomware and malware: being able to detect it. Our backup products, when they see unusual activity, will flag the backup or restore job and say there is unusual activity. Why? Because ransomware and malware generate unusual activity on backup data sets in particular, so it gets flagged. Now, we don't go out and say, "By the way, that's Herzog ransomware, or Peter Burris ransomware." But we do say, "Something is wrong, you need to take a look." So, integrating that sort of cyber resiliency and cyber security into the entire storage portfolio doesn't mean we solve everything. Which is why, when you build an overall security strategy, you've got that Great Wall of China to keep the enemy out, and you've got what I call chase software, the cops that are coming to get the bad guy once he's in the house. But you've also got to be able to lock everything down. So a comprehensive security and resiliency strategy involves not only your security vendor, but actually your storage vendor. And IBM's got the right cyber resiliency and security technology on the storage side to marry up, regardless of which security vendor they choose. >> Now, you mentioned a number of things that are associated with how an enterprise is going to generate greater leverage, greater value, out of the data it already has.
So, you mentioned, you know, encryption end to end, you mentioned being able to look at metadata for AI applications. As we move to a software-driven world of storage, physical volumes can be made more virtual so you can move them around to different workloads. >> Right. >> And associate the data more easily. Tell us a little bit about how data movement becomes an issue in the storage world, because storage has always been associated with "it's here." But increasingly, because of automation, because of AI, because of what businesses are trying to do, it's becoming more associated with intelligent, smart, secure, optimized movement of data. How is that starting to impact the portfolio? >> So we look at that really as data mobility. And data mobility can be a number of different things. For example, we already mentioned we treat clouds as transparent tiers. We can back up to the cloud; that's data mobility. We also tier data: we can tier data within an array, or with the Spectrum Virtualize product we can tier block data across 450 arrays, most of which aren't IBM-logo'd. We can tier from IBM to EMC, EMC can then tier to HDS, HDS can tier to Hitachi, and we do that on arrays that aren't ours. So in that case what you're doing is looking for the optimal price point, whether it be-- >> And feature set. >> And feature sets, and you move things, data, around all transparently, so it's all got to be automated. That's another thing: in the old days we thought we had Nirvana when the tiering automatically moved the data once it was 30 days old. What if we automatically move data with our Easy Tier technology through AI, so when the data is hot it moves to the hottest tier, and when the data is cold it goes out to the lowest-cost tier? That's real automation leveraging AI technology. Same thing with something simple: migration. How much money have all the storage companies made on migration services?
What if you could do transparent block migration in the background, on the fly, without ever taking your servers down? We can do that. And what we do is so intelligent that we always favor the data set: when the data is being worked on, migration slows down. When the data set slows down, guess what? Migration picks up. But the point is data mobility, in this case from an old array to a new array. So whether it be migrating data, tiering data, moving data out to the cloud, whether it be primary data, backup data, or object data for archive, the bottom line is we've infused not only cloudification into our storage portfolio, but the mobility aspects as well. Which does of course include cloud, but most tiering is more likely on premises. You could tier to the cloud, but from an all-flash array to a cheap 7200 RPM array, you save a lot of money, and we can do that using AI technology with Easy Tier. All examples of moving data around transparently, quickly, efficiently, to save cost both in CapEx, using 7200 RPM arrays of course to cut costs, but also in OpEx: the storage admin. There aren't a hundred storage admins at Burris Incorporated. You had to let them go; you've hired 100 of the people back, but you hired them all for DevOps, so you have 50 guys in storage. >> Actually there are, but I'm a lousy businessman, so I'm not going to be in business long. (laughing) One more question, Eric. I mean, look, you're an old-style road warrior, you're out with customers a lot. Increasingly, and I know this because we've talked about it, you're finding yourself trying to explain to business people, not just IT people, how digital business, data, and storage come together. When you're having these conversations with executives on the business side, how does this notion of data services get discussed? What are some of the conversations like? >> Well, I think the key thing you've got to point out is storage guys love to talk speeds and feeds.
I'm so old I can still talk TPI and BPI on hard drives, and no one does that anymore, right? But when you're talking to the CEO or the CFO or the business owner, it's all about delivering data at the right performance level for your applications, workloads, and use cases, the right resiliency for your applications, workloads, and use cases, the right availability. So it's all about applications, workloads, and use cases. You don't talk the storage speeds and feeds that you would with a storage admin, or maybe with the VP of infrastructure in the Fortune 500. You talk about how it's all about the data: keeping the data secure, keeping the data reliable, keeping it at the right performance. So if it's the type of workload that needs performance, let's take the easy one, Flash. Why do I need Flash? Well, Mr. CEO, do you use logistics? Of course we do! Who do you use? SAP. Oh, how long does that logistics workload take? Oh, it takes like 24 hours to run. What if I told you you could run that every night, in an hour? That's the power of Flash. So you take what you and I are used to, storage nerdiness, and translate it into business terms; in this case, running that SAP workload in an hour versus 24 has a real business impact. And that's the way you've got to talk about storage these days. When you're talking with a storage admin, yes, you want to talk latency and IOPS and bandwidth. But the CEO is just going to turn his nose up. But when you say I can run that MongoDB workload, or I can do this or do that, and what was 24 hours now takes an hour, or half an hour, that translates to real value out of the data. And that's what they're looking for: how to extract value from the data. If the data isn't performant, you get less value. If the data isn't there, you clearly have no value. And if the data isn't available enough, so that it's down part of the time, that really hurts if you are doing truly digital business.
So, if Herzog's Bar and Grill, actually everything is done digitally, so before you get that pizza, or before you get that cigar, you have to order it online. If my website, which has a database underneath, of course, so I can handle the transactions right, I got to take the credit card, I got to get the orders right. If that is down half the time, my business is down, and that's an example of taking IT and translating it to something as simple as a Bar and Grill. And everyone is doing it these days. So when you talk about, do you want that website up all the time? Do you need your order entry system up all the time? Do you need your this or that? Then they actually get it, and then obviously, making sure that the applications run quickly, swiftly, and smoothly. And storage is, if you will, that critical foundation underneath everything. It's not the fancy windows, it's not the fancy paint. But if that foundation isn't right, what happens? The whole building falls down. And that's exactly what storage delivers regardless of the application workload. That right critical foundation of performance, availability, reliability. That's what they need, when you have that all applications run better, and your business runs better. >> Yeah, and the one thing I'd add to that, Eric, is increasingly the conversations that we're having is options. And one of the advantages of a large portfolio or a platform approach is that the things you're doing today, you'll discover new things that you didn't anticipate, and you want the option to be able to do them quickly. >> Absolutely. >> Very, very important thing. So, applications, workload, use cases, multi-cloud storage portfolio. Eric, thanks again for coming on theCUBE, always love having you. >> Great, thank you. >> And once again, I'm Peter Burris, talking with Eric Herzog, CMO, VP of Worldwide Storage Channels at IBM Storage. Thanks again for watching this CUBE conversation, until next time. (upbeat music)
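Herzog's availability point has simple arithmetic behind it. As an illustrative aside (the percentages are common availability tiers, not figures from the interview), here is the conversion from an availability percentage to downtime per year:

```python
# Illustrative conversion from an availability percentage to hours of
# downtime per year; the sample percentages are common industry tiers,
# not numbers quoted in the interview.

HOURS_PER_YEAR = 24 * 365

def downtime_hours_per_year(availability_pct: float) -> float:
    """Hours per year the service is unavailable at a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.999):
    hrs = downtime_hours_per_year(pct)
    print(f"{pct}% available -> {hrs:.2f} hours/year down")
```

At 99% availability, "Herzog's Bar and Grill" is dark for more than three and a half days a year; at "five nines" it is down only a few minutes, which is the difference the foundation metaphor is pointing at.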

Published Date : Mar 22 2019



IBM Flash System 9100 Digital Launch


 

(bright music) >> Hi, I'm Peter Burris, and welcome to another special digital community event, brought to you by theCUBE and Wikibon. We've got a great session planned for the next hour or so. Specifically, we're gonna talk about the journey to the data-driven multi-cloud. Sponsored by IBM, with a lot of great thought leadership content from IBM guests. Now, what we'll do is, we'll introduce some of these topics, we'll have these conversations, and at the end, this is gonna be an opportunity for you to participate, as a community, in a crowd chat, so that you can ask questions, voice your opinions, hear what others have to say about this crucial issue. Now why is this so important? Well Wikibon believes very strongly that one of the seminal features of the transition to digital business, driving new-type AI classes of applications, et cetera, is the ability of using flash-based storage systems and related software, to do a better job of delivering data to more complex, richer applications, faster, and that's catalyzing a lot of the transformation that we're talking about. So let me introduce our first guest. Eric Herzog is the CMO and VP Worldwide Storage Channels at IBM. Eric, thanks for coming on theCUBE. >> Great, well thank you Peter. We love coming to theCUBE, and most importantly, it's what you guys can do to help educate all the end-users and the resellers that sell to them, and that's very, very valuable and we've had good feedback from clients and partners, that, hey, we heard you guys on theCUBE, and very interesting, so I really appreciate all the work you guys do. >> Oh, thank you very much. We've got a lot of great things to talk about today. First, and I want to start it off, kick off the proceedings for the next hour or so by addressing the most important issue here. Data-driven. 
Now Wikibon believes that digital transformation means something: it's the process by which a business treats data as an asset, re-institutionalizes its work, and changes the way it engages with customers, et cetera. But this notion of data-driven is especially important because it elevates the role that storage is gonna play within an organization. Sometimes I think maybe we shouldn't even call it storage. Talk to us a little bit about data-driven and how that concept is driving some of the innovation that's represented in this and future IBM products. >> Sure. So I think the first thing, it is all about the data, and it doesn't matter whether you're a small company, like Herzog's Bar and Grill, or the largest Fortune 500 in the world. The bottom line is, your most valuable asset is your data, whether that's customer data, supply chain data, partner data that comes to you, that you use, services data, the data you guys sell, right? You're an analysis firm, so you've got data, and you use that data to create your analysis, and then you use that as a product. So, data is the most critical asset. At the same time, data always goes onto storage. So if that foundation of storage is not resilient, is not available, is not performant, then either A, it's totally unavailable, right, you can't get to the customer data. Or B, there's a problem with the data, okay; so you're doing supply chain, and if the storage corrupts the data, then guess what? You can't send out the T-shirts to the right retail location, or have it available online if you're an online retailer.
If the foundation of the building isn't rock solid, the building falls down, whether your building is big or small. That's what storage does, and then storage can also optimize the building above it. So think of it as more than just the foundation: a foundation, if you will, that almost has a tree, with things that come up from the bottom, that beautiful image, and storage can help you out. For example, metadata. Metadata, which is data about data, could be used by analytics packages, and guess what? That metadata could be exposed by the storage. So that's why data-driven is so important from an end-user perspective, and why storage is that foundation underneath a data-driven enterprise. >> Now we've seen a lot of folks talk about how cloud is the centerpiece of thinking about infrastructure. You're suggesting that data is the centerpiece of infrastructure, and cloud is gonna be an implementation decision: where do I put the workloads, what are the costs, all the other elements associated with it. But it suggests ultimately that data is not gonna end up in one place. We have to think about data as being where it needs to be to perform the work. That suggests multi-cloud, multi-premise. Talk to us a little bit about the role that storage and multi-cloud play together. >> So let's take multi-cloud first and peel that away. So multi-cloud, we see a couple of different things. So first of all, certain companies don't want to use a public cloud. Whether it's a security issue, or the fact that some people have found out that public cloud, no matter who the vendor is, is sort of a razor-and-razor-blades model: very cheap to put the storage out there, but if you want certain SLAs, guess what? The cloud vendors charge more. And if you move data around a lot, in and out as you were describing, because it's really that valuable, guess what? The cloud provider charges you for ingress and egress. So it's almost the razor and the razor blades.
So A, there's a cost factor in public only. B, you've got people that have security issues. C, what we've seen is, in many cases, hybrid. So certain datasets go out to the cloud and other datasets stay on the premises. So you've got that aspect of multi, which is public, private or hybrid. The second aspect, which is very common in bigger companies that are either divisionalized or large geographically, is literally the usage, in a hybrid or a public cloud environment, of multiple cloud vendors. So for example, in several countries the data has to physically stay within the confines of that country. So if you're a big enterprise and you've got offices in 200 different, well not 200, but 100 different countries, and 20 of 'em you have to keep in that country by law. If your cloud provider doesn't have a data center there you need to use a different cloud provider. So you've got that. And you also have, I would argue that the cloud is not new anymore. The internet is the original cloud. So it's really old. >> Cloud in many respects is the programming model, or the mature programming model for the internet-based programming applications. >> I'd agree with that. So what that means is, as it gets more mature, from the mid-sized company up, all of a sudden procurement's involved. So think about the way networking, storage and servers, and sometimes even software was bought. The IT guy, the CIO, the line of business might specify, I want to use it but then it goes to procurement. In the mid to big company it's like, great, are we getting three bids on that? So we've also seen that happen, particularly with larger enterprise where, well you were using IBM cloud, that's great, but you are getting a quote from Microsoft or Amazon right? So those are the two aspects we see in multi-cloud, and by the way, that can be a very complex situation dealing with big companies. 
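The "razor and razor blades" cost factor can be put in rough numbers. This is a toy model with made-up prices (not any vendor's actual rates), just to show how egress charges, rather than at-rest storage, can dominate a bill when data moves a lot:

```python
# Toy public-cloud cost model. The per-GB prices below are illustrative
# assumptions, not any provider's real rates.

STORAGE_COST_GB_MONTH = 0.02   # assumed at-rest price per GB-month
EGRESS_COST_GB = 0.09          # assumed per-GB egress (data-out) price

def monthly_bill(stored_gb: float, egress_gb: float) -> float:
    """At-rest cost plus egress; ingress is typically free."""
    return stored_gb * STORAGE_COST_GB_MONTH + egress_gb * EGRESS_COST_GB

# 50 TB stored: compare a quiet month with one that reads 30 TB back out.
quiet = monthly_bill(50_000, 0)
busy = monthly_bill(50_000, 30_000)
print(f"quiet month: ${quiet:,.0f}, busy month: ${busy:,.0f}")
```

Under these assumed rates, one month of heavy data movement nearly quadruples the bill, which is exactly why hybrid placement decisions have to account for how often data comes back out, not just how much is parked.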
So the key thing that we do at IBM, is make sure that whichever model you take, public, private or hybrid, or multiple public clouds, or multiple public cloud providers, using a hybrid configuration, that we can support that. So things like our transparent cloud tiering, we've also recently created some solution blueprints for multi-clouds. So these things allow you to simply and easily deploy. Storage has to be viewed as transparent to a cloud. You've gotta be able to move the data back and forth, whether that be backing the data up, or archiving the data, or secondary data usage, or whatever that may be. And so storage really is, gotta be multi-cloud and we've been doing those solutions already and in fact, but honestly for the software side of the IBM portfolio for storage, we have hundreds of cloud providers mid, big and small, that use our storage software to offer backup as a service or storage as a service, and we're again the software foundation underneath what an end-user would buy as a service from those cloud providers. >> So I want to pick up on a word you used, simplicity. So, you and I are old infrastructure hacks and for many years I used to tell my management, infrastructure must do no harm. That's the best way to think about infrastructure. Simplicity is the new value proposition, complexity remains the killer. Talk to us a little bit about the role that simplicity in packaging and service delivery and everything else is again, shaping the way you guys, IBM, think about what products, what systems and when. >> So I think there's a couple of things. First of all, it's all about the right tool for the right job. So you don't want to over-sell and sell a big, giant piece of high-end all-flash array, for example, to a small company. They're not gonna buy that. So we have created a portfolio of which our FlashSystem 9100 is our newest product, but we've got a whole set of portfolios from the entry space to the mid range to the high end. 
We also have stuff that's tuned for applications. So for example, our Elastic Storage Server, which comes in an all-flash configuration, is ideal for big data analytics workloads. Our DS8000 family of flash is ideal for mainframe attach, and in fact close to 65% of all mainframe-attached storage is from IBM. But you have the right tool for the right job, so that's item number one. The second thing is making it easier and easier to use. Whether that be configuring the physical entity itself, so how do you cable it, how do you rack and stack it, making sure that it easily integrates into whatever else they're putting together in their data center, be it a cloud data center or a traditional on-premises data center, it doesn't matter. The third thing is all about the software. So how do you have software that makes the array easier and easier to use, and is heavily automated based on AI. So the old automation way, and we've both been in that era, was you set policies. Policy-based management, when it came out 10 years ago, was a transformational event. Now it's all about using AI in your infrastructure. Not only does your storage need to be right to enable AI at the server workload level, but we're saying we've actually deployed AI inside of our storage, making it easier for the storage manager or the IT manager, and in some cases even the app owner, to configure the storage 'cause it's automated.
And by the way, our Spectrum copy data management software allows you, but you need a blueprint so that it's easy for the end user, or for those end users who buy through our partners, our partners then have this recipe book, these blueprints, you put them together, use the software that happens to come embedded in our new FlashSystem 9100 and then they use that and create all these various different recipes. Almost, I hate to say it, like a baker would do. They use some base ingredients in baking but you can make cookies, candies, all kinds of stuff, like a donut is essentially a baked good that's fried. So all these things use the same base ingredients and that software that comes with the FlashSystem 9100, are those base ingredients, reformulated in different models to give all these multi-cloud blueprints. >> And we've gotta learn more about vegetables so we can talk about salad in that metaphor, (Eric laughing) you and I. Eric once again. >> Great, thank you. >> Thank you so much for joining us here on the CUBE. >> Great, thank you. >> Alright, so let's hear this come to life in the form of a product video from IBM on the FlashSystem 9100. >> Some things change so quickly, it's impossible to track with the naked eye. The speed of change in your business can be just as sudden and requires the ability to rapidly analyze the details of your data. The new, IBM FlashSystem 9100, accelerates your ability to obtain real-time value from that information, and rapidly evolve to a multi-cloud infrastructure, fueled by NVMe technology. In one powerful platform. IBM FlashSystem 9100, combines the performance, of IBM FlashCore technology. The efficiency of IBM Spectrum Virtualize. 
And IBM software solutions to speed your multi-cloud deployments, reduce overall costs, plan for performance and capacity, and simplify support, using cloud-based IBM Storage Insights to provide AI-powered predictive analytics, and simplify data protection with a storage solution that's flexible, modern, and agile. It's time to re-think your data infrastructure. (upbeat music) >> Great to hear about the IBM FlashSystem 9100, but let's get some more details. To help us with that, we've got Bina Hallman, who's the Vice President of Offering Management at IBM Storage. Bina, welcome to theCUBE. >> Well, thanks for having me. It's an exciting event, we're looking forward to it. >> So Bina, I want to build on some of the stuff that we talked to Eric about. Eric did a good job of articulating the overall customer challenge. As IBM conceives how it's going to approach customers and help them solve these challenges, let's talk about some of the core values that IBM brings to bear. What would you say would be, say, the three things that IBM really focuses on, as it thinks about its core values to approach these challenges? >> Sure, sure. It's really around helping the client, providing a simple one-stop shopping approach, ensuring that we're doing all the right things to bring the capabilities together so that clients don't have to take different component technologies and put them together themselves. They can focus on providing business value. And it's really around delivering the economic benefits around CapEx and OpEx, delivering a set of capabilities that help them move on their journey to a data-driven multi-cloud. Make it easier and make it simpler. >> So, making sure that it's one place they can go where they can get the solution. But IBM has a long history of engineering. Are you doing anything special in terms of pre-testing, pre-packaging some of these things to make it easier?
>> Yeah, we over the years have worked with many of our clients around the world and helping them achieve their vision and their strategy around multi-cloud, and in that journey and those set of experiences, we've identified some key solutions that really do make it easier. And so we're leveraging the breadth of IBM, the power of IBM, making those investment to deliver a set of solutions that are pre-tested, they are supported at the solutions level. Really focusing on delivering and underpinning the solutions with blueprints. Step-by-step documentation, and as clients deploy these solutions, they run into challenges, having IBM support to assist. Really bringing it all together. This notion of a multi-cloud architecture, around delivering modern infrastructure capabilities, NVMe acceleration, but also some of our really core differentiation that we deliver through FlashCore data reduction capabilities, along with things like modern data protection. That segment is changing and we really want to enable clients, their IT, and their line of business to really free them up and focus on a business value, versus putting these components together. So it's really around taking those complex things and make them easier for clients. Get improved RPO, RTO, get improved performance, get improved costs, but also flexibility and agility are very critical. >> That sounds like therefore, I mean the history of storage has been trade-offs that you, this can only go that fast, and that tape can only go that fast but now when we start thinking about flash, NVMe, the trade-offs are not as acute as they used to be. Is IBM's engineering chops capable of pointing how you can in fact have almost all of this at one time? >> Oh absolutely. The breadth and the capabilities in our R and D and the research capabilities, also our experiences that I talked about, engagements, putting all of that together to deliver some key solutions and capabilities. 
Like, look, everybody needs backup and archive. Backup to recover your data in case a disaster occurs, archive for long-term retention. That data management, the data protection segment, is going through a transformation. New emerging capabilities, new ways to do backup. And what we're doing is pulling all of that together, with things that we introduced, for example, our Protect Plus in the fourth quarter, along with this FS 9100 and the cloud capabilities, to deliver a solution around data protection and data reuse, so that you have a modern backup approach for both virtual and physical environments that is really based on things like snapshots and mountable copies. So you're not using that traditional approach of recovering your copy from a backup by bringing it back. Instead, all you're doing is mounting one of those copies and instantly getting your application back and running for operational recovery. >> So to summarize some of those values: one-stop, pre-tested, advanced technologies, smartly engineered. You guys did something interesting on July 10th. Why don't you talk about how those values, and the understanding of the problem, manifested so fast. Kind of an exciting set of new products that you guys introduced on July 10th. >> Absolutely. On July 10th we not only introduced our flagship FlashSystem, the FS 9100, which delivers some amazing client value around the economic benefits of CapEx and OpEx reduction, but also seamless data mobility, data reuse, and security. All the things that are important for a client on their cloud journey. In addition to that, we infused that offering with AI-based predictive analytics, and of course that performance and NVMe acceleration is really key. But in addition to doing that, we've also introduced some very exciting solutions. Really three key solutions: one around data protection and data reuse, to enable clients to get that agility, and second is around business continuity and data reuse.
To be able to really reduce the expense of having business continuity in today's environment. It's a high-risk environment; it's inevitable to have disruptions, but really being prepared to mitigate some of those risks and having operational continuity is important, by doing things like leveraging the public cloud for your DR capabilities. That's very important, so we introduced a solution around that. And the third is around private cloud. Taking your IBM storage, your FS 9100, along with the heterogeneous environment you have, and making it cloud-ready. Getting the cloud efficiencies. Making it so you can use it for environments to create things like native cloud applications that are portable, from on-prem and into the cloud. So those are some of the key ways that we brought this together to really deliver on client value. >> So could you give us just one quick use case of your clients that are applying these technologies to solve their problems? >> Yeah, so let me use the first one that I talked about, the data protection and data reuse. So to be able to take your on-premise environment, really apply an abstraction layer, set up catalogs, set up SLAs and access control, but then be able to step away and manage that storage all through APIs. We have a lot of clients that are doing that and then taking that, making the snapshots, using those copies for things like disaster recovery or secondary use cases like analytics and dev-ops.
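The catalog-and-API pattern Bina describes (register snapshots with SLA tags, then hand secondary consumers a copy instead of touching production) can be sketched in a few lines. All names here are invented for illustration; this is not IBM's API:

```python
# Hypothetical sketch of a snapshot catalog with SLA tags and API-driven
# copy reuse. Class and method names are invented, not IBM's interfaces.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Snapshot:
    volume: str
    taken_at: datetime
    sla: str                  # e.g. "gold" = frequent snaps, long retention

@dataclass
class CopyCatalog:
    snapshots: list = field(default_factory=list)

    def register(self, volume: str, sla: str) -> Snapshot:
        """Record a new snapshot of a volume in the catalog."""
        snap = Snapshot(volume, datetime.now(), sla)
        self.snapshots.append(snap)
        return snap

    def latest(self, volume: str) -> Snapshot:
        """Newest registered snapshot for a volume."""
        candidates = [s for s in self.snapshots if s.volume == volume]
        return max(candidates, key=lambda s: s.taken_at)

    def mount_for(self, volume: str, consumer: str) -> str:
        """Hand a dev/test or analytics consumer the newest copy."""
        snap = self.latest(volume)
        return f"/mnt/{consumer}/{snap.volume}@{snap.taken_at:%Y%m%dT%H%M%S}"

catalog = CopyCatalog()
catalog.register("erp-db", sla="gold")
path = catalog.mount_for("erp-db", consumer="devops")
print(path)  # dev/test works from a mounted copy; production is untouched
```

The point of the pattern is self-service: developers and analytics jobs pull the latest copy through an API call, so production volumes are never in the blast radius of test workloads.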
You know, dev-ops is a really important use case and our clients are really leveraging some of these capabilities for it because you want to make sure that, as application developers are developing their applications, they're working with the latest data and making sure that the testing they're doing is meaningful in finding the maximum number of defects so you get the highest quality of code coming out of them and being able to do that, in a self-service driven way so that they're not having to slow down their innovation. We have clients leveraging our capabilities for those kinds of use cases. >> It's great to hear about the FlashSystem 9100 but let's hear what customers have to say about it. Not too long ago, IBM convened a customer panel to discuss many aspects of this announcement. So let's hear what some of the customers had to say about the FlashSystem 9100. >> Now Owen, you've used just about every flash system that IBM has made. Tell us, what excites you about this announcement of our new FlashSystem 9100. >> Well, let's start with the hardware. The fact that they took the big modules from the older systems, and collapsed that down to a two-and-a-half inch form-factor NVMe drive is mind-blowing. And to do it with the full speed compression as well. When the compression was first announced, for the last FlashSystem 900, I didn't think it was possible. We tested it, I was proven wrong. (laughing) It's entirely possible. And to do that on a small form-factor NVMe drive is just astounding. Now to layer on the full software stack, get all those features, and the possibilities for your business, and what we can do, and leverage those systems and technologies, and take the snapshots in the replication and the insights into what our system's doing, it is really mind-blowing what's coming out today and I cannot wait to just kick those tires. There's more. 
So with that real-world compression ratio, that we can validate on the new 900, and it's the same in this new system, which is astounding, we can get more, and just the amount of storage you get in this really small footprint. Like, two rack units is nothing. Half our servers are two rack units, which is absolutely astounding; to get that much data in such a very small package, like, 460 terabytes, is phenomenal, with all these features. The full solution is amazing, but what else can we do with it? And especially, as they've said, if it's for a comparable price as what we've bought before, and we're getting the full solution with the software, the hardware, the extremely small form-factor, what else can you do? What workloads can you pull forward? So where our backup systems weren't on the super fast storage like our production systems are, now we can pull those forward and they can give the same performance as production to run the back-end of the company, which I can't wait to test. >> It's great to hear from customers, the centerpiece of the Wikibon community. But let's also get the analyst's perspective. Let's hear from Eric Burgener, who's the Research Vice President for Storage at IDC. >> Thanks very much Peter, good to be back. >> So we've heard a lot from a number of folks today about some of the changes that are happening in the industry, and I want to amplify some things and get the analyst's perspective. So Wikibon, as a fellow analyst firm, believes pretty strongly that the emergence of flash-based storage systems is one of the catalyst technologies driving a lot of the changes. If only because old storage technologies are focused on persisting data: disk, slow, but at least it was there. Flash systems allow a bit flip; they allow you to think about delivering data to anywhere in your organization, to different applications, without a lot of complexity. But it's gotta be more than that.
What else is crucial, to making sure that these systems in fact are enabling the types of applications that customers are trying to deliver today. >> Yeah, so actually there's an emerging technology that provides the perfect answer to that, which is NVMe. If you look at most of the all-flash systems that have shipped so far, they've been based around SCSI. SCSI was a protocol designed for hard disk drives, not flash, even though you can use it with flash. NVMe is specifically designed for flash and that's really gonna open up the ability to get the full value of the performance, the capacity utilization, and the efficiencies, that all-flash arrays can bring to the market. And in this era of big data, more than ever, we need to unlock that performance capability. >> So as we think about the big data, AI, that's gonna have a significant impact overall in the market and how a lot of different vendors are jockeying for position. When IDC looks at the impact of flash, NVMe, and the reemergence of some traditional big vendors, how do you think the market landscape's gonna be changing over the next few years? >> Yeah, how this market has developed, really the NVMe-based all-flash arrays are gonna be a carve-out from the primary storage market which are SCSI-based AFAs today. So we're gonna see that start to grow over time, it's just emerging. We had startups begin to ship NVMe-based arrays back in 2016. This year we've actually got several of the majors who've got products based around their flagship platforms that are optimized for NVMe. So very quickly we're gonna move to a situation where we've got a number of options from both startups and major players available, with the NVMe technology as the core. >> And as you think about NVMe, at the core, it also means that we can do more with software, closer to the data. So that's gotta be another feature of how the market's gonna evolve over the next couple of years, wouldn't you say? >> Yeah, absolutely. 
A lot of the data services that generate latencies, like in-line data reduction, encryption and that type of thing, we can run those with less impact on the application side when we have much more performant storage on the back-end. But I have to mention one other thing. To really get all that NVMe performance all the way to the application side, you've gotta have an NVMe Over Fabric connection. So it's not enough to just have NVMe in the back-end array but you need that RDMA connection to the hosts and that's what NVMe Over Fabric provides for you. >> Great, so that's what's happening on the technology-product-vendor side, but ultimately the goal here is to enable enterprises to do something different. So what's gonna be the impact on the enterprise over the next few years? >> Yeah, so we believe that SCSI clearly will get replaced in the primary storage space, by NVMe over time. In fact, we've predicted that by 2021, we think that over 50% of all the external, primary storage revenue, will be generated by these end-to-end NVMe-based systems. So we see that transition happening over the course of the next two to three years. Probably by the end of this year, we'll have NVMe-based offerings, with NVMe Over Fabric front ends, available from six of the established storage providers, as well as a number of smaller startups. >> We've come a long way from the brown, spinning stuff, haven't we? >> (laughing) Absolutely. >> Alright, Eric Burgener, thank you very much. IDC Research Vice President, great once again to have you in theCUBE. >> Thanks Peter. >> Always great to get the analyst's perspective, but let's get back to the customer perspective. Again, from that same panel that we saw before, here's some highlights of what customers had to say about IBM's Spectrum family of software. (upbeat music) We love hearing those customer highlights but let's get into some of the overall storage trends and to do that we've asked Eric Herzog and Bina Hallman back to theCUBE. 
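Burgener's SCSI-versus-NVMe point can be put in rough numbers with Little's law: sustained IOPS is bounded by outstanding I/Os divided by per-I/O latency. The queue depths and latency below are illustrative assumptions (NVMe's spec allows vastly deeper queueing than a single SCSI command queue), not measurements of any product:

```python
# Back-of-envelope Little's-law sketch of why protocol parallelism
# matters for flash. Queue depths and latency are assumed, illustrative
# values, not benchmarks of any array.

def max_iops(outstanding_ios: int, latency_s: float) -> float:
    """Little's law: throughput = concurrency / latency."""
    return outstanding_ios / latency_s

FLASH_LATENCY_S = 100e-6  # assume a 100-microsecond flash I/O

# A single SCSI queue of ~256 outstanding commands caps throughput;
# NVMe permits thousands of outstanding I/Os across many deep queues,
# so the device, not the protocol, becomes the bottleneck.
scsi_cap = max_iops(outstanding_ios=256, latency_s=FLASH_LATENCY_S)
nvme_cap = max_iops(outstanding_ios=4096, latency_s=FLASH_LATENCY_S)

print(f"SCSI-limited: {scsi_cap:,.0f} IOPS, NVMe-style: {nvme_cap:,.0f} IOPS")
```

The same arithmetic explains the NVMe over Fabrics point: if the host-to-array hop reintroduces a narrow, high-latency path, the denominator grows and the advantage evaporates, which is why the RDMA connection matters end to end.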
Eric, Bina, thanks again for coming back. So, what I want to do now is, I want to talk a little bit about some trends within the storage world and what the next few years are gonna mean, but Eric, I want to start with you. I was recently at IBM Think, and Ginni Rometty talked about the idea of putting smart to work. Now, I can tell you, that means something to me because the whole notion of how data gets used, how work gets institutionalized around your data, what does storage do in that context? To put smart to work. >> Well I think there's a couple of things. First we've gotta realize that it's not about storage, it's about the data and the information that happens to sit on the storage. So you have to have storage that's always available, always resilient, is incredibly fast, and as I said earlier, transparently moves things in and out of the cloud, automatically, so that the user doesn't have to do it. Second thing that's critical is the integration of AI, artificial intelligence. Both into the storage solution itself, of what the storage does, how you do it, and how it plays with the data, but also, if you're gonna do AI on a broad scale, and for example we're working with a customer right now and their AI configuration is 100 petabytes. Leveraging our storage underneath the hood of that big, giant AI analytics workload. So that's why you have to think of AI both in the storage, to make the storage better and more productive with the data and the information that it has, but then also as the undercurrent for any AI solution that anyone wants to employ, big, medium or small.
What are some of the technologies that are becoming increasingly important, from a storage standpoint, that people have to think about as they try to achieve their digital transformation objectives? >> That's right, I mean Peter, in addition to some of the basics around making sure your infrastructure is enabled to handle the SLAs and the level of performance that's required by these AI workloads, when you think about what Eric said, this data's gonna reside, it's gonna reside on-premises, it's gonna be behind a firewall, potentially in the cloud, or multiple public clouds. How do you manage that data? How do you get visibility to that data? And then be able to leverage that data for your analytics. And so data management is going to be very important but also, being able to understand what that data contains and be able to run the analytics and be able to do things like tagging the metadata and then doing some specialized analytics around that is going to be very important. The fabric to move that data, data portability from on-prem into the cloud, and back and forth, bidirectionally, is gonna be very important as you look into the future.
One is the move to NVMe, so we've integrated NVMe into our FlashSystem 9100, we have fabric support, we already announced back in February actually, fabric support for NVMe over an InfiniBand infrastructure with our FlashSystem 900 and we're extending that to all of the other inter-connects from a fabric perspective for NVMe, whether that be Ethernet or whether that be Fibre Channel, and we put NVMe in the system. We also have integrated our custom flash modules, our FlashCore technology allows us to take raw flash and create, if you will, a custom SSD. Why does that matter? We can get better resiliency, we can get incredibly better performance, which is very tied in to your applications, workloads and use cases, especially in a data-driven multi-cloud environment. It's critical that the flash is incredibly fast, and it has to be resilient, because what do you do if you move your data to the cloud and you lose it? So if you don't have that resiliency and availability, that's a big issue. I think the third thing is, what I call the cloud-ification of software. All of IBM's storage software is cloud-ified. We can move things simultaneously into the cloud. It's all automated. We can move data around all over the place. Not only our data, not only to our boxes, we could actually move other people's arrays' data around for them and we can do it with our storage software. So it's really critical to have this cloud-ification. It's really cool to have this new technology, NVMe from an end-to-end perspective for fabric and then inside the system, to get the right resiliency, the right availability, the right performance for your applications, workloads and use cases, and you've gotta make sure that everything is cloud-ified and portable, and mobile, and we've done that with the solutions that are wrapped into our FlashSystem 9100 that we launched a couple of weeks ago. >> So you are both thought leaders in the storage industry.
I think that's very clear, and the whole notion of storage technology, and you work with a lot of customers, you see a lot of use cases. So I want to ask you one quick question, to close here. And that is, if there was one thing that you would tell a storage leader, a CIO or someone who thinks about storage in a broad way, one mindset change that they have to make, to start this journey and get it going so that it's gonna be successful. What would that one mindset change be? Bina, what do you think? >> You know, I think it's really around, there's a lot of capabilities out there. It's really around simplifying your environment and making sure that, as you're deploying these new solutions or new capabilities, that you've really got a partnership with a vendor that's gonna help you make it easier. Take those complex tasks, make them easier, deliver those step-by-step instructions and documentation and be right there when you need their assistance. So I think that's gonna be really important.
That's all because we're so cloud-ified from a software perspective, so all storage is not the same, and you can't think of storage as, I need the cheapest storage. It's gotta be, how does it drive business value for my oceans of data? That's what matters most, and by the way, we're very cost-effective anyway, especially because of our custom flash modules which allow us to have a real price advantage. >> You ain't doing business at a level of 100 petabytes if you're not cost effective. >> Right, so those are the things that we see as really critical, is storage is not storage. Storage is about data and information. >> So let me summarize your point then, if I can, really quickly. In other words, we have to think about storage as the first step to great data management. >> Absolutely, absolutely Peter. >> Eric, Bina, great conversation. >> Thank you. >> So we've heard a lot of great thought leadership comments on the data-driven journey with multi-cloud and some great product announcements. But now, let's do the crowd chat. This is your opportunity to participate in these proceedings. It's the centerpiece of the digital community event. What questions do you have? What comments do you have? What answers might you provide to your peers? This is an opportunity for all of us collectively to engage and have those crucial conversations that are gonna allow you to, from a storage perspective, drive business value in your digital business transformations. So, let's get straight to the crowd chat. (bright music)
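The transparent movement of data between on-premises flash and public cloud that Eric and Bina describe boils down to a placement policy. A minimal sketch, in which the 30-day threshold and tier names are assumptions for illustration, not product behavior:

```python
from datetime import datetime, timedelta

# Cold extents migrate to cloud object storage; hot extents stay on flash.
COLD_AFTER = timedelta(days=30)  # assumed policy threshold

def tier_for(last_access: datetime, now: datetime) -> str:
    """Decide where an extent should live based on access recency."""
    return "cloud-object" if now - last_access > COLD_AFTER else "on-prem-flash"

now = datetime(2018, 7, 25)
print(tier_for(datetime(2018, 7, 20), now))  # on-prem-flash
print(tier_for(datetime(2018, 5, 1), now))   # cloud-object
```

Real tiering engines track access at the extent level and move data asynchronously, but the decision logic is essentially this comparison applied continuously.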

Published Date : Jul 25 2018



EMBARGOED DO NOT PUBLISH Eric Herzog 06.15.18 CUBEConversation


 

(sprightly music) >> Eric, welcome back to theCUBE. >> Great. Well, thanks for having us. We love to be participating in this community event, something a little bit different from what we've usually done with theCUBE. We're really excited to be here today. >> Yeah, this is a really special thing that we're trying to do 'cause we're going to introduce some thought leadership concepts, take a look at some recent moves by IBM and then at the end of it, as we've talked about, give an opportunity for the community to come together in the crowd chat. But let me start with this proposition, Eric. Our research, here at Wikibon, pretty strongly shows that we're in the midst of a pretty similar transformation in the industry. Everybody talks about digital transformation, everybody talks about the emergence of AI, big data and related type of technologies are going to have an enormous impact on how businesses operate and how they engage their customers. But we think that, none of that would be possible without some of the fundamental changes that are happening in storage. I mean, a lot of these AI algorithms for knowledge representation and learning have been around for 30, 40 years. What's new, is now we have storage technologies that are not dedicated to persisting data but actually delivering data, and increasingly delivering data over high-value storage services that look like a fabric. And that changes the way we think about applications and we could think about digital business. Have I got that right? >> Yeah, absolutely. It's very much about the application, workloads and use cases and you go to digital transformation. And what you need, underneath that is a strong storage highway. From a performance perspective you've got to be able to move data back and forth seamlessly, transparently and automatically, to the cloud, back on prem and from cloud-to-cloud in a multi-cloud environment. All that's critical to go to digital transformation. 
For any company, big, medium or small. >> So, specifically it's a lot of the new things that we can do with Flash, in terms of getting data out faster but you mentioned multi-cloud. Now, it seems like it's a practical reality, it's not going to all end up in one cloud, in fact there's going to be multiple avenues for achieving that cloud experience. What do you think about that? Why is that so important? >> Well I think you've got a couple different things. From a perspective of, why multi-cloud? You've got some people that want to keep it private. So you've got companies we need to, for a storage company, we need to provide them with private cloud technology. You've got public cloud providers which are a big customer, by the way, of IBM Storage. And the third thing is hybrid cloud, part private, part public, moving around. And lastly, some people really will be using multiple public cloud providers whether it be cloud provider going public or whether they're trying to be hybrid. You're going to use IBM Cloud and you're going to use Amazon depending on the size of the company, the geography and sometimes even legal regulation. IBM's whole storage strategy is to make sure not only provide that incredible resiliency, that incredible reliability and the performance you need for all data set workloads as they move to the cloud, but also you can transparently move data to the cloud and in fact, all of our storage software is heavily cloudified, as we've talked about actually a couple of years ago, with IBM Storage software. >> So, that suggests ultimately that we and that these businesses think about digital business, they're going to use their data differently. And storage, types of storage technology, relying on it to happen, in a multi-cloud setting. But what are some of those kind of baseline storage technologies that make this, that become so critical, as we think about this transformation? >> So, the first thing is the move to NVMe. 
NVMe is a new protocol that's very high-speed. We've introduced a new product, couple weeks ago, called the FlashSystem 9100, that has NVMe actually in the storage subsystem itself, dramatically increasing the performance, for example, you can go up to 10 million IOPS, and have latency of only 350 mics. Well, guess what? The real enemy of all workloads and applications is your latency and at 350 mics, that's almost nothing. Then the second thing is making sure that you have a robust software interconnect. So what we've done, is make sure that all of our storage products can automatically and transparently tier data out to multiple clouds, not just one but in a multi cloud environment. So those two things are critical. The high performance of NVMe that we just launched and integrating a storage subsystem with all of this cloudified software, and getting it for one price. >> Now you mentioned the FlashSystem 9100. I presume also that there is new thoughts about how to imagine packaging, how to imbue some of that engineering discipline into some of these new products. Tell, talk to us a little bit about some of the things that IBM is doing to simplify the packaging and the availability of these types of technologies in context of this journey to the multi-cloud. >> So we've done two things. In addition to having this FlashSystem 9100 being NVMe from end-to-end, supporting NVMe over fabric infrastructures, you talked about already, the cloud is all about a fabric. But, secondly what we're doing is integrating a whole family of the IBM Spectrum Storage software. 
Spectrum Protect Plus, we've got our Spectrum Connect for containerized environments, we've got our Spectrum Virtualize for public cloud, the base software is our regular Spectrum Virtualize which happens to work with over 440 arrays that aren't ours, giving them all sorts of technology such as data-at-rest encryption, the ability to move out to the cloud, so if you've got an older array and it happens to be working with the 9100 with our Spectrum Virtualize, we can move data from an older array, that's not ours, or our older arrays too, and move it out to IBM Cloud or move it out to Amazon or Azure with our transparent cloud tiering. So imbuing all of this software functionality. And now the third thing we've done is create a bunch of solutions blueprints, think of them as recipes. Easy to use for the end user, it shows them what they need to do to work in a private cloud environment, what they need to do to really work on secondary data sets. Remember, data is your most valuable asset. It's not just about your primary data. What do you do with your secondary data? Are you mining that for AI? Are you using that to do any business intelligence? And we've got software that allows you to do that, and all of that is imbued and embedded into your FlashSystem 9100, when you buy it. >> Sounds like a pretty exciting time for storage. >> It is. Storage is white hot. >> Great conversation Eric-- >> White hot. >> Great. >> Alright so that's the first one, we'll go into the segue. (sprightly music) Good? (sprightly music)
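One way to sanity-check the FlashSystem 9100 figures quoted earlier in this segment (up to 10 million IOPS at roughly 350 microseconds) is Little's law, which ties throughput, latency, and in-flight I/Os together:

```python
# Little's law: concurrency = throughput x latency. Sustaining the quoted
# figures implies thousands of simultaneous outstanding I/Os, which is the
# kind of parallelism NVMe's deep, per-core queues are built to supply.

def outstanding_ios(iops: float, latency_us: float) -> float:
    """In-flight I/Os needed to sustain `iops` at `latency_us` microseconds."""
    return iops * latency_us / 1_000_000

in_flight = outstanding_ios(10_000_000, 350)
print(in_flight)  # 3500.0
```

A single SCSI queue a few hundred commands deep cannot hold 3,500 I/Os in flight; an NVMe host with dozens of queues can, which is why the protocol and the headline numbers go together.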

Published Date : Jun 15 2018



IBM’s 20 February 2018 Storage Announcements with Eric Herzog


 

(fast orchestral music) >> Hi, I'm Peter Burris, and welcome to another Wikibon CUBE Conversation. Today I'm joined by Eric Herzog, who's the CMO and Vice President of Channels in IBM's Storage Group. Welcome, Eric. >> Peter, thank you very much. Really appreciate spending time with theCUBE. >> Absolutely, it's always great to have you here, Eric. And you know, it's interesting. When you come in, it's kind of, let's focus on storage, cause that's what you do, but it's kind of interesting overall, the degree to which storage and business is now becoming more than just a thing that you have to have, but part of your overall business strategy increasingly because of the role that digital business is playing. Well, earlier today IBM made some pretty consequential announcements about how you intend to help customers draw those two together closely. Why don't you take us through 'em? >> So, first thing I think, with the digital business, it's all about data. And the digital business is driven by data. Data always ends up on storage and is always managed by storage software, so while it may be underneath the hood if you will, it is the critical engine underneath that entire car. If you don't have the right engine or transmission, which you could argue storage and storage software is, then you can't have a truly digital business. >> True, so tell us, what did IBM do? >> So what we did is we announced a number of technologies today, some of which were enhancements, some of which were brand new. So for example, a lot of it was around our Spectrum storage software family. We introduced a new software-defined storage for NAS, Spectrum NAS. We introduced enhancements to our IBM cloud object storage offering, also to our Spectrum Virtualize, and several enhancements to our modern data protection suite; both Spectrum Protect and Spectrum Protect Plus were enhanced.
And lastly, from an infrastructure perspective, we announced a first real product around an NVMe storage solution over an InfiniBand fabric, and what we're going to do for the rest of the year around NVMe and how that impacts storage systems, which are, of course, a critical component in your digital data business. >> You also announced some new terms and conditions, or new ways of conceiving how you can get access to the storage capacity plans you want. Why don't you give us a little bit of insight on that. >> So one of the things we've done is we already created, a couple years ago, the Spectrum storage suite which has a whole raft of different products, file software, block software, back-up, archive software. So we added the Spectrum Protect Plus offering into that suite. We also had a back-up only suite which focuses just on modern data protection. We've put it in there and in both cases, it's at no additional fee. So if you buy the suite, you get Spectrum Protect Plus. If you buy the back-up only suite, so you're more focused on back-up only, again at no extra charge to the end user. The other thing we've done is we announced in Q4, a storage utility model. So think of it as buying storage the way you pay your power bill or your water bill or your gas bill. So it can go up and it can go down. We bill you quarterly. We added our IBM cloud object storage on-premises solution to that set of products. We had an earlier set of products built around flash we announced in Q4 of last year. Now we've added object storage as a way to consume in basically a utility offering model. >> So we talk a lot at Wikibon about the need for what we call the true private cloud approach which is basically the idea that you want the cloud experience wherever your data requires. And it sounds like IBM is actually starting to accelerate the process by which it introduces many of these features, especially in the storage unit. You've brought in more stuff underneath the Spectrum family.
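The storage utility model Eric describes, a quarterly bill that goes up and down with use like a power bill, can be sketched as a committed base plus metered overage. All the rates and capacities here are hypothetical, purely to show the shape of the model:

```python
# Hypothetical quarterly storage-utility bill: a committed base capacity
# billed at one rate, plus metered overage above it at a higher rate.

BASE_TB = 100          # committed capacity in terabytes (assumed)
BASE_RATE = 30.0       # $ per TB per quarter for the commitment (assumed)
OVERAGE_RATE = 45.0    # $ per TB per quarter beyond the commitment (assumed)

def quarterly_bill(used_tb: float) -> float:
    """Bill for one quarter given metered usage in TB."""
    overage = max(0.0, used_tb - BASE_TB)
    return BASE_TB * BASE_RATE + overage * OVERAGE_RATE

print(quarterly_bill(80))   # under the commitment: 3000.0
print(quarterly_bill(120))  # 20 TB of overage: 3900.0
```

The key property is that the customer's spend tracks metered consumption each quarter instead of a one-time capital purchase sized for peak demand.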
You're starting to introduce some of those new highly innovative technologies like NVMe over Fabric and you've also introduced an honest utility model that allows people to treat their storage capacity more like that cloud experience. Have I got that right? >> Absolutely. And we've done one other thing too. For example, as you know, from a cloud perspective everyone is moving to containers, right? Our Spectrum Connect product offers free support for Docker and Kubernetes. So if you're going to create a private cloud, and you're going to do that on your own, or even hybrid cloud where you're, you know, sloughing some of it into your public cloud provider, the bottom line is that Docker support, that container support is what you need to create the true private cloud experience that Wikibon has been talking about for the last year and a half now. >> Well, let's talk about Kubernetes and Docker and the notion of containers as it relates to storage. I want to take it in two directions. First off, tell us a little bit about how it works in kind of developer-oriented terms and then, let's talk about what that's going to mean to the ecosystem and how people are going to think about buying storage going forward. So why don't we start with how does this capability work? >> Sure. So the key thing we've done with the Spectrum Connect product is provide persistent storage capability to a container environment. As you know, containers, just like VMs in the past, can come up and come down very frequently, especially if you're in a dev-ops environment. The whole point is they can spin them up quickly and take them down quickly. The problem is they don't allow for persistent storage. So our Spectrum Connect product allows for the capability of doing persistent storage connected to a containerized environment.
>> So the way this would work is you'd still have a server, you'd still have a machine with some compute that would be responsible for spinning the containers up and down. But you'd have a storage feature that would make sure that the storage associated with that container would persist. >> Correct. >> Therefore you could continue to spin the container up and down in the server while at the same time persisting the storage over an extended period of time. >> Right. So what that means is any of our customers who have our Spectrum Accelerate software-defined storage for block, our Spectrum Virtualize software-defined storage for block, and the associated family of arrays that ship with that software embedded. Remember, for us, our software-defined storage can be sold standalone as just a piece of software or embedded in our arrays, which for example, at Spectrum Virtualize means there's hundreds and hundreds of thousands of our software-defined storage between the software-only version and the array version. So for people who have those arrays, the container support is absolutely free. So if you've already bought the product and you're on our maintenance support, you just download the Spectrum Connect, boom, you're off to the races, you deploy your containers for your private cloud environment and you've got it right there. If you're a brand new customer, you're going to buy let's say for example next week, you buy it next week. You get the Spectrum Virtualize, let's say for example on our Storwize V7000F all-flash array, cause that software comes with it. And you could go download Spectrum Connect at no fee cause you just type in that you're a customer, put in your serial number, boom! They can just download it. And we don't charge anything for that. >> And now your storage guys and your developer guys are working a little bit more closely together as opposed to being at each other's throats. >> And saying what happened to the storage? >> There you go.
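The persistent-container-storage idea in the exchange above surfaces to a developer as a Kubernetes PersistentVolumeClaim, which outlives the pods that mount it. A minimal sketch of such a manifest, shown as a Python dict; the claim and storage-class names are hypothetical examples, not identifiers from any product discussed here:

```python
# A PersistentVolumeClaim: pods come and go, but the claim (and the volume
# bound to it) persists until the claim itself is deleted.

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "dev-db-data"},          # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "50Gi"}},
        "storageClassName": "block-flash",        # hypothetical class backed by the array
    },
}

print(pvc["kind"], pvc["spec"]["resources"]["requests"]["storage"])  # PersistentVolumeClaim 50Gi
```

The storage class is the hand-off point: the cluster maps it to a volume plugin supplied by the storage vendor, so the developer asks for capacity in claim terms and never touches the array directly.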
>> Oh wait. I thought that was going to be... well no, it's not persistent. And in this case, it's persistent. They can take it up. They can take it down. They can do whatever they want. And that container product is free, so the IT guy doesn't go, "Oh, now I've got to pay more money," cause he doesn't. And then the guys on the dev-ops side and on the deployment application side are saying, oh okay, now I don't have to worry about that as an issue anymore. The IT guys took care of that for me. So you get everybody working together. You get the persistent storage that, you know, doesn't normally come with a container environment, where you get the exact opposite, storage that is not persistent. And now we've offered that. And again it's at no charge for the users so it's easy to deploy. Easy to use and there's no fee. >> And so Eric, the reason I ask questions is because it's the compounding of these little annoyances that makes it difficult for companies to accelerate their entree into digital business, and how they engage their customers differently, and so this is one of those examples where, as you said, data is the asset that distinguishes a digital business from a regular business competitor. What types of changes is this going to mean to the way the business thinks, the way the business buys, the way the business perceives storage? >> So I think the first thing is they need to realize that in a digital business, data is the oil. It is the gold, it is the silver, it's the diamonds. It is the number one entity. >> It's the value. >> It is the value of your digital business. So, you have to realize that if the underlying infrastructure goes down, guess what? Your digital business is no longer up and running. So from that perspective, you need to have your underlying foundation from a storage perspective.
In this case, think of the storage system as highly, highly available and highly, highly reliable, and it needs to be incredibly fast because now you're doing everything from a digital business. And so everything is pounding on your server and storage infrastructure. Not that it wasn't in a traditional data center, but there, if certain things needed to be slow, it was okay. But now that you've gone true private cloud with a full digital business, it can't be slow. It has to be resilient and it has to be always available. And those are things we've built in to both our storage software layer, the Spectrum family, and to all of our storage arrays. The Storwize family, our DS family, our FlashSystem family. All are highly redundant, highly available and they're all flash. >> And let me add two more things to that. Cause I think it's pertinent to the direction that IBM is taking here, because data is not exactly like oil or not exactly like diamonds, in the sense that oil and diamonds still follow the laws of scarcity. The value of data increases, and I know you've made this point, as you use it more. >> Right. >> So on the one hand, the storage has to provide the flexibility that developers can go after the same data at different times and in different ways. But still have that data be persistent, and related to that obviously is that you want to ensure that you're able to drive that throughput through the system as aggressively as possible without creating a whole bunch of administrative headaches. So if we pivot for a second to NVMe, what does that mean to introduce things like NVMe to those five things we just talked about? Especially, you know, the performance and the flexibility of having multiple applications and groups being able to go at the same data, perhaps do some snapshots and copies? >> So, couple things. From a software perspective that sits on top of all of our products, we've taken the approach of modern data protection.
It's not "let's just do an incremental back-up" like in the old days. What we do today is basically constant snapshotting, where each snapshot is a full copy. You can check those out with our Spectrum Copy Data Management, which we didn't announce anything new on today, but we announced it last year. And with that, you can have unending snapshots. The dev-ops guys can grab a real piece of data, so when they're doing their development they're not using a faux set. And a faux set often introduces more bugs, and it doesn't stay up to date. >> And so now you've got more data, so you take the snapshot. By the way, it's self-service: they can check it out themselves. Now when you look at it from the IT guy's perspective, guess what? There's a log of who's got what. So if there was a security issue, they can say, oh, Eric Herzog, you're the one that had that. It looks like that leaked out from you. Even if it was inadvertent, the point is the dev-ops guys can go in and grab from this new modern data protection paradigm that we have, and at the same time the IT guys can at least track what is going on. Then from an NVMe perspective, the key thing NVMe has is, A, all of the existing fabrics, InfiniBand fabric, Fibre Channel fabric, and Ethernet fabric, will be supported over time. We're announcing today an InfiniBand fabric solution, but for all of the arrays that you buy today, if you for example bought a FlashSystem V9000 and you wanted to do NVMe over Ethernet later in the year, it's a software upgrade only. You buy the hardware now, you're done, okay? Our A9000 flash systems, Fibre Channel connect: you buy the Fibre Channel now, you just upgrade the software a little bit later. So the key thing within an NVMe configuration is, A, the box is already highly resilient, highly available. Okay, it resists failures. It's easy to fix if there is a hardware failure, for example a failed power supply.
You know it's going to happen, okay? The smart business has an extra power supply sitting on the shelf. He pulls it out, he swaps it, then sends the failed one back to IBM. And when it's under warranty, boom, we take care of it. Okay? So that's the resiliency and the availability aspect from a physical perspective. But with NVMe, you get better performance, which means that the arrays can handle more workloads. So as you go to a truly digital business built around the private cloud that Wikibon has been talking about now for 18 months, you want more apps pounding on the same storage, if you will. And with an NVMe fabric solution, and over time NVMe in the subsystem itself, all that means more apps can work on the same set of storage. Now, do I have enough capacity? That's a separate topic. But as far as whether the array can handle the workload, with NVMe from a fabric perspective and NVMe in the storage subsystem, you can handle additional workloads on the same physical infrastructure, which saves you time, saves you money, and gives you the performance for all workloads, not just for a few niche workloads while all the other ones have to be slow. >> So Eric, you're out spending a lot of time with customers. Tell us a little about how they see their environments changing as a consequence of these and other related announcements. Are developers going to be looking at storage more as a potential source of value? How are administrators dealing with this? And give us some examples if you would. >> Sure, sure. So I think the key thing is, with things like our Copy Data Management, we've got customers right now who are able to check data out to all the test and dev guys, which they couldn't do before. They're getting work done faster with real data. So think about the bugs that come up with internal developers: just like commercial developers, like IBM or any other software company, the Microsofts, the Oracles, everybody has bugs. Well, guess what?
In-house developers have got the same bugs. But we help reduce that bug count, and we make the bugs easier to fix, 'cause they're working on a real data set and not a fake data set, right? The IT guys love it, because the dev-ops guys don't say, can you spin this up, can you spin this down? They do it on their own, right? Which accelerates their work, and the IT guys aren't bothered for it. And that one concern on security? Guess what, you've got that log saying who's got what. >> Right, right. >> Burris has this. Herzog has that. >> That's a big deal, because if something leaks out or there's a security issue, the IT guys ultimately get the call from the Chief Legal Officer, not the dev-ops guy. So this way, everybody is happy. The dev-ops guys are happy. The IT guys are happy. The IT guys can focus on more than spinning up and spinning down for the dev guys, who can build it all themselves. Our Copy Data Management and all of our storage software are API-driven: REST APIs, and integration with all of the object storage interfaces, including S3. So it's easier and easier for the IT guy to make the dev-ops guys happy and give the dev-ops guys self-service, which, as you know, is one of the key attributes of the private cloud that Wikibon keeps talking about. So we can give more through the software side. >> So I have one more question, Eric. As we think about where this announcement is most important to businesses that are trying to effect the type of transformation we're talking about, is there one specific feature, in your conversations with customers and with the channel, since you're also very, very close to the channel, that keeps popping to the top of the list as companies, as I said, try to figure out how to use data assets differently? >> Well, I think the key thing from a storage guy's perspective is, one, interfacing with all the APIs, which we've done across our whole family, okay?
Second thing is automation, automation, automation. The dev-ops guys like it. In a smaller shop, there may be only one IT guy who has to take care of the entire infrastructure. So the fact that our Spectrum Protect Plus, for example, can do VMware and Hyper-V back-up, and that it can be done by the VMware or Hyper-V guy or a general IT guy, not a storage guy or a back-up admin, matters. In the enterprise, sure, there's a back-up admin in the big enterprises, but if you're at Herzog's Bar and Grill, there is no back-up admin. So that ease of use, that simplicity, that integration with common APIs, and automating as much as possible: that's critical as people go to the digital business based on private clouds. >> Excellent. Eric Herzog, CMO and Vice President of Channels at the IBM storage group, talking about a number of things that were announced today, as businesses try to marry their storage capability and their digital business strategy more closely together. Thanks for being here. >> Great, thank you very much. >> Once again, I'm Peter Burris. This has been a Wikibon CUBE Conversation with Eric Herzog of IBM. (fast orchestral music)
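The self-service copy-data workflow Herzog describes, where dev-ops people check snapshots out themselves while IT keeps a log of who has what, can be sketched in a few lines of Python. This is a toy model, not the Spectrum Copy Data Management API; the class, method names, and snapshot IDs are invented for illustration:

```python
from datetime import datetime, timezone

class SnapshotCatalog:
    """Toy model of self-service snapshot checkout with an audit trail."""

    def __init__(self):
        self.snapshots = {}   # snapshot id -> source dataset it was taken from
        self.audit_log = []   # (user, snapshot id, timestamp) for every checkout

    def take_snapshot(self, snap_id, source):
        # A snapshot gives dev/test a real copy of production data.
        self.snapshots[snap_id] = source

    def checkout(self, user, snap_id):
        # Self-service: no ticket to IT, but every checkout is recorded.
        if snap_id not in self.snapshots:
            raise KeyError(f"no such snapshot: {snap_id}")
        self.audit_log.append((user, snap_id, datetime.now(timezone.utc)))
        return self.snapshots[snap_id]

    def who_has(self, snap_id):
        # The "log of who's got what" IT can consult on a security question.
        return [user for user, sid, _ in self.audit_log if sid == snap_id]

catalog = SnapshotCatalog()
catalog.take_snapshot("orders-feb", source="prod/orders")
catalog.checkout("herzog", "orders-feb")
catalog.checkout("burris", "orders-feb")
print(catalog.who_has("orders-feb"))  # ['herzog', 'burris']
```

The point of the sketch is the pairing Herzog calls out: checkout needs no ticket to IT, yet the audit trail still answers "who's got what" if the Chief Legal Officer ever calls.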

Published Date : Feb 20 2018

SUMMARY :

and welcome to another Wikibon CUBE Conversation. Peter, thank you very much. the degree to which storage and business And the digital business is driven by data. So for example, a lot of it was around to the storage, capacity storage plans you want. the way you buy your power bill the need for what we call the true private client approach that container support is what you need to create and the notion of containers as a dissociative storage. allows for the capability of doing persistent storage for spinning the containers up and down. in the server while at the same time persisting the storage for block, and the associated family of arrays as opposed to being at each others' throats. You get the persistent storage that is not, you know, And so Eric, the reason I ask questions is because that in a digital business, data is the oil. the Spectrum family and to all of our storage arrays. oil and diamonds still follow the laws of scarcity. So on the one hand, the storage has to provide And with that, you can have unending snapshots. in the sub system itself, all that gives you more apps And give us some examples if you would. So the amount of bugs that come up with internal developers Burris has this. So it's easier and easier for the IT guy that keeps popping to the top of the list of things So the fact that our Spectrum Protect Plus for example that were announced today as businesses try to marry with Eric Herzog of IBM.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Eric | PERSON | 0.99+
Eric Herzog | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Peter Burris | PERSON | 0.99+
Microsofts | ORGANIZATION | 0.99+
Peter | PERSON | 0.99+
Oracles | ORGANIZATION | 0.99+
20 February 2018 | DATE | 0.99+
next week | DATE | 0.99+
A9000 | COMMERCIAL_ITEM | 0.99+
first | QUANTITY | 0.99+
last year | DATE | 0.99+
18 months | QUANTITY | 0.99+
five things | QUANTITY | 0.99+
both cases | QUANTITY | 0.99+
First | QUANTITY | 0.99+
two directions | QUANTITY | 0.99+
one | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Wikibon | ORGANIZATION | 0.99+
today | DATE | 0.99+
both | QUANTITY | 0.98+
Spectrum Virtualized | TITLE | 0.98+
last year and half | DATE | 0.98+
Today | DATE | 0.98+
first thing | QUANTITY | 0.97+
Herzog's Bar and Grill | ORGANIZATION | 0.96+
Spectrum Connect | TITLE | 0.96+
two more things | QUANTITY | 0.95+
one more question | QUANTITY | 0.94+
V9000 | COMMERCIAL_ITEM | 0.93+
VMware hyper V | TITLE | 0.93+
Herzog | PERSON | 0.93+
Spectrum Protect Plus | COMMERCIAL_ITEM | 0.93+
hundreds of thousands | QUANTITY | 0.93+
Spectrum Tech Plus | COMMERCIAL_ITEM | 0.93+
Spectrum Protect | COMMERCIAL_ITEM | 0.91+
Q4 | DATE | 0.9+
one specific feature | QUANTITY | 0.89+
Spectrum Protect Plus | TITLE | 0.88+
Spectrum Virtualize | COMMERCIAL_ITEM | 0.86+
InfiniBand | ORGANIZATION | 0.86+
Spectrum | TITLE | 0.85+
earlier today | DATE | 0.84+
S3 | TITLE | 0.82+
Second thing | QUANTITY | 0.81+
hundreds and | QUANTITY | 0.81+
Storewize V7000 F | COMMERCIAL_ITEM | 0.8+
Q4 of | DATE | 0.76+
Spectrum Accelerate | TITLE | 0.76+
Spectrum | COMMERCIAL_ITEM | 0.76+
Spectrum Protect | TITLE | 0.74+
Connect | COMMERCIAL_ITEM | 0.74+
a couple years ago | DATE | 0.74+
Spectrum | ORGANIZATION | 0.73+
DS | COMMERCIAL_ITEM | 0.73+
Chief Legal Officer | PERSON | 0.71+
couple things | QUANTITY | 0.7+
Vice | PERSON | 0.68+

Eric Herzog, IBM | Cisco Live EU 2018


 

>> Announcer: Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2018. Brought to you by Cisco, Veeam, and theCUBE's ecosystem partners. >> Hello everyone, and welcome back. This is theCUBE, live here in Barcelona for Cisco Live Europe. I'm John Furrier, the co-host of theCUBE, with Stu Miniman, analyst at Wikibon, covering networking, storage, and all cloud infrastructure. Stu Miniman, Stu. Our next guest is Eric Herzog, who's the Chief Marketing Officer at IBM Storage Systems. Eric, CUBE alumni, he's been on so many times I can't even count. You get the special VIP badge. We're here breaking down all the top stories at Cisco Live in Europe, kicking off 2018. Although it's the European show, not the big show, it's certainly kicking off the year with a lot of concepts that aren't necessarily new, but are innovative. Eric, welcome to theCUBE again. >> Well, thank you. We always love participating in theCUBE. IBM is a strong supporter of theCUBE and all the things you do for us, so thank you very much for having us again. >> A lot of great thought leadership from IBM; we really appreciate you guys' support over the years. But now we're in a sea change. IBM had their first quarter of great results, and that will be well-reported on SiliconANGLE, but the sea change is happening. You've been living this generation, and you've seen a couple of cycles in the past. Cisco is putting forth a vision of the future, which is pretty right on. They were right on Internet of Things ten years ago, they had it all right, but they're a networking company that's transformed up the stack over the years. Now they're on the front lines of no perimeter, more security challenges, and the big cloud whales with their own networking and storage. You're in the middle of it. Break it down. Why is Cisco Live more important now than ever before?
>> Well, for us it's very important because one, we have a strategic relationship with Cisco, the Storage Division does a product with Cisco called the VersaStack, converged infrastructure, and in fact one of our key constituents for the VersaStack are MSPs and CSPs, which is a key constituent of Cisco, especially with their emphasis on the cloud. Second thing for us is IBM storage has gone heavily cloud. So going heavily cloud with our software, in addition to what we do with our solutions as a foundation for CSPs and MSPs. Just what we've integrated into our software-defined storage for cloud makes Cisco Live an ideal venue for us, and Cisco an ideal partner. >> So I've got to ask you, we've had conversations on theCUBE before, they're all on youtube.com/siliconangle, just search Eric Herzog, you'll find them. But I want to recycle this one point and get your comments and reaction here in Barcelona. You guys have transformed with software at IBM big-time with storage. Okay, you're positioned well for the cloud. What's the most important thing that companies have to do, like IBM and Cisco, to play an innovator role in the cloud game as we have software at the center of the value proposition? >> Well I think the key thing is, when you look at cloud infrastructure, first of all, the cloud's got to run on something. So you need some sort of structural, infrastructure foundation. Servers, networking, and compute. So at IBM and with Cisco, we're positioning ourselves as the ideal rock-solid foundation for the cloud building, if you will. So that's item number one. Item number two, our software in particular can survive, not only on premises, but can bridge and go from on-premise to a public cloud, creating a hybrid infrastructure, and that allows us to also run cloud instantiation. 
Several of our products are available from IBM Cloud Division, Amazon offers some of the IBM storage software, over three hundred cloud service providers, smaller ones, offer IBM Spectrum Protect as a back-up service. So we've already morphed into storage software, either A, bridging the cloud in a hybrid config, or being used by cloud providers as some of their storage offerings for end-users and businesses. >> Eric, wanted to get to, one of the partnership areas that you've talked about with Cisco is VersaStack. We've talked with you a number of times about converged infrastructure, that partnership, Cisco UCS taking all the virtualization. The buzz in the market, there's a lot of discussion, oh it's hyper-converged, it's cloud. Why is converged infrastructure still relevant today? >> Well, when you look at the analysts that track the numbers, you can see that the overall converged market is growing and hyper-converged is viewed as a subset. When you look at those numbers, this year close to 17 billion US, about 75% of it is still standard converged versus hyper-converged. One of the other differences, it's the right tool for the right job. So customers need to go in eyes open. So when you do a hyper-converged infrastructure, by the way IBM offers a hyper-converged infrastructure currently with Nutanix, so we actually have both, the Nutanix partnership offering hyper-converged and a partnership with Cisco on standard converged. It's really, how do you size the right tool for the right job? And one of the negatives of hyper-converged, very easy to deploy, that's great, but one of the negatives is every time you need more storage, you have to add more server. Every time you need more server, you add more storage. With this traditional converged infrastructure, you can add servers only, or networking only, or storage only. So I think when you're in certain configurations, workloads, and applications, hyper-converged is the right solution, IBM's got a solution. 
In other situations, particularly for your middle-sized and bigger apps, regular converged is better, 'cause you can basically size up or down compute, networking, and storage independently of each other, whereas in hyper-converged you have to do it all at the same time. And that's a negative, where you're either over-buying your storage when you don't need it, or over-buying your compute when you don't need it. With standard converged, you don't have that issue: you buy what you need when you need it. But I think most big companies, for sure, have certain workloads that are best with hyper-converged, and we've got that, and other workloads that are best with converged, and we have that as well. >> Okay, the other big growth area in storage for the last bunch of years has been flash. IBM's got a strong position in all-flash arrays. What's new there, how are some of the technologies changing? Any impact on the network that we should be really understanding at this show? >> Sure, so a couple of things. First of all, we just brought out some very high-density all-flash arrays in Q4. We can put 220 terabytes in two rack units, which is a building block that we use in several different of our all-flash configurations, including our all-flash VersaStack. The other thing we do is we embed software-defined storage actually on our physical all-flash arrays. Most companies don't do that, so they've got an all-flash offering, and if they have a software-defined offering it's actually a different piece of software. For us it's the same, so it's easier to deploy, it's easier to train, it's easier to license, and it's easier for a reseller to sell if you happen to be using a reseller. And the other thing is it's battle-hardened, because it's not only standalone software, but it's actually on the arrays as well.
So from a test and quality perspective, that's an advantage versus other vendors that have certain software that goes on their all-flash array and then a different set of software for everything software-defined. It doesn't make logical sense when you can cover it with one thing. So that's an important difference for us, and a big innovation. I think the last thing you're going to see that does impact networking is the rise of NVMe over fabrics. IBM did a statement of direction last May outlining what we're doing. We did a public demonstration of an InfiniBand fabric at the AI Summit in New York in December, and we will be having an announcement around NVMe fabrics on the 20th of February. So stay tuned to hear from us then. We'll be launching some more NVMe-over-fabric infrastructure at that time. >> Eric, for people that have been watching, there's been a lot of discussion about NVMe for a number of years, and NVMe over fabric more recently. How big a deal is this for the industry? You've seen many of these waves. Is this transformational, or is it, you know... every storage company I talk to is working on this, so how's it going to be differentiated? What should users be looking for, who do they partner with, how do they choose that solution, and when's it going to be ready? >> So first of all, I view it as an evolution, okay? If you take storage in general, arrays: you know, we used to do punch cards. I'm old enough that I remember using punch cards at the University of California. Then it all went to tape. And if you look at old Schwarzenegger movies from the 80s, I love Schwarzenegger spy movies, what's there? IBM systems with big IBM tape, and not for back-up, for primary storage. Then in the late-80s, early-90s, IBM and a few other vendors came out with hard drive-based arrays that got hooked up to mainframes, and then obviously to minis and the rise of the LAN. Those have given way to all-flash arrays.
From a connectivity perspective, you've had SCSI, you had ultra SCSI, you had ultra fast SCSI, ultra fast wide SCSI. Then you had Fibre Channel. So now, the infrastructure both in an array, as the connectivity between the storage and the CPUs used in an array system, will be NVMe, and then you're going to have NVMe running over fabrics. So I view this as an evolution, right? >> John: What's the driver, performance or flexibility? >> A little bit of both. From the in-box perspective, inside of an array solution, the major chip manufacturers are putting in NVMe to increase the speed from storage going into the CPUs. So that will benefit the performance to the end-user for applications, workloads, and use cases. Then Intel, along with the whole industry, IBM is a member of the NVMe consortium as well, has pushed using the NVMe protocol over fabrics, which gives some added performance over fabric networks as well. So you've got it, but again, I view this as evolution, because tape was faster than punch cards, hard drive arrays were faster than tape, flash arrays are faster still, and now you're going to have NVMe in the flash array, and also NVMe over fabric connecting the all-flash arrays. >> So I have to ask you the real question that's on everyone's mind out there, because storage is one of those areas that you never see stopping. There are always venture-backed start-ups, you see new hot start-ups coming out of the woodwork, and there have been some failures lately, and some blame NVMe's innovation for kind of killing some start-ups; I won't name names. But the real issue is the lines that were once blurred are now forming, and there's the wrong side of history and the right side of history. So I've got to ask you, what's going to be the right side of history in the storage architecture that people need to get onto to win in the future? >> So, there's a couple of key points.
One, all storage infrastructure and storage software needs to interface with cloud infrastructure. It's got to be hybrid. If you have a software play like we do, where the software, such as our Spectrum Scale or our Spectrum Protect or Spectrum Protect Plus, can exist as a cloud service through a service provider, that's where you want to be. You don't want to have just a standard array, and that's all you sell. So you want to have an array business, you want to make sure that's highly performant, and you want to make sure that's positioned as the infrastructure underneath clouds, which means not only very fast, but also incredibly resilient. And that includes both cloud configs and AI. If you're going to do real-time AI, if you're going to do dark trading on Wall Street using AI instead of human beings, A, if the storage isn't really fast, you're going to miss a 10 million dollar, hundred million dollar transaction. Second thing, if it's not resilient and always available, you're really in trouble. And god forbid when they bring AI to healthcare, and I mean AI in the operating room; boy, if that storage fails when I'm on the table, wow. That's not going to be good. So those are the things you've got to integrate with in the future: AI and cloud, whether it's software-defined in the array space, or, if you're like IBM, in both markets. >> John: Performance and resilience. >> Performance and resiliency is critical. >> All right, so Eric, I have a non-storage question for you. >> Eric: Absolutely. >> So you've got the CMO hat for a division of IBM. You've been CMO of a start-up, and you've been in this industry for a while. What's the changing role of the CMO in today's digital world? >> So I think the key thing is that digital is a critical element of the overall marketing mix, and everything needs to reinforce everything else. So let's take an example. One of the large storage websites and magazines recently announced that IBM is a finalist for four product-of-the-year awards.
Two for all-flash arrays and two for software-defined storage. So guess what we've done? We've amplified it over LinkedIn, over IBM's Facebook, through our Twitter handle; we leverage that. We use it at trade shows. So digital is, A, the first foray, right? People look at your website and at what you're doing socially before they even decide, should I really call them up, or should I really go to their booth at a trade show? >> So discovery and learning is happening online. >> Discovery and learning, but even progression. I just happened to tweet and LinkedIn this morning: Clarinet, a large European cloud MSP and CSP, just selected IBM all-flash arrays, IBM Spectrum Protect, and IBM Spectrum Virtualize for their cloud infrastructure. And obviously their target, they sell to end-users and companies, right? But the key thing is we tweeted it, we linked it in, we're going to use it here at the show, and we're going to use it in PR efforts. So digital is a critical element of the marketing mix; it's not a fad. It also can be a lead dog. So if you're going to a trade show, you should tweet about it and link it in, just the way you guys do. We all knew you were coming to this show, we know you're going to IBM Think, we know you're going to VM World and Oracle, all these great shows. How do we find out? We follow you on social media and in the digital marketplace, so it's critical. >> And video, video a big role in - >> Video is critical. We use your videos all the time, obviously. I always tweet them and link them in once they're posted. >> Clip and stick is the new buzzword. Clip 'em and stick 'em. Our new clipper tool, you've seen that. >> (laughs) Yes, I have. So it's really critical. And remember, I'm like one of the oldest guys in the storage business: I'm 60 years old, I've been doing this 32 years, seven start-ups, EMC, IBM twice, Maxtor, Seagate, so I've done big and small. This is a sea change transformation in marketing.
The key thing is you can't have it stand on its own; you integrate everything. PR, analyst relations, digital in everything you do, digital with shows, and how you integrate the whole buyer's journey, and put it all together. And people are using digital more and more; in fact, I saw a survey from a biz school that 75% of people are looking at you digitally before they ever even call you up, or call one of your resellers if you use the channel, to talk about your products. That's a sea change. >> You guys do a great job with content marketing; hats off to you guys. All right, final question for you. Take a minute to just quickly explain the relationship that IBM has with Cisco and the importance of it, specifically what you guys are doing with them, how you guys go to market to customers, and what's the impact to the customer. >> So, first of all, we have a very broad relationship with Cisco. Obviously I'm the CMO of the Storage Division, so I focus on storage, but several other divisions of IBM have powerful relationships: the IoT group, the Collaboration group. Cisco's one of our valued partners. We don't have networking products, so our Global Technology Services Division is one of the largest resellers of Cisco in the world, whether it be networking, servers, converged, what-have-you, so it's a strong, powerful relationship. From an end-user perspective, the importance is they know that the two companies are working together hand-in-glove. Sometimes you have two companies where you buy solutions from A and B, and A and B don't even talk to each other; yes, they both go to the PlugFest or the compatibility lab, but they don't really work together, and their technology doesn't work together. IBM and Cisco have gone well beyond that to make sure that we work closely together in all of the divisions, including the Storage Division, with our Cisco-validated designs.
And then lastly, whether it's delivered through the direct sales model or through the valued business partners that IBM and Cisco share, it's critical that the end-user and the partners know they're getting something that works together, and not just something that checks the "it works" box. It's tightly honed and finely integrated, whether it be storage or the IoT Division or the Collaboration Division, and Cisco is a heavy proponent of the IBM Security Division. >> Product teams work together? >> Yeah, all the product teams work together and trade APIs back and forth, not just doing a compatibility test. Everybody does that, but we go well beyond that with IBM and Cisco together. >> And it's a key relationship for you guys? >> A key relationship for the Storage Division, as well as for many of the other divisions of IBM; it's a critical relationship with Cisco. >> All right, Eric Herzog, Chief Marketing Officer for the Storage Systems group at IBM. It's theCUBE, live coverage in Barcelona. I'm John Furrier with Stu Miniman; back with more from Barcelona, Cisco Live Europe, after this short break. (upbeat techno music)
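Herzog's sizing argument from earlier in this interview (hyper-converged adds compute and storage in lockstep, while converged grows each independently) reduces to simple arithmetic. A sketch with made-up node and shelf sizes, not actual IBM, Cisco, or Nutanix specifications:

```python
from math import ceil

def hci_nodes(tb_needed, cores_needed, tb_per_node=20, cores_per_node=32):
    """Hyper-converged: every node ships compute and storage together,
    so whichever dimension needs more nodes dictates the purchase."""
    return max(ceil(tb_needed / tb_per_node), ceil(cores_needed / cores_per_node))

def converged(tb_needed, cores_needed, tb_per_shelf=100, cores_per_server=32):
    """Converged: storage shelves and servers are sized independently."""
    return ceil(tb_needed / tb_per_shelf), ceil(cores_needed / cores_per_server)

# A storage-heavy workload: 400 TB of capacity but only 64 cores of compute.
nodes = hci_nodes(400, 64)             # 20 nodes, i.e. 640 cores bought for 64 needed
shelves, servers = converged(400, 64)  # 4 storage shelves plus 2 servers
print(nodes, shelves, servers)  # 20 4 2
```

The over-buying Herzog warns about falls out of the `max()`: once one dimension dominates, the other is purchased anyway, which is why a storage-heavy (or compute-heavy) workload tends to favor the converged model.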

Published Date : Jan 30 2018

SUMMARY :

Brought to you by Cisco, Veeam, I'm John Furrier, the co-host of theCUBE, and all the things you do for us, You're in the middle of it. for the VersaStack are MSPs and CSPs, What's the most important thing for the cloud building, if you will. The buzz in the market, there's a lot of discussion, And one of the negatives of hyper-converged, Any impact on the network that we should be but it's actually on the arrays as well. Is this transformational or is it, you know, and the CPUs used in an array system, will be NVMe, So from the in-box perspective, and the right side of history. and the infrastructure underneath clouds, What's the changing role of the CMO So digital is A, the first foray, right? just the way you guys do. We use your videos all the time, obviously. Clip and stick is the new buzzword. and remember, I'm like one of the oldest guys and the importance of it, and doesn't just have the works option. Yeah, all the product teams work together, Key relationship for the Storage Division, for the Storage Systems group at IBM.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
Cisco | ORGANIZATION | 0.99+
Eric | PERSON | 0.99+
Eric Herzog | PERSON | 0.99+
Stu Miniman | PERSON | 0.99+
John | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
John Furrier | PERSON | 0.99+
Two | QUANTITY | 0.99+
Stu | PERSON | 0.99+
two companies | QUANTITY | 0.99+
Barcelona | LOCATION | 0.99+
Nutanix | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
75% | QUANTITY | 0.99+
December | DATE | 0.99+
220 terabytes | QUANTITY | 0.99+
CUBE | ORGANIZATION | 0.99+
Veeam | ORGANIZATION | 0.99+
New York | LOCATION | 0.99+
32 years | QUANTITY | 0.99+
One | QUANTITY | 0.99+
10 million dollar | QUANTITY | 0.99+
Global Technology Services Division | ORGANIZATION | 0.99+
LinkedIn | ORGANIZATION | 0.99+
Clarinet | ORGANIZATION | 0.99+
Schwarzenegger | PERSON | 0.99+
theCUBE | ORGANIZATION | 0.99+
Barcelona, Spain | LOCATION | 0.99+
Europe | LOCATION | 0.99+
20th of February | DATE | 0.99+
VM World | ORGANIZATION | 0.99+
EMC | ORGANIZATION | 0.99+
IBM Storage Systems | ORGANIZATION | 0.99+
Seagate | ORGANIZATION | 0.98+
two rack | QUANTITY | 0.98+
one | QUANTITY | 0.98+

Chandra Mukhyala, IBM - DataWorks Summit Europe 2017 - #DW17 - #theCUBE


 

>> Narrator: theCUBE covering DataWorks Summit Europe 2017. Brought to you by Hortonworks. >> Welcome back to the DataWorks Summit in Munich everybody. This is theCUBE, the leader in live tech coverage. Chandra Mukhyala is here. He's the offering manager for IBM Storage. Chandra, good to see you. It always comes back to storage. >> It does, it's the foundation. >> We're here at a data show, and you've got to put the data somewhere. How's the show going? What are you guys doing here? >> The show's going good. We have lots of participation. I didn't expect this big a crowd, but there is a good crowd. Storage, people don't look at it as the most sexy thing, but I still see a lot of people coming and asking "What do you have to do with Hadoop?" kind of questions, which is exactly the kind of question I expect. So, going good, we're able to-- >> It's interesting, in the early days of Hadoop and big data, I remember we interviewed, John and I interviewed Jeff Hammerbacher, founder of Cloudera, and he was at Facebook and he said, "My whole goal at Facebook when we were working with Hadoop was to eliminate the storage container, the expensive storage container." They succeeded, but now you see guys like you coming in and saying, "Hey, we have better storage." Why does the world need anything different than HDFS? >> This has been happening for the last two decades, right? In storage, every few years a startup comes along, they address one problem very well, and they create a whole storage solution around that. Everybody understands the benefit of it, and that becomes part of mainstream storage. When I say mainstream storage: these new point solutions address one problem, but what about all the rest of the features storage has been developing for decades? The same thing happened with other solutions, for example deduplication. Very popular at one point, right, dedupe appliances. Nowadays, every storage solution has dedupe in it. I think it's the same thing with HDFS, right?
HDFS is purpose-built for Hadoop. It solves that problem in terms of giving you local-access storage, scalable storage, parallel storage. But it's missing many things, you know. One of the biggest problems with HDFS is that it's siloed storage, meaning the data in HDFS is only available to Hadoop. What about the rest of the applications in the organization, which may need it through traditional protocols like NFS or SMB, or through newer applications with S3 or Swift interfaces? You don't want that siloed storage. That's one of the biggest problems we have. >> So, you're putting forth a vision of some kind of horizontal infrastructure that can be leveraged across your application portfolio... >> Chandra: Yes. >> How common is that? And what's the value of that? >> It's not really common; that's one of the messages we're trying to get out. I've been talking to a lot of data scientists in the last year. One of the first things they do when they're implementing a Hadoop project is copy a lot of data into HDFS, because Hadoop expects its data in HDFS; it can't just run on any storage as-is. That copy process takes days. >> Dave: That's a big move, yeah. >> It not only wastes a data scientist's time, it also makes the data stale. I tell them they don't have to do that if the data is on something like IBM Spectrum Scale. You can run Hadoop straight off that, so why even copy it into HDFS? You can use the same existing MapReduce applications with zero changes, point them at Spectrum Scale, and they can still use the HDFS API. You don't have to copy the data. And every data scientist I talk to is like, "Really? I don't know how to do this, I'm wasting time?" Yes. So it's not very well known. Most people think there's only one way to do Hadoop applications, and that's on HDFS. You don't have to.
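The no-copy workflow described above can be sketched in a few lines. This is an illustrative stand-in only: a temporary local directory plays the role of a POSIX-mounted Spectrum Scale fileset, and the real HDFS-transparency connector is not involved. The point is simply that when the analytics job and the legacy application see the same file system, the "copy into HDFS" staging step disappears.

```python
import os
import tempfile
from collections import Counter

# Stand-in for a POSIX mount point of a shared parallel file system
# (hypothetical; a real Spectrum Scale fileset would be an actual mount).
mount = tempfile.mkdtemp(prefix="scale_mount_")

# A "legacy" application writes data through ordinary POSIX calls...
log_path = os.path.join(mount, "events.log")
with open(log_path, "w") as f:
    f.write("login ok\nlogin fail\nlogin ok\n")

# ...and an analytics job reads the very same file in place.
# No staging copy into a separate HDFS silo, so the data is never stale.
def word_count(path):
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts.update(line.split())
    return counts

counts = word_count(log_path)
print(dict(counts))
```

With a shared file system, the "days of copying" Chandra mentions collapse to zero, because both consumers address the same bytes.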
The advantages there are, one, you don't have to copy; you can share the data with the rest of the applications, and there's no more stale data. But there's also one other big difference, between the HDFS type of storage and shared storage. In share-nothing, which is what HDFS is, the way you scale is by adding new nodes, which adds both compute and storage. What about applications which don't necessarily need more compute, where all they need is more throughput? You're wasting compute resources, right? So there are certain applications where shared storage is a better architecture. Now, the solution IBM has will allow you to deploy it either way, share-nothing or shared storage, but that's one of the main reasons people, data scientists especially, want to look at these alternative solutions for storage. >> So when I go back to my Hammerbacher example, it worked for the Facebook of the early days because they didn't have a bunch of legacy data hanging around; they could start with, pretty much, a blank piece of paper. >> Yes. >> Re-architect, plus they had such scale, they probably said, "Okay, we don't want to go to EMC or NetApp or IBM, or whomever, and buy storage, we want to use commodity components." Not every enterprise can do that, is what you're saying. >> Yes, exactly. It's probably okay for somebody like a very large search engine, when all they're doing is analytics, nothing else. But if you go to any large commercial enterprise, the whole point around analytics is that they want to pool all of the data and look at it, to find the correlations, right? It's not about analyzing one small dataset from one business function. It's about pooling everything together and seeing what insights I can get out of it. So that's one of the reasons it's very important to support access to the data from your legacy enterprise applications, too, right?
Yeah, so NFS and SMB are pretty important, and so are S3 and Swift, but also, for these analytics applications, one of the advantages of the IBM solution here is that we provide local access to the file system. Not just through NAS protocols like NFS, we do that, but we also have POSIX access, so applications get local access to the file system. With HDFS you have to first copy the file into HDFS, and you have to bring it back out to do anything else with it. All those copy operations go away. And this is important in the enterprise, again, not just for data sharing but also to get local access. >> You're saying your system is Hadoop ready. >> Chandra: It is. >> Okay. And then, the other thing you hear a lot, from IT practitioners anyway, not so much from the lines of business, is that when people spin up these Hadoop projects, big data projects, they go outside of the edicts of the organization in terms of governance and compliance, and often, security. How do you solve, do you solve that problem? >> Yeah, that's one of the reasons to consider, again, enterprise storage, right? It's not just that you're able to share the data with the rest of the applications; there's also a whole bunch of data management features, including data governance features. You can talk about encryption there, you can talk about auditing there, you can talk about features like WORM, right, write once, read many, so data, especially archival data, once you write it you can't modify it. There are a whole bunch of features around data retention and data governance; those are all part of the data management stack we have. You get that for free. You not only get universal access, unified access, but you also get data governance. >> So is this one of the situations where, on the face of it, when you look at the CapEx, you say, "Oh, wow, I can use commodity components, save a bunch of money." You know, you remember the client server days.
"Oh, wow, cheap, cheap, cheap, microprocessor-based solution," and then all of a sudden people realize we have to manage this. Have we seen a similar sort of trend with Hadoop, where the complexity of managing all of this infrastructure is so high that it actually drives costs up? >> Actually, there are two parts to it, right? There is actually value in utilizing commodity hardware, industry standards. That does reduce your costs, right? If you can just buy a standard x86 server as a storage server and utilize that, why not? But that is just the CapEx. The real value in any kind of storage data management solution is in the software stack. You can reduce CapEx by using industry standards. It's a good thing to do and we should, and we support that, but in the end, the data management is there in the software stack. What I'm saying is, HDFS solves one problem while dismissing the whole set of data management problems we just touched on. And that all comes in software, which runs on industry-standard servers. >> Well, and you know, it's funny, I've been saying for years that if you peel back the onion on any storage device, the vast majority anyway, they're all based on standard components. It's the software that you're paying for. So it's sort of artificial in that a company like IBM will say, "Okay, we've got all this value in here, but it's on top of commodity components, we're going to charge for the value." >> Right. >> And so if you strip that out, sure, you do it yourself. >> Yeah, exactly. And it's all standard servers. It's been like that always. Now, one difference is, ten years ago people used proprietary array controllers. Now all of the functionality is coming into software-- >> ASICs. >> Right. >> Yeah, 3PAR still has an ASIC, but most don't. >> Right, that's funny, they only come in like... Almost everybody has some kind of software-based coding now, and they're able to utilize standard servers.
Now, there is still an advantage in appliances, moreover, because yes, it can run on industry-standard hardware, but this is storage, and storage is the foundation of all of your infrastructure. You want RAS, right, you want reliability and availability. The only way to get that is a fully integrated, tight solution, where you're doing a lot of testing on the software and the hardware. Yes, it's supposed to work, but what really happens when it fails, how does the system react? And that's where I think there is still value in integrated systems. If you're a large customer, you have a lot of storage-savvy administrators, and they know how to build solutions and validate them; yes, software-based storage is the right answer for you. >> And you're the offering manager for Spectrum Scale, which is the file offering, right, that's right? >> Yes, right, yes. >> And it includes object as well, or-- >> Spectrum Scale is a file and object storage platform. It supports file protocols. It also supports object protocols. The thing about object storage is it means different things to different people. To some people, it's the object interface. >> Yeah, to me it means get put. >> Yeah, if that's what the definition is, then it is object storage. The fact is that everybody supports S3 now. But to some of the people, it's not about the protocol, because they're going to still access it through file protocols; to them, it's about the object store, which means it's a flat namespace, there's no hierarchical name structure, and you can get into billions of files without having any scalability issues. That's an object store. But to some other people it's neither of those; it's about erasure coding, which object storage uses, so it's cheap storage. It allows you to run on standard storage servers, and you get cheap storage. So it's three different things. If you're talking about protocols, yes, but Spectrum Scale, by that definition, is object storage also.
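The "flat namespace" definition of object storage can be made concrete with a toy model. This is not an IBM Cloud Object Storage or S3 client; it is a hypothetical in-memory sketch showing that an object key is one opaque string looked up in a single flat map, so slashes in the key are not directories and retrieval does not involve walking a hierarchical tree.

```python
# A toy flat-namespace object store: one dict, keyed by full object name.
# Illustrative only -- not an IBM Cloud Object Storage or S3 client.
class FlatObjectStore:
    def __init__(self):
        self._objects = {}  # key -> bytes; a single flat namespace

    def put(self, key, data):
        self._objects[key] = data  # no mkdir, no directory tree to maintain

    def get(self, key):
        return self._objects[key]  # one lookup, regardless of key "depth"

store = FlatObjectStore()
# Slashes are just characters in the key, not directories:
store.put("scans/2017/04/xray-001.dcm", b"immutable image bytes")
print(store.get("scans/2017/04/xray-001.dcm"))
```

That flatness is why an object store scales to billions of objects without the traversal cost Chandra attributes to deep file system trees.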
>> So in thinking about, well, let's start with Spectrum Scale generally. But specifically, your angle in big data and Hadoop, and we talked about that a little bit, but what are you guys doing here, what are you showing, what's your partnership with Hortonworks? Maybe talk about that a little bit. >> So we've been supporting what we call the Hadoop connector on Spectrum Scale for almost a year now, which allows our existing Spectrum Scale customers to run Hadoop straight on it. But if you look at the Hadoop distributions, there are two or three major ones, right? Cloudera, Hortonworks, maybe MapR. One of the first questions we get when we tell our customers they can run Hadoop on this is, "Oh, is this supported by my distribution?" So that has been a problem. What we announced is that we formed a partnership with Hortonworks, so now Hortonworks is certifying IBM Spectrum Scale. It's not new code changes, it's not new features, but it's a validation and a stamp from Hortonworks; that's in the process. The result of it is a Hortonworks-certified reference architecture, which is what we announced. We announced it about a month ago. We should be publishing it soon. Now customers can have more confidence in the joint solution. It's not just IBM saying that it's Hadoop ready, but Hortonworks backing that up. >> Okay, and your scope, correct me if I'm wrong, is sort of on-prem and hybrid, >> Chandra: Yes. >> Not cloud services. That's kind of, you might sell your technology internally, but-- >> Correct, so IBM Storage is primarily focused on on-prem storage. We do have a separate cloud division, but almost every IBM Storage product, and Spectrum Scale especially, which is what I can speak of, we treat as hybrid cloud storage. What we mean by that is we have built-in capabilities. Most of our products have a feature called transparent cloud tiering, which allows you to set a policy on when data should be automatically tiered to the cloud.
Everybody wants public, everybody wants on-prem. Obviously there are pros and cons of on-premises storage versus off-premises storage, but basically it boils down to this: if you want performance and security, you want to be on premises. But there's always some data which is better off in the cloud, and we try to automate that with our feature called transparent cloud tiering. You set a policy based on age, based on the type of data, based on the ownership. The system will automatically tier the data to the cloud, and when a user accesses that data, it comes back automatically, too. It's all transparent to the end user. So yes, we're an on-premises storage business, but our solutions are hybrid cloud storage. >> So, as somebody who knows the file business pretty well, let's talk about the business of file and sort of where it's headed. There are some mega trends and dislocations. There's obviously software defined. You guys made a big investment in software defined a year and a half, two years ago. There's cloud; Amazon with S3 sort of shook up the world. I mean, at first it was sort of small, but now it's really catching on. Object obviously fits in there. What do you see as the future of file? >> That's a great question. When it comes to data layout, there's really block, file, or object. Software defined and cloud are various ways of consuming storage. If you're a large service provider, you would probably prefer a software-based solution so you can run it on your existing servers. Depending on the organization's preferences, how concerned they are about security, and their performance needs, they will prefer to run some of the applications in the cloud. These are different ways of consuming storage. But coming back to file versus object, right? Object is perfect if you are not going to modify the data. You're done writing that data, and you're not going to change it. It just belongs in an object store, right?
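The policy-driven tiering just described can be sketched as a minimal policy engine. The tier names, the 30-day age threshold, and the recall-on-read behavior are illustrative assumptions for this sketch, not IBM's actual transparent cloud tiering implementation:

```python
import time

THIRTY_DAYS = 30 * 24 * 3600  # illustrative age threshold

class TieredFile:
    def __init__(self, name, last_access):
        self.name = name
        self.last_access = last_access
        self.tier = "on-prem"

def apply_policy(files, now):
    # Migrate data that has not been touched in 30 days to the cloud tier.
    for f in files:
        if f.tier == "on-prem" and now - f.last_access > THIRTY_DAYS:
            f.tier = "cloud"

def read(f, now):
    # Accessing cold data transparently recalls it to the on-prem tier.
    if f.tier == "cloud":
        f.tier = "on-prem"
    f.last_access = now
    return f.tier

now = time.time()
cold = TieredFile("q1-report.pdf", last_access=now - 90 * 24 * 3600)
hot = TieredFile("today.log", last_access=now)
apply_policy([cold, hot], now)
tiers_after_policy = (cold.tier, hot.tier)  # ('cloud', 'on-prem')
tier_after_read = read(cold, now)           # recalled: 'on-prem'
print(tiers_after_policy, tier_after_read)
```

The "transparent to the end user" part is the `read` path: the application never asks which tier the data is on; the recall happens as a side effect of access.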
It's more scalable storage. I say scalable because file systems are hierarchical in nature. Because it's a file system tree, you have to traverse the various subdirectory trees, and beyond a few million entries, that slows you down. But file systems have a strength: when you want to modify a file, any application which is going to edit the file, which is going to modify the file, belongs on file storage, not on object. But let's say you're dealing with medical images. You're not going to modify an x-ray once it's done. That's better suited to object storage. So file storage will always have a place. Take video editing, all this video people are producing; we do a lot of video editing. That belongs on file storage, not on object. If you care about file modifications and file performance, file is your answer, but if you're done and you just want to archive it, you know, you want scalable storage, billions of objects, then object is the answer. Now, either of these can be software-based storage or it could be an appliance. That's, again, an organization's preference. Do you want an integrated, robust, ready-made solution? Then an appliance is the answer. "Ah, no, I'm a large organization, I have a lot of storage administrators," and they can build something on their own? Then software-based is the answer. Having both models gives you a choice. >> What brought you to IBM? You used to be at NetApp. IBM's buying the Weather Company. Dell's buying EMC. What attracted you to IBM? >> Storage is the foundation which we have, but it's really about data, and it's really about making sense of it, right? And everybody's saying data is the new oil, right? And IBM is probably the only company I can think of which has the tools and the AI to make sense of all this. NetApp was great in the early 2000s. Even as a storage foundation, they have issues with scale-out, true scale-out, not just a single namespace. EMC is a pure storage company.
In the future it's all about, the reason we are here at this conference is, analyzing the data. What tools do you have to make sense of it? That's where machine learning and deep learning come in. Watson is very well known for that. IBM has the AI, and it has a lot of research going on behind that, and I think storage will make more sense here. And also, IBM is doing the right thing by investing almost a billion dollars in software-defined storage. They are one of the first companies who did not hesitate to take the software from the integrated systems, for example XIV, and make the software available as software only. We did the same thing with Storwize. We took the software off it and made it available as Spectrum Virtualize. We did not hesitate at all to take that same software and make it available, where some other vendors say, "I can't do that. I'm going to lose all my margins." We didn't hesitate. We made it available as software, because we believe that's an important need for our customers. >> So the vision of the company, cognitive, the halo effect of that business, that's the future, and it's going to bring a lot of storage action, is sort of the premise there. >> Chandra: Yes. >> Excellent. Well, Chandra, thanks very much for coming to theCUBE. It was great to have you, and good luck with attacking the big data world. >> Thank you, thanks for having me. >> You're welcome. Keep it right there everybody. We'll be back with our next guest. We're live from Munich. This is DataWorks 2017. We'll be right back. (techno music)

Published Date : Apr 5 2017


Eric Herzog | IBM Interconnect 2017


 

>> Narrator: Live, from Las Vegas, it's The Cube. Covering InterConnect 2017. Brought to you by IBM. >> Welcome back, everyone. Live here in Las Vegas, this is The Cube's coverage of IBM's InterConnect 2017. I'm John Furrier with my co-host Dave Vellante. Our next guest is Eric Herzog, Cube alumni, Vice President of Product Marketing at IBM Storage. Welcome back to The Cube. Good to see you with the shirt on. You got the IBM tag there, look at that. >> I do. Well, you know, I've worn a Hawaiian shirt now, I think, ten Cubes in a row, so I got to keep the streak going. >> So, pretty sunny here in Vegas, great weather. Storage is looking up as well. Give us the update. Obviously, this is never going away, we talk about it all the time, but now cloud, more than ever, a lot of action happening with storage, and data is a big part of it. >> Yeah, the big thing with us has been around hybrid cloud. So our software portfolio, the Spectrum family, Spectrum Virtualize, Spectrum Protect, our backup package, Spectrum Scale, our scale-out NAS, IBM Cloud Object Storage, all will move data transparently from on-premises configurations out to multiple cloud vendors, including IBM Bluemix, but also other vendors as well. That software's embedded on our array products, including our VersaStack. And just two weeks ago, at Cisco Live in Melbourne, Australia, we did an announcement with Cisco around our VersaStack for the hybrid cloud. >> So what does the hybrid cloud equation look like for you guys right now, because it is the hottest topic. It's almost like brute force, everywhere you see, it's hybrid cloud, that's what people want. How does it change the storage configurations? What do the solutions look like? What's different now than it was a year ago?
I think the key thing you've got to be able to do is to make sure the data can move transparently from an on-premises location, or a private cloud; you could have started with a private cloud config and then decided it's OK to use a public cloud with the right security protocols. So, whether you've got a private cloud moving to a public cloud provider, like Bluemix, or an on-premises configuration moving to a public cloud provider, like Bluemix, the idea is they can move that data back and forth. Now, with our Cisco announcement, Cisco, with their CloudCenter, is also providing the capability of moving applications back and forth. We move the data layer back and forth with Spectrum Virtualize or IBM's copy data management product, Spectrum Copy Data Management, and with CloudCenter, or the ECS, Enterprise Cloud Suite, from Cisco, you can move the application layer back and forth with that configuration on our VersaStacks. >> So this whole software-defined thing starts, it started when people realized, hey, we can run our data centers kind of the way the big hyper-scalers do. IBM pivoted hard toward software-defined. What's been the impact that you've seen with customers? Are they actually, I mean, there was a big branding announcement with Spectrum and everything a while back. What's been the business impact of that shift? >> Well, for us, it's been very strong. So if you look at the last couple quarters, according to the analysts that track the numbers, from a total storage perspective, we've moved into the number two position, and have been there now for the last two years. And for software-defined storage, we're the number one provider of software-defined storage in the world, and have been for the last three years in a row. So we've been continuing to grow that business on the software-defined side.
We've got scale-up block configurations, scale-out block configurations, object storage with IBM Cloud Object Storage, and scale-out NAS and file with IBM Spectrum Scale. So if you're file, block, or object, we've got you covered. And you can use either, A, our competitors' storage, we work with all our competitors' gear, or you could go with your reseller or your distributor and have them provide the raw infrastructure, the servers, the storage, flash or hard drives, and then use our software on top to create essentially your own arrays. >> So when you say competitors' gear, you're talking about what used to be known as the SAN Volume Controller, and now is Spectrum Virtualize, right? Did I get that right? >> Yes, well, we still sell the SAN Volume Controller. When you buy Spectrum Virtualize, it comes as just a piece of software. When you buy the SAN Volume Controller, as well as our FlashSystem V9000 and our Storwize V7000 and V5000, they come with Spectrum Virtualize pre-loaded on the array. So we have three ways where the array is pre-loaded: SAN Volume Controller, FlashSystem V9000, and the Storwize products, so it's pre-loaded. Or, you can buy the stand-alone software, Spectrum Virtualize, and put it on any hardware you want, either way. >> So, I know we're at an IBM conference, and IBM hates, they don't talk about the competition directly, but I have to ask the competitive questions. You've had a lot of changes in the business. Obviously, cloud's coming in in a big way. The Dell EMC merger has dislocated things, and you still see a zillion startups in storage, which is amazing to me, alright? Everybody says, oh, storage is dead, but then all this VC money is still funneling in and all this innovation. What's happening in the storage landscape from your perspective? >> Well, I think there's a couple things. So, first of all, software-defined has got its legs now.
When you look at it from a market perspective, last quarter ended up at almost 400 million, which puts it on, let's say, a 1.6 to 2 billion dollar trajectory for calendar 2017, out of a total storage software market of around 16 billion. So it's gone from nothing to roughly 2 billion out of 16 billion for all storage software of all various types, so that's hot. All-flash arrays are still hot. Last year, all-flash arrays ended up at roughly 25% of all arrays shipped. They're now at price parity, so an all-flash array is not more expensive. So you see a lot of innovation around that. You're still seeing innovation around backup, right? You've got guys trying to challenge us and our Spectrum Protect, some of these other vendors trying to challenge us, even though backup is the most mature of the storage software spaces; there are people trying to challenge that. So, I'd say storage is still a white-hot space. As you know, the overall market is flat, so it is totally a knock-down, drag-out fight. You know, the MMA and the UFC guys have got nothing on what goes on in the storage business. So, make sure you wear your flak jacket if you're a storage guy. >> Meaning, you've got to gain share to grow, right? >> Yes, and it's all about fighting it out. This Hawaiian shirt looks Hawaiian, but just so you know, this is Kevlar. Just in case there's another storage company here at the show. >> So what are the top conversations now with storage buyers? Because we saw Candy's announcement about the object store, Flex, for the cold storage. It changes the price points. It's always going to be a price-sensitive market, but they're still buying storage. What are those conversations that you're having? You mentioned moving data around; do they want to move the data around? Do they want to keep it at the edge? Is it moving the application around? What are some of those key conversations that you're involved in? >> So we've done a couple innovative things.
One of the things we've done is worked with our sales team to create what we call "the conversations." You know, I've been doing this storage gig now for 31 years. Seven start-ups, IBM twice, EMC, Maxtor and Seagate-- >> John: You're a hardened veteran. >> I'm a storage veteran, that's why this is a Kevlar Hawaiian shirt. But no CIO's a storage guy. I've never met one, in 31 years; ever, ever, ever met a storage guy. So what we have to do is elevate the conversation when you're talking to the customer, about why it's important for their cloud, why it's important for machine learning, for cognitive, for artificial intelligence. You know, think about it, I'm a Star Trek guy. I like Star Wars, too, but in Star Trek, Bones, of course, wands the body. So guess what that is? That's the edge device going through the cloud to a big, giant server farm. If that storage is not super resilient, the guy on the table might not make it. And if the storage isn't super fast, the guy on the table might not make it. And while Watson isn't there yet, Watson Health, they're getting there. So, ten years from now, I expect when I go to the doctor, it's just like in Star Trek, waving the wand, and boy, you'd better make sure the storage that that wand is talking to is highly resilient and high performing. >> Define resilient, in your terms.
They don't want to spend a lot of time thinking about it. A CIO once said to me, "I care about storage like this, "I want it to be dirt cheap, lightning fast, and rock solid." Now, the industry has done a decent job with rock solid, I would say, but up until Flash, not really that great with lightning fast, and really not that great with dirt cheap. Price has come down for the hardware, but the management has been so expensive. So, is the industry attacking that problem? And what's IBM doing? >> Yeah, so the big thing is all about automation. So when I talk about moving to the hybrid cloud, I'm talking about transparent migration, transparent movement. That's an automation play. So you want to automate as much as you can, and we've got some things that we're not willing to disclose yet that'll make our storage even more automated whether it be from a predictive analytics perspective, self-healing storage that actually will heal itself, you know, go out and grab a code load and put the new code on because it knows there's a bug in the old code, and do that transparently so the user doesn't have to do anything. It's all about automating data movement and data activity. So we've already been doing that with the Spectrum family, and that Spectrum family ships on our storage systems and on our VersaStack, but automation is the critical key in storage. >> So I wonder, does that bring up new KPIs? Like, I presume you guys dog food your own storage internally, and your own IT. >> Eric: Yes >> Are you seeing, because it used to be, OK, the light's green on the disc drive, and you know, this is our uptime or downtime, planned downtime, you know, sort of standard metrics that we've known for 30-40 years. With automation, are we seeing a new set of metrics in KPIs emerge? You know, self-healing, percentage of problems that corrected themselves, or- >> Well, and you're also seeing things like time spent. 
So if you go back to the downturn of seven, eight, and nine, IT was devastated, right? And, as you know, you've seen a lot of surveys that IT spend is basically back up to '08, OK, the pre-08 crash. When you open up that envelope, they're not hiring storage guys anymore, and usually not infrastructure guys. They're hiring guys to do devops and testdev, and do cloud-based applications, which means there's not a lot of guys to run the storage. So one of the metrics we're seeing is, how much guys do I have managing my storage, or, my infrastructure? I used to have 50, now I'm a big bank, can I do it with 25? Can I do it with 20? Can I do it with 15? And then, how much time do they spend between the networking, the storage, the facilities themselves. These data center guys have to manage all of that. So there are new metrics about, what is the workload that my actual human beings are doing? How much of that is storage versus something else? And there's way less guys doing storage as a full-time job, anyway, because what happened in the downturn? And, so automation is critical to a guy running a datacenter, whether he's a cloud guy, whether he's a small shop. And clearly in the Fortune, global 2500, those guys, where they've got in-house IT, they've cut back on the infrastructure team and the storage team, so it's all about automation. So, part of the KPIs are not just about the storage itself, such as uptime, cost per Gig, cost per transaction, the bandwidth, you know, those sorts of KPIs. But it's also about how much time do I really spend managing the storage? So if I've only got five guys, now, and I used to have 15 guys, those five guys are managing, usually, three, to four, to five times more storage than they did in 2008 and 2009. So now you've got to do it with five guys instead of 15, so there's a KPI, right there. >> So, what about cloud? 
We heard David Kinney talk today about the object store with that funny name, and then he talked about this cloud-tiering thing, and I couldn't stay; I had to get ready for theCube. How do you work with those guys? How do you sell a hybrid story together? Because cloud is eating away at the traditional infrastructure business, but it's all sort of one big, happy family, I'm sure. But how do you work with a cloud group to really drive, to make the water level higher for IBM? >> So, almost all of our products from the Spectrum family will automatically move data to the cloud, including IBM Bluemix/SoftLayer. Our on-premises systems can do it. If you buy our software only, and don't buy our storage arrays, or don't buy a Storwize, or don't buy a FlashSystem, you still can automatically move that data to the cloud, including IBM Cloud Object Storage. Our Spectrum Scale product, for example, our scale-out NAS and file system, which is very widely used in big data analytics and cognitive workloads, will automatically, by policy, tier data to IBM Cloud Object Storage. Spectrum Protect can be set up to automatically take data and back it up from on-premises to IBM Cloud Object Storage. So we've automated those processes between our software and our array family, and IBM Cloud Object Storage, and Bluemix and SoftLayer. And, by the way, in all honesty, we also work with other cloud vendors, just like they work with other storage vendors. All storage vendors can put data in Bluemix. Well, guess what, we can put data in clouds that are not Bluemix, as well. Of course, we prefer Bluemix. We all have IBM employee stock purchase, so of course we want Bluemix first, but if the customer, for whatever reason, doesn't see the light and doesn't go to Bluemix and goes with something else, then we want to make sure that customer's happy. 
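The policy-based tiering Eric attributes to Spectrum Scale typically keys off attributes like a file's last-access age. Here is a minimal sketch of such a policy decision; the threshold, pool names, and function are illustrative only and are not Spectrum Scale's actual policy language (which is an SQL-like rule syntax).

```python
SECONDS_PER_DAY = 86_400
COLD_AFTER_DAYS = 90  # illustrative threshold, not a product default

def tier_target(last_access_epoch, now_epoch):
    """Pick a storage pool: cold files migrate to object storage."""
    age_days = (now_epoch - last_access_epoch) / SECONDS_PER_DAY
    return "cloud-object-store" if age_days >= COLD_AFTER_DAYS else "flash"

now = 1_000 * SECONDS_PER_DAY                    # fixed clock for a repeatable example
print(tier_target(900 * SECONDS_PER_DAY, now))   # 100 days untouched -> cloud-object-store
print(tier_target(950 * SECONDS_PER_DAY, now))   # 50 days untouched -> flash
```

The same age-based rule shape also covers the Spectrum Protect case mentioned above, where backup data is moved to object storage on a schedule rather than by access age.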
We want to get at least some of the PO, and our Spectrum family, and our VersaStack family, and all of our array family can get that part of the PO. >> You need versatility to be on any cloud. >> Eric: We can be on any cloud. >> So my question for you is, the thing that came out of our Big Data Silicon Valley event last week was that batch and real time are coming together; Hadoop was a great example, and that's kind of become, now, a small feature of the overall data ecosystem. So that conversation you're having, that you mentioned earlier, is about real time more than anything else, more than ever. >> Well, and real time gets back to my example of Bones on Star Trek wanding you for healthcare. That is real time: he's got a phaser burn, a broken leg, a this and that, and then we know how to fix the guy. But if you don't get that from the wand, then that's not real-time analytics. >> Speaking of Star Trek, just how much data do you think the Enterprise was throwing off, just from an IoT standpoint? >> I'm sure they had about a hundred petabytes. All stored on IBM FlashSystem arrays, by the way. >> Eric, thanks for coming on. Real quick, in the next 30 seconds, just give the folks a quick update on why IBM storage is compelling now more than ever. >> I think the key thing is, most people don't realize, IBM is the number two storage company in the world, and it has been for the last several years. But I think the big thing is our embrace of the hybrid cloud, our capability of automating all these processes. When they've got fewer people doing storage and infrastructure in their shop, they need something that's automated, that works with the cloud. And that's what IBM storage does. >> All right, Eric Herzog, here, inside theCube, Vice President of Product Marketing for IBM Storage. I'm John Furrier, with Dave Vellante. More live coverage from IBM InterConnect after this short break. Stay with us. (tech music)

Published Date : Mar 21 2017
