

Eric Herzog & Sam Werner, IBM | CUBEconversation


 

(upbeat music) >> Hello everyone, and welcome to this "Cube Conversation." My name is Dave Vellante, and you know, containers, they used to be stateless and ephemeral, but they're maturing very rapidly. As cloud native workloads become more functional and go mainstream, persisting and protecting the data that lives inside of containers is becoming more important to organizations. Enterprise capabilities such as high availability, reliability, scalability and other features are now more fundamental and important, and containers are the linchpin of hybrid cloud, cross-cloud and edge strategies. Now, fusing these capabilities together across these regions in an abstraction layer that hides the underlying complexity of the infrastructure is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions, not to mention the complexities and costs of doing so? And with me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who's the Chief Marketing Officer and VP of Global Storage Channels for the IBM Storage Division, and Sam Werner, the vice president of offering management and the business line executive for IBM Storage. Guys, great to see you again. Wish we were face to face, but thanks for coming on "theCUBE." >> Great to be here. >> Thanks Dave, as always. >> All right guys, you heard my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point? >> Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually, of course, it became the mainstay. Containers are going through exactly that right now, brought in by the dev ops people, the software teams.
And now it's becoming persistent, real use. Clients want to deploy a million of them, just the way they historically have deployed a million virtual machines; now they want a million containers, or 2 million. So now it's going mainstream, and the feature functions you need once you take it out of the test, sort of play-with stage, into the real production phase really change the ball game: the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully containerized world. >> So Sam, how did we get here? I mean, containers have been around forever. You look inside Linux, right? But then they did, as Eric said, go mainstream. But it started out kind of little, experimental. As I said, they were ephemeral, you didn't really need to persist them, but that's changed very quickly. Maybe you could talk to that evolution and how we got here. >> Well, look, this is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers, and developers are constantly out running ahead. They've got to go faster, and they have to deliver new applications. Business lines need to figure out new ways to engage with their customers, and the past year has even further accelerated this need to engage with customers in new ways. So it's about being agile. Containers promise, or provide, a lot of the capabilities you need to be agile. What enterprises are discovering is that a lot of these initiatives are starting within the business lines, and they're building these applications, making these architectural decisions, and building dev ops environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them.
And they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment to do things like dev ops, but they need to bring the infrastructure teams along. And that's what we're focused on now: how do you make that agile infrastructure to support these new container worlds? >> Got it. So Eric, you guys made an announcement to directly address these issues. It's kind of a fire hose of innovation. Maybe you could take us through it, and then we can unpack that a little bit. >> Sure, so what we did is on April 27th, we announced IBM Spectrum Fusion. This is a fully container native, software defined storage technology that integrates a number of proven, battle-hardened technologies that IBM has been deploying in the enterprise for many years. That includes a global scalable file system that can span edge, core and cloud seamlessly with a single copy of the data. So no more data silos and no more 12 copies of the data, which of course drive up CapEx and OpEx. Spectrum Fusion reduces that and makes it easier to manage: it cuts the cost from a CapEx perspective and cuts the cost from an OpEx perspective. By being fully container native, it's ready to go for the container centric world and can span all types of areas. So what we've done is create a storage foundation, which is what you need at the bottom. So things like the single global namespace, single accessibility; we have local caching, so with your edge, core and cloud, regardless of where the data is, you think the data's right with you, even if it physically is not. So that allows people to work on it. We have file locking and other technologies to ensure that the data is always good.
And then of course we've imbued it with HA, disaster recovery, and the backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, makes them container native and brings them together into a single piece of software. And we'll provide that as a software defined storage technology early in 2022, and our first pass will be as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That of course means it'll come with compute, it'll come with storage, it'll come with a rack even, it'll come with networking. And because we can preload everything for the end users or for our business partners, it will also include Kubernetes, Red Hat OpenShift and Red Hat's virtualization technology, all in one simple package, all ease of use, with a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system level technology. >> So maybe you can help us understand the architecture, and maybe the prevailing ways in which people approach container storage. What does the stack look like? And how have you guys approached it? >> Yeah, that's a great question. Really, there are three layers that we look at when we talk about container native storage. It starts with the storage foundation, which is the layer that actually lays the data out onto media, does it in an efficient way, and makes that data available where it's needed. So that's the core of it. And the quality of your storage services above that depends on the quality of the foundation that you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take these for granted, I think, as they move to containers. We're talking about moving mission critical applications now into a container and hybrid cloud world.
How do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site or four-site replication of their data with HyperSwap, and they can ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talk about how only 20% of applications have really moved into a hybrid cloud world. The thing that's inhibiting the other 80% is these types of challenges, okay? So the storage services include HA, DR, data protection, data governance, data discovery. You talked about how making multiple copies of data creates complexity; it also creates risk and security exposures. If you have multiple copies of data, if you needed data to be available in the cloud, you're making a copy there. How do you keep track of that? How do you destroy the copy when you're done with it? How do you keep track of governance and GDPR, right? So if I have to delete data about a person, how do I delete it everywhere? So there are a lot of these different challenges. These are the storage services. So we talk about a storage services layer. So layer one, data foundation; layer two, storage services; and then there needs to be a connection into the application runtime. There has to be application awareness to do things like high availability and application consistent backup and recovery. So then you have to create the connection. And so in our case, we're focused on OpenShift, right? When we talk about Kubernetes, how do you create the knowledge between layer two, the storage services, and layer three of the application services? >> And so this is your three layer cake. And then as far as the policies that I want to inject, you've got an API out and entries in, and I can use whatever policy engine I want. How does that work? >> So we're creating consistent sets of APIs to bring those storage services up into the application runtime.
We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within IBM Cloud Satellite. We're also working with Red Hat on their Advanced Cluster Management, also known as RHACM, to create multi-cluster management of your Kubernetes environment and give that consistent experience. Again, one common set of APIs. >> So the appliance comes first? Is that right? Okay, so is that just time to market, or is there a sort of enduring demand for appliances? Some customers, you know, they want that. Maybe you could explain that strategy. >> Yeah, so first let me take it back a second and look at our existing portfolio. Our award-winning products are both software defined and system-based. So for example, Spectrum Virtualize comes on our FlashSystem; Spectrum Scale comes on our Elastic Storage System. And we've had this model where we provide the exact same software, both on an array and as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager, if you will, they won't try to sell you that as software defined storage. And of course, many of them don't offer software defined storage in any way, shape or form. So we've done both. So with Spectrum Fusion, we'll have a hyperconverged configuration, which will be available in Q3, and we'll have a software defined configuration, which will be available at the very beginning of 2022. We wanted to get ahead of this market: based on feedback from our clients and feedback from our business partners, by doing a container native HCI technology, we're way ahead. We're skating to where the puck is. We're throwing the ball ahead of the wide receiver.
If you're a soccer fan, we're making sure that the midfielder gets the ball to the forward ahead of time so you can kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Containers are where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal; guess what, we work fine with that. We work fine with virtual, as we have tight integration with both Hyper-V and VMware. So some customers will still do that. And containers is the new wave. So with Spectrum Fusion, we are riding the wave, not fighting the wave, and that way we can meet all the needs, right? Bare metal, virtual environments, and container environments, in a way that is all based on the end users' applications, workloads and use cases: what goes where. And IBM Storage can provide all of it. So we'll give them two methods of consumption by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization; we're leading with OpenShift and containers. We're the first full container native, OpenShift ground-up hyperconverged system of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been. >> So just to follow up on that. You've got the sort of Switzerland DNA, and it's not just OpenShift and Red Hat and the open source ethos. I mean, it goes all the way back to SAN Volume Controller, back in the day, where you could virtualize anybody's storage. How is that carrying through to this announcement? >> So Spectrum Fusion is doing the same thing.
Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports non-IBM storage, for example EMC Isilon NFS. Fusion will support Spectrum Scale, Fusion will support our Elastic Storage System, Fusion will support NetApp filers as well. Fusion will support IBM Cloud Object Storage, both as software defined storage and as an array technology, and Amazon S3 object stores and any other object storage vendor that's compliant with S3. All of those can be part of the global namespace, scalable file system. We can bring in, for example, object data without making a duplicate copy. The normal way to do that is you make a duplicate copy: you have a copy in the object store, and you make a copy to bring it into the file system. Well, guess what, we don't have to do that. So again, cutting CapEx and OpEx, and ease of management. Just as we do with our FlashSystem products and our Spectrum Virtualize and the SAN Volume Controller, where we support over 550 storage arrays that are not ours, that are our competitors', with Spectrum Fusion we've done the same thing: Fusion, Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores, as well as other S3-compliant object stores, EMC Isilon NFS, and NFS from NetApp. And by the way, we can do the discovery model as well, not just integration in the system. So we've made sure that we really do protect existing investments. And we try to eliminate copies, particularly with the discovery capability: you've got AI or analytics software connecting with the API into the discovery technology. You don't have to traverse and try to find things, because the discovery creates real time metadata cataloging and indexing, not just of our storage but of the other storage I mentioned, which is the competition. So talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the global Fortune 1500, for sure.
And so we're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products but for the other products as well that aren't ours. >> So Sam, we understand the downside of copies, but then, if you're not doing multiple copies, how do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here? >> Yeah, that's a great question, and I'll build a little bit off of what Eric said. Look, one of the really great and unique things about Spectrum Scale is its ability to consume any storage. And we can actually allow you to bring in data sets from where they are. They could have originated in object storage; we'll cache it into the file system. It can be on any block storage. It can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of a file system, so it naturally fits into your application stack. Spectrum Scale uniquely is a globally parallel file system. There aren't very many of them in the world, and there are none that can achieve what Spectrum Scale can do. We have customers running with exabytes of data, and the performance improves with scale. So you can actually deploy Spectrum Scale on-prem, build out an environment of it, consuming whatever storage you have. Then you can go into AWS or IBM Cloud or Azure, deploy an instance of it, and it will now extend your file system into that cloud. Or you can deploy it at the edge, and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, and we'll cache in only what's needed. Normally you would have to make a copy of data into the other environment and then deal with that copy later. Let's say you were doing a cloud bursting use case.
Let's look at that as an example, to make this real. You're running an application on-prem. You want to spin up more compute in the cloud for your AI. Normally you'd have to make a copy of the data, run your AI, then figure out what to do with that data. Do you copy some of it back? Do you sync them? Do you delete it? What do you do? With Spectrum Scale, we'll just automatically cache in whatever you need. It'll run there, and when you're done, you spin it down. Your copy is still on-prem; no data is lost. We can actually deal with all of those scenarios for you. And then look at what's happening at the edge: a lot of, say, video surveillance data pouring in, or looking at the manufacturing floor for defects. You can run AI right at the edge, make it available in the cloud, and make that data available in your data center. Again, one file system going across all of it. And that's something unique in our data foundation built on Spectrum Scale. >> So there's some metadata magic in there as well, and that intelligence based on location. Okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on, or is it broader than that? >> Sure, so first let's talk about the industries. We see certain industries going container quicker than other industries. So first is financial services; we see it happening there. Manufacturing; Sam already talked about AI based manufacturing platforms. We actually have a couple of clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see the public sector, of course, and healthcare, and in healthcare don't just think care delivery. At IBM that includes the research guys, so the genomic companies, the biotech companies, the drug companies are all included in that. And then of course retail, both on-prem and off-prem.
So those are sort of the industries. Then from an application workload perspective, basically AI, analytics and big data applications or workloads are the key things that Spectrum Fusion helps, because of its file system. It's high performance. And those applications are tending to spread across core, edge and cloud. So those applications are spreading out; they're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine. A perfect example: we have a giant global auto manufacturer. They've got factories all over. And if you think there aren't compute resources in every factory, there are, because those factories, I just saw an article actually, cost about a billion dollars to build, a billion. So they've got their own IT, and it's connected to their core data center as well. So that's a perfect example of the enterprise edge where Spectrum Fusion would be an ideal solution, whether they do it as software defined only or as the appliance. When you've got a billion dollar factory, just to build it, let alone produce the autos or whatever you're producing; silicon, for example, those fabs all cost a billion. That's where the enterprise edge fits in very well with Spectrum Fusion. >> So for those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing some of those workloads that you mentioned, or is it edge? Like you mentioned manufacturing, I could see that potentially being the driver. >> Well, it's a little bit of all of those, Dave. For example, virtualization came out, and virtualization offered advantages over bare metal, okay? Now containerization has come out, and containerization is offering advantages over virtualization. The good thing at IBM is we know we can support all three.
And we know, again, in the global Fortune 2000 or 1500, they're probably going to run all three, based on the application, workload or use case. And our storage is really good at bare metal, very good in virtualization environments, and now with Spectrum Fusion, our container native offering, outstanding for container based environments. So we see that these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those various workload types. So that's why we see this as a huge advantage. And again, the market is going to containers. I'm a native Californian: you don't fight the wave, you ride the wave. And the wave is containers, and we're riding that wave. >> If you don't ride the wave you become driftwood, as Pat Gelsinger would say. >> And that is true, from another native Californian, my old boss. >> So okay, I wonder, Sam, I sort of hinted up front in my little narrative there, but the way we see this is, you've got on-prem, hybrid, public clouds, cross cloud, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, hides the underlying complexity of the infrastructure, which becomes kind of an implementation detail. Eric talked about skating to where the puck is, or whatever sports analogy you want to use. Is that where the puck is headed? >> Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is all about agility. You asked why these industries are implementing containers: it's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers, delivering what they want and improving their experience. So if it's all about agility, developers don't want to wait around for infrastructure. You need to automate it as much as possible.
So it's about building infrastructure that's automated, which requires consistent APIs, and it requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement those. You want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do as you bring more and more of your data, of your information about your customers, into these container worlds. You've got to have security rock solid; you can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware. You don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these dev ops environments. And that's what we're doing with Spectrum Fusion. We're taking an, I think, extremely unique and one of a kind storage foundation with Spectrum Scale that gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise class container applications. >> So what's the bottom line business impact? I mean, how does this change things? Sam, I think you articulated it very well: it's all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there? >> I'll mention one other point that we talk about at IBM a lot, which is the AI ladder. It's about how you take all of this information you have and use it to build new insights, to give your company an advantage.
An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and build it into the fabric of your business operations, so that all decisions you're making in your company, all services you deliver to your customers, are built on that data foundation and information. And the only way to do that, and infuse it into your culture, is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real time data. The ultimate outcome, sorry, I know you asked for business results, is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute business outcome is that you will continue to gain market share in your environment and grow revenue. I mean, that's the outcome every business wants. >> Yeah, it's all about speed. Everybody was forced into digital transformation last year. It was sort of rushed and compressed, and now they get some time to do it right. And so modernizing apps, containers, dev ops, developer led sort of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom line summary. Actually, we do have to talk about the 3200. Maybe you could give us a little insight on that before we close. >> Sure, so in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200, and it's all flash. It gets 80 gigabytes a second sustained at the node level, and we can cluster them infinitely. So for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course AI, big data and analytic workloads are extremely, extremely susceptible to bandwidth and/or data transfer rate.
That's what they need to deliver their applications properly. It comes with Spectrum Scale built in, so you get the advantage of Spectrum Scale. We talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. It's ideal, with its highly parallel file system. It's used all over in high performance computing and supercomputing, in drug research, in healthcare, in finance; probably about 80% of the largest banks in the world use Spectrum Scale already for AI, big data and analytics. So the new 3200 is an all flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you can also add a 3200 to it if you want, because of the capability of our global namespace and our single file system across edge, core and cloud. So that's the 3200 in a nutshell, Dave. >> All right, give us the bottom line, Eric, and we've got to go. What's the bumper sticker? >> Yeah, the bumper sticker is: you've got to ride the wave of containers, and IBM Storage is the company that can take you there, so that you win the big surfing contest and get the big prize. >> Eric and Sam, thanks so much, guys. It's great to see you, and I miss you guys. Hopefully we'll get together soon. So get your jabs and we'll have a beer. >> All right. >> All right, thanks, Dave. >> Nice talking to you. >> All right, thank you for watching everybody. This is Dave Vellante for "theCUBE." We'll see you next time. (upbeat music)
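The cloud bursting flow Sam described above, cache only what's needed at the burst site, spin it down when done, and keep the single authoritative copy on-prem, can be sketched as a toy in Python. The class and method names below are illustrative assumptions for the sake of the example; this is not the Spectrum Scale API or implementation:

```python
# Toy model of a global namespace with demand caching: reads check a
# local cache first and fall back to the on-prem tier, so the burst
# site sees the same files without a bulk copy up front.

class BurstSiteNamespace:
    def __init__(self, on_prem_store):
        self.on_prem = on_prem_store  # single authoritative copy
        self.cache = {}               # data cached at the burst site

    def read(self, path):
        if path not in self.cache:            # first access: cache on demand
            self.cache[path] = self.on_prem[path]
        return self.cache[path]               # later accesses are local

    def spin_down(self):
        # tear down the burst site; the on-prem copy is untouched
        self.cache.clear()

on_prem = {"/data/telemetry.csv": b"rows..."}
site = BurstSiteNamespace(on_prem)
assert site.read("/data/telemetry.csv") == b"rows..."  # cached on first read
site.spin_down()
assert site.cache == {}                                # burst cache is gone
assert on_prem["/data/telemetry.csv"] == b"rows..."    # no data lost
```

The point of the sketch is the lifecycle Sam walks through: nothing is copied until it is read, and tearing down the burst environment never touches the source of truth.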

Published Date: Apr 28, 2021



Eric Herzog, IBM & Sam Werner, IBM | CUBE Conversation, October 2020


 

(upbeat music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE, coming to you from our Palo Alto studios today for a CUBE Conversation. We've got a couple of CUBE alumni veterans who've been on a lot of times. They've got some exciting announcements to tell us about today, so we're excited to jump into it. So let's go. First we're joined by Eric Herzog. He's the CMO and VP of worldwide storage channels for IBM Storage, and he's made many appearances on theCUBE. Eric, great to see you. >> Great, thanks very much for having us today. >> Jeff: Absolutely. And joining him, I think all the way from North Carolina, Sam Werner, the VP of offering management and business line executive for storage at IBM. Sam, great to see you as well. >> Great to be here, thank you. >> Absolutely. So let's jump into it. So Sam, you're in North Carolina; I think that's where the Red Hat people are. You guys have Red Hat, a lot of conversations about containers, and containers are going nuts. We know containers are going nuts, and it was Docker and then Kubernetes, and really a lot of traction. Wonder if you can reflect on what you see from your point of view and how that impacts what you guys are working on. >> Yeah, you know, it's interesting. Everybody hears about containers constantly. Obviously it's a hot part of digital transformation. What's interesting about it, though, is most of those initiatives are being driven out of business lines. I spend a lot of time with the people who do infrastructure management, particularly the storage teams, the teams that have to support all of that data in the data center. And they're struggling, to be honest with you.
These initiatives are coming at them from application developers, and they're being asked to figure out how to deliver the same SLAs, the same level of performance, governance, security, recovery times, availability. And it's a scramble for them, to be quite honest. They're trying to figure out how to automate their storage. They're trying to figure out how to leverage the investments they've made as they go through a digital transformation, and keep in mind, a lot of these initiatives are accelerating right now because of this global pandemic we're living through. I don't know that the strategy's necessarily changed, but there's been an acceleration. So all of a sudden these storage people are trying to get up to speed, or being thrown right into the mix. So we're working directly with them. You'll see in some of our announcements, we're helping them, you know, get on that journey and providing the infrastructure their teams need. >> And a lot of this is driven by multicloud and hybrid cloud, which we're seeing, you know, a really aggressive move to. Before, it was kind of this rush to public cloud, and then everybody figured out, "Well maybe public cloud isn't necessarily right for everything." And it's kind of this horses for courses, if you will, with multicloud and hybrid cloud, another kind of complexity thrown into the storage mix that you guys have to deal with. >> Yeah, and that's another big challenge. Now in the early days of cloud, people were lifting and shifting applications trying to get lower capex. And they were also starting to deploy DevOps in the public cloud in order to improve agility. And what they found is there were a lot of challenges with that: where they thought lifting and shifting an application would lower their capital costs, the TCO actually went up significantly, and where they started building new applications in the cloud.
They found they were becoming trapped there, and they couldn't get the connectivity they needed back into their core applications. So now we're at this point where they're trying to really transform the rest of it, and they're using containers to modernize the rest of the infrastructure and complete the digital transformation. They want to get into a hybrid cloud environment. What we found is, enterprises get two and a half X more value out of their IT when they use a hybrid multicloud infrastructure model versus an all public cloud model. So what they're trying to figure out is how to piece those different components together. So you need a software-driven storage infrastructure that gives you the flexibility to deploy in a common way and automate in a common way, both in a public cloud and on premises, and gives you that flexibility. And that's what we're working on at IBM and with our colleagues at Red Hat. >> So Eric, you've been in the business a long time, and you know, it's amazing as it just continues to evolve, this kind of unsexy thing under the covers called storage, which is so foundational. And now data has become, you know, maybe it used to be a liability 'cause I had to buy a bunch of storage; now it is the core asset of the company. And in fact the valuation of a lot of companies is based on their data and what they can do with it. So clearly you've got a couple of aces in the hole, you always do. So tell us what you guys are up to at IBM to take advantage of the opportunity. >> Well, what we're doing is we are launching a number of solutions for various workloads and applications, built with a strong container element. For example, a number of solutions about modern data protection and cyber resiliency. In fact, we announced last year, almost a year ago, actually it's only a year ago last week, Sam and I were on stage, and one of our developers did a demo of us protecting data in a container environment.
So now we're extending that beyond what we showed a year ago. We have other solutions that involve what we do with AI, big data and analytic applications that are in a container environment. What if I told you, instead of having to replicate and duplicate and have another set of storage right with the OpenShift Container configuration, that you could connect to an existing external exabyte-class data lake? So that not only could your container apps get to it, but the existing apps, whether they be bare-metal or virtualized, all of them could get to the same data lake. Wow, what a concept: saving time, saving money. One pool of storage that'll work for all those environments. And now that containers are being deployed in production, that's something we're announcing as well. So we've got a lot of announcements today across the board, most of which are container-related and some of which are not; for example, LTO-9, the latest high performance and high capacity tape. We're announcing some solutions around there. But the bulk of what we're announcing today is really on what IBM is doing to continue to be the leader in container storage support. >> And it's great, 'cause you talked about a couple of very specific applications that we hear about all the time. One obviously on the big data and analytics side, you know, as that continues to kind of chase that goal of ultimately getting the right information to the right people at the right time so they can make the right decision. And the other piece you talked about was business continuity and data replication, and bringing people back. And one of the hot topics we've talked to a lot of people about now is kind of this shift in the security threat around ransomware, and the fact that these guys are a little bit more sophisticated and will actually go after your backup before they let you know that they're into your primary storage.
So these are two really important market areas where we should continue to see activity, with all the people that we talk to every day. You must be seeing the same thing. >> Absolutely we are indeed. You know, containers are the wave. I'm a native Californian and I'm coming to you from Silicon Valley, and you don't fight the wave, you ride it. So at IBM we're doing that. We've been the leader in container storage. We, as you know, way back when, invented the hard drive, which is the foundation of almost this entire storage industry, and we were responsible for that. So we're making sure that as containers are the coming wave, we are riding that in and doing the right things for our customers, and for our channel partners that support those customers, whether they be existing customers, and obviously, with this move to containers, there are going to be some people searching for probably a new vendor. And that's something that's going to go right into our wheelhouse because of the things we're doing. And some of our capabilities, for example, with our FlashSystems, with our Spectrum Virtualize, we're actually going to be able to support CSI snapshots not only for IBM Storage, but our Spectrum Virtualize product supports over 500 different arrays, most of which aren't ours. So if you've got that old EMC VNX2, or that HPE 3PAR, or a Nimble, or all kinds of other storage, if you need CSI snapshot support, you can get it from IBM, with our Spectrum Virtualize software that runs on our FlashSystems, which of course cuts capex and opex in a heterogeneous environment, but gives them that advanced container support that they don't get, because they're on an older product from, you know, another vendor. We're making sure that we can pull our storage and even our competitors' storage into the world of containers and do it in the right way for the end user. >> That's great. Sam, I want to go back to you and talk about the relationship with Red Hat.
I think it was about a year ago, I don't have my notes in front of me, when IBM purchased Red Hat. Clearly you guys have been working very closely together. What does that mean for you? You've been in the business for a long time. You've been at IBM for a long time, to have a partner, you know, kind of embed with you, with Red Hat, and bring some of their capabilities into your portfolio. >> It's been an incredible experience, and I always say my friends at Red Hat, because we spend so much time together. We're looking at now leveraging a community that's really on the front edge of this movement to containers. They bring that, along with their experience around storage and containers, together with the years and years of enterprise class storage delivery that we have in the IBM Storage portfolio. And we're bringing those pieces together. And this is a case of truly one plus one equals three. And you know, an example you'll see in this announcement is the integration of our data protection portfolio with their container native storage. We allow you, in any environment, to take a snapshot of that data. You know, this move towards modern data protection is all about a movement to doing data protection in a different way, which is about leveraging snapshots: taking instant copies of data that are application aware, allowing you to reuse and mount that data for different purposes, and being able to protect yourself from ransomware. Our data protection portfolio has industry leading ransomware protection and detection in it, so we'll actually detect it before it becomes a problem. We're taking that industry leading data protection software and we are integrating it into Red Hat Container Native Storage, giving you the ability to solve one of the biggest challenges in this digital transformation, which is backing up your data, now that you're moving towards stateful containers and persistent storage. So that's one area we're collaborating.
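The application-aware snapshot flow Sam describes, quiesce the application, capture a point-in-time copy, resume I/O, then mount copies for reuse, can be sketched as a toy model. This is an illustrative sketch only; the `Volume` class and function names below are hypothetical and do not represent IBM's or Red Hat's actual APIs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Volume:
    """A stand-in for a persistent volume backing a stateful container."""
    name: str
    data: dict = field(default_factory=dict)
    snapshots: list = field(default_factory=list)

def app_aware_snapshot(volume, quiesce, resume):
    """Take an application-aware snapshot: flush and pause the app first,
    capture a point-in-time copy of the volume, then resume normal I/O."""
    quiesce()  # e.g. flush database buffers, hold writes
    snap = {
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "data": dict(volume.data),  # point-in-time copy
    }
    volume.snapshots.append(snap)
    resume()   # application resumes normal I/O
    return snap

def mount_snapshot(snap):
    """Return a writable clone of a snapshot, e.g. for test/dev reuse
    or scanning a copy for ransomware without touching production."""
    return dict(snap["data"])
```

Because the snapshot is taken while the application is quiesced, the copy is consistent; later writes to the live volume do not affect it, which is what makes mounted clones safe to reuse.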
We're working on ensuring that our storage arrays that Eric was talking about integrate tightly with OpenShift, and that they also work with OpenShift Container Storage, the cloud native storage portfolio from Red Hat. So we're bringing these pieces together. And on top of that, we're doing some really interesting things with licensing. We allow you to consume the Red Hat storage portfolio along with the IBM software-defined storage portfolio under a single license, and you can deploy the different pieces you need under that one license. So you get this ultimate investment protection and the ability to deploy anywhere. So I think we're adding a lot of value for our customers and helping them on this journey. >> Yeah Eric, I wonder if you could share your perspective on multicloud management. I know that's a big piece of what you guys are behind, and it's a big piece of kind of the real world as we've gotten through the hype and now we're into production, and it is a multicloud world, and you've got to manage this stuff, it's all over the place. I wonder if you could speak to kind of how that challenge, you know, factors into your design decisions and how you guys think about, you know, kind of the future. >> Well, we've done this in a couple of ways in things that are coming out in this launch. First of all, IBM has produced, with a container-centric model, what they call the Multicloud Manager. It's the IBM Cloud Pak for multicloud management. That product is designed to manage multiple clouds, not just the IBM Cloud, but Amazon, Azure, et cetera. What we've done is taken our Spectrum Protect Plus and integrated it into the Multicloud Manager. So what that means, to save time, to save money and make it easier to use: when the customer is in the Multicloud Manager, they can actually select Spectrum Protect Plus, launch it and then start to protect data. So that's one thing we've done in this launch.
The other thing we've done is integrate the capability of IBM Spectrum Virtualize, running in a FlashSystem, to also support OCP, the OpenShift Container Platform, in a clustered environment. So what we can do there is on-premises: if there really was an earthquake in Silicon Valley right now, and that OpenShift is sitting on a server, the server just got crushed by the roof when it caved in. So you want to make sure you've got disaster recovery. So what we can do is take that OpenShift Container Platform cluster, and we can support it with our Spectrum Virtualize software running on our FlashSystem, just like we can do with heterogeneous storage that's not ours; in this case, we're doing it with Red Hat. And then what we can do is provide disaster recovery and business continuity to different cloud vendors, not just to IBM Cloud, but to several cloud vendors. We can give them the capability of replicating and protecting that cluster to a cloud configuration. So if there really was an earthquake, they could then go to the cloud, recover that Red Hat cluster to a different data center and run it on-prem. So we're not only doing the integration with the Multicloud Manager, which is multicloud-centric, allowing ease of use with our Spectrum Protect Plus, but in case of a really tough situation, fire in a data center, earthquake, hurricane, whatever, the Red Hat OpenShift cluster can be replicated out to a cloud with our Spectrum Virtualize software. So in both cases, multicloud examples: in the first one, of course, the Multicloud Manager is designed for and does support multiple clouds. In the second example, we support multiple clouds with our Spectrum Virtualize for Public Cloud software, so you can take that OpenShift cluster, replicate it, and not just deal with one cloud vendor but with several.
So showing that multicloud management is important, and then leveraging that in this launch with a very strong element of container centricity. >> Right. >> Yeah, I just want to add, you know, and I'm glad you brought that up Eric, this whole multicloud capability with the Spectrum Virtualize. And I could say the same for our Spectrum Scale family, which is our storage infrastructure for AI and big data. In this announcement, we've actually containerized the client, making it very simple to deploy in a Kubernetes cluster. But one of the really special things about Spectrum Scale is its active file management. This allows you to build out a file system not only on-premises for your Kubernetes cluster, but you can actually extend that to a public cloud, and it automatically will extend the file system. If you were to go into a public cloud marketplace, and it's available in more than one, you can go in there and click deploy. For example, in AWS Marketplace, click deploy and it will deploy your Spectrum Scale cluster. You've now extended your file system from on-prem into the cloud. If you need to access any of that data, you can access it, and it will automatically cache it locally and manage all the file access for you. >> Yeah, it's an interesting kind of paradox between, you know, kind of the complexity of what's going on in the back end, but really trying to deliver simplicity on the front end. Again, this ultimate goal of getting the right data to the right person at the right time. You just had a blog post recently, Eric, where you talked about how every piece of data isn't equal. And I think it's really highlighted in this conversation we just had about recovery, and how you prioritize and how you, you know, think about your data, because you know, the relative value of any particular piece might be highly variable, which should drive the way that you treat it in your system.
So I wonder if you can speak a little bit, you know, to helping people think about data in the right way. As you know, they both have all their operational data, which they've always had, but now they've got all this unstructured data that's coming in like crazy, and all data isn't created equal, as you said. And if there is an earthquake or there is a ransomware attack, you need to be smart about what you have available to bring back quickly, and maybe what's not quite so important. >> Well, I think the key thing, let me go to, you know, a couple of modern data protection terms. These are two very technical terms: one is the recovery time. How long does it take you to get that data back? And the second one is the recovery point: at what point in time are you recovering the data from? And the reason those are critical is, when you look at your datasets, whether you replicate, you snap, or you do a backup, the key thing you've got to figure out is: what is my recovery time? How long is it going to take me? What's my recovery point? Obviously in certain industries you want to recover as rapidly as possible, and you also want to have the absolute most recent data. So then once you know what it takes you to do that, okay, from an RPO and an RTO perspective, recovery point objective, recovery time objective, once you know that, then you need to look at your datasets and look at what it takes to run the company if there really was a fire and your data center was destroyed. So you take a look at those datasets, and you see which are the ones that I need to recover first to keep the company up and rolling. So let's take an example: the sales database or the support database. I would say those are pretty critical to almost any company, whether you be a high-tech company, a furniture company, or a delivery company. However, there also is probably a database of assets. For example, IBM is a big company. We have buildings all over, well, guess what?
We don't lease a chair or a table or a whiteboard. We buy them. Those are physical assets that the company has to, you know, do write-downs on and all this other stuff; they need to track them. If we close a building, we need to move the desks to another building. Even if we're leasing a building now, the furniture is ours, right? So does an asset database need to be recovered instantaneously? Probably not. So we should focus on another thing. So let's take a bank. Banks are both online and brick and mortar. I happen to be a Wells Fargo person. So guess what? There are Wells Fargo banks, two of them, in the city I'm in, okay? So the assets, the money in this case, it's not the brick and mortar of the Wells Fargo building or the desks in there; now you're talking financial assets, or their high velocity trading apps. Those things need to be recovered almost instantaneously. And that's what you need to do when you're looking at datasets: figure out what's critical to the business to keep it up and rolling, then what's the next most critical. And you do it in basically the way you would tier anything: what's the most important thing, what's the next most important thing. It doesn't matter how you approach your job, or how you used to approach school, what are the classes I have to get an A in and what classes can I afford not to get an A in; depending on what your major was, all that sort of stuff, you're setting priorities, right? And since data is the most critical asset of any company, whether it's a Global Fortune 500 or whether it's Herzog Cigar Store, all of those assets, that data, is the most valuable. So you've got to make sure you recover what you need as rapidly as you need it, but you can't recover all of it; there's just no way to do that. So that's why you really rank the importance of the data. The same goes for malware and ransomware.
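The ranking exercise Eric walks through, order datasets by criticality and bring back the most important first, can be sketched as a small model. The dataset names, criticality levels and restore rate below are illustrative assumptions for the sketch, not real RTO figures or IBM tooling.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    criticality: int   # 1 = most critical (e.g. sales DB), 3 = least (e.g. asset DB)
    size_gb: float

def recovery_plan(datasets, restore_rate_gb_per_min):
    """Order datasets by criticality and estimate the cumulative time until
    each one is restored, so the business-critical systems come back first."""
    plan, elapsed = [], 0.0
    for ds in sorted(datasets, key=lambda d: d.criticality):
        elapsed += ds.size_gb / restore_rate_gb_per_min
        plan.append((ds.name, round(elapsed, 1)))  # (name, minutes until restored)
    return plan
```

With a 10 GB/min restore rate, a 50 GB sales database ranked criticality 1 is back in 5 minutes, while a 100 GB asset database ranked criticality 3 waits until everything above it has been restored; that ordering is the whole point of the RPO/RTO exercise.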
If you have a malware or ransomware attack, certain data you need to recover as soon as you can. For example, in fact there was one, Jeff, here in Silicon Valley as well: the University of California, San Francisco ended up having to pay over a million dollars of ransom because some of the data related to COVID research was compromised. UCSF is the health care center for the University of California in Northern California. They are working on COVID, and guess what? The stuff was held for ransom. They had no choice but to pay, and they really did pay; this was around the end of June of this year. So, okay, you don't really want to do that. >> Jeff: Right. >> So you need to look at everything, from malware and ransomware to the importance of the data. And that's how you figure this stuff out, whether it be in a container environment, a traditional environment or a virtualized environment. And that's why data protection is so important. And with this launch, not only are we doing the data protection we've been doing for years, but now we're taking it to the heart of the new wave, which is the wave of containers. >> Yeah, let me add just quickly on that, Eric. So think about those different cases you talked about. For your mission critical data, you're probably going to want snapshots that can be recovered near instantaneously. And then, for some of your data, you might decide you want to store it out in the cloud. And with Spectrum Protect, we just announced our ability to now store data out in Google Cloud, in addition to AWS, Azure, IBM Cloud and various on-prem object stores, which we already supported. And then in this announcement we're talking about LTO-9. And you've got to also be smart about which data you need to keep according to regulation for long periods of time, or which is just important to archive.
You're not going to beat the economics nor the safety of storing data out on tape. But like Eric said, if all of your data is out on tape and you have an event, you're not going to be able to restore it quickly enough, at least the mission critical things. And so those are the things that need to be in snapshots. And that's one of the main things we're announcing here for Kubernetes environments: the ability to quickly take snapshot-based, application-aware backups of your mission critical data in your Kubernetes environments, so it can very quickly be recovered. >> That's good. So I'll give you the last word, then we're going to sign off. We are out of time, but I do want to get this in; it's 2020, and if I didn't ask the COVID question, I would be in big trouble. So, you know, you've all seen the memes and the jokes about COVID really being an accelerant to digital transformation, not necessarily change, but certainly a huge accelerant. I mean, you guys have, I'm sure, a product roadmap that's baked pretty far in advance, but I wonder if you can speak to, you know, from your perspective, as COVID has accelerated digital transformation, and you guys are so foundational to executing that, kind of what has it done in terms of what you're seeing with your customers, you know, kind of the demand, and how you're seeing this kind of validation as to an accelerant to move to these better types of architectures? Let's start with you, Sam. >> Yeah, you know, and I think I said this, but I mean the strategy really hasn't changed for the enterprises, but of course it is accelerating. And I see storage teams more quickly getting into trouble trying to solve some of these challenges. So we're working closely with them. They're looking for more automation. They have fewer people in the data center on-premises. They're looking to do more automation and simplify the management of the environment. We're doing a lot around Ansible to help them with that.
We're accelerating our roadmaps around that sort of integration and automation. They're looking for better visibility into their environments. So we've made a lot of investments around our Storage Insights SaaS platform, which allows them to get complete visibility into their data center, and not just in their data center. We also give them visibility to the storage they're deploying in the cloud. So we're making it easier for them to monitor and manage and automate their storage infrastructure. And then of course, if you look at everything we're doing in this announcement, it's about enabling our software and our storage infrastructure to integrate directly into these new Kubernetes initiatives. That way, as this digital transformation accelerates and application developers are demanding more and more Kubernetes capabilities, they're able to deliver the same SLAs and the same level of security and governance that their customers expect from them, but in this new world. So that's what we're doing. If you look at our announcement, you'll see that across the sets of capabilities that we're delivering here. >> Eric, we'll give you the last word, and then we're going to go to Eric's cigar shop, as soon as this is over. (laughs) >> So it's clearly all about storage made simple, in a Kubernetes environment, in a container environment, whether it be block storage, file storage or object storage. And IBM's goal is to offer increasingly sophisticated services for the enterprise while at the same time making them easier and easier to use and to consume. If you go back to the old days, a storage admin managed X amount of gigabytes, maybe terabytes. Now the same admin is managing 10 petabytes of data. So the data explosion is real, across all environments: container environments, even old bare-metal, and of course the not quite so new anymore virtualized environments. The admins need to manage all of that more and more easily, with automated point and click.
Use AI-based automated tiering, for example. We have that with our Easy Tier technology, which automatically moves data when it's hot to the fastest tier, and when it's not as hot, when it's cool, pushes it down to a slower tier, but it's all automated. You point and you click. Or take our migration capabilities. We built them into our software. I buy a new array, I need to migrate the data. You point, you click, and we do automatic, transparent migration in the background, on the fly, without taking the servers or the storage down. And we always favor the application workload. So if the application workload is heavy at certain times of day, we slow the migration. At night, for the sake of argument, if it's a company that is not truly 24 by seven and the workload slows down, we accelerate the migration. All about automation. We've done it with Ansible; here in this launch, we've done it with additional integration with other platforms. So our Spectrum Scale, for example, can use the OpenShift management framework to configure and to grow our Spectrum Scale or Elastic Storage System clusters. We've done it, in this case with our Spectrum Protect Plus, as you saw, with integration into the Multicloud Manager. So for us, it's storage made simple: incredible new features all the time, but as we do that, we make sure that it's easier and easier to use. And in some cases, like with Ansible, it's not even the real storage people, but God forbid that DevOps guy messes with the storage and loses that data, wow. So if you're using something like Ansible and that Ansible framework, we make sure that essentially the DevOps guy, the test guy, the analytics guy basically doesn't lose the data and screw up the storage. And that's a big, big issue. So it's all about storage made simple, in the right way, with incredible enterprise features that we make easy to use. We're trying to make everything essentially like your iPhone, that easy to use.
That's the goal. And with a lot fewer storage admins in the world than there used to be, and incredible storage growth every single year, you'd better make it easy for the same person to manage all that storage. 'Cause it's not shrinking. Someone who's sitting at 50 petabytes today will be at 150 petabytes next year, and five years from now they'll be sitting on an exabyte of production data, and they're not going to hire tons of admins. It's going to be the same two or four people that were doing the work. Now they've got to manage an exabyte, which is why this storage made simple is such a strong effort for us, with integration with the Kubernetes frameworks, with what we've done with OpenShift, heck, even what we used to do in the old days with vCenter Ops from VMware, VASA, VAAI, all those old VMware tools. We made sure of tight integration, easy to use, easy to manage, but with sophisticated features to go with that. Simplicity is really about how you manage storage. It's not about making your storage dumb. People want smarter and smarter storage. You make it smarter, but you make it just as easy to use at the same time. >> Right. >> Well, great summary. And I don't think I could do a better job. So I think we'll just leave it right there. So congratulations to both of you and the teams for these announcements; a whole lot of hard work and sweat went in over the last little while, and continued success. And thanks for the check-in, always great to see you. >> Thank you. We love being on theCUBE as always. >> All right, thanks again. All right, he's Eric, he was Sam, I'm Jeff, you're watching theCUBE. We'll see you next time, thanks for watching. (upbeat music)
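As a closing illustration, the automated tiering Eric described above (Easy Tier promoting hot data to the fastest tier and demoting cool data to a slower one) can be sketched in highly simplified form. The thresholds, tier names and extent structure below are illustrative assumptions for the sketch, not IBM's actual algorithm.

```python
def rebalance(extents, hot_threshold=100, cold_threshold=10):
    """Heat-based tiering sketch: promote extents whose recent I/O count
    exceeds hot_threshold to flash, demote extents at or below
    cold_threshold to the capacity (nearline) tier."""
    for ext in extents:
        if ext["io_count"] >= hot_threshold:
            ext["tier"] = "flash"       # hot: promote to fastest media
        elif ext["io_count"] <= cold_threshold:
            ext["tier"] = "nearline"    # cold: demote to capacity tier
        # extents in between stay on their current tier
    return extents
```

Run periodically against fresh I/O counters, a loop like this is what lets one admin oversee petabytes: placement decisions happen automatically instead of by hand.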

Published Date : Nov 2 2020


Eric Herzog, IBM | Cisco Live EU Barcelona 2020


 

>> Announcer: Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2020, brought to you by Cisco and its ecosystem partners. >> Welcome back to Barcelona, everybody, we're here at Cisco Live, and you're watching theCUBE, the leader in live tech coverage. We go to the events and extract the signal from the noise. This is day one, really; we started day zero yesterday. Eric Herzog is here, he's the CMO and Vice President of Storage Channels. He's probably been on theCUBE more than anybody, with the possible exception of Pat Gelsinger, but you might surpass him this week, Eric. Great to see you. >> Great to see you guys, love being on theCUBE, and really appreciate the coverage you do of the entire industry.
So talk a little bit about how you go to market, how you guys see the multicloud world, and what each of you brings to the table. >> Well, so we see it in a couple of different facets. So first of all, the day of public cloud only or on-prem only is long gone. There are a few companies that use public cloud only, but when you're talking mid-size enterprise, and certainly into, let's say, the global 2500, that just doesn't work. So certain workloads reside well in the cloud, and certain workloads reside well on-prem, and there are certain ones that can go back and forth, right, developed in a cloud but then moved back on. For example, a highly transactional workload, once you get going on that, you're not going to run that on any cloud provider, but that doesn't mean you can't develop the app, test the app, out in the cloud and then bring it back on. We also see that the days of a single cloud provider for big enterprise, and again up to the 2500 of the global fortunes, that's not true either, because just as with other infrastructure and other technologies, they often have multiple vendors, and in fact, you know, what I've seen from talking to CIOs is, if they have three cloud providers, that's low. Many of 'em talk about five or six, whether that be for legal reasons, whether that be for security reasons, or of course the easy one, which is, we need to get a good price, and if we just use one vendor, we're not going to get a good price. And cloud is mature, cloud's not new anymore, the cloud is pretty old, it's basically, sort of, version three of the internet, (laughs) and so, you know, I think some of the procurement guys are a little savvy about why you would only use Amazon or only use Azure or only use Google or only use IBM Cloud. Why not use a couple to keep them honest, you know, which is kind of normal when procurement gets involved, and cloud is not new anymore, so that means procurement gets involved. >> Well, and it's kind of, comes down to the workload. 
You got certain clouds that are better, you have Microsoft if you want collaboration, you have Amazon if you want infrastructure for devs, on-prem if you want, you know, the family jewels. So I got a question for you. So if you look at, you know, it's early 2020, entering a new decade, if you look at the last decade, some of the big themes: you had the consumerization of IT, you had, you know, Web 2.0, you obviously had the big data meme, which came and went and has now morphed into AI. And of course you had cloud. So those are the things that brought us here over the last 10 years of innovation. How do you see the next 10 years? What are going to be those innovation drivers? >> Well, I think one of the big innovations from a cloud perspective is truly deploying cloud. Not playing with the cloud, but really deploying the cloud. Obviously when I say cloud, I would include private cloud utilization. Basically, when you think on-prem in my world, on-prem is really a private cloud talking to a public cloud. That's how you get a multicloud, or, if you will, a hybrid cloud. Some people still think when you talk hybrid, like literally, bare metal servers talking to the cloud, and that just isn't true, because when you look at certainly the global 2500, I can't think of any of them that isn't essentially running a private cloud inside their own walls, and then, whether they're going out or not, most do, but the few that don't, they mimic a public cloud inside because of the value they see in moving workloads around, easy deployment, and scale up and scale down, whether that be storage or servers or whatever the infrastructure is, let alone the app. 
So I think what you're going to see now is a recognition that it's not just private cloud, it's not just public cloud, things are going to go back and forth, and basically, it's going to be a true hybrid cloud world. And I also think with the cloud maturity comes this idea of a multicloud, 'cause some people think multicloud is basically private cloud talking to public cloud, and I see multicloud as not just that, but literally, I'm a big company, I'm going to use eight or nine cloud providers to keep everybody honest, or, as you just put it, Dave, certain clouds are better for certain workloads, so just as certain storage or certain servers are better when it's on-prem, that doesn't surprise us, certain cloud vendors specialize in the apps. >> Right, so Eric, we know IBM and Cisco have had a very successful partnership with the VersaStack: in your data center, IBM storage, Cisco networking and servers. When I hear both IBM and Cisco talking about the message for hybrid and multicloud, they talk about the software solutions you have, the management in various pieces, and the integration that Cisco's doing. Help me understand where VersaStack fits into that broader message that you were just talking about. >> So we have VersaStack solutions built around primarily our FlashSystems, which use our Spectrum Virtualize software. Spectrum Virtualize not only supports IBM arrays, but over 500 other arrays that are not ours. But we also have a version of Spectrum Virtualize that will work with AWS and IBM Cloud and sits in a virtual machine at the cloud providers. So whether it be test and dev, whether it be migration, whether it be business continuity and disaster recovery, or whether it be what I'll call logical cloud air gapping, we can do that for ourselves, even when it's not a VersaStack, out to the cloud and back. And then we also have solutions in the VersaStack world that are built around our Spectrum Scale product for big data and AI. 
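The logical cloud air gapping Eric mentions, replicating to a cloud target that is only reachable during a short copy window, can be sketched in a few lines. This is a toy illustration of the concept, not how Spectrum Virtualize actually implements it; every class and function name here is invented:

```python
import time

class AirGapTarget:
    """A replication target that is attached only during a short
    copy window and then logically disconnected, so malware on the
    production side cannot reach previously replicated copies."""
    def __init__(self):
        self.connected = False
        self.snapshots = []

    def attach(self):
        self.connected = True

    def detach(self):
        self.connected = False

    def receive(self, snapshot):
        if not self.connected:
            raise RuntimeError("target is air-gapped")
        self.snapshots.append((time.time(), snapshot))

def replicate(volume_bytes, target):
    """One replication cycle: attach, copy, detach."""
    target.attach()
    try:
        target.receive(volume_bytes)
    finally:
        target.detach()  # close the window even if the copy fails

cloud = AirGapTarget()
replicate(b"nightly-snapshot", cloud)
print(len(cloud.snapshots), cloud.connected)  # 1 False
```

The point of the pattern is that between replication cycles the target refuses writes entirely, so an attacker who compromises production storage cannot also corrupt the older copies.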
So Spectrum Scale goes out and back to the cloud, Spectrum Virtualize, and those are embedded on the arrays that come in a VersaStack solution. >> I want to bring it back to cloud a little bit. We were talking about workloads and sort of what Furrier calls horses for courses. IBM has a public cloud, and I would put forth that your wheelhouse, IBM's wheelhouse for cloud workload is the hybrid mission-critical work that's being done on-prem today in the large IBM customer base, and to the extent that some of that work's going to move into the cloud. The logical place to put that is the IBM Cloud. Here's why. You could argue speeds and feeds and features and function all day long. The migration cost of moving data and workloads from wherever, on-prem into a cloud or from on-prem into another platform are onerous. Any CIO will tell you that. So to the extent that you can minimize those migration costs, the business case for, in IBM's case, for staying within that blue blanket, is going to be overwhelmingly positive relative to having to migrate. That's my premise. So I wonder if you could comment on that, and talk about, you know, what's happening in that hybrid world specifically with your cloud? >> Well, yeah, the key thing from our perspective is we are basically running block data or file data, and we just see ourselves sitting in IBM Cloud. So when you've got a FlashSystem product or you've got our Elastic Storage System 3000, when you're talking to the IBM Cloud, you think you're talking to another one of our boxes sitting on-prem. So what we do is make that transition completely seamless, and moving data back and forth is seamless, and that's because we take a version of our software and stick in a virtual machine running at the cloud provider, in this case IBM Cloud. So the movement of data back and forth, whether it be our FlashSystem product, even we have our DS8000 can do the same thing, is very easy for an IBM customer to move to an IBM Cloud. 
That said, just to make sure that we're covering, and in the year of multicloud, remember the IBM Cloud division just released the Multicloud Manager, you know, second half of last year, recognizing that while they want people to focus on the IBM Cloud, they're being realistic that they're going to have multiple cloud vendors. So we've followed that mantra too, and made sure that we've followed what they're doing. As they were going to multicloud, we made sure we were supporting other clouds besides them. But from IBM to IBM Cloud it's easy to do, it's easy to traverse, and basically, our software sits on the other side, and it basically is as if we're talking to an array on prem but we're really not, we're out in the cloud. We make it seamless. >> So testing my premise, I mean again, my argument is that the complexity of that migration is going to determine in part what cloud you should go to. If it's a simple migration, and it's better, and the customer decides okay it's better off on AWS, you as a storage supplier don't care. >> That is true. >> It's agnostic to you. IBM, as a supplier of multicloud management doesn't care. I'm sure you'd rather have it run on the IBM Cloud, but if the customer says, "No, we're going to run it "over here on Azure", you say, "Great. "We're going to help you manage that experience across clouds". >> Absolutely. So, as an IBM shareholder, we wanted to go to IBM Cloud. As a realist, with what CIOs say, which is I'm probably going to use multiple clouds, we want to make sure whatever cloud they pick, hopefully IBM first, but they're going to have a secondary cloud, we want to make sure we capture that footprint regardless, and that's what we've done. As I've said for years and years, a partial PO is better than no PO. 
So if they use our storage and go to a competitor of IBM Cloud, while I don't like that as a shareholder, it's still good for IBM, 'cause we're still getting money from the storage division, even though we're not working with IBM Cloud. So we make it as flexible as possible for the customer, The Multicloud Manager is about customer choice, which is leading with IBM Cloud, but if they want to use a, and again, I think it's a realization at IBM Corporate that no one's going to use just one cloud provider, and so we want to make sure we empower that. Leading with IBM Cloud first, always leading with IBM Cloud first, but we want to get all of their business, and that means, other areas, for example, the Red Hat team. Red Hat works with every cloud, right? And they don't really necessarily lead with IBM Cloud, but they work with IBM Cloud all right, but guess what, IBM gets the revenue no matter what. So I don't see it's like the old traditional component guy with an OEM deal, but it kind of sort of is. 'Cause we can make money no matter what, and that's good for the IBM Corporation, but we do always lead with IBM Cloud first but we work with everybody. >> Right, so Eric, we'd agree with your point that data is not just going to live one place. One area that there's huge opportunity that I'd love to get your comment here on is edge. So we talked about, you know, the data center, we talked about public cloud. Cisco's talking a lot about their edge strategy, and one of our questions is how will they enable their partners and help grow that ecosystem? So love to hear your thoughts on edge, and any synergies between what Cisco's doing and IBM in that standpoint. >> So the thing from an edge perspective for us, is built around our new Elastic Storage System 3000, which we announced in Q4. 
And while it's ideal for the typical big data and AI workloads, runs Spectrum Scale, we have many a customers with Scale that are exabytes in production, so we can go big, but we also go small. It's a compact 2U all-flash array, up to 400 terabytes, that can easily be deployed at a remote location, an oil well, right, or I should say, a platform, oil platform, could be deployed obviously if you think about what's going on in the building space or I should say the skyscraper space, they're all computerized now. So you'd have that as an edge processing box, whether that be for the heating systems, the security systems, we can do that at the edge, but because of Spectrum Scale you could also send it back to whatever their core is, whether that be their core data center or whether they're working with a cloud provider. So for us, the ideal solution for us, is built around the Elastic Storage System 3000. Self-contained, two rack U, all-flash, but with Spectrum Scale on it, versus what we normally sell with our all-flash arrays, which tends to be our Spectrum Virtualize for block. This is file-based, can do the analytics at the edge, and then move the data to whatever target they want. So the source would be the ESS 3000 at the edge box, doing processing at the edge, such as an oil platform or in, I don't know what really you call it, but, you know, the guys that own all the buildings, right, who have all this stuff computerized. So that's at the edge, and then wherever their core data center is, or their cloud partner they can go that way. So it's an ideal solution because you can go back and forth to the cloud or back to their core data center, but do it with a super-compact, very high performance analytics engine that can sit at the edge. >> You know, I want to talk a little bit about business. 
I remember seven years ago, we covered the z13 announcement on theCUBE, and I was talking to a practitioner at a very large bank, and I said, "You going to buy this thing?", this is the z13, you know, a couple of generations ago. He says, "Yeah, absolutely, I'll buy it sight unseen". I said, "Really, sight unseen?" He goes, "Yeah, no question. By going to the upgrade, I'm able to drive more transactions through my system in a certain amount of time. That's dropping revenue right to my bottom line. It's a no-brainer for me." So fast forward to the z15 announcement in September: in my breaking analysis, I said, "Look, IBM's going to have a great Q4 in systems", and the thing you did in storage is you synchronized, I don't know if it was by design or what, you synchronized the new DS8000 announcement with the z15, and I predicted at the time you're going to see an uptick in both the systems business, which we saw, huge, 63%, and the storage business, which grew I think three points as well. So I wonder if you can talk about that. Was that again by design, was there a little bit of luck involved, and you know, give us an update. 
So A, we did that of course with this launch, but we also made sure that on day one launch, we were part of the launch and truly integrated. Why IBM hadn't been doing for a while is kind of beyond me, especially with our market position. So it helped us with a great quarter, helped us in the field, now by the way, we did talk about other areas that grew publicly, so there were other areas, particularly all-flash. Now we do have an all-flash 8900 of course, and the high-end tape grew as well, but our overall all-flash, both at the high end, mid range and entry, all grew. So all-flash for us was a home run. Yeah, I would argue that, you know, on the Z side, it was grand slam home run, but it was a home run even for the entry flash, which did very, very well as well. So, you know, we're hitting the right wheelhouse on flash, we led with the DS8900 attached to the Z, but some of that also pulls through, you get the magic fairy dust stuff, well they have an all-flash array on the Z, 'cause last time we didn't have an all, we had all-flash or hybrids, before that was hybrid and hard drive. This time we just said, "Forget that hybrid stuff. "We're going all-flash." So this helps, if you will, the magic fairy dust across the entire portfolio, because of our power with the mainframe, and you know, even in fact the quarter before, our entry products, we announced six nines of availability on an array that could be as low cost as $US16,000 for RAID 5 all-flash array, and most guys don't offer six nines of availability at the system level, let alone we have 100% availability guaranteed. We do charge extra for that, but most people won't even offer that on entry product, we do. So that's helped overall, and then the Z was a great launch for us. >> Now you guys, you obviously can't give guidance, you have to be very careful about that, but I, as I say, predicted in September that you'd have a good quarter in systems and storage both. 
I'm on the record now I'm going to say that you're going to continue to see growth, particularly in the storage side, I would say systems as well. So I would look for that. The other thing I want to point out is, you guys, you sell a lot of storage, you sell a lot of storage that sometimes the analysts don't track. When you sell into cloud, for example, IBM Storage Cloud, I don't think you get credit for that, or maybe the services, the global services division. So there's a big chunk of revenue that you don't get credited for, that I just want to highlight. Is that accurate? >> Yeah, so think about it, IBM is a very diverse company, all kinds of acquisitions, tons of different divisions, which we document publicly, and, you know, we do it differently than if it was Zoggan Store. So if I were Zoggan Store, a standalone storage company, I'd get all credit for supporting services, there's all kinds of things I'd get credit for, but because of IBM's history of how the company grew and how company acquired, stuff that is storage that Ed Walsh, or GM, does own, it's somewhat dispersed, and so we don't always get credit on it publicly, but the number we do in storage is substantially larger than what we report, 'cause all we really report is our storage systems business. Even our storage software, which one of the analysts that does numbers has us as the number two storage software company, when we do our public stuff, we don't take credit for that. Now, luckily that analyst publishes a report on the numbers side, and we are shown to be the number two storage software company in the world, but when we do our financial reporting, that, because just the history of IBM, is spread out over other parts of the company, even though our guys do the work on the sales side, the marketing side, the development side, all under Ed Walsh, but you know, part of that's just the history of the company, and all the acquisitions over years and years, remember it's a 100-year-old company. 
So, you know, just we don't always get all the credit, but we do own it internally, and our teams take and manage most of what is storage in the minds of storage analysts like you guys, you know what storage is, most of that is us. >> I wanted to point that out because a lot of times, practitioners will look at the data, and they'll say, oh wow, the sales person of the competitor will come in and say, "Look at this, we're number one!" But you really got to dig in, ask the questions, and obviously make the decisions for yourself. Eric, great to see you. We're going to see you later on this week as well we're going to dig into cyber. Thanks so much for coming back. >> Great, well thank you, you guys do a great job and theCUBE is literally the best at getting IT information out, particularly all the shows you do all over the world, you guys are top notch. >> Thank you. All right, and thank you for watching everybody, we'll be back with our next guest right after this break. We're here at Cisco Live in Barcelona, Dave Vellante, Stu Miniman, John Furrier. We'll be right back.

Published Date : Jan 28 2020



Eric Herzog, IBM Storage | CUBE Conversation December 2019


 

(funky music) >> Hello and welcome to theCUBE Studios in Palo Alto, California for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host Peter Burris. Well, as I sit here in our CUBE studios, 2020's fast approaching, and every year as we turn the corner on a new year, we bring in some of our leading thought leaders to ask them what they see the coming year holding in the particular technology domain in which they work. And this one is no different. We've got a great CUBE guest, a frequent CUBE guest, Eric Herzog, the CMO and VP of Global Channels, IBM Storage, and Eric's here to talk about storage in 2020. Eric? >> Peter, thank you. Love being here at theCUBE. Great solutions. You guys do a great job on educating everyone in the marketplace. >> Well, thanks very much. But let's start really quickly, quick update on IBM Storage. >> Well, been a very good year for us. Lots of innovation. We've brought out a new Storwize family in the entry space. Brought out some great solutions for big data and AI solutions with our Elastic Storage System 3000. Support for backup in container environments. We've had persistent storage for containers, but now we can back it up with our award-winning Spectrum Protect and Protect Plus. We've got a great set of solutions for the hybrid multicloud world for big data and AI and the things you need to get cyber resiliency across your enterprise in your storage estate. >> All right, so let's talk about how folks are going to apply those technologies. You've heard me say this a lot. The difference between business and digital business is the role that data plays in a digital business. So let's start with data and work our way down into some of the trends. >> Okay. >> How are, in your conversations with customers, 'cause you talk to a lot of customers, is that notion of data as an asset starting to take hold? 
>> Most of our clients, whether it be big, medium, or small, and it doesn't matter where they are in the world, realize that data is their most valuable asset. Their customer database, their product databases, what they do for service and support. It doesn't matter what the industry is. Retail, manufacturing. Obviously we support a number of other IT players in the industry that leverage IBM technologies across the board, but they really know that data is the thing that they need to grow, they need to nurture, and they always need to make sure that data's protected or they could be out of business. >> All right, so let's now, starting with that point, in the tech industry, storage has always kind of been the thing you did after you did your server, after you did your network. But there's evidence that as data starts taking more center stage, more enterprises are starting to think more about the data services they need, and that points more directly to storage hardware, storage software. Let's start with that notion of the ascension of storage within the enterprise. >> So with data as their most valuable asset, what that means is storage is the critical foundation. As you know, if the storage makes a mistake, that data's gone. >> Right. >> If you have a malware or ransomware attack, guess what? Storage can help you recover. In fact, we even got some technology in our Spectrum Protect product that can detect anomalous activity and help the backup admin or the storage admins realize they're having a ransomware or malware attack, and then they could take the right corrective action. So storage is that foundation across all their applications, workloads, and use cases that optimizes it, and with data as the end result of those applications, workloads, and use cases, if the storage has a problem, the data has a problem. >> So let's talk about what you see as in that foundation some of the storage services we're going to be talking most about in 2020. 
Eric: So I think one of the big things is-- >> Oh, I'm sorry, data services that we're going to be talking most about in 2020. >> So I think one of the big things is the critical nature of the storage to help protect their data. When people think of cyber security and resiliency, they think about keeping the bad guy out and, since it's not an issue of if but when, chasing the bad guy down. But I've talked to CIOs and other executives. Sometimes they get the bad guy right away. Other times it takes them weeks. So you need storage with the right cyber resiliency, whether that be data-at-rest encryption, encrypting data when you send it out transparently to your hybrid multicloud environment, whether it be malware and ransomware detection, or things like air gap, whether it be air gap to tape or air gap to cloud. If you don't think about that as part of your overall security strategy, you're going to leave yourself vulnerable, and that data could be compromised and stolen. >> So I can almost say that in 2020, we're going to talk more about how the relationship between security and data and storage is going to evolve, almost to the point where security becomes a feature or an attribute of a storage or a data object. Have I got that right? >> Yeah, I mean, think of it as storage infused with cyber resiliency so that when an attack does happen, the storage helps you stay protected until you get the bad guy and track him down. And until you do, you want that storage to resist all attacks. You need that storage to be encrypted so they can't steal it. So that's a thing: when you look at an overarching security strategy, yes, you want to keep the bad guy out. Yes, you want to track the bad guy down. But when they get in, you'd better make sure that what's there is bolted to the wall. You know, it's the jewelry in the floor safe underneath the carpet. They don't even know it's there. 
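Eric's earlier point about Spectrum Protect flagging anomalous activity can be illustrated with a small sketch. This is not IBM's actual detection logic, just one common way to express the idea: a backup whose changed-data volume deviates far from its historical baseline is suspicious, because mass encryption by ransomware inflates the daily change rate.

```python
from statistics import mean, stdev

def is_anomalous(history_gb, todays_gb, threshold=3.0):
    """Flag a backup whose changed-data volume deviates sharply
    from its historical baseline (a crude ransomware indicator)."""
    if len(history_gb) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history_gb), stdev(history_gb)
    if sigma == 0:
        return todays_gb != mu
    return abs((todays_gb - mu) / sigma) > threshold

# Incrementals normally change ~5 GB/day; ransomware that encrypts
# everything shows up as a massive spike in changed data.
baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7]
print(is_anomalous(baseline, 5.1))    # False
print(is_anomalous(baseline, 480.0))  # True
```

A real product would track recency, file entropy, and per-client baselines, but the principle is the same: the backup tier sees every change, so it is a natural place to spot an attack in progress.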
So those are the types of things you need to rely on, and your storage can do almost all of that for you once the bad guy's there, till you get him. >> So the second thing I want to talk about along this vein: we've talked about the difference between hardware and software, software-defined storage, but still it ends up looking like a silo for most of the players out there. And I've talked to a number of CIOs who say, you know, buying a lot of these software-defined storage systems is just like buying not a piece of hardware, but a piece of software as a separate thing to manage. At what point do you think we're going to start talking about a set of technologies that are capable of spanning multiple vendors and delivering a broader, more generalized, but nonetheless high-function, highly secure storage infrastructure that brings with it software-defined, cloud-like capabilities? >> So what we see is the capability of, A, transparently traversing from on-prem to your hybrid multicloud seamlessly. It can't be hard to do; it's got to happen very easily. The cloud is a target, and by the way, most mid-size enterprises and up don't use one cloud, they use many, so you've got to be able to traverse those many and move data back and forth transparently. The second thing we see coming this year is taking the overcomplexity of multiple storage platforms coupled with hybrid cloud and merging them. So you could have an entry system, mid-range system, and high-end system traversing the cloud with a single API, a single data management platform, and performance and price points that vary depending on your application, workload, and use case. Obviously you use entry storage for certain things, high-end storage for other things. But if you could have one way to manage all that data, and by the way, for certain solutions, we've got this with one of our products called Spectrum Virtualize. 
We support enterprise-class data services, including moving the data out to the cloud, not only on IBM storage, but on over 450 other arrays which are not IBM-logoed. Now, that's taking that seamlessness of entry, mid-range, and on-prem enterprise, traversing it to the cloud, and doing it not only for IBM storage, but doing it for our competitors, quite honestly. >> Now, once you have that flexibility, it introduces a lot of conversations about how to match workloads to the right data technologies. How do you see workloads evolving, some of these data-first workloads, AI, ML, and how is that going to drive storage decisions in the next year, year and a half, do you think? >> Well, again, as we talked about already, storage is that critical foundation for all of your data needs. So depending on the data need, you've got multiple price points that we've talked about traversing out to the cloud. The second thing we see is there are different parameters that you can leverage. For example, AI, big data, and analytic workloads are very dependent on bandwidth. So if you can take a scalable infrastructure that scales to exabytes of capacity and can scale to terabytes per second of bandwidth across a giant global namespace, for example, we've got with our Spectrum Scale solutions and our Elastic Storage System 3000 the capability of racking and stacking two rack U at a time, growing the capacity seamlessly, growing the performance seamlessly, providing that high-performance bandwidth you need for AI, analytic, and big data workloads. And by the way, guess what, you can traverse it out to the cloud when you need to archive it. 
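The "single API, single data management platform" idea Eric describes, one set of calls fronting an entry array, a high-end array, or a cloud tier, can be sketched with a simple abstraction. The class names and methods below are invented for illustration; this is not the Spectrum Virtualize API:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One interface, many targets: an entry array, an enterprise
    array, or a public-cloud tier all sit behind the same calls."""
    @abstractmethod
    def put(self, volume: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, volume: str) -> bytes: ...

class ArrayBackend(StorageBackend):
    def __init__(self, name: str):
        self.name = name
        self._vols = {}
    def put(self, volume, data):
        self._vols[volume] = data
    def get(self, volume):
        return self._vols[volume]

class CloudBackend(ArrayBackend):
    """Same contract, different price/performance tier."""
    pass

def migrate(volume, src, dst):
    """Move a volume between tiers; the caller never cares which
    vendor or location sits behind each backend."""
    dst.put(volume, src.get(volume))

on_prem = ArrayBackend("flash-entry")
cloud = CloudBackend("cloud-archive")
on_prem.put("db-vol", b"transactions")
migrate("db-vol", on_prem, cloud)
print(cloud.get("db-vol"))  # b'transactions'
```

The design choice is the one Eric is selling: once every target honors the same contract, traversing from entry to high-end to cloud is a policy decision, not a migration project.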
So looking at AI as a major force in the coming, not just next year, but in the coming years to go, it's here to stay, and the characteristics that IBM sees that we've had in our Spectrum Scale products, we've had for years that have really come out of the supercomputing and the high-performance computing space, those are the similar characteristics to AI workloads, machine learning workloads, to the big data workloads and analytics. So we've got the right solution. In fact, the two largest supercomputers on this planet have almost an exabyte of IBM storage focused on AI, analytics, and big data. So that's what we see traversing everywhere. And by the way, we also see these AI workloads moving from just the big enterprise guys down into small shops, as well. So that's another trend you're going to see. The easier you make that storage foundation underneath your AI workloads, the easier it is for the big company, the mid-size company, the small company all to get into AI and get the value. The small companies have to compete with the big guys, so they need something, too, and we can provide that starting with a little simple two rack U unit and scaling up into exabyte-class capabilities. >> So all these new workloads and the simplicity of how you can apply them nonetheless is still driving questions about how the storage hierarchy's evolved. Now, this notion of the storage hierarchy's been around for, what, 40, 50 years, or something like that. >> Eric: Right. >> You know, tape and this and, but there's some new entrants here and there are some reasons why some of the old entrants are still going to be around. So I want to talk about two. How do you see tape evolving? Is that, is there still need for that? Let's start there. >> So we see tape as actually very valuable. We've had a real strong uptick the last couple years in tape consumption, and not just in the enterprise accounts. In fact, several of the largest cloud providers use IBM tape solutions.
So when you need to provide incredible amounts of data, you need to provide primary, secondary, and I'd say archive workloads, and you're looking at petabytes and petabytes and petabytes and exabytes and exabytes and exabytes and zettabytes and zettabytes, you've got to have a low-cost platform, and tape provides still by far the lowest cost platform. So tape is here to stay as one of those key media choices to help you keep your costs down yet easily go out to the cloud or easily pull data back. >> So tape still is a reasonable, in fact, a necessary entrant in that overall storage hierarchy. One of the new ones that we're starting to hear more about is storage-class memory, the idea of filling in that performance gap between external devices and memory itself so that we can have a persistent store that can service all the new kinds of parallelism that we're introducing into these systems. How do you see storage-class memory playing out in the next couple years? >> Well, we already publicly announced in 2019 that in 2020, in the first half, we'd be shipping storage-class memory. It would not only work in some coming systems that we're going to be announcing in the first half of the year, but they would also work on some of our older products such as the FlashSystem 9100 family; the Storwize V7000 gen three will be able to use storage-class memory, as well. So it is a way to also leverage AI-based tiering. So in the old days, flash would tier to disk. You've created a hybrid array. With storage-class memory, it'll be a different type of hybrid array in the future, storage-class memory actually tiering to flash. Now, obviously the storage-class memory is incredibly fast and flash is incredibly fast compared to disk, but it's all relative. In the old days, a hybrid array was faster than an all hard drive array, and that was flash and disk.
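The hot-data-up, cold-data-down movement described in this passage can be reduced to a toy placer: the busiest extents sit on the fast tier, everything else on the slower one. Real tiering engines use far richer heat statistics and account for migration cost; this sketch only shows the shape of the policy, and the extent model is invented.

```python
from collections import Counter

class TwoTierPlacer:
    """Toy hotness-based placement: the most-accessed extents live on the
    fast tier (storage-class memory), the rest on flash. The access counter
    and capacity model are deliberately simplistic."""
    def __init__(self, fast_slots):
        self.fast_slots = fast_slots  # how many extents the fast tier holds
        self.heat = Counter()

    def record_access(self, extent):
        self.heat[extent] += 1

    def placement(self):
        # Promote the hottest extents up to the fast tier's capacity.
        hot = {e for e, _ in self.heat.most_common(self.fast_slots)}
        return {e: ("scm" if e in hot else "flash") for e in self.heat}
```

The same skeleton describes the older flash-over-disk hybrids; only the labels on the two tiers change.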
Now you're going to see hybrid arrays that'll be storage-class memory and with our Easy Tier function, which is part of our Spectrum Virtualize software, we use AI-based tiering to automatically move the data back and forth when it's hot and when it's cool. Now, obviously flash is still fast, but if flash is that secondary medium in a configuration like that, it's going to be incredibly fast, but it's still going to be lower cost. The other thing is that in the early years, storage-class memory will be an expensive option from all vendors. It will, of course, over time get cheap, just the way flash did. >> Sure. >> Flash was way more expensive than hard drives. Over time it, you know, now it's basically the same price as what were the old 15,000 RPM hard drives, which have basically gone away. Storage-class over several years will do that, of course, as well, and by the way, it's very traditional in storage, as you, and I've been around so long and I've worked at hard drive companies in the old days. I remember when the fast hard drive was a 5400 RPM drive, then a 7200 RPM drive, then a 10,000 RPM drive. And if you think about it in the hard drive world, there was almost always two to three different spin speeds at different price points. You can do the same thing now with storage-class memory as your fastest tier, and now a still incredibly fast tier with flash. So it'll allow you to do that. And that will grow over time. It's going to be slow to start, but it'll continue to grow. We're there at IBM already publicly announcing. We'll have products in the first half of 2020 that will support storage-class memory. >> All right, so let's hit flash, because there's always been this concern about are we going to have enough flash capacity? You know, is enough going to, enough product going to come online, but also this notion that, you know, since everybody's getting flash from the same place, the flash, there's not going to be a lot of innovation.
There's not going to be a lot of differentiation in the flash drives. Now, how do you see that playing out? Is there still room for innovation on the actual drive itself or the actual module itself? >> So when you look at flash, that's where IBM has focused. We have focused on taking raw flash and creating our own flash modules. Yes, we can use industry standard solid state disks if you want to, but our flash core modules, which have been out since our FlashSystem product line, which is many years old. We just announced a new set in 2018 in the middle of the year that delivered in a four-node cluster up to 15 million IOPS with under 100 microseconds of latency by creating our own custom flash. At the same time when we launched that product, the FlashSystem 9100, we were able to launch it with NVMe technology built right in. So we were one of the first players to ship NVMe in a storage subsystem. By the way, we're end-to-end, so you can go Fibre Channel over fabric, InfiniBand over fabric, or Ethernet over fabric to NVMe all the way on the back side at the media level. But not only do we get that performance and that latency, we've also been able to put up to two petabytes in only two rack U. Two petabytes in two rack U. So incredible rack density. So those are the things you can do by innovating in a flash environment. So flash can continue to have innovation, and in fact, you should watch for some of the things we're going to be announcing in the first half of 2020 around our flash core modules and our FlashSystem technology. >> Well, I look forward to that conversation. But before you go here, I got one more question for you. >> Sure. >> Look, I've known you for a long time. You spend as much time with customers as anybody in this world. Every CIO I talk to says, "I want to talk to the guy who brings me "or the gal who brings me the great idea." You know, "I want those new ideas."
When Eric Herzog walks into their office, what's the good idea that you're bringing them, especially as it pertains to storage for the next year? >> So, actually, it's really a couple things. One, it's all about hybrid and multicloud. You need to seamlessly move data back and forth. It's got to be easy to do. Entry platform, mid-range, high-end, out to the cloud, back and forth, and you don't want to spend a lot of time doing it and you want it to be fully automated. >> So storage doesn't create any barriers. >> Storage is that foundation that goes on and off-prem and it supports multiple cloud vendors. >> Got it. >> Second thing is what we already talked about, which is because data is your most valuable asset, if you don't have cyber-resiliency on the storage side, you are leaving yourself exposed. Clearly big data and AI, and the other thing that's been a hot topic, which is related, by the way, to hybrid multiclouds, is the rise of the container space. For primary, for secondary, how do you integrate with Red Hat? What do you do to support containers in a Kubernetes environment? That's a critical thing. And we see the world in 2020 being trifold. You're still going to have applications that are bare metal, right on the server. You're going to have tons of applications that are virtualized, VMware, Hyper-V, KVM, OVM, all the virtualization layers. But you're going to start seeing the rise of the container admin. Containers are not just going to be the purview of the devops guy. We have customers that talk about doing 10,000, 20,000, 30,000 containers, just like they did when they first started going into the VM worlds, and now that they're going to do that, you're going to see customers that have bare metal, virtual machines, and containers, and guess what? 
They may start having to have container admins that focus on the administration of containers because when you start doing 30, 40, 50,000, you can't have the devops guy manage that 'cause you're deploying it all over the place. So we see containers. This is the year that containers start to go really big-time. And we're there already with our Red Hat support, what we do in Kubernetes environments. We provide primary storage support for persistency in containers, and we also, by the way, have the capability of backing that up. So we see containers really taking off in how it relates to your storage environment, which, by the way, often ties to how you configure hybrid multicloud configs. >> Excellent. Eric Herzog, CMO and vice president of partner strategies for IBM Storage. Once again, thanks for being on theCUBE. >> Thank you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (funky music)
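On the Kubernetes side, the persistent container storage discussed above is consumed through a PersistentVolumeClaim that a CSI driver satisfies. A minimal sketch of that manifest, built as a plain dictionary; the default storage-class name here is a placeholder, not any specific driver's class.

```python
def pvc_manifest(name, size_gi, storage_class="block-storage"):
    """Builds the dict form of a Kubernetes PersistentVolumeClaim.
    A CSI driver watching the named storage class would provision
    the backing volume when this claim is applied to a cluster."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }
```

At the scale of tens of thousands of containers mentioned above, claims like this, rather than array LUNs, become the unit a container admin manages.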

Published Date : Dec 29 2019

Eric Herzog, IBM Storage | VMworld 2019


 

>> Voiceover: Live from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE. Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Welcome back, everyone, CUBE's live coverage for VMworld 2019 in Moscone North, in San Francisco, California. I'm John Furrier with Dave Vellante. Dave, our 10 years, we have Eric Herzog, the CMO and vice president of Global Storage Channels at IBM. CUBE alum, this is his 11th appearance on theCUBE at VMworld. That's the number one position. >> Dave: It's just at VMworld. >> Congratulations, welcome back. >> Well, thank you very much. Always love to come to theCUBE. >> John: Sporting the nice shirt and the IBM badge, well done. >> Thank you, thank you. >> What's going on with IBM in VMworld? First, get the news out. What's happening for you guys here? >> So for us, we just had a big launch actually in July. That was all about big data, storage for big data and AI, and also storage for cyber-resiliency. So we just had a big launch in July, so we're just sort of continuing that momentum. We have some exciting things coming out on September 12th in the high end of our storage product line, and then some additional things very heavily around containers at the end of October. >> So OpenShift is the first question I have that pops into my head. You know, I think of IBM, I think of IBM Storage, I think of Red Hat, the acquisition, OpenShift's been very successful. Pat Gelsinger was talking containers, Kubernetes-- >> Eric: Right. >> OpenShift has been a big part of Red Hat's offering, now part of IBM. Has that Red Shift, I mean OpenShift's come in, to your world, and how do you guys view that? I mean, it's containers, obviously, is there any impact there at all? >> So from a storage perspective, no. IBM storage has been working with Red Hat for over 15 years, way before the company ever thought about buying them.
So we went to the old Red Hat Summits, it was two guys, a dog, and a note, and IBM was there. So we've been supporting Red Hat for years, and years, and years. So for the storage division, it's probably one of the least changes to the direction, compared to the rest of IBM 'cause we were already doing so much with Red Hat. >> You guys were present at the creation of the whole Red Hat movement. >> Yeah, I mean we were-- >> We've seen the summits, but I was kind of teeing up the question, but legitimately though, now that you have that relationship under your belt-- >> Eric: Right. >> And IBM's into creating OpenShift in all the services, you're starting to see Red Hat being an integral part across IBM-- >> Eric: Right. >> Does that impact you guys at all? >> So we've already talked about our support for Red Hat OpenShift. We do support it. We also support any sort of container environment. So we've made sure that if it's not OpenShift and someone's going to leverage something else, that our storage will work with it. We've had support for containers now for two and a half years. We also support the CSI Standard. We publicly announced that earlier in the year, that we'd be having products at the end of the year and into the next year around the CSI specification. So, we're working on that as well. And then, IBM also came out with a thing called the Cloud Paks. These Cloud Paks are built around Red Hat. These are add-ons that span multiple divisions, and from that perspective, we're positioned as, you know, really that ideal rock solid foundation underneath any of those Cloud Paks with our support for Red Hat and the container world. >> How about protecting containers? I mean, you guys obviously have a lot of history in data protection of containers. They're more complicated. There's lots of them. You spin 'em up, spin 'em down. If they don't spin 'em down, they're an attack point. What are your thoughts on that?
>> Well, first thing I'd say is stay tuned for the 22nd of October 'cause we will be doing a big announcement around what we're doing for modern data protection in the container space. We've already publicly stated we would be doing stuff. Right, already said we'd be having stuff either the end of this year in Q4 or in Q1. So, we'll be doing actually our formal launch on the 22nd of October from Prague. And we'll be talking much more detail about what we're doing for modern data protection in the container space. >> Now, why Prague? What's your thinking? >> Oh, IBM has a big event called TechU, it's a Technical University, and there'll be about 2,000 people there. So, we'll be doing our launch as part of the TechU process. So, Ed Walsh, who you both know well and myself will be doing a joint keynote at that event on the 22nd. >> So, talk a little bit more about multi-cloud. You hear all kinds of stuff on multi-cloud here, and we've been talkin' on theCUBE for a while. It's like you got IBM Red Hat, you got Google, CISCO's throwin' a hat in the ring. Obviously, VMware has designs on it. You guys are an arms dealer, but of course, you're, at the same time, IBM. IBM just bought Red Hat so what are your thoughts on multi-cloud? First, how real is it? Sizeable opportunity, and from a storage perspective, storage divisions perspective, what's your strategy there? >> Well, from our strategy, we've already been takin' hybrid multi-cloud for several years. In fact, we came to Wikibon, your sister entity, and actually, Ed and I did a presentation to you in July of 2017. I looked it up, the title says hybrid multi-cloud. (Dave laughs) Storage for hybrid multi-cloud. So, before IBM started talkin' about it, as a company, which now is, of course, our official line hybrid multi-cloud, the IBM storage division was supporting that. So, we've been supporting all sorts of cloud now for several years. 
What we have is called transparent cloud tiering, where we basically just see cloud as a tier. Just the way flash would see hard drive or tape as a tier, we now see cloud as a tier, and our Spectrum Virtualize for cloud sits in a VM either in Amazon or in IBM Cloud, and then, several of our software products in the Spectrum line, Spectrum Protect, Spectrum Scale, are available on the AWS Marketplace as well as the IBM Cloud Marketplace. So, for us, we see multi-cloud from a software perspective where the cloud providers offer it on their marketplaces, our solutions, and we have several, got some stuff with Google as well. So, we don't really care what cloud, and it's all about choice, and customers are going to make that choice. There's been surveys done. You know, you guys have talked about it that certainly in the enterprise space, you're not going to use one cloud. You use multiple clouds, three, four, five, seven, so we're not going to care what cloud you use, whether it be the big four, right? Google, IBM, Amazon, or Azure. Could it be NTT in Japan? We have over 400 small and medium cloud providers that use our Spectrum Protect as the engine for their backup as a service. We love all 400 of them. By the way, there's another 400 we'd like to start selling Spectrum Protect as a service. So, from our perspective, we will work with any cloud provider, big, medium, and small, and believe that that's where the end users are going is to use not just one cloud provider but several. So, we want to be the storage connected. >> That's a good bet, and again, you bring up a good point, which I'll just highlight for everyone watching, you guys have made really good bets early, kind of like we were just talking to Pat Gelsinger. He was making some great bets. You guys have made some, the right calls on a lot of things.
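Transparent cloud tiering, as described above, boils down to an age-based policy: data untouched for some window becomes a candidate to migrate to the cloud tier. A sketch of that policy, with an assumed 90-day window rather than any product default:

```python
from datetime import datetime, timedelta

def cloud_tier_candidates(last_access_by_object, now, cold_after_days=90):
    """Return the objects that have gone cold and are therefore
    candidates to move to the cloud tier. Keys are object names,
    values are last-access timestamps."""
    cutoff = now - timedelta(days=cold_after_days)
    return sorted(name for name, last_access in last_access_by_object.items()
                  if last_access < cutoff)
```

The "transparent" part is everything this sketch leaves out: recalling the data automatically on access so applications never see which tier it lives on.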
Sometimes, you know, Dave's critical of things in there that I don't really have visibility into, storage analyst that he is, but generally speaking, you, Red Hat, software, the systems group made it software. How would you describe the benefits of those bets paying off today for customers? You mentioned versatility, all these different partners. Why is IBM relevant now, and from those bets that you've made, what's the benefit to the customers? How would you talk about that? Because it's kind of a big message. You got a lot going on at IBM Storage, but you've made some good bets that turned out to be on the right side of tech history. What are those bets? And what are they materializing into? >> Sure, well, the key thing is you know I always wear a Hawaiian shirt on theCUBE. I think once maybe I haven't. >> You were forced to wear a white shirt. You were forced to wear the-- >> Yes, an IBM white shirt, and once, I actually had a shirt from when I used to work for Pat at the EMC, but in general, Hawaiian shirt, and why? Because you don't fight the wave, you ride the wave, and we've been riding the wave of technology. First, it was all about AI and automation inside of storage. Our Easy Tier product automatically tiers. You don't have, all you do is set it up once, and after that, it automatically moves data back and forth, not only to our arrays, but over 450 arrays that aren't ours, and the data that's hottest goes to the fastest tier. If you have 15,000 RPM drives, that's your fastest, it automatically knows that and moves data back and forth between hot, fast, and cold. So, one was putting AI and automation in storage. Second wave we've been following was clearly Flash. It's all about Flash. We create our own Flash, we buy raw Flash, create our own modules. They are in the industry standard form factor, but we do things, for example, like embed encryption with no performance hit into the Flash.
Latency as low as 20 microseconds, things that we can do because we take the Flash and customize it, although it is in industry standard form factor. The other one is clearly storage software and software-defined storage. All of our arrays come with software. We don't sell hardware. We sell a storage solution. They either come with Spectrum Virtualize or Spectrum Scale, but those packages are also available stand-alone. If you want to go to your reseller or your distributor and buy off-the-shelf white-box componentry, storage-rich servers, you can create your own array with Spectrum Virtualize for block, Spectrum Scale for File, IBM Object Storage for Cloud. So, if someone wants to buy software only, just the way Pat was talking about software-defined networking, we'll sell 'em software for file, block, or object, and they don't buy any infrastructure from us. They only buy the software, so-- >> So, is that why you have a large customer base? Is that why there's such a diverse set of implementations? >> Well, we've got our customers that are system-oriented, right, so they have FlashSystem. Got other customers that say, "Look, I just want to buy Spectrum Scale. "I don't want to buy your infrastructure. "Just I'll build my own," and we're fine with that. And the other aspect we have, of course, is we've got the modern data protection with Spectrum Protect. So, you've got a lot of vendors out on the floor. They only sell backup. That's all they sell, and you got other people on the floor, they only sell an array. They have nice little arrays, but they can't do an array and software-defined storage and modern data protection with one throat to choke, one tech support entity to deal with, one set of business partners to deal with, and we can do that, which is why it's so diverse. We have people who don't have any of IBM storage at all, but they back up everything with Spectrum Protect.
We have other customers who have FlashSystems, but they use backup from one of our competitors, and that's okay 'cause we'll always get a PO one way or another, right? >> So, you want the choice factor. >> Right. >> Question on the ecosystem and your relationship with VMware. As John said, 10th year at VMworld, if you go back 10 years, storage, VMware storage was limited. They had very few resources. They were throwin' out APIs to the storage industry and sayin' here, you guys, fix this problem, and you had this cartel, you know, it was EMC, IBM was certainly in there, and NetApp, a couple others, HPE, HP at the time, Dell, I don't know, I'm not sure if Dell was there. They probably were, but you had the big Cos that actually got the SDK early, and then, you'd go off and try to solve all the storage problems. Of course, EMC at the time was sort of puttin' the brakes on VMware. Now, it's totally different. You've got, actually similar cartel. Although, you've got different ownership structure with Dell, EMC, and you got (mumbles) VMware's doin' its own software finally. The cuffs are off. So, your thoughts on the changes that have gone on in the ecosystem. IBM's sort of position and your relationship with VMware, how that's evolved. >> So, the relationship for us is very tight. Whether it be the old days of VASA, VAAI, vCenter Ops support, right, then-- >> Dave: vVols, yeah yeah. >> Now, vVols 2, so we've been there every single time, and again, we don't fight the wave, we ride the wave. Virtualization's a wave. It's swept the industry. It swept the end users. It's swept every aspect of compute. We just were riding that wave and making sure our storage always worked with it with VMware, as well as other hypervisors as well, but we always supported VMware first.
VMware also has a strong relationship with the cloud division, as you know, they've now solved all kinds of different things with IBM Cloud so we're making sure that we stay there with them and are always up front and center. We are riding all the waves that they start. We're not fighting it. We ride it. >> You got the Hawaiian shirt. You're riding the waves. You're hanging 10, as you used to say. Toes on the nose, as the expression goes. As Pat Gelsinger says, ride the new wave, you're a driftwood. Eric, great to see you, CMO of IBM Storage, great to have you all these years and interviewing you, and gettin' the knowledge. You're a walking storage encyclopedia, Wikipedia, thanks for comin' on. >> Great, thank you. >> All right, it's more CUBE coverage here live in San Francisco. I'm John Furrier for Dave Vellante, stay with us. I got Sanjay Poonen coming up, and we have all the big executives who run the different divisions. We're going to dig into them. We're going to get the data, share with you. We'll be right back. (upbeat music)

Published Date : Aug 27 2019

Eric Herzog, IBM | CUBEConversation, March 2019


 

(upbeat music) [Announcer] From our studios in the heart of Silicon Valley Palo Alto, California. This is a CUBE conversation. >> Hi, I'm Peter Burris, and welcome to another CUBE conversation from our studios in beautiful Palo Alto, California. One of the biggest challenges that every user faces is how are they going to arrange their resources that are responsible for storing, managing, delivering, and protecting data. And that's a significant challenge, but it gets even worse when we start talking about multi-cloud. So, today we've got Eric Herzog who's the CMO and VP of Worldwide Storage Channels at IBM Storage to talk a bit about the evolving relationship of what constitutes a modern, comprehensive storage portfolio and multi-cloud. Eric, welcome to theCUBE. >> Peter, thank you, thank you. >> So, start off, what's happening with IBM Storage these days, and let's get into this kind of how multi-cloud is affecting some of your decisions, and some of your customer's decisions. >> So, what we've done, is we've started talking about multi-cloud over two years ago. When Ed Walsh joined the company as a general manager, we went on an analyst roadshow, in fact, we came here to theCUBE and shot a video, and we talked about how the IBM Storage Division is all about multi-cloud. And we look about that in three ways. First of all, if you are creating a private cloud, we work with you. From a container, whether you're VMware based, whether you are doing a more traditional private cloud. Now the modern private cloud, all container based. Second is Hybrid Cloud, data on-prem, out to a public cloud provider. And the third aspect, and in fact, you guys have written about it in one of your studies, is that no one is going to use one public cloud provider, they're going to use multiple cloud providers.
So whether that be IBM Cloud, which of course we love because we're IBM shareholders, but we work with Amazon, we work with Google, and in fact we work with any cloud provider. Our Spectrum Protect backup product, which is one of the most awarded enterprise backup packages, can back up to any cloud. In fact, over 350 small to medium cloud providers, the engine for their backup as a service, is Spectrum Protect. Again, completely heterogeneous, we don't care what cloud you use, we support everyone. And we started that mantra two and a half years ago, when Ed first joined the company. >> Now, I remember when you came on, we talked a lot about this notion of data first and the idea that data driven was what we talked about >> Right, data driven. >> And increasingly, we talked about, or we made the observation that enterprises were going to take a look at the natural arrangement of their data, and that was going to influence a lot of their cloud, a lot of their architecture, and certainly a lot of their storage divisions or decisions. How is that playing out? Is that still obtaining? Are you still seeing more enterprises taking this kind of data driven approach to thinking about their overall cloud architectures? >> Well the world is absolutely data-centric. Where does the data go? What are security issues with that data? How is it close to the compute when I need it? How do I archive it, how do I back it up? How do I protect it? We're here in Silicon Valley. I'm a native Palo Altan, by the way, and we really do have earthquakes here, and they really do have earthquakes in Japan and China and there are all kinds of natural disasters. And of course as you guys have pointed out, as have almost all of the analysts, the number one cause of data loss besides humans is actually still fire. Even with fire suppressant data centers. >> And we have fires out here in Northern California too. >> That's true.
So, you've got to make sure that you're backing up that data, you're archiving the data. Cloud could be part of that strategy. When does it need to be on-prem, when does it need to be off-prem? So, it's all about being data-driven, and companies look at the data and profile it: What sort of storage do I need? Can I go high end, mid-range or entry? Profile that data, figure out what they need to do. And then do the same thing now with on-prem and off-prem. For certain data sets, for security reasons, legal reasons, you probably are not going to put it out into a public cloud provider. But other data sets are ideal for that, and so all of those decisions are being driven by: What's the security of the data? What's the legality of that data? What's the performance I need of that data? And how often do I need the data? If you're going to constantly go back and forth, pull data back in, going to a public cloud provider, which charges both for data in and data out, may actually cost more than buying an Array on-prem. And so everyone's using that data-centricity to figure out how they spend their money, and how they optimize the data to use it in their applications, workloads and use cases. >> So, if you think about it, the reality is by application, workload, location, regulatory issues, we're seeing enterprises start to recognize an increasing specialization of their data assets. And that's going to lead to a degree of specialization in the classes of data management and storage technologies that they utilize. Now, what is the challenge of choosing a specific solution versus looking at more of a portfolio of solutions that perhaps provides a little bit more commonality? How is the IBM customer base dealing with that question? >> Well, for us the good thing was to have a broad portfolio. When you look at the base storage Arrays, we have file, block and object, and they're all award winning.
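To put rough numbers on that in-and-out charge point: here is a back-of-the-envelope sketch of when heavy egress makes the public cloud the more expensive home for a data set. All per-TB prices below are hypothetical placeholders, not IBM's or any cloud provider's actual rates.

```python
def monthly_cloud_cost(stored_tb, egress_tb, store_rate=23.0, egress_rate=90.0):
    """Monthly cloud bill: a capacity charge plus a per-TB egress charge."""
    return stored_tb * store_rate + egress_tb * egress_rate

def monthly_onprem_cost(stored_tb, array_cost_per_tb=600.0, months=36):
    """Amortize a hypothetical on-prem array purchase over its service life."""
    return stored_tb * array_cost_per_tb / months

# A 100 TB data set that gets pulled back and forth heavily each month:
cloud = monthly_cloud_cost(100, egress_tb=50)
onprem = monthly_onprem_cost(100)
print(f"cloud: ${cloud:,.0f}/mo, on-prem: ${onprem:,.0f}/mo")
```

With these invented rates the constant back-and-forth makes the cloud several times more expensive per month than the amortized array, which is exactly the trade-off being described.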
We can go big, we can go medium, and we can go small. And because of what we do with our Array family, we have products that tend to be expensive because of what they do, products that are mid-price, and products that are perfect for Herzog's Bar and Grill, or maybe for 5,000 different bank branches, 'cause that bank is not going to buy expensive storage for every branch. They have a small Array there in case core goes down, of course. When you or I go in to get a check or transact, if the core data center is down, that Wells Fargo, BofA, Bank of Tokyo. >> Still has to do business. >> They are all transacting. There's a small Array there. Well, you don't want to spend a lot of money for that; you need a good, reliable all-flash Array with the right RAS capability, right? The availability capability, that's what you need, and we can do that. The other thing we do is, we have very much cloud-ified everything we do. We can tier to the cloud, we can backup to the cloud. With object storage we can place it in the cloud. So we've made the cloud, if you will, a seamless tier to the storage infrastructure for our customers, whether that be backup data, archive data, or primary data, and made it so it's very easy to do. Remember, with that downturn in '08 and '09 a lot of storage people left their jobs. And while IT headcount is back up to where it used to be, in fact it's actually exceeded it, if there were 50 storage guys at Company X and they had to let go 25 of them, they didn't hire 25 storage guys back, but they got 10 times the data. So they probably have 2 more storage guys, they went from 25 to 27, except they're managing 10 times the data. So automation, seamless integration with clouds, and being multi-cloud, supporting hybrid clouds, is a critical thing in today's storage world.
But you've also talked about some of the innovative things that are happening, security, encryption, evolved backup and restore capabilities, AI and how that's going to play. What are some of the key things that your customer base is asking for that are really driving some of your portfolio decisions? >> Sure, well when we look beyond making sure we integrate with every cloud and make it seamless, the other aspect is AI. AI has taken off: machine learning, big data, all of those. And there it's all about having the right platform from an Array perspective, but then marrying it with the right software. So for example, our scale-out file system, Spectrum Scale, can go to exabyte class; in fact the two fastest supercomputers on this planet have almost half an exabyte of IBM Spectrum Scale for big data, analytics, and machine learning workloads. At the same time you need to have Object Store. If you're generating that huge amount of data in an AI world, you want to be able to put it out. We also now have Spectrum Discover, which allows you to use metadata, which is the data about the data, and allows an AI app, a machine learning app, or an analytics app to actually access the metadata through an API. So that's one area: cloud, then AI, is a very important aspect. And of course, cyber resiliency and cyber security are critical. Everyone thinks, I've got to call a security company, so the IBM Security Division, RSA, Check Point, Symantec, McAfee, all of these things. But the reality is, as you guys have noted, 98% of all enterprises are going to get broken into. So while they're in your house, they can steal you blind. Before the cops show up, like the old movie, what are they doing? They're loading up the truck before the cops show up. Well guess what, what if that happened, the cops didn't show up for 20 minutes, but they couldn't steal anything, or the TV was tied to your fingerprint?
So guess what, they couldn't use the TV, so they couldn't steal it; that's what we've done. So, whether it be encryption everywhere, we can encrypt backup sets, we can encrypt data at rest, we can even encrypt Arrays that aren't ours with our Spectrum Virtualize family. Air gapping, so that if you have ransomware or malware you can air-gap to tape. We've also created air gapping out with a cloud snapshot. We have a product called Safeguard Copy which creates what I'll call a faux air gap in the mainframe space, but allows that protection, so it's almost as if it was air gapped even though it's on an Array. So that's ransomware and malware, being able to detect that: our backup products, when they see unusual activity, will flag the backup or restore job and say there is unusual activity. Why? Because ransomware and malware generate unusual activity on backup data sets in particular, so it gets flagged. Now we don't go out and say, "By the way, that's Herzog ransomware, or Peter Burris ransomware." But we do say, "Something is wrong, you need to take a look." So, integrating that sort of cyber resiliency and cyber security into the entire storage portfolio doesn't mean we solve everything. Which is why, when you get an overall security strategy, you've got that Great Wall of China to keep the enemy out, you've got what I call chase software to get the bad guy once he's in the house, the cops that are coming to get the bad guy. But you've also got to be able to lock everything down. So a comprehensive security and resiliency strategy involves not only your security vendor, but actually your storage vendor. And IBM's got the right cyber resiliency and security technology on the storage side to marry up, regardless of which security vendor they choose.
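The "unusual activity" flag described here is, at heart, an anomaly check on backup telemetry: ransomware encrypting files in place makes nearly every block look changed, so the incremental backup suddenly balloons. A minimal sketch of the idea, assuming a simple standard-deviation rule rather than whatever IBM's products actually use:

```python
from statistics import mean, stdev

def flag_unusual(history_gb, latest_gb, threshold=3.0):
    """Flag a backup job whose changed-data volume deviates from its
    own history by more than `threshold` standard deviations."""
    mu, sigma = mean(history_gb), stdev(history_gb)
    if sigma == 0:
        return latest_gb != mu
    return abs(latest_gb - mu) > threshold * sigma

nightly = [12.0, 14.5, 11.8, 13.2, 12.9]   # normal incremental sizes, GB
print(flag_unusual(nightly, 13.0))   # an ordinary night
print(flag_unusual(nightly, 480.0))  # nearly everything "changed" overnight
```

As in the interview, the check doesn't name the ransomware; it only says "something is wrong, take a look," and a human or a richer detection layer investigates from there.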
So, you mentioned, you know, encryption end to end, you mentioned being able to look at metadata for AI applications. As we move to a software-driven world of storage where physical volumes can be made more virtual, so you can move them around to different workloads. >> Right. >> And associate the data more easily, tell us a little bit about how data movement becomes an issue in the storage world, because storage has always been associated with "it's here." But increasingly, because of automation, because of AI, because of what businesses are trying to do, it's becoming more associated with intelligent, smart, secure, optimized movement of data. How is that starting to impact the portfolio? >> So we look at that really as data mobility. And data mobility can be a number of different things. For example, we already mentioned we treat clouds as transparent tiers. We can backup to cloud, that's data mobility. We also tier data. We can tier data within an Array, or with the Spectrum Virtualize product we can tier block data across 450 Arrays, most of which aren't IBM logo'd. We can tier from IBM to EMC, EMC can then tier to HDS, HDS can tier to Hitachi, and we do that on Arrays that aren't ours. So in that case what you're doing is looking for the optimal price point, whether it be- >> And feature set. >> And feature sets, and you move data around, all transparently, so it's all got to be automated. That's another thing: in the old days we thought we had Nirvana when the tiering automatically moved the data when it was 30 days old. What if we automatically move data with our Easy Tier technology through AI: when the data is hot it moves it to the hottest tier, when the data is cold it puts it out to the lowest cost tier. That's real automation leveraging AI technology. Same thing, something simple: migration. How much money have all the storage companies made on migration services?
What if you could do transparent block migration in the background, on the fly, without ever taking your servers down? We can do that. And what we do is so intelligent that we always favor the data set, so when the data is being worked on, migration slows down. When the data set slows down, guess what? Migration picks up. But the point is data mobility, in this case from an old Array to a new Array. So whether it be migrating data, whether it be tiering data, whether you're moving data out to the cloud, whether it be primary data or backup data, or object data for archive, the bottom line is we've infused not only the cloudification of our storage portfolio, but the mobility aspects of the portfolio, which does of course include cloud. But most tiering is on premise. You could tier to the cloud, but all-flash Array to a cheap 7200 RPM Array, you save a lot of money, and we can do that using AI technology with Easy Tier. All examples of moving data around transparently, quickly, efficiently, to save cost both in CapEx, using 7200 RPM Arrays of course to cut costs, but also OpEx: the storage admin. There aren't a hundred storage admins at Burris Incorporated. You had to let them go, you've hired 100 of the people back, but you hired them all for DevOps, so you have 50 guys in storage. >> Actually there are, but I'm a lousy businessman so I'm not going to be in business long. (laughing) One more question, Eric. I mean, look, you're an old-style road warrior, you're out with customers a lot. Increasingly, and I know this because we've talked about it, you're finding yourself trying to explain to business people, not just IT people, how digital business, data and storage come together. When you're having these conversations with executives on the business side, how does this notion of data services get discussed? What are some of the conversations like? >> Well I think the key thing you've got to point out is storage guys love to talk speeds and feeds.
I'm so old I can still talk TPI and BPI on hard drives and no one does that anymore, right? But, when you're talking to the CEO or the CFO or the business owner, it's all about delivering data at the right performance level you need for your applications, workloads and use cases, your right resiliency for applications, workloads and use cases, your right availability, so it's all about application, workloads, and use cases. So you don't talk about storage speeds and feeds that you would with Storage Admin, or maybe in the VP of infrastructure in the Fortune 500, you'd talk about it's all about the data, keeping the data secure, keeping the data reliable, keeping it at right performance. So if it's on the type of workload that needs performance, for example, let's take the easy one, Flash. Why do I need Flash? Well, Mr. CEO, do you use logistics? Of course we do! Who do you use, SAP. Oh, how long does that logistics workload take? Oh, it takes like 24 hours to run. What if I told you you could run that every night, in an hour? That's the power of Flash. So you translate what you and I are used to, storage nerdiness, we translate it into businessfied, in this case, running that SAP workload in an hour vs. 24 has a real business impact. And that's the way you got to talk about storage these days. When you're out talking to a storage admin, with the admin, yes, you want to talk latency and IOPS and bandwidth. But the CEO is just going to turn his nose up. But when you say I can run the MongoDB workload, or I can do this or do that, and I can do it. What was 24 hours in an hour, or half an hour. That translates to real data, and real value out of that data. And that's what they're looking for, is how to extract value from the data. If the data isn't performant, you get less value. If the data isn't there, you clearly have no value. And if the data isn't available enough so that it's down part time, if you are doing truly digital business. 
So, if Herzog's Bar and Grill, actually everything is done digitally, so before you get that pizza, or before you get that cigar, you have to order it online. If my website, which has a database underneath, of course, so I can handle the transactions right, I got to take the credit card, I got to get the orders right. If that is down half the time, my business is down, and that's an example of taking IT and translating it to something as simple as a Bar and Grill. And everyone is doing it these days. So when you talk about, do you want that website up all the time? Do you need your order entry system up all the time? Do you need your this or that? Then they actually get it, and then obviously, making sure that the applications run quickly, swiftly, and smoothly. And storage is, if you will, that critical foundation underneath everything. It's not the fancy windows, it's not the fancy paint. But if that foundation isn't right, what happens? The whole building falls down. And that's exactly what storage delivers regardless of the application workload. That right critical foundation of performance, availability, reliability. That's what they need, when you have that all applications run better, and your business runs better. >> Yeah, and the one thing I'd add to that, Eric, is increasingly the conversations that we're having is options. And one of the advantages of a large portfolio or a platform approach is that the things you're doing today, you'll discover new things that you didn't anticipate, and you want the option to be able to do them quickly. >> Absolutely. >> Very, very important thing. So, applications, workload, use cases, multi-cloud storage portfolio. Eric, thanks again for coming on theCUBE, always love having you. >> Great, thank you. >> And once again, I'm Peter Burris, talking with Eric Herzog, CMO, VP of Worldwide Storage Channels at IBM Storage. Thanks again for watching this CUBE conversation, until next time. (upbeat music)

Published Date : Mar 22 2019



Eric Herzog, IBM & Sam Werner, IBM | IBM Think 2019


 

>> Live from San Francisco, it's theCUBE covering IBM Think 2019. Brought to you by IBM. >> Welcome back, we're here at Moscone North. You're watching theCUBE, the leader in live tech coverage. This is day four of our wall-to-wall coverage of IBM Think, the second annual IBM Think, first year at Moscone. Dave Vellante here with Stu Miniman. Eric Herzog is here, he's the CMO of IBM Storage, and Sam Werner is the VP of Offering Management for Storage Software at IBM. Guys, welcome back to theCUBE. Always good to see ya both. >> Thanks >> Thank you. >> So we were joking yesterday and today, of course multi cloud, the clouds opened, it's been raining, it's been sunny today, so multi cloud is all the rage. Evidently you guys have done some work in multi cloud. Some research that you can share with us. >> Yeah, so a couple things. First of all, the storage vision at IBM has been multi cloud for years. We work with all the cloud providers including IBM Cloud, but we work with Amazon and we work with Azure, we work with Google Cloud, and in fact our Spectrum Protect modern data protection product has about 350 small and medium cloud providers across the world that use it as the engine for their backup as a service. So we've been doing that for a long time, but I think what you're getting at is what we found in a survey on multi cloud. I actually had a panel yesterday, and all three of my panelists, including Aetna, use a minimum of five different public cloud providers. So what we're seeing is hybrid is a subset of that, right? On and off, but even if someone is saying, I'm using cloud providers, they're using between five and 10, not counting software as a service, because many of the people in the survey didn't realize software as a service is theoretically a type of cloud deployment, right? >> So that's obviously not just the big three or the big five, we're talking about a lot of small guys.
Some of the guys maybe you could have used in your Spectrum Protect for backup, local cloud providers, right? And then add SaaS to that, you could probably double or triple it, right? >> Right, well we have been very successful with SaaS providers. So for example, one of the people on the panel, a company called Follett, they're privately held, close to a billion dollars, and they provide services to universities and school districts. They have a software package for universities for the bookstores to manage the textbooks, and another software as a service for school districts across the United States. They have 1,500 and it's all software as a service. No on-prem licensing, and that's an example. In my mind, that's a cloud deployment, right? >> Ginni talked Tuesday about chapter two, how chapter one was kind of, I call it, commodity cloud, but you know, apps that are customer facing. Chapter two, a lot of chapter two anyway, is going to be about hybrid and multi cloud. I feel like to date it's largely been not necessarily a purposeful strategy to go multi cloud, it's just we're multi vendor. Do you see customers actually starting to think about a multi cloud strategy? If so, what's behind that, and then more specifically, what are you guys doing from a software standpoint to support that? >> Yeah, so in the storage space where we are, we find customers are now trying to come up with a data management strategy in a multi cloud model, especially as they want to bring all their data together to come up with insights. So as they start wanting to build an AI strategy and extend what they're doing with analytics and try to figure out how to get value out of the data, they're building a model that's able to consolidate the data, allow them to ingest it, and then actually build out AI models that can gain insights from it. So for our software portfolio, we're working with the different types of service providers.
We're working closely with all the big cloud providers and getting our software out there, and giving our customers flexible ways to move and manage their data between the clouds, and also have clear visibility into all the data so they can bring it together. >> You know, I wonder sort of what the catalyst is there? I wrote an article that's going up on SiliconANGLE later, and I talked about how the first phase was kind of tire kicking of cloud, and then when the downturn hit, people went from capex to opex. It was sort of a CFO mandate, and then coming out of the downturn, the lines of business were like, whoa, agility, I love this. So shadow IT, and then IT sort of bought in and said, "We've got to clean up this mess." And that seems to be why, at least one catalyst, for companies saying, "Hey, we want a single data management strategy." Are you seeing that, or is there more to it? >> Well I think first of all, we're absolutely seeing it, and there are a lot of drivers behind it. There's absolutely IT realizing they need to get control over this again. >> Governance, compliance, security, edix >> And think about all the new regulations. GDPR's had a huge impact. All of a sudden, these IT organizations need to really track the data and be able to take action on it, and now you have all these new roles in organizations, like data scientists who want to get their hands on data. How do you make sure that you have governance models around that data to ensure you're not handing them things like PII? So they realized very quickly that they need to have much better control. The other thing you've seen is the rise of the vulnerabilities. You see much more public attacks on data. You've seen C-level executives lose their jobs over this. So there's a lot more stress about how we're keeping all this data safe. >> You're right. Boards are gettin' flipped and it's a big, big risk these days. >> Well, the other thing you're seeing is legal issues. Canada, the data has to stay in Canada.
So if you're multi national, say you're a Japanese company, for all your Canadian offices the data has to be in some cloud that's got a data center in Canada. So if you're a Japanese-headquartered company using NTT's cloud, then you've got to use IBM or Amazon or Azure, 'cause you have to have a data center inside the country just to have the cloud data. You also have sheer maturity in the market. I would argue the cloud used to be called the web, and before it was the web, it was called the internet. And so now, what happens in the bigger companies is procurement is involved, just the way they've been involved in storage, servers and networking for a long time. Great, you're using Cisco for the network, but did you get a quote from HP? You're using IBM storage, but make sure you get at least one other quote. So aside from definitely getting the control, when procurement gets involved, everything goes out for RFP or RFQ or a tender, as they say in Europe, and you have to have multiple vendors, and you sometimes may end up with, purely, we need a way to club 'em on price, so we need IBM Cloud and Microsoft so we can keep 'em honest. So when everyone rushed to the cloud, they didn't necessarily do that, but now that it's maturing >> Yeah, it's a sign of maturity. >> It's a sign of maturity that people want to control pricing. >> Alright, so one of the other big themes we've been talking a lot about this week is AI. So Eric talks about, when we roll back the clock, I think back to the storage world, we've been talking about intelligence in storage for longer than my career. So Sam, maybe you can tell us what's different about AI in storage than the intelligence we've been talking about, and what's the latest about how AI fits into the portfolio? >> Yeah, that's a great question, and actually a lot of times we talk about AI and how storage is really important to make the data available for AI, but we're also embedding AI in our storage products.
If you think about it, if you have a problem with your storage product, you don't just take down one application, you can take down an entire company, so you've got to make sure your storage is really resilient. So we're building AI in that can actually predict failures before they happen, so that our storage never takes any outages or has any downtime. By looking at behavior out on the network, we can also predict or identify issues that a host might be causing, and proactively tell a customer, before they get the call that the applications are slowing down, exactly which host is causing the problem. So we're actually proactively finding problems out on the storage network before they become an issue. >> Yeah, and Eric, what is it about the storage portfolio that IBM has that makes it a good solution for customers that are deploying AI as an application in their use cases? >> Yeah, so one is AI in the box, if you will, in the array, and we've done a ton of work there, but the other is as the underlying foundation for AI workloads and applications, so a couple things. Clearly, AI often is performance dependent, and we're focused on all flash. Second thing, as Sam already pointed out, is resilience and availability. If you're going to use AI in an automotive factory to control the supply chain and to control the actual factory floor, you can't have it go down, because they could be out tens of millions, hundreds of millions a year just for that day of building Mercedes or Toyotas or whatever they're building, if you have an automated factory. The other area is what we call the data pipeline, and it involves three, four members of our storage software family. Our Spectrum Scale, a highly parallel file system that allows incredible performance for AI.
Our Spectrum Discover, which allows you to use metadata, which is information about the data, to more accurately plan; the AI software from any vendor can use an API to go in and see this metadata information to make the AI software they use more efficient. Our IBM Cloud Object Storage and our Spectrum Archive: you have to archive the data, but easily bring it back, because AI is like a human. Smart humans are learning non-stop: whether you're five, whether you're 25, or whether you're 75, you're always learning. You read the newspaper, you see of course theCUBE and you learn new things, but you're always comparing that to what you used to know. Are the Russians our friends or our enemies? It depends on your point in time. Do we love what's going on in Germany? It depends on your point in time. In 1944, I'd say probably not. Today you'd say, what a great democratic country. But you have to learn, and so this data pipeline, this loop, our software is on our storage arrays and allows it to be used. We'll even sell the software without our storage arrays for use on any AI server platform, so that software's really the huge differentiator for us. >> So can you, as a follow up to that, can you address the programmability of your portfolio? Whether it's through software or maybe the infrastructure as well. Infrastructure, I'm thinking infrastructure as code. You mentioned, you know, APIs. You mentioned the ability to go into, like, Spectrum Discover, for example, and access metadata. How programmable is your infrastructure, and how are you enabling that? >> I mean, across our entire portfolio we build RESTful APIs to make our infrastructure completely extensible. We find that more and more enterprises are looking to automate the deployment of the infrastructure, and so we provide APIs for programming and deploying that.
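An aside to make the metadata-catalog idea concrete: the point of exposing "data about the data" through an API is that an AI or analytics pipeline can select its input files without crawling the storage itself. The sketch below uses an invented in-memory catalog with made-up record fields, purely for illustration; the real product serves its catalog over a REST API rather than a Python list.

```python
# Hypothetical metadata catalog: one record per file, no file contents needed.
catalog = [
    {"path": "/scans/mri_001.dcm", "size_gb": 1.2, "tags": {"modality": "MRI"}},
    {"path": "/scans/ct_014.dcm",  "size_gb": 0.8, "tags": {"modality": "CT"}},
    {"path": "/logs/app.log",      "size_gb": 4.0, "tags": {}},
]

def select_for_training(catalog, **wanted):
    """Return only the paths whose metadata matches the requested tags,
    so the AI pipeline ingests relevant files instead of scanning everything."""
    return [r["path"] for r in catalog
            if all(r["tags"].get(k) == v for k, v in wanted.items())]

print(select_for_training(catalog, modality="MRI"))
```

The training job then fetches just the selected paths, which is the efficiency gain described in the interview: the metadata query is tiny compared to reading petabytes of primary data.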
We're also moving towards containerizing most of our storage products, so that as enterprises move towards Kubernetes-type clusters, we work with both Red Hat and with our own ICP, and as customers move towards those deployment models and automate the deployment of their clusters, we're making all of our storage available to be deployed within those environments. >> So do you see an evolution of the role of a storage admin, from one that's sort of provisioning LUNs to one that's actually becoming a coder, maybe learning Python, learning how to interact through APIs, maybe even at some point developing applications for automation? Is that happening? >> I think there's absolutely a shift in the skills. I think you've got skills going in two directions. One, in the way of somebody to administer hardware and replace parts as they fail. So you have lower-skilled jobs on that side, and then I believe that, yes, people who are managing the infrastructure have to move up and move towards coding and automating the infrastructure. As the amount of data grows, it becomes too difficult to manage it in the old manual ways of doing it. You need automation and intelligence in the storage infrastructure that can identify problems and readjust. For example, in our storage infrastructure we have automated data placement that puts data on the correct tier. That used to be something a storage administrator had to do manually, figuring out how to place data. Now the storage can do it itself, so now they need to move up into the automation stack. >> Yeah, so we've been talking about automation in storage for a lot of years. Eric, how are enterprises getting over that fear that either I'm going to lose my job, or, you know, this is my business we're talking about here. How do I let go and trust? I loved, I saw downstairs in the automation booth for IBM, it was "free the humans," so we understand that we need to go there.
We can't not use automation with the scale and how things are moving, but what's the reality out in the field? >> So I think the big difference is, and this is going to sound funny, the economic downturn of '07, '08 and '09. When the downturn hit, it was certainly all over the IT press: layoff, layoff, layoff. We also know that storage is growing exponentially. So for example, say I'm Fortune 500 company X and I had 100 people doing storage across the planet, and I laid off 50 of them. Now I've recovered, I'm making tons of money, my IT budget is back up. I didn't go to the CIO and say, you can hire the 50 storage people back. You can hire 50 people back, but no more than five or six can be storage people; everything else has to be DevOps or something else. So what that means is, they are managing ungodly amounts of more storage every year with essentially the same people they had in 2008, or maybe a tiny bit more. So what matters is, you don't manage a petabyte, or in the old days half a petabyte. Now, one storage admin or backup admin or anyone in that space, they want you to manage 20 petabytes, and if you don't have automation, that will never happen.
Automatically move the data when it's 90, 60, or 30 days old. With AI-based tiering, what we have in Easy Tier, the system automatically determines what tier the data should go on, whether the data's hot or the data's cold. And on top of that, because we can tier over 440 arrays that are not IBM-logo'd, multi-vendor tiering, we can tier from our box to an EMC box. So if you have a flash array, and you've got an old all-hard-drive array that you've moved into your backup and archive tier, we can automatically tier to that. We can tier from the EMC array out to the Cloud, but it's all done automatically. The admin doesn't do anything; it just says source and target, and the AI does all the work. That's how you get the productivity that you're talking about, that you need in storage. And backup is even worse, because you've got to keep everything now. Sam mentioned GDPR, all these new regulations, and the Federal Government, it's like, keep the data forever. >> But in that case, the machine can determine whether or not it's okay to put it in the Cloud, if it's in Canada or Germany or wherever; the machine can adjudicate and make those decisions. >> And that's what the AI does. So in that case you're using AI inside of the storage system, versus what we talked about with our other software that makes our storage systems a great platform for other AI workloads, that are not, if you will, AI for storage but AI for everything else, cars or hospitals or resume analysis. That's what the platform can do, but we put all this AI inside of the system because there aren't the people. A big, giant, global Fortune 500 has 55 storage admins, and in 2007 or '08 they had 100, but they've quintupled the amount of storage easily, if not 10x'd it. So who's going to manage that? Automation. >> Guys, good discussion. Not everyday, boring, old storage. It's talking about intelligence, real intelligence this time. Eric, Sam, thanks very much for coming to theCUBE. Great to see you guys again. >> Thank you. >> Thank you. >> You're welcome.
Alright, keep it right there everybody. Stu and I will be back with our next guest shortly, right after this break. John Furrier is also here. IBM Think, Day four, you're watching theCUBE. Be right back. (tech music)

Published Date : Feb 14 2019


Eric Herzog, IBM | DataWorks Summit 2018


 

>> Live from San Jose in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We have with us Eric Herzog. He is the Chief Marketing Officer and VP of Global Channels at the IBM Storage Division. Thanks so much for coming on theCUBE once again, Eric. >> Well, thank you. We always love to be on theCUBE and talk to all of theCUBE analysts about various topics, data, storage, multi-cloud, all the works. >> And before the cameras were rolling, we were talking about how you might be the biggest CUBE alum, in the sense that you've been on theCUBE more times than anyone else. >> I know I'm in the top five, but I may be number one; I have to check with Dave Vellante and crew and see. >> Exactly, and often wearing a Hawaiian shirt. >> Yes. >> Yes, I was on theCUBE last week from Cisco Live. I was not wearing a Hawaiian shirt, and Stu and John gave me a hard time about why I was not wearing a Hawaiian shirt. So I made sure I showed up to the DataWorks show- >> Stu, Dave, get a load. >> You're in California with a tan, so it fits, it's good. >> So we were talking a little bit before the cameras were rolling, and you were saying one of the points that is sort of central to your professional life is that it's not just about the storage, it's about the data. So riff on that a little bit. >> Sure, so at IBM we believe everything is data driven, and in fact we would argue that data is more valuable than oil or diamonds or plutonium or platinum or silver, than anything else. It is the most valuable asset, whether you be a global Fortune 500, whether you be a midsize company, or whether you be Herzog's Bar and Grill. So data is what you use with your suppliers, with your customers, with your partners.
Literally everything around your company is really built around the data, so you have to manage it most effectively and make sure, A, it's always performant, because when it's not performant, they go away. As you probably know, Google did a survey showing that after one or two seconds, they go off your website, they click somewhere else, so it has to be performant. Obviously, in today's 7 by 24, 365 company, it always needs to be resilient and reliable, and it always needs to be available, otherwise if the storage goes down, guess what? Your AI doesn't work, your Cloud doesn't work, whatever the workload. If you're more traditional, your Oracle, SQL, you know, SAP, none of those workloads work if you don't have a solid storage foundation underneath your data-driven enterprise. >> So with that ethos in mind, talk about the products that you newly launched and also your product roadmap going forward. >> Sure, so for us, everything really is about storage as this critical foundation for the data-driven, multi-Cloud enterprise. And as I've said before on theCUBE, all of our storage software is now Cloud-ified, so if you need to automatically tier out to IBM Cloud or Amazon or Azure, we automatically will move the data placement around from on premises out to a Cloud. And for certain customers who may be multi-Cloud, in this case using multiple private Cloud providers, which happens due to either legal reasons or procurement reasons or geographic reasons for the larger enterprises, we can handle that as well. That's part of it. The second thing is we just announced earlier today an artificial intelligence, an AI reference architecture, that incorporates a full stack from the very bottom, both servers and storage, all the way up through the top layer, then the applications on top, so we just launched that today. >> AI for storage management, or AI for running a range of applications? >> Regular AI, artificial intelligence from an application perspective.
So we announced that reference architecture today. Basically, think of the reference architecture as your recipe, your blueprint, of how to put it all together. Some of the components are from IBM, such as Spectrum Scale and Spectrum Computing from my division, and servers from our Cloud division. Some are open source, TensorFlow, Caffe, things like that. It basically gives you what the stack needs to be, and what you need to do in various AI workloads, applications and use cases. >> I believe you have distributed deep learning as an IBM capability, that's part of that stack, is that correct? >> That is part of the stack, it's like in the middle of the stack. >> Is it, correct me if I'm wrong, that's containerization of AI functionality? >> Right. >> For distributed deployment? >> Right. >> In an orchestrated Kubernetes fabric, is that correct? >> Yeah, so when you look at it from an IBM perspective, while we clearly support the virtualized world, the VMwares, the Hyper-Vs, the KVMs and the OVMs, and we will continue to do that, we're also heavily invested in the container environment. For example, one of our other divisions, the IBM Cloud Private division, has announced a solution that's all about private Clouds; you can either get it hosted at IBM or literally buy our stack- >> Rob Thomas in fact demoed it this morning, here. >> Right, exactly. And you could create- >> At DataWorks. >> A private Cloud initiative. And there are companies that, whether it be for security purposes or for legal reasons or other reasons, don't want to use public Cloud providers, be it IBM, Amazon, Azure, Google or any of the big public Cloud providers. They want a private Cloud, and IBM either, A, will host it, or B, provides IBM Cloud Private. All of that infrastructure is built around a containerized environment. We support the older world, the virtualized world, and the newer world, the container world.
In fact, our storage allows you to have persistent storage in a container environment, Docker and Kubernetes, and that works on all of our block storage, and that's a freebie, by the way; we don't charge for that. >> You've worked in the data storage industry for a long time. Can you talk a little bit about how the marketing message has changed and evolved since you first began in this industry, in terms of what customers want to hear and what assuages their fears? >> Sure, so nobody cares about speeds and feeds, okay? Except me, because I've been doing storage for 32 years. >> And him, he might care. (laughs) >> But when you look at it, the decision makers today, the CIOs: in 32 years, including seven startups, IBM and EMC, I've never, ever, ever met a CIO who used to be a storage guy, ever. So they don't care. They know that they need storage and the other infrastructure, including servers and networking, but think about it, when the app is slow, who do they blame? Usually they blame the storage guy first, secondarily they blame the server guy, thirdly they blame the networking guy. They never look to see that their code stack is improperly done. Really what you have to do is talk applications, workloads and use cases, which is what the AI reference architecture does. What my team does in non-AI workloads is all about, again, data-driven, multi-Cloud infrastructure. They want to know how you're going to make a new AI workload fast, how you're going to make their Cloud resilient, whether it's private or hybrid. In fact, IBM storage sells a ton of technology to large public Cloud providers that do not have the initials IBM. We sell gobs of storage to other public Cloud providers, big, medium and small. It's really all about the applications, workloads and use cases, and that's what gets people excited. You basically need a position, just like I talked about with the AI foundations: storage is the critical foundation.
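The persistent storage for Docker and Kubernetes mentioned above surfaces, in Kubernetes terms, as a PersistentVolumeClaim. A minimal sketch follows; the storage class name "ibm-block-gold" is an illustrative assumption, not a documented default.

```python
import json

def make_pvc(name, size_gi, storage_class="ibm-block-gold"):
    """Build a minimal PersistentVolumeClaim manifest as a dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# A stateful container app would mount this claim; serialized, it is
# what `kubectl apply -f` would consume.
print(json.dumps(make_pvc("db-data", 100), indent=2))
```

The claim abstracts the array away entirely: the app asks for 100Gi of a class, and the storage backend satisfies it, which is the "freebie" persistence Herzog describes.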
We happen to be, knocking on wood, let's hope there's no earthquake, since I've lived here my whole life and I've been in earthquakes; I was in the '89 quake and literally fell down a bunch of stairs in it. If there's an earthquake, as great as IBM storage is, or any other storage or servers, it's crushed. Boom, you're done! Okay, well, you need to make sure that your infrastructure, really your data, is covered by the right infrastructure and that it's always resilient, always performing, and always available. And that's what IBM storage is about, that's the message, not how many gigabytes per second of bandwidth or what's the- Not that we can't spew that stuff when we talk to the right person, but in general people don't care about it. What they want to know is, "Oh, that SAP workload took 30 hours and now it takes 30 minutes?" We have public references that will say that. "Oh, you mean I can use eight to ten times less storage for the same money?" Yes, and we have public references that will say that. So that's what it's really about. Storage has really moved on from a speeds-and-feeds, nerd sort of thing, and now all the nerds are doing AI and Caffe and TensorFlow and all of that; they're all hackers, right? It used to be storage guys who did that, and to a lesser extent server guys, and definitely networking guys. That's all shifted to the software side, so you've got to talk those languages. What can we do with Hortonworks? By the way, we were named in Q1 of 2018 as the Hortonworks infrastructure partner of the year. We work with Hortonworks all the time, at all levels, whether it be with our channel partners or with our direct end users, however the customer wants to consume. We work with Hortonworks very closely, and other providers as well, in that big data analytics and AI infrastructure world; that's what we do.
>> So given the containerization side of the IBM AI stack, and the containerization capabilities in Hortonworks Data Platform 3.0, can you give us a sense for how you plan at IBM to work with Hortonworks to bring these capabilities, your reference architecture, or their environment for that matter, into more of an alignment with what you're offering? >> So we haven't made an exact decision on how we're going to do it, but we interface with Hortonworks on a continual basis. >> Yeah. >> We're working to figure out the right solution, whether that be an integrated solution of some type, something we do through an adjunct to our reference architecture, or some reference architecture that they have. But we always make sure, again, we are their partner of the year for infrastructure, named in Q1, and that's because we work very tightly with Hortonworks and make sure that what we do ties out with them, hits the right applications, workloads and use cases, the big data world, the analytics world and the AI world, so that we're tied together to deliver the right solutions to the end user. Because what matters most is what gets the end users fired up, not what gets Hortonworks or IBM fired up. >> When you're trying to get into the headspace of the CIO and get your message out there, what would you say keeps them up at night? What are their biggest pain points, and how do you come in and solve them? >> I'd say the number one pain point for most CIOs is application delivery, okay? Whether that be to the line of business or elsewhere. Put it this way: let's take an old workload, okay? Let's take that SAP example. That CIO was under pressure because, in this case, it was a giant retailer shipping stuff every night, all over the world. Well, guess what?
The green undershirts in the wrong size went to Paducah, Kentucky, and one of the other stores, in Singapore, which needed those green shirts, ended up with shoes, and the reason is they couldn't run that SAP workload in a couple of hours. Now they run it in 30 minutes; it used to take 30 hours. So since they're shipping every night, you're basically missing a cycle, essentially, and you're not delivering the right thing, from a retail infrastructure perspective, to each of their nodes, if you will, their retail locations. So they care about what they need to do to deliver the right applications, workloads and use cases to the business on the right timeframe, and they can't go down. People get fired for that at the CIO level, right? If something goes down, the CIO is gone, and that's obviously true for certain companies that are more in the modern mode, okay? People whose primary transactional vehicle is the internet, not retail, not through partners, not through people like IBM; their primary transactional vehicle is a website, and if that website is not resilient, performant and always reliable, then guess what? They are shut down and they're not selling anything to anybody. Which is not true if you're Nordstrom, right? Someone can always go into the store and buy something, right, and figure it out. Almost all old retailers have not only a connection to the core, but they literally have a server and storage in every retail location, so if the core goes down, guess what, they can still transact. In the era of the internet, you don't do that anymore, right? If you're shipping only on the internet, you're shipping on the internet. So whether it be a new workload or an old workload, take the whole IoT thing. For example, I know a company I was working with, a giant, private mining company. They have those giant, like three-story dump trucks you see on the Discovery Channel.
Those things cost them a hundred million dollars, so they have five thousand sensors on every dump truck. It's a fricking dump truck, but guess what, they've got five thousand sensors on there so they can monitor it and take proactive action, because whether these be diamond mines or uranium mines or whatever it is, it costs them hundreds of millions of dollars to have a thing go down. That's, if you will, taking it out of the traditional high-tech area, which we all talk about, whether it be Apple or Google or IBM, okay, great, and putting it to some other workload. In this case, this is the use of IoT, in a big data analytics environment with AI-based infrastructure, to manage dump trucks. >> I think you're talking about what's called "digital twins" in a networked environment for materials management, supply chain management and so forth. Are those requirements growing in terms of industrial IoT requirements of that sort, and how does that affect the amount of data that needs to be stored, the sophistication of the AI, and the stream computing that needs to be provisioned? Can you talk to that? >> The amount of data is growing exponentially. It's growing at yottabytes and zettabytes a year now, not just exabytes anymore. In fact, everybody has their iPhone or their laptop; I've got a 10-gig phone, okay? My laptop, which happens to be a PowerBook, is two terabytes of flash, on a laptop. So just imagine how much data's being generated if you're in a giant factory, whether you be in the warehouse space, in healthcare, in government, or in the financial sector. And now there are all those additional regulations, such as GDPR in Europe and other regulations across the world, about what you have to do with your healthcare data, what you have to do with your finance data, the amount of data being stored.
And then on top of it, quite honestly, from an AI big data analytics perspective, the more data you have, the more valuable it is, the more you can mine it. It's like oil; suppose the world ran on oil alone, forget the pollution side, let's assume oil didn't cause pollution. Okay, great, then guess what? You'd be using oil everywhere, you wouldn't be using solar, you'd be using oil, and by the way you'd need more and more and more, and how much oil you have and how you control it would be the power. That right now is the power of data, and if anything it's getting more and more. So again, you always have to be resilient with that data, and you always have to interact with things, like we do with Hortonworks or other application workloads. Our AI reference architecture is another perfect example of the things you need to do to provide, you know, at the base infrastructure, the right foundation. If you have the wrong foundation to a building, it falls over. Whether it be your house, a hotel, this convention center, if it had the wrong foundation, it falls over. >> Actually, to follow the oil analogy just a little bit further, the more of this data you have, the more PII there is, and the more the workloads need to scale up, especially for things like data masking. >> Right. >> When you have compliance requirements like GDPR, you want to process the data but you need to mask it first, therefore you need clusters that conceivably are optimized for high-volume, highly scalable masking in real time, to drive the downstream app, to feed the downstream applications and to feed the data scientists, you know, data lakes, whatever, and so forth and so on? >> That's why you need things like incredible compute, which IBM offers with the Power platform, and why you need storage that, again, can scale up. >> Yeah.
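A hedged sketch of the masking step raised here: replace the identifier with a deterministic token before the data feeds downstream analytics, so joins still work but raw PII never leaves the cluster. The salting scheme below is purely illustrative, not a production design.

```python
import hashlib

def mask(value, salt=b"rotate-me"):
    """Deterministically tokenize a PII value with a salted hash."""
    return hashlib.sha256(salt + value.encode()).hexdigest()[:12]

record = {"card": "4111-1111-1111-1111", "amount": 42.50}
masked = {**record, "card": mask(record["card"])}
print(masked["card"] != record["card"], len(masked["card"]))  # → True 12
```

Because the token is deterministic, two records for the same card still correlate in the data lake, which is exactly why masking at ingest can precede, rather than block, the downstream analytics.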
>> It can get as big as you need it to be. For example, in our reference architecture we use what we call Spectrum Scale, which is a big data analytics workload performance engine; it's multi-threaded and multi-tasking. In fact, at one of the largest banks in the world, if you happen to bank with them, your credit card fraud detection is being done on our stuff, okay? But at the same time we have what's called IBM Cloud Object Storage, which is an object store. You want to take every one of those searches for fraud, and when they find out that no one stole my MasterCard or Visa, you still want to put it in there, because then you mine it later and see patterns of how people are trying to steal stuff, because it's all being done digitally anyway. You want to be able to do that. So you, A, want to handle it very quickly and resiliently, but then you want to be able to mine it later, as you said, mining the data. >> Or do high-value anomaly detection in the moment, to be able to tag the more anomalous data that you can then sift through later, or maybe act on in the moment for real-time mitigation. >> Well, that's highly compute intensive, it's AI intensive, and it's highly storage intensive on the performance side, and then what happens is you store it all for, let's say, further analysis, so you can tell people, "When you get your Amex card, do this and they won't steal it." Well, the only way to do that is to use AI on this ocean of data, where you're analyzing all the fraud that has happened, to look at patterns, and then you tell me, as a consumer, what to do. Whether it be in the financial business, in this case the credit card business, healthcare, government, manufacturing. One of our resellers actually developed an AI-based tool that can scan boxes and cans for faults on an assembly line, and has actually sold it to a beer company and to a soda company, so that instead of people looking at the cans, like you see on the Food Channel, to pull them off, guess what? It's all automatically done.
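The fast-path/slow-path split described above (handle transactions quickly, tag the odd ones, mine everything later) can be sketched with a simple outlier tagger; the z-score cutoff and the batch-scoring approach are illustrative assumptions, not how any bank's fraud stack actually works.

```python
import statistics

def tag_anomalies(amounts, z_cutoff=2.5):
    """Return indexes of transactions far from the batch's mean."""
    mean = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts) or 1.0  # avoid divide-by-zero
    return [i for i, a in enumerate(amounts) if abs(a - mean) / sd > z_cutoff]

# Eight ordinary charges and one wildly out-of-pattern one: the fast
# path tags index 8 for the slower, mine-it-later analysis.
history = [20, 25, 19, 22, 30, 18, 21, 24, 5000]
print(tag_anomalies(history))  # → [8]
```

Everything, tagged or not, still lands in the object store, which is the "mine it later" half of the design.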
There are no people pulling cans off, saying "Oh, that can is damaged," looking at them, and, by the way, sometimes letting flaws slip through. Now, using cameras and this AI-based infrastructure from IBM, with our storage under the hood, they're able to do this. >> Great. Well, Eric, thank you so much for coming on theCUBE. It's always been a lot of fun talking to you. >> Great, well, thank you very much. We love being on theCUBE and appreciate it, and hope everyone enjoys the DataWorks conference. >> We will have more from DataWorks just after this. (techno beat music)

Published Date : Jun 19 2018


Eric Herzog, IBM | IBM Think 2018


 

>> Announcer: Live from Las Vegas, it's theCUBE. Covering IBM Think 2018. (upbeat music) Brought to you by IBM. >> Welcome back to IBM Think 2018, everybody. My name is Dave Vellante and I'm with my co-host, Peter Burris. You're watching theCUBE, the leader in live tech coverage. This is day three of our wall-to-wall coverage of IBM Think, the inaugural Think conference. Good friend Eric Herzog is here. He runs marketing for IBM storage. They're kicking butt. You've been in three years, making a difference, looking great, new Hawaiian shirt. (laughter) Welcome back, my friend. >> Thank you, thank you. >> Good to see you. >> Always love being on theCUBE. >> So this is crazy. I mean, I miss Edge, I loved that show, but you know, one-stop shopping. >> Well, a couple of things. One, when you look at other shows in the tech industry, they tend to be for the whole company, so we had a lot of small shows, and that was great and it allowed focus. But the one thing it didn't do: every division, including storage, has all kinds of IBM customers who are not IBM storage customers. So this allows us to do some cross-pollination and go talk to those IBM customers who are not IBM storage customers, which we can also do at a third-party show like a VMworld or Oracle OpenWorld, but you know, those guys tend to have a show that's focused on every division they have. So it could be a real advantage for IBM to do it this way; it gives us more mass. And it also, you know, lets us spend more on third-party shows to go after a whole bunch of new prospects and new clients in other venues. >> You've attracted some good storage DNA, yourself and some others. Ed Walsh was on yesterday. He said Joe Tucci made a comment years ago when somebody asked him, what's your biggest fear? If IBM wakes up and figures it out in storage. Looks like you guys are figuring it out. >> Whipping it up and figuring it out.
Four quarters of consistent growth, you know, redefining your portfolio towards software-defined. One of the things we've talked about a lot, and I know you brought this, was the discipline around, you know, communicating, getting products to market, faster cycles, because people buy products and solutions, right? So you guys have really done a good job there, but what's your perspective on how you guys have been winning in the last year or so? >> Well, I think there's a couple of things. One is pure accident, okay, which is not just us: one of the leaders in the industry, where I used to work and Ed used to work, has clearly stubbed its toe and lost its way, and that has benefited not only IBM; actually, even some of our other competitors have grown at the expense of, you know, EMC. They're not doing as well as they used to, they've been cutting headcount, and, you know, there's a big difference between the engineering spend of what EMC does and what Michael Dell likes to spend on engineering. We have been continuing to invest: sales resources, marketing resources, tech support resources in the field, technical resources from a development perspective. The other thing we did as Ed came back was rationalize the portfolio. Make sure that you don't have 27 products that overlap; you have one. And maybe it has a slight overlap with the product next to it, but you don't have to have three things that do the same thing, and quite honestly, at IBM, before I showed up, we did have that. So that's benefited us, and then I think the third thing is we've gone to a solution-oriented focus. Can we talk about things as nerdy as tracks per sector and TPI and BPI, I mean, all the way down to the hard drive or to the flash layer? Sure we can. You know what, have you ever... You guys have been doing this forever. Ever met a CIO who was a storage guy? >> No, no. CIOs don't care about storage. >> Exactly, so you've got to...
>> We've had quite a couple of ex-CIOs who were storage guys. (laughter) >> So you've really got to talk about applications, workloads, and use cases. How you solve the business problems. We've created a whole set of sales tools that we call the conversations available to the IBM sales team and our business partners which is how to talk to a CIO, how to talk to a line of business owner, how to talk to the VP of software development in a global enterprise who doesn't care at all, and also to get people to understand that it's not... Storage is a critical foundation for cloud, for AI, for other workloads, but if you talk latency right off the top, especially with a CIO or the senior executive, it's like what are you talking about? What you have to say is we can make your cloud sing, we can make your cloud never go down. We can make sure that the response time on the web browser is in a second. Whereas you know Google did that test about if you click and it takes more than two and a half seconds, they go away. Well even if that's your own private cloud, guess what they do the same thing. So you've got to be able to show them how the storage enables cloud and AI and other workloads. >> Let's talk about that for a second. Because I was having a thought here. It's maybe my only interesting thought here at Think, being pretty much overwhelmed. But the thought that I had was if you think about all the things that IBM is talking about, block chain, analytics, cloud, go on down the list, none of them would have been possible if we were still working at 10, 20, 30 milliseconds of wait time on a disc head. The fundamental change that made all of this possible is the move from disc to flash. >> Eric: Right. >> Storage is the fundamental change in this industry that has made all of this possible. What do you think about that? >> So I would agree with that. There is no doubt and that's part of the reason I had said storage is a critical foundation for cloud or AI workloads. 
Whether you're talking not just pure performance but availability and reliability. So we have a public reference, Medicat. They deliver healthcare services as a service, so it's a software as a service model. Well guess what? They provide patient records into hospitals and clinics that tend to be focused at the university level, like the University of California Health Center for the students. Well guess what? Not only does it need to be fast, if it's not available then you can't get the healthcare records, can you? So while it's a cloud model, you have to be able to have that availability characteristic, reliability. So storage is, again, that critical foundation. If you build a building in a major city and the foundation isn't very good, the building falls over. And storage is that critical foundation for any cloud, any AI, and even for the older workloads like an SAP HANA or an Oracle workload, right? If, again, if the storage is not resilient, oh well, you can't access the shipping database or the payroll database or the accounts receivable database because the storage is down, and then obviously if it's not fast, it takes forever to get Dave Vellante's bill, right. And that's a waste of time. >> So it's plumbing, but the plumbing's getting more intelligent isn't it? >> Well that's the other thing we've done is we are automating everything. We are imbuing our software, and we announced this, that our arrays are going to have an intelligent infrastructure software plane, if you will, that is going to help do diagnostics. For example, in one of the coming releases, if a customer allows access, if a power supply is going bad, we will tell them it's going bad and it'll automatically send a PO to IBM with the serial number and the address, and say please send me a new power supply before the power supply actually fails. But it also means they don't have to stock a power supply on their shelf, which would otherwise drive up their capex.
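(The call-home flow Eric describes, detect a degrading power supply and raise a purchase order before it actually fails, can be sketched in a few lines. Everything here, the health score, the threshold, the part numbers, and the order format, is a hypothetical illustration, not IBM's actual telemetry API.)

```python
# Sketch of the predictive-replacement flow described above: watch a
# component's health score and, once it degrades past a threshold,
# raise a purchase order *before* the part actually fails.
# The threshold, part numbers, and order fields are all hypothetical.

FAILURE_THRESHOLD = 0.25  # below this health score, failure is likely soon

def check_component(component):
    """Return a purchase-order dict if failure is predicted, else None."""
    if component["health"] < FAILURE_THRESHOLD:
        return {
            "part": component["part_number"],
            "serial": component["serial"],
            "ship_to": component["site_address"],
            "reason": "predicted failure",
        }
    return None  # still healthy, nothing to order

psu = {"part_number": "PSU-750W", "serial": "SN-0042",
       "site_address": "123 Data Center Way", "health": 0.18}

order = check_component(psu)
if order:
    print(f"Auto-PO raised for {order['part']} ({order['serial']})")
```

In a real system the health score would come from array telemetry and the order would go to a fulfillment API; the point of the sketch is only the detect-then-order-early shape of the flow.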
And for a big shop there's a bunch of power supplies, a bunch of flash modules, maybe hard drives if they're still dinosauric in how they behave. And they have those things and they buy them from us and our competitors. So imbuing it with intelligence, automating everything we can automate. So automatically tiering data, moving data around from tier to tier, moving it out to the cloud, what we do with the reuse of backup sets. Instead of doing it the old way of backup. And I know you've got Sam Werner coming on later today and he'll talk about modern data protection, how that is revolutionizing what dev ops and other guys can do with their, essentially, what we would've called in the old days backup data. >> Let's talk about your Spectrum launch. Spectrum NAS, give us some plugs for that. What's the update there? >> So we announced on the 20th of February a whole set of changes regarding the Spectrum family. We have things around Spectrum Protect, with GDPR, Spectrum Protect Plus as a service, as well as some additional granularity features, and I know Sam Werner's going to come on later today. Spectrum NAS, software defined network attached storage. Okay, we're not going to sell any infrastructure with it. We have for big data analytics our Spectrum Scale, but think of Spectrum NAS as traditional network attached storage workloads. Home directories. Things like that. Small file service, where Spectrum Scale has one of our public references, and they were here actually at Edge a couple of years ago, one of the largest banks in the world, their entire fraud detection system is based on Spectrum Scale. That's not what you would use Spectrum NAS for. So, and it's often common as you know in the file world to have sort of a traditional file system and then a big one that does big data, analytics and AI and is very focused on that, and so that's what we've done. Spectrum NAS is software only, software defined, rounds out our block, now gives us a traditional file.
We had scale-out file already, and IBM Cloud Object Storage is also software defined. >> Well how about the get/put world. What's happening there? I mean we've been waiting for it to explode. >> Ah, so the get/put world is all about NVMe. NVMe, the new storage protocol; as you know it's been SCSI forever. SCSI and/or SATA. And it's been that way for years and years and years and years, but now you've got flash. As Peter pointed out, spinning disk is really slow. Flash is really fast, and the SCSI protocol was not keeping up with the performance, so NVMe is coming out. We announced an NVMe over InfiniBand fabric solution. We announced that we will be adding Fibre Channel-based and also Ethernet-based NVMe fabrics. Those will come, and one of the key things we're doing is our hardware, our infrastructure's all ready to go, so all you have to do is a non-disruptive software upgrade, and for anyone who's bought today, it'll be free. So you can start off with Fibre Channel or Ethernet fabrics today, or the InfiniBand fabric now that we can ship, but on the Ethernet and Fibre Channel side, they buy the array today, and then later this year in the second half, a software upgrade, and then they'll have NVMe over Fibre Channel or NVMe over Ethernet. >> Explain why NVMe and NVMe over fabrics is so important generally, but in particular for this sort of new class of applications that's emerging. >> Well the key thing with the new class of applications is they're incredibly performance and latency sensitive. So we're trying to do real artificial intelligence, and they're trying to, for example, I just did a presentation and one of our partners, Mark III, has created a manufacturing system using AI and Watson. So you use cameras all over, which has been common, but it actually will learn. So it'll tell you whether cans are bad.
Another one of our customers is in the healthcare space and they're working on a genomic process for breast cancer along with radiology, and they've collected over 20 million radiological samples of breast cancer analysis. So guess what, how are you going to sort through that? Could you or I sort through 20 million images? Well guess what, AI can do that, narrow it down, and say whether it's this type of breast cancer or that type of breast cancer. And then the doctor can decide what to do about it. And that's all empowered by AI, and that requires incredible performance, which is what NVMe delivers. Again, that underlying foundation of AI, in this case going from flash with SCSI to flash with NVMe, increasing the power that AI can deliver because of its storage foundation. >> But even those are human time transactions. What about when we start taking the output of that AI and put it directly into operational transactions that have to run like a bat out of hell. >> Which is where NVMe will come in as well. You cannot have the performance that we've had these last almost 30 years with SCSI, and even slower when you talk about SATA. That's just not going to cut it with flash. And by the way, you know there's going to be things beyond flash that will be faster than flash. So flash two, flash three, it's just the way it was with the hard drive world, right? It was 2400 RPM, then 3600, then 5400, then 7200, then 10K, then 15K. >> More size, more speed, lower energy. >> Which is what NVMe will help you do, and you can do it as a fabric infrastructure or you can do it in the array itself. You get dual in-box and out-of-box connectivity with NVMe, increasing the performance within your array and increasing the performance outside of the array as you go out to your host and out into your switching infrastructure. >> So I'm loving Think. It's too many people to count, I've been joking all week. 30,000? 40,000? We're still tallying up. I'm going to miss Edge for sure.
I'm going to miss the updates in the, you know, late spring. But so let's get 'em now. What can we expect? What are you trying to accomplish in the next six to nine months? What should we be looking for, without giving any confidential information. >> Well we've already publicly announced that we'll be fleshing out NVMe across the board. >> Dave: Right. >> So we already publicly announced that. That will be a big to-do. The other thing we're looking at is continuing to imbue what we do with additional solution sets. So that's something, we have a wide set of software. For example, we publicly announced this week that the VersaStack all-flash array will be available with IBM Cloud Private with a Cisco Validated Design in May. So again, in this case taking a very powerful system, the VersaStack all-flash, which already delivers ROI and TCO, but still is, if you will, a box. Now that box is a converged box with compute, with switching, with an all-flash array, and with a virtual environment. But now we're putting, again as a bundle, IBM Cloud Private on there. So you'll see more and more of those types of solutions, both with the rest of IBM but also from third parties. So that offers the right solution set to cut capex/opex, automate processes, and again, for the cloud workloads, AI workloads, and any workloads, storage is that foundation. The critical foundation. So we will make sure that we'll have solutions wrapped around that throughout the rest of this year.
You're watching theCUBE live from Think 2018. (upbeat music)
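(A rough latency budget makes the disk-to-flash-to-NVMe point in the interview concrete. The figures below are order-of-magnitude numbers commonly cited for each media and protocol generation, not measurements of any IBM array.)

```python
# Order-of-magnitude access latencies in seconds; illustrative only.
LATENCY = {
    "15K disk over SCSI": 5e-3,    # ~5 ms of seek plus rotation
    "flash over SCSI/SAS": 200e-6, # ~200 us; protocol overhead dominates
    "flash over NVMe": 30e-6,      # ~30 us; leaner command and queue model
}

disk = LATENCY["15K disk over SCSI"]
for path, t in LATENCY.items():
    print(f"{path:20s} {t * 1e6:7.0f} us  ({disk / t:5.0f}x vs. disk)")
```

The pattern matches the conversation: moving disk to flash buys roughly two orders of magnitude, at which point the decades-old SCSI command path itself becomes the bottleneck, and NVMe's slimmer queueing model buys the next chunk.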

Published Date : Mar 22 2018

Steven Kenniston, The Storage Alchemist & Eric Herzog, IBM | VMworld 2017


 

>> Announcer: Live from Las Vegas it's theCUBE, covering VMworld 2017, brought to you by VMware and its ecosystem partners. (upbeat techno music) >> Hey, welcome back to day two of VMworld 2017, theCUBE's continuing coverage. I am Lisa Martin with my co-host Dave Vellante, and we have a kind of a CUBE mafia going on here. We have Eric Herzog, the CMO of IBM Storage, back with us, and we also have Steve Kenniston, another CUBE alum, Global Spectrum Software Distance Development Executive at IBM, welcome guys! >> Thank you. >> Thank you, great to be here. >> So lots of stuff going on, IBM Storage business health, first question, Steve, to you, what's going on there, tell us about that. >> Steve: What's going on in IBM Storage? >> Yes. >> All kinds of great things. I mean, first of all, I think we were walking the show floor just talking about how VMware, VMworld used to be a storage show, and then it wasn't for a long time. Now you're walking around there, you see all kinds of storage. Now IBM is really stepping up its game. We've got two booths. We're talking all about not just, you know, the technologies, cognitive, IoT, that sort of thing, but also where do those bits and bytes live? That's your, that's your assets. You got to store that information someplace and then you got to protect that information. We're showcasing all kinds of solutions on the show floor, including VersaStack and that sort of thing, where you, you know, make your copies of your data, store your data, reissue your data, protect your data. It's a great show. >> Lisa: Go ahead. >> Please. So I do want to get into it, right, I mean, we've watched, this is our eighth year doing theCUBE at VMworld, and doing theCUBE in general, but to see the evolution of this ecosystem in this community, you're right, it was storage world, and part of the reason was, and you know this well Eric, it was such a problem, you know.
And all the APIs that VMware released to really solve that storage problem. Flash obviously has changed the game a little bit, but I want to talk about data protection, backup specifically. Steve, you and I have talked over the years about how the ascendancy of VMware coincided with a reduction in the physical capacity that was allocated to applications like backup. That was a real problem, so the industry had to re-architect its backup, and then companies like Veeam exploded on the scene, simplicity was a theme, and now we're seeing a sort of a similar scene change around cloud. So what's your perspective on that sort of journey in the evolution of data protection and where we are today, especially in the context of cloud? >> Yeah, I think there's been a couple, a couple big trends. I think you talked about it correctly, Dave, from the standpoint of when you think about your data protection capacity being four X at a minimum greater than your primary storage capacity, the next thing you start understanding is now, with the growth in data, I need to be able to leverage and use that data. The number one thing, the number one driver to putting data in the cloud is data protection, right? And then it's now how can I reuse that data that's in the cloud, and you look at things like AWS and that sort of thing, the ability to spin up applications, and now what I need to do is I need to connect with the data to be able to run those applications, and if I'm going to do a test development environment, if I'm going to run an analytics report or I'm going to do something, I want to connect to my data. So we have solutions that help you promote that data into the cloud, leverage that data, take advantage of that data, and it's just continually growing and continually shifting. >> So you guys are really leaning in to VMworld this year, got a big presence. What's going on there?
You know, one would think, okay, you know, VMware, it's, you know, clearly grabbing a big piece of the market. You got them doing more storage. What's going on Eric, is it just, "Hey, we're a good partner." "Hey, we're not going to let them, you know, elbow us out." "We're going to be competitive with the evil machine company." What's the dynamic in the VMware ecosystem with you guys? >> Well I think the big thing for us is IBM has had a powerful partnership with VMware since day one. Way back when IBM used to have an Intel server division, everything worked with VMware; we've been a VMware partner for years and years, going back to the server world. As that division transferred away to Lenovo, the storage division became front and center, so all kinds of integration with our all-flash arrays, our VersaStack, which we do jointly with Cisco and VMware, providing a converged infrastructure solution, the products that Steve's team just brought out. Spectrum Protect Plus installs in 30 minutes, recovers instantly off of a VM, can handle multiple VMs, can recover VMs or files, can be used to back up hundreds and thousands of virtual machines if that's what you've got in your infrastructure. So, the world has gone virtual and cloud. IBM is there with virtual and cloud. You need to move data out to IBM Cloud, AWS, or Azure? Spectrum Protect Plus, Spectrum Scale, Spectrum Virtualize, all members of our software family, and the arrays that they ship on, all can transparently move data to a cloud; move it back and forth at the blink of an eye. With VMware you need that same sort of level of integration; we've had it on the array side, and we've now brought that out with Spectrum Protect Plus to make sure that backups are, in fact, Spectrum Protect Plus is so easy, even me with my master's degree in Chinese history can back up and protect my data in a VMware environment. So it's designed to be used by the VMware admin or the app owner, not by the backup guy or the storage admin.
Not that they won't love it too, but it's designed for the guys who don't know much about storage. >> I'll tell you Dave I saw it, I watched him get a demo and then I watched him turn around and present it, it was impressive. (laughing) >> I want to ask you a quick question. Long time partners IBM and VMWare as you as you've just said. You were an EMC guy that's where I first met you, from a marketing and a positioning perspective what have you guys done in the last year since the combination has completed to continue to differentiate the IBM VMWare strengths as now VMWare's part of Dell EMC. >> So I think the key thing is VMWare always has been the switch under the storage business. When I was at EMC, we owned 81 percent of the company and you walked into Palo Alto and Pat Gelsinger who I used to work for at EMC is now the CEO of VMWare you walk into the data center and there's IBM arrays, EMC arrays, HP arrays, Dell arrays and then app arrays. And a bunch of all small guys so the good thing is they've always been the switch under the storage industry, IBM because of it's old history and the server industry has always had tight integration with them, and we've just made sure we've done. I think they key difference we've done is it's all about the data. CEO, CIO they hate talking about storage. It's all about the data and that's what we're doing. Spectrum Protect Plus is all about keeping the data safe protected, and as Steve talked about using it in the cloud using real data sets for test and dev for dev opps, that's unique. Not everyone's doing that we're one of the few guys that do that. It's all about the data and you sell the storage as a foundation of that data. >> Well I mean IBM's always been good about not selling speeds and feeds, but selling at the boardroom level, the C level I mean you're IBM. That's your brand. 
Having said that, there's a lot of knife fights going on tactically in the business and you guys are knife fighters I know you both you're both startup guys, you're not afraid to get you know down and dirty. So Steve, how do you address the skepticism that somebody might have and say, "Alright, you know I hear you, this all sounds great but, you know I need simplicity." You guys you talk simplicity your Chinese History background, but I'm still skeptical. What can you tell me, proof points share with us to convince us that you really are from a from a simplicity standpoint competitive with the pack? >> I think, I think you seen a pretty big transformation over the last 18 months with what the some of the stuff that we've done with the software portfolio. So, a lot of folks can talk a good game about a software defined strategy. The fact that we put the entire Spectrum suite now under one portfolio now things are starting to really gel and come together. We done things like interesting skunkworks project with Spectrum Protect Plus and now we even had business partners in our booth who are backup architects talking about the solution who sell everybody else's solution on the floor saying, "This is, the I can't believe it, "I can't believe this is IBM. "They're putting together solutions that "are just unbelievably easy to use." They need that and I think you're exactly right, Dave. It used to be where you have a lot of technical technicians in the field and people wanted to architect things and put things together. Those days are gone, right? Now what you're finding is the younger generation coming in they're iPhone type people they want click simplicity just want to use it that sort of thing. We've started to recognize that and we've had to build that into our product. We were, we are a humbler IBM now. We are listening to our business partners. 
We are asking them what do we need to be doing to help you be successful in the field, not just from a product set, but also a selling, you know, a selling motion. The Spectrum suite, all the products under one thing now, working and operating together, the ability to buy them more easily, the ability to leverage them, use them, put it in a sandbox, test it out, not get charged for it, okay I like it, now I want to deploy it. We've really made it a lot easier to consume technology in a much easier way, right, software defined, and we're making the products easier to use. >> How've you been able to achieve that transformation, is it cultural, is it somebody came down and said thou shalt simplify, and I mean you've been there a couple years now. >> Yeah so I think, I think the real thing is IBM has brought into the division a bunch of people from outside the division. So Ed Walsh, our general manager, who's going to be on shortly, five startups. Steve, five startups. Me, seven startups. Our new VP of Offering Management on the solutions side, not only NetApp, four startups. Our new VP of North American sales, HDS, three startups. So we've brought in a bunch of guys who, A, used to work at the big competition, EMC, NetApp, Hitachi, et cetera. And we've also brought in a bunch of people who are startup guys, who are used to turning on a dime; it's all about ease of use, it's all about simplicity, it's all about automation. So between the infusion of this intellectual capital from a number of us who've been outside the company, particularly in the startup world, and the incredible technical depth of IBM's storage teams and our test teams and all the other teams that we leverage, we've just sort of pointed them in the direction, like it needs to be installed in 30 minutes. Well guess what, they knew how to do that 30 years ago.
They just never did because they were, you know, stuck in the IBM silo if you will, and now the big model we have at IBM is outside in, not inside out, outside in. And the engineering teams have responded to that and made things that are easy to use, incredibly automated, work with everyone's gear, not just ours. There're other guys that sell storage software. But other than in the protection space, all the other guys, it only works with EMC or it only works with NetApp or it only works with HP. Our software works with everyone's stuff, including every one of our major competitors, and we're fine with that. So that's come from this infusion and combination of the incredible technical depth and DNA of IBM with a group of people, about 10 of us, who've all come from either the big competitors or a bunch of startups. And we've just merged that over the last two years into something that's formidable and incredibly powerful. We are now the number one storage software company in the world, and in overall storage, both systems and software, we're number two. >> Dave: So where's that data, is that IDC data? >> Yeah, that's the IDC data. >> And what are they, what are they, when they count that what are they counting? Are they sort of eliminating any hardware, you know, associated with that or? >> Well no, storage systems would be external systems, our all-flash arrays, our Ver, that's all the systems side. Software's purely software only, so. >> Dave: No appliances. >> Yeah, yeah, yeah.
>> Dave: The value of those licenses associated with that. >> Well as our CFO pointed out, so if you take a look at our track record at the beginning of this year, we grew seven percent in Q1, one of the only storage companies to grow, certainly of the majors; we grew eight percent in Q2, again one of the only storage companies to grow of the major players, and as our CFO pointed out in his call in Q2, over 40 percent of the division's revenue is storage software, not full systems, just standalone software, particularly with the strength of the suite and all the things we're doing, you know, to make it easy to use, install in 30 minutes, and let a guy with a master's in Chinese history protect his data and never lose it. And that's what we want to be able to do. >> Okay so that's a licensed model, and is it a, is it a, is it a ratable model, is it a sort of a perpetual model, what is it? >> We've got both depending on the solution. We have cloud engagement models, we can consume it in the cloud. We got some guys who are traditionalists, gimme an ELA, you know, Enterprise License Agreement. So we, we're the pasta guys. We have the best pasta in the world. Do you want red sauce, white sauce, or pesto? Dave, I said that because you're part Italian, I'm half Italian on my mother's side. >> I like Italian. >> So we have the best pasta; whatever the right sauce is for you, we deliver the best pasta on the planet, in our case the best storage software on the planet. >> You heard, you heard Michael up there on stage today, "Don't worry about it." He was invoking his best Italian, I have an affinity for that. So, so Steve, this is your second stint at IBM, Ed's second stint, I'm very intrigued that Doug Balog has now moved over, this is a little inside baseball here, but running sales again. >> Steve: Right.
>> So that's unique, actually, to see have a guy who use to run storage, leave, go be the general manager of the Power Systems division, in OpenPOWER, then come back, to drive storage sales. So, you're seeing, it's like a little gravity action. Guys sort of coming back in, what's going on from your perspective? >> Well I I think, I think Eric said it best. We've done an outside in. We've been bringing a lot of people in and I think that the development team and I wanted to to to bounce off of what Eric had said was they always knew how to do it, it's just they they needed to see and understand the motivation behind why they wanted to do it or why they needed to do it. And now they're seeing these people come in and talk about in a very, you know caring way that this is how the world is changing, and they believe it, and they know how to do it and they're getting excited. So now there's a lot more what what people might think, "Oh I'm just going to go develop my code and go home," and whatever they're not; they're excited. They want to build new products. They want to make these things interoperate together. They're, they're passionate about hearing from the customer, they're passionate about tell me what I can do to make it better. And all of those things when one group hears something that's going on to make something better they want to do the same thing, right. So it's it's really, it's breathing good energy into the storage division, I think. >> So question for you guys on that front you talked about Eric, we don't leave it at storage anymore, right? C levels don't care about that. But you've just talked about two very strong quarters in storage revenue perspective. What's driving that what's or what's dragging that? Is it data protection? What are some of the other business level drivers that are bringing that storage sale along? >> So for us it's been a couple things. 
So when you look at just the pure product perspective the growth has been around our all Flash arrays. We have a broad portfolio; we have very cost effective stuff we have stuff for the mainframe we have super high performance stuff we have stuff for big data analytic workloads. So again there isn't one Flash, you know there's a couple startups that started with one Flash and that's all they had. We think it's the right Flash tool for the right job. It's all about data, applications, workloads and use case. Big data analytics is not the same as your Oracle database to do your ERP system or your logistics system if you're someone like a Walmart. You need a different type of Flash for that. We tune everything to that. So Flash has been a growth engine for us, the other's been software defined storage. The fact that we suited it up, we have the broadest software portfolio in the industry. We have Block, we have File, we have Object, we have Backup, we have Archive. We've got Management Plane. We've got that and by packaging it into a suite, I hate to say we stole it from the old Microsoft Office, but we did. And for the in-user base it's up to a 40 percent discount. I'm old enough to remember the days of the computer store I think Dave might've gone to a computer store once or twice too. And there it was: Microsoft Office at eye level for $999. Excel, Powerpoint and Word above it at $499. Which would you buy? So we've got the Spectrum suite up to a 40 percent savings, and we let the users use all of the software for free in their dev environments at no charge and it's not a timeout version, it's not a lite version, it is a full version of the software. So you get to try a full thing out for free and then at the suite level you save up to 40 percent. What's not to like? >> And I think, I just wanted to compliment that too. 
I think, to also answer the question: one of the things we've done, so we've talked about development really growing, getting excited, wanting to build things. The other thing that's also happening is that at the field level, we've stopped talking speeds and feeds directly, right? It has become this higher-level conversation, and now IBMers who go and sell things like cognitive and IoT and that sort of thing are wanting to bring us in, because we're not talking about the feeds and speeds and screwing up how they like to sell. We're talking about, Jenny will come out and say data is your most valuable asset in your company. And we say, okay, I've got to store those bits and bytes some place, right? We provide that mechanism. We provide it in a multitude of different ways. And we want to complement what they're doing. So now when I put presentations together to help the sales field, I talk about storage in a way that is more: how does it help cognitive? How does it help IoT? How does it help test and dev? And by the way, there's a suite; it's storing it, it's using it, it's protecting it. It's all of those things, and now it's complementing their selling motion. >> Well, the passion and the energy coming from both of you is very palpable. So thank you for sticking around, Eric, and Steve for coming back to theCUBE and sharing all the exciting things that are going on at IBM. That energy is definitely electric. So we wish you guys the best of luck in the next day or so of the show, and again, thank you for spending some time with us this afternoon. >> Thanks for having us. >> Thanks for having us. >> Absolutely, and for my co-host Dave Vellante, I'm Lisa Martin. You're watching theCUBE's live continuing coverage of VMworld 2017, day two. Stick around, we'll be right back. (techno music)

Published Date : Aug 30 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Steve | PERSON | 0.99+
Eric Herzog | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Steve Kenniston | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Eric | PERSON | 0.99+
Hitachi | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Doug Balog | PERSON | 0.99+
$499 | QUANTITY | 0.99+
Walmart | ORGANIZATION | 0.99+
Steven Kenniston | PERSON | 0.99+
Lisa | PERSON | 0.99+
Ed Walsh | PERSON | 0.99+
Cisco | ORGANIZATION | 0.99+
Michael | PERSON | 0.99+
seven percent | QUANTITY | 0.99+
81 percent | QUANTITY | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
eight percent | QUANTITY | 0.99+
Pat Gelsinger | PERSON | 0.99+
30 minutes | QUANTITY | 0.99+
Jenny | PERSON | 0.99+
EMC | ORGANIZATION | 0.99+
Word | TITLE | 0.99+
Ed | PERSON | 0.99+
$999 | QUANTITY | 0.99+
Excel | TITLE | 0.99+
second stint | QUANTITY | 0.99+
AWS | ORGANIZATION | 0.99+
eighth year | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
Intel | ORGANIZATION | 0.99+
OpenPOWER | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Powerpoint | TITLE | 0.99+
Oracle | ORGANIZATION | 0.99+
twice | QUANTITY | 0.99+
Net Up | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
both | QUANTITY | 0.99+
HP | ORGANIZATION | 0.99+
two booths | QUANTITY | 0.99+
Power Systems | ORGANIZATION | 0.99+
VM World 2017 | EVENT | 0.99+
once | QUANTITY | 0.99+
Lenovo | ORGANIZATION | 0.98+
VMWare | TITLE | 0.98+
IBM Storage | ORGANIZATION | 0.98+
over 40 percent | QUANTITY | 0.98+

Steve Roberts, IBM– DataWorks Summit Europe 2017 #DW17 #theCUBE


 

>> Narrator: Covering DataWorks Summit Europe 2017, brought to you by Hortonworks. >> Welcome back to Munich everybody. This is The Cube. We're here live at DataWorks Summit, and we are the live leader in tech coverage. Steve Roberts is here as the offering manager for big data on Power Systems for IBM. Steve, good to see you again. >> Yeah, good to see you Dave. >> So we're here in Munich, a lot of action, good European flavor. It's my second European one, formerly Hadoop Summit, now DataWorks. What's your take on the show? >> I like it. I like the size of the venue, the ability to interact and talk to a lot of the different sponsors and clients and partners, the ability to network with a lot of people from a lot of different parts of the world in a short period of time. It's been great so far, and I'm looking forward to building on this towards the next DataWorks Summit in San Jose. >> Terri Virnig, a VP in your organization, was up this morning with a keynote presentation, so IBM got a lot of love in front of a fairly decent sized audience, talking a lot about the ecosystem that's evolving, and the openness. Talk a little bit about open generally at IBM, but specifically what it means to your organization in the context of big data. >> Well, I am from the Power Systems team. We have an initiative that we launched a couple years ago called OpenPOWER. And OpenPOWER is a foundation of participants innovating from the Power processor through all aspects: accelerators, IO, GPUs, advanced analytics packages, system integration, all to the point of being able to drive OpenPOWER capability into the market and have Power servers delivered not just through IBM, but through a whole ecosystem of partners. This complements quite well the Apache Hadoop and Spark philosophy of openness as it relates to the software stack.
So our story is really about being able to marry the benefits of an open ecosystem for OpenPOWER, as it relates to the system infrastructure technology, with the same time-to-innovation, community value, and customer choice of a multi-vendor ecosystem, coupled with the same premise as it relates to Hadoop and Spark. And of course, IBM is making significant contributions to Spark as part of the Apache Spark community, where we're a key active member, as is Hortonworks with the ODPi organization furthering the standards around Hadoop. So this is a one-two combo of open Hadoop and open Spark, either from Hortonworks or from IBM, sitting on the OpenPOWER platform built for big data. No other story really exists like that in the market today: open on open. >> So Terri mentioned cognitive systems. Bob Picciano has recently taken over, and obviously has some cognitive chops and some systems chops. Is this a rebranding of Power? Is it sort of a layer on top? How should we interpret this? >> No, think of it more as a layer on top. Power will now be one of the assets, one member of the family within the cognitive systems portion of IBM. System z can also be used as another great engine for cognitive for certain clients, certain use cases, where they want to run cognitive close to the data and they have a lot of data sitting on System z. So Power Systems is a server family really built for big data and machine learning, in particular our S822LC for high performance computing. This is a server which is landing very well in the deep learning, machine learning space. It offers the Tesla P100 GPU, and with NVIDIA NVLink technology it can offer up to 2.8x bandwidth benefit CPU-to-GPU over what would be available through a PCIe Intel combination today. So this drives immediate value when you not only need to exploit GPUs, but of course need to move your data quickly from the processor to the GPU.
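To put that 2.8x number in perspective, here is a back-of-envelope sketch. The 2.8x uplift is the figure quoted above; the absolute PCIe bandwidth and dataset size are illustrative assumptions, not IBM-published numbers.

```python
def transfer_seconds(dataset_gb, bandwidth_gb_s):
    """Time to stage a dataset from host memory into GPU memory."""
    return dataset_gb / bandwidth_gb_s

pcie_bw = 12.0             # hypothetical effective PCIe bandwidth, GB/s
nvlink_bw = 2.8 * pcie_bw  # the "up to 2.8x" uplift quoted in the interview

dataset_gb = 840.0         # hypothetical training set staged per epoch

t_pcie = transfer_seconds(dataset_gb, pcie_bw)      # 70.0 seconds
t_nvlink = transfer_seconds(dataset_gb, nvlink_bw)  # ~25 seconds
print(f"PCIe: {t_pcie:.1f}s  NVLink: {t_nvlink:.1f}s")
```

For workloads that repeatedly shuttle data between CPU and GPU, that staging time is paid over and over, which is why the link bandwidth, not just the GPU itself, matters.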
>> So I was going to ask you, actually, what makes Power so well suited for big data and cognitive applications, particularly relative to Intel alternatives. You touched on that. IBM talks a lot about Moore's Law starting to hit its peak, and that innovation is going to come from other places. I love that narrative 'cause it's really combinatorial innovation that's going to lead us in the next 50 years, but can we stay on that thread for a bit? What makes Power so substantially unique, uniquely suited and qualified to run cognitive systems and big data? >> Yeah, it actually starts with the fundamentals of the Power processor. The Power processor has eight threads per core, in contrast to Intel's two threads per core. So this just means being able to parallelize your workloads. And the workloads that come up in the cognitive space, whether you're running complex queries that need to drive SQL over a lot of parallel pipes, or running iterative computation over the same data set when you're doing model training, are all highly parallelized workloads which can benefit from this 4x thread advantage. But of course to do this, you also need large, fast memory, and we have six times more cache per core versus Broadwell, so this just means you have a lot of memory close to the processor, driving that throughput that you require. And then on top of that, now we get to the ability to add accelerators, and unique accelerators such as the NVIDIA NVLink scenario I mentioned for GPUs, or using OpenCAPI as an approach to attach FPGA or Flash and get processor-memory access speeds, but with an attached acceleration device. And so this is economies of scale in terms of being able to offload specialized compute processing to the right accelerator at the right time, so you can drive way more throughput.
The upper bound for driving workload through individual nodes, and being able to balance your IO and compute on an individual node, is far superior with the Power Systems server. >> Okay, so multi-threaded, giant memories, and this OpenCAPI gives you primitive-level access, I guess, to a memory extension, instead of having to-- >> Yeah, pluggable accelerators through this high speed memory extension. >> Instead of going through what I often call the horrible storage stack, aka SCSI. And so that's cool, some good technology discussion there. What's the business impact of all that? What are you seeing with clients? >> Well, the business impact is, not everyone is going to start with souped-up accelerated workloads, but they're going to get there. So part of the vision that clients need to understand, to begin to get more insights from their data, is that it's hard to predict where your workloads are going to go. So you want to start with a server that provides you some of that upper room for growth. You don't want to keep scaling out horizontally by having to add nodes every time you need to add storage or more compute capacity. So firstly, it's the flexibility of being able to bring versatile workloads onto a node or a small number of nodes, and being able to exploit some of these memory and acceleration advantages without necessarily having to build large scale-out clusters. Ultimately, it's about improving time to insights. So with accelerators and with large memory, running workloads on similarly configured clusters, you're simply going to get your results faster. For example, in a recent benchmark we did with a representative set of TPC-DS queries on Hortonworks running on Linux on Power servers, we were able to drive 70% more queries per hour over a comparable Intel configuration. So this is just getting more work done on what is now similarly priced infrastructure.
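Taken at face value, the quoted 70% queries-per-hour uplift translates directly into wall-clock time on a fixed query backlog. A quick illustration, where the baseline rate and backlog size are made up for the example:

```python
baseline_qph = 100.0              # hypothetical queries/hour on the comparison cluster
uplift_qph = baseline_qph * 1.7   # the 70% uplift quoted for the TPC-DS benchmark

backlog = 17_000                  # hypothetical batch of queries to run

hours_baseline = backlog / baseline_qph  # 170.0 hours
hours_uplift = backlog / uplift_qph      # ~100 hours on the faster configuration
print(hours_baseline, hours_uplift)
```

Same hardware budget, roughly 70 fewer hours on this hypothetical backlog; that is what "more work done on similarly priced infrastructure" cashes out to.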
'Cause the Power family is a broad family that now includes 1U and 2U scale-out servers, along with our 192-core horsepower for enterprise grade. So we can directly price-compete on a scale-out box, but we offer a lot more flexible choice as clients want to move up in the workload stack or bring accelerators to the table as they start to experiment with machine learning. >> So if I understand that right, I can turn two knobs. I can do the same amount of work for less money, a TCO play. Or, for the same amount of money, I can do more work. >> Absolutely. >> Is that fair? >> Absolutely. Now in some cases, especially in the Hadoop space, the size of your cluster is somewhat gated by how much storage you require. And if you're using the classic scale-out storage model, you're going to have so many nodes no matter what, 'cause you can only put so much storage on each node. So in that case, >> You're scaling storage. >> Your clusters can look the same, but you can put a lot more workload on that cluster. Or you can bring in IBM, a solution like IBM Spectrum Scale, our Elastic Storage Server, which allows you to essentially pull that storage off the nodes and put it in a storage appliance. At that point, you now have high speed access to storage, 'cause of course network bandwidth has increased to the point that the performance benefit of local storage is no longer really a driving factor in a classic Hadoop deployment. You can get that high speed access in a storage appliance mode, with the resiliency, at far less cost, 'cause you don't need 3x replication; you just have about a 30% overhead for the software erasure coding. And now with your compute nodes, you can really choose and scale those nodes just for your workload purposes. So you're not bound by "number of nodes equals total storage required divided by storage per node," which is the classic how-big-is-my-cluster calculation.
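The capacity arithmetic behind that replication-versus-erasure-coding remark is worth making explicit. Using the 3x and roughly-30% figures from the conversation, with a made-up dataset size:

```python
def raw_tb_needed(usable_tb, overhead_factor):
    """Raw capacity to provision for a given amount of usable data."""
    return usable_tb * overhead_factor

usable_tb = 1000.0  # hypothetical usable dataset size, TB

raw_replication = raw_tb_needed(usable_tb, 3.0)  # HDFS-style 3x replication: 3000 TB raw
raw_erasure = raw_tb_needed(usable_tb, 1.3)      # ~30% erasure-coding overhead: 1300 TB raw

saving = 1 - raw_erasure / raw_replication       # ~0.567, i.e. ~57% less raw capacity
print(raw_replication, raw_erasure, round(saving, 3))
```

The exact overhead of a real erasure code depends on its data/parity layout, but the rough shape of the saving is what the interview is pointing at.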
That just doesn't work once you get over 10 nodes, 'cause now you're starting to get to the point where you're wasting something, right? You're either wasting storage capacity or, typically, wasting compute capacity, 'cause you're over-provisioned on one side or the other. >> So you're able to scale compute and storage independently, tune that for the workload, and grow that resource more efficiently? >> You can right-size the compute and storage for your cluster, but also, importantly, you gain flexibility with that storage tier: that data plane can be used for other, non-HDFS workloads. You can still have classic POSIX applications, or you may have new object-based applications, and with a single copy of the data, one virtual file system, which could also be geographically distributed, you can serve both Hadoop and non-Hadoop workloads. So you're saving additional replicas of the data from being required, by being able to onboard everything onto a common data layer. >> So that's a return-on-asset play. You've got an asset that's more fungible across the application portfolio. You can get more value out of it. You don't have to dedicate it to this one workload and then over-provision for another one when you've got extra capacity sitting here. >> It's a TCO play, but it's also a time saver. It's going to get you time to insight faster, 'cause you don't have to keep moving that data around. The time you spend copying data is time you should be spending getting insights from the data, so having a common data layer removes that delay. >> Okay, 'cause it's HDFS ready. I don't have to essentially move data from my existing systems into this new stovepipe. >> Yeah, we just present it through the HDFS API as it lands in the file system from the original application. >> So now, all this talk about flexibility, agility, etc.; what about cloud? How does cloud fit into this strategy?
What are you guys doing with your colleagues and cohorts at Bluemix, aka SoftLayer? You don't use that term anymore, but we do. When we get our bill it says SoftLayer still, but at any rate, you know what I'm talking about. The cloud with IBM, how does it relate to what you guys are doing in Power Systems? >> Well, the born-on-the-cloud philosophy of the IBM software analytics team is still very much the motto. So as you see in the Data Science Experience, which was launched last year, born in the cloud, all our analytics packages, whether it be our BigInsights software or our business intelligence software like Cognos, our future generations are landing first in the cloud. And of course we have our whole arsenal of Watson-based analytics and APIs available through the cloud. So what we're now seeing as well is we're taking those born-in-the-cloud offerings, but now also offering a lot of them in an on-premise model. So they can also participate in the hybrid model; Data Science Experience is now coming on premise, and we're showing it at the booth here today. Bluemix has an on-premise version as well, and the same software library, BigInsights, Cognos, SPSS, are all available for on-prem deployment. So Power is still the ideal place for hosting your on-prem data and running your analytics close to the data, and now we can federate that through hybrid access to these elements running in the cloud. So the focus is really the cloud applications being able to leverage the Power and System z based data through high speed connectors, and being able to build hybrid configurations where you're running your analytics where they make the most sense based upon your performance, data security and compliance requirements. And a lot of companies, of course, are still not comfortable putting all their jewels in the cloud, so typically there's going to be a mix and match.
We are expanding the footprint for cloud-based offerings, both in terms of Power servers offered through SoftLayer, but also through other cloud providers; Nimbix is a partner we're working with right now who is actually offering our PowerAI package. PowerAI is a package of open source deep learning frameworks, packaged by IBM, optimized for Power, in an easily deployed package with IBM support available. And that could be deployed on premise on a Power server, but it's also available on a pay-per-drink basis through the Nimbix cloud. >> All right, we covered a lot of ground here. We talked strategy, we talked strategic fit, which I guess is sort of an adjunct to strategy, we talked a little bit about the competition and where you differentiate, some of the deployment models, like cloud, other bits and pieces of your portfolio. Can we talk specifically about the announcements that you have here at this event, maybe just summarize them for us? >> Yeah, absolutely. As it relates to IBM, Hadoop, and Spark, we really have the full stack support, the rich analytics capabilities that I was mentioning: deep insights, prescriptive insights, streaming analytics with IBM Streams, Cognos Business Intelligence. This set of technologies is available for both IBM's Hadoop stack and Hortonworks' Hadoop stack today. Our BigInsights and IOP offering's next release, the 4.3 release, is now out for technical preview, and will be available for both Linux on Intel and Linux on Power towards the end of this month, so that's one piece of new Hadoop news at the analytics layer. As it relates to Power Systems, as Hortonworks announced this morning, HDP 2.6 is now available for Linux on Power, so we've been partnering closely with Hortonworks to ensure that we have an optimized story for HDP running on Power Systems servers, as with the data point I shared earlier of 70% improved queries per hour.
At the storage layer, we have a work in progress with Hortonworks to certify the Spectrum Scale file system, which really unlocks the ability to offer this converged storage alternative to the classic Hadoop model. Spectrum Scale actually supports, and provides advantages in, both a classic Hadoop model with local storage and the flexibility of offering the same sort of multi-application support in a scale-out model for storage. It also has the ability to form part of a storage appliance that we call Elastic Storage Server, which is a combination of Power servers and high density storage enclosures, SSD, spinning disk or flash, depending on the configuration, and with that certification it will now be an available storage appliance which could underpin either IBM Open Platform or HDP as a Hadoop data lake. But as I mentioned, it's not just for Hadoop; it's really for building a common data plane behind mixed analytics workloads that reduces your TCO through a converged storage footprint, but more importantly, provides you that flexibility of not having to create data copies to support multiple applications. >> Excellent, IBM opening up its portfolio to the open source ecosystem. You guys have always had, well not always, but in the last 20 years, major, major investments in open source. They continue on, and we're seeing it here. Steve, people are filing in. The evening festivities are about to begin. >> Steve: Yeah, yeah, the party will begin shortly. >> Really appreciate you coming on The Cube, thanks very much. >> Thanks a lot, Dave. >> You're welcome. >> Great to talk to you. >> All right, keep it right there everybody. John and I will be back with a wrap-up right after this short break, right back.

Published Date : Apr 6 2017


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
IBM | ORGANIZATION | 0.99+
John | PERSON | 0.99+
Steve | PERSON | 0.99+
Steve Roberts | PERSON | 0.99+
Dave | PERSON | 0.99+
Munich | LOCATION | 0.99+
Bob Picciano | PERSON | 0.99+
Hortonworks | ORGANIZATION | 0.99+
Terri | PERSON | 0.99+
3x | QUANTITY | 0.99+
six times | QUANTITY | 0.99+
70% | QUANTITY | 0.99+
last year | DATE | 0.99+
San Jose | LOCATION | 0.99+
two knobs | QUANTITY | 0.99+
Bluemix | ORGANIZATION | 0.99+
NVIDIA | ORGANIZATION | 0.99+
eight threads | QUANTITY | 0.99+
Linux | TITLE | 0.99+
Hadoop | TITLE | 0.99+
both | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Nimbix | ORGANIZATION | 0.98+
today | DATE | 0.98+
DataWorks Summit | EVENT | 0.98+
SoftLayer | TITLE | 0.98+
second | QUANTITY | 0.97+
Hadoop Summit | EVENT | 0.97+
Intel | ORGANIZATION | 0.97+
Spark | TITLE | 0.97+
IBMs | ORGANIZATION | 0.95+
single copy | QUANTITY | 0.95+
end of this month | DATE | 0.95+
Watson | TITLE | 0.95+
S822LC | COMMERCIAL_ITEM | 0.94+
Europe | LOCATION | 0.94+
this morning | DATE | 0.94+
firstly | QUANTITY | 0.93+
HDP 2.6 | TITLE | 0.93+
first | QUANTITY | 0.93+
HDFS | TITLE | 0.91+
one piece | QUANTITY | 0.91+
Apache | ORGANIZATION | 0.91+
30% | QUANTITY | 0.91+
ODPi | ORGANIZATION | 0.9+
DataWorks Summit Europe 2017 | EVENT | 0.89+
two threads per core | QUANTITY | 0.88+
SoftLayer | ORGANIZATION | 0.88+

Chandra Mukhyala, IBM - DataWorks Summit Europe 2017 - #DW17 - #theCUBE


 

>> Narrator: theCUBE, covering DataWorks Summit Europe 2017. Brought to you by Hortonworks. >> Welcome back to the DataWorks Summit in Munich everybody. This is The Cube, the leader in live tech coverage. Chandra Mukhyala is here. He's the offering manager for IBM Storage. Chandra, good to see you. It always comes back to storage. >> It does, it's the foundation. >> We're here at a data show, and you've got to put the data somewhere. How's the show going? What are you guys doing here? >> The show's going good. We have lots of participation. I didn't expect this big a crowd, but there is a good crowd. Storage, people don't look at it as the most sexy thing, but I still see a lot of people coming and asking "What do you have to do with Hadoop?" kinds of questions, which is exactly the kind of question I expect. So, going good, we're able to-- >> It's interesting, in the early days of Hadoop and big data, I remember John and I interviewed Jeff Hammerbacher, founder of Cloudera, and he was at Facebook, and he said, "My whole goal at Facebook when we were working with Hadoop was to eliminate the storage container, the expensive storage container." They succeeded, but now you see guys like you coming in and saying, "Hey, we have better storage." Why does the world need anything different than HDFS? >> This has been happening for the last two decades, right? In storage, every few years a startup comes along, they address one problem very well. They address one problem and create a whole storage solution around that. Everybody understands the benefit of it, and that becomes part of the mainstream storage. When I say mainstream storage, it's because these new point solutions address one problem, but what about all the rest of the features storage has been developing for decades? The same thing happened with other solutions, for example deduplication. Very popular at one point, dedupe appliances. Nowadays, every storage solution has dedupe in it. I think it's the same thing with HDFS, right?
HDFS is purpose-built for Hadoop. It solves that problem in terms of giving you local-access storage, scalable storage, parallel storage. But it's missing out on many things, you know. One of the biggest problems with HDFS is that it's siloed storage, meaning the data in HDFS is only available to Hadoop. What about the rest of the applications in the organization, which may need it through traditional protocols like NFS or SMB, or through new applications using S3 or Swift interfaces? So you don't want that siloed storage. That's one of the biggest problems we have. >> So, you're putting forth a vision of some kind of horizontal infrastructure that can be leveraged across your application portfolio... >> Chandra: Yes. >> How common is that? And what's the value of that? >> It's not really common; that's one of the stories, the messages, we're trying to get out. And I've been talking to data scientists over the last one year, a lot of them. One of the first things they do when they are implementing a Hadoop project is copy a lot of data into HDFS, because until it's in HDFS they can't run analytics on it. That copy process takes days. >> Dave: That's a big move, yeah. >> It's not only wasting the data scientist's time, but it also makes the data stale. I tell them you don't have to do that if your data is on something like IBM Spectrum Scale. You can run Hadoop straight off that; why do you even have to copy into HDFS? You can use the same existing MapReduce applications with zero change, pointing them at Spectrum Scale, which can still be accessed through the HDFS API. You don't have to copy that data. And every data scientist I talk to is like, "Really? I don't know how to do this, I'm wasting time?" Yes. So it's not very well known; most people think that there's only one way to do Hadoop applications, and that's on HDFS. You don't have to.
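The "copy process takes days" claim is easy to sanity-check with rough arithmetic; the dataset size and ingest bandwidth here are illustrative assumptions, not figures from the interview:

```python
dataset_tb = 500      # hypothetical data to land in HDFS, TB
link_gbit_s = 10      # hypothetical sustained ingest bandwidth (10 GbE)

bytes_total = dataset_tb * 10**12
bytes_per_sec = link_gbit_s * 10**9 / 8   # 1.25 GB/s

days = bytes_total / bytes_per_sec / 86_400
print(round(days, 1))   # ~4.6 days of pure transfer time, before any re-runs
```

And that is the best case of a fully saturated link; real ingest jobs with verification and retries stretch it further, which is the staleness problem being described.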
And the advantages there are, one, you don't have to copy, and you can share the data with the rest of the applications, so there's no more stale data. But there's also one other big difference between the HDFS type of storage and shared storage. In share-nothing, which is what HDFS is, the way you scale is by adding new nodes, which adds both compute and storage. What about applications which don't necessarily need more compute, where all they need is more throughput? You're wasting compute resources, right? So there are certain applications where share-nothing is a better architecture, and others where shared storage is. Now the solution which IBM has will allow you to deploy it either way, share-nothing or shared storage, but that's one of the main reasons people, data scientists especially, want to look at these alternative solutions for storage. >> So when I go back to my Hammerbacher example, it worked for Facebook in the early days because they didn't have a bunch of legacy data hanging around; they could start with, pretty much, a blank piece of paper. >> Yes. >> Re-architect. Plus they had such scale, they probably said, "Okay, we don't want to go to EMC and NetApp or IBM, or whomever, and buy storage; we want to use commodity components." Not every enterprise can do that, is what you're saying. >> Yes, exactly. It's probably okay for somebody like a very large search engine, when all they're doing is analytics, nothing else. But if you go to any large commercial enterprise, they have lots of applications, and the whole point around analytics is that they want to pool all of the data and look at it, to find the correlations, right? It's not about analyzing one dataset from one business function. It's about pooling everything together and seeing what insights can I get out of it. So that's one of the reasons it's very important to have support to access the data from your legacy enterprise applications, too, right?
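The over-provisioning that comes from coupled, share-nothing scaling can be sketched numerically. The node shape and requirements below are hypothetical, chosen only to show the effect:

```python
import math

NODE_STORAGE_TB = 48    # hypothetical local storage per node
NODE_CORES = 32         # hypothetical cores per node

storage_needed_tb = 2400   # what the workload actually requires
cores_needed = 400

# Share-nothing: the storage requirement alone dictates the node count,
# because storage and compute arrive bolted together.
nodes = math.ceil(storage_needed_tb / NODE_STORAGE_TB)  # 50 nodes
cores_provisioned = nodes * NODE_CORES                   # 1600 cores bought
wasted_cores = cores_provisioned - cores_needed          # 1200 cores sit idle

print(nodes, cores_provisioned, wasted_cores)
```

With shared storage the two axes decouple: you would buy enough compute nodes for the 400 cores and grow the storage tier separately.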
Yeah, so NFS and SMB are pretty important, and so are S3 and Swift. But also for these analytics applications, one of the advantages of the IBM solution here is that we provide local access to the file system. Not just through NAS protocols, we do that, but we also have POSIX access to give applications direct local access to the file system. With HDFS, you have to first copy the file into HDFS, and bring it back out to do anything else with it. All those copy operations go away. And this is important, again in the enterprise, not just for data sharing but also to get local access. >> You're saying your system is Hadoop ready. >> Chandra: It is. >> Okay. And then, the other thing you hear a lot from IT practitioners anyway, not so much from the lines of business, is that when people spin up these Hadoop projects, big data projects, they go outside of the edicts of the organization in terms of governance and compliance, and often security. Do you solve that problem? >> Yeah, that's one of the reasons to consider, again, enterprise storage, right? It's not just that you're able to share the data with the rest of the applications, but you also get a whole bunch of data management features, including data governance features. You can talk about encryption there, you can talk about auditing there, you can talk about features like WORM, right, write once read many, so that data, especially archival data, once you write it you can't modify it. There are a whole bunch of features around data retention and data governance, and those are all part of the data management stack we have. You get that for free. You not only get universal, unified access, but you also get data governance. >> So is this one of the situations where, on the face of it, when you look at the CapEx, you say, "Oh, wow, I can use commodity components, save a bunch of money"? You know, you remember the client-server days.
"Oh, wow, cheap, cheap, cheap, microprocessor-based solution," and then all of a sudden people realize we have to manage this. Have we seen a similar sort of trend with Hadoop, with the complexity of managing all of this infrastructure being so high that it actually drives costs up? >> Actually, there are two parts to it, right? There is actually value in utilizing commodity hardware, industry standards. That does reduce your costs, right? If you can just buy a standard x86 server as a storage server and utilize that, why not? That is kind of a given. But the real value in any kind of storage data management solution is in the software stack. Now you can reduce CapEx by using industry standards. It's a good thing to do, and we should, and we support that, but in the end, the data management is there in the software stack. What I'm saying is HDFS is solving one problem while dismissing the whole data management problem, which we just touched on. And that all comes in software, which runs on industry-standard servers.
Now, there is an advantage in an appliance, moreover, because yes, it can run on industry-standard hardware, but this is storage. It's the foundation of all of your infrastructure, and you want RAS, you want reliability and availability. The only way to get that is a fully integrated, tight solution, where you're doing a lot of testing on the software and the hardware together. Yes, it's supposed to work, but what really happens when it fails, how does the system react? And that's where I think there is still value in integrated systems. If you're a large customer and you have a lot of storage-savvy administrators who know how to build solutions and validate them, then yes, software-based storage is the right answer for you. >> And you're the offering manager for Spectrum Scale, which is the file offering, right? >> Yes, right, yes. >> And it includes object as well, or-- >> Spectrum Scale is a file and object storage platform. It supports file protocols, and it also supports object protocols. The thing about object storage is it means different things to different people. To some people, it's the object interface. >> Yeah, to me it means get, put. >> Yeah, if that's the definition, then it is object storage. And the fact is that everybody supports S3 now. But to some people, it's not about the protocol, because they're going to still access it through file protocols; to them, it's about the object store, which means it's a flat namespace, there's no hierarchical name structure, and you can get into billions of files without having any scalability issues. That's an object store. But to some other people it's neither of those, it's about erasure coding, which object storage uses, so it's cheap storage. It allows you to run on standard servers, and you get cheap storage. So it's three different things. If you're talking about protocols, yes, and by those other definitions, Spectrum Scale is object storage also.
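The "get, put on a flat namespace" definition from the exchange above can be sketched as a toy, purely to illustrate the idea; the class and keys here are invented, not any product's API:

```python
class FlatObjectStore:
    """Toy object store with a flat namespace and get/put semantics.

    'scans/2017/xray-001.dcm' is just an opaque key, not a directory path:
    there is no tree to traverse, so a lookup costs the same however many
    objects you hold, which is the scalability point of a flat namespace.
    """

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        # Whole-object write: objects are replaced, never edited in place.
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]

store = FlatObjectStore()
store.put("scans/2017/xray-001.dcm", b"image bytes")
retrieved = store.get("scans/2017/xray-001.dcm")
```

Contrast this with a file system, where resolving `scans/2017/xray-001.dcm` means walking three levels of a directory tree, which is exactly the traversal cost Chandra describes later for very large subdirectory counts.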
>> So in thinking about, well, let's start with Spectrum Scale generally, but specifically your angle in big data and Hadoop, and we talked about that a little bit, but what are you guys doing here, what are you showing, what's your partnership with Hortonworks? Maybe talk about that a little bit. >> So we've been supporting what we call the Hadoop connector on Spectrum Scale for almost a year now, which allows our existing Spectrum Scale customers to run Hadoop straight on it. But if you look at the Hadoop distributions, there are two or three major ones, right? Cloudera, Hortonworks, maybe MapR. One of the first questions we get when we tell our customers you can run Hadoop on this is, "Oh, is this supported by my distribution?" So that has been a problem. So what we announced is, we formed a partnership with Hortonworks, so now Hortonworks is certifying IBM Spectrum Scale. It's not new code changes, it's not new features, but it's a validation and a stamp from Hortonworks; that's in process. The result of that is a Hortonworks-certified reference architecture, which is what we announced about a month ago. We should be publishing that soon. Now customers can have more confidence in the joint solutions. It's not just IBM saying that it's Hadoop ready, but it's Hortonworks backing that up. >> Okay, and your scope, correct me if I'm wrong, is sort of on-prem and hybrid, >> Chandra: Yes. >> Not cloud services. You might sell your technology internally, but-- >> Correct, so IBM Storage is primarily focused on on-prem storage. We do have a separate cloud division, but almost every IBM storage product, and Spectrum Scale especially, which I can speak to, we treat as hybrid cloud storage. What we mean by that is we have built-in capabilities; most of our products have a feature called transparent cloud tiering, which allows you to set a policy on when data should be automatically tiered to the cloud.
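The policy-driven tiering just described could be sketched like this. Everything here is a toy stand-in (the class, the age threshold, the in-memory "cloud" dict), not IBM's actual transparent cloud tiering interface:

```python
import time

class TieredStore:
    """Toy model of policy-driven tiering with transparent recall on access."""

    def __init__(self, max_age_seconds):
        self.max_age = max_age_seconds
        self.local = {}   # name -> (data, last_access_time)
        self.cloud = {}   # name -> data

    def write(self, name, data):
        self.local[name] = (data, time.time())

    def apply_policy(self):
        # Tier out anything not touched within the policy window.
        now = time.time()
        for name in list(self.local):
            data, last_access = self.local[name]
            if now - last_access > self.max_age:
                self.cloud[name] = data
                del self.local[name]

    def read(self, name):
        # Transparent recall: the caller never sees which tier held the data.
        if name not in self.local:
            self.local[name] = (self.cloud.pop(name), time.time())
        data, _ = self.local[name]
        return data

store = TieredStore(max_age_seconds=0.001)
store.write("q1-report.csv", b"rows")
time.sleep(0.01)
store.apply_policy()                    # the cold file is tiered out
recalled = store.read("q1-report.csv")  # and comes back automatically on access
```

Real policies would key on age, data type, or ownership, as described above; the mechanism is the same: the policy engine moves data, and a read pulls it back without the application changing.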
Everybody wants public, everybody wants on-prem. Obviously there are pros and cons of on-premises versus off-premises storage, but basically it boils down to: if you want performance and security, you want to be on premises. But there's always some data which is better off in the cloud, and we try to automate that with our feature called transparent cloud tiering. You set a policy based on age, based on the type of data, based on the ownership. The system will automatically tier that data to the cloud, and when a user accesses it, it comes back automatically, too. It's all transparent to the end user. So yes, we're an on-premises storage business, but our solutions are hybrid cloud storage. >> So, as somebody who knows the file business pretty well, let's talk about the file business and sort of where it's headed. There are some megatrends and dislocations. There's obviously software-defined; you guys made a big investment in software-defined a year and a half, two years ago. There's cloud; Amazon with S3 sort of shook up the world. I mean, at first it was sort of small, but now it's really catching on. Object obviously fits in there. What do you see as the future of file? >> That's a great question. When it comes to data layout, there's really block, file, and object. Software-defined and cloud are different ways of consuming storage. If you're a large enterprise, you would probably prefer a software-based solution so you can run it on your existing servers. Depending on the organization's preferences and how concerned they are about security and performance, they will prefer to run some of the applications in the cloud. These are different ways of consuming storage. But coming back to file and object, right? Object is perfect if you are not going to modify the data. You're done writing that data, and you're not going to change it. It just belongs in an object store, right?
It's more scalable storage. I say scalable because file systems are hierarchical in nature. Because it's a file system tree, you have to traverse the various subdirectory trees, and beyond a few million subdirectories, it slows you down. But file systems have a strength. When you want to modify a file, any application which is going to edit the file, which is going to modify the file, belongs on file storage, not on object. But let's say you are dealing with medical images. You're not going to modify an x-ray once it's done. That's better suited to object storage. So file storage will always have a place. Take video editing; we do a lot of video editing. That belongs on file storage, not on object. If you care about file modifications and file performance, file is your answer, but if you're done and you just want to archive it, you want scalable storage, billions of objects, then object is the answer. Now, either of these can be software-based storage or it could be an appliance. That's again an organization's preference. Do you want an integrated, robust, ready-made solution? Then an appliance is the answer. "Ah, no, I'm a large organization, I have a lot of storage administrators"; then they can build something on their own, and software-based is the answer. Having both models gives you a choice. >> What brought you to IBM? You used to be at NetApp. IBM's buying The Weather Company. Dell's buying EMC. What attracted you to IBM? >> Storage is the foundation, but it's really about data, and it's really about making sense of it, right? And everybody says data is the new oil, right? And IBM is probably the only company I can think of which has the tools and the AI to make sense of all this. NetApp was great in the early 2000s, but even as a storage foundation, they have issues with scale-out, true scale-out, not just a single namespace. EMC is a pure storage company.
In the future it's all about, and the reason we are here at this conference is about, analyzing the data. What tools do you have to make sense of it? That's where machine learning and then deep learning come in. Watson is very well known for that. IBM has the AI, and it has a lot of research going on behind that, and I think storage will make more sense here. And also, IBM is doing the right thing by investing almost a billion dollars in software-defined storage. They are one of the first companies who did not hesitate to take the software from their integrated systems, for example XIV, and make it available as software only. We did the same thing with Storwize: we took the software off it and made it available as Spectrum Virtualize. We did not hesitate at all to take that same software and make it available. Some other vendors say, "I can't do that, I'm going to lose all my margins." We didn't hesitate. We made it available as software, because we believe that's an important need for our customers. >> So the vision of the company, cognitive, the halo effect of that business, that's the future, and it's going to bring a lot of storage action, is sort of the premise there. >> Chandra: Yes. >> Excellent. Well, Chandra, thanks very much for coming to theCUBE. It was great to have you, and good luck with attacking the big data world. >> Thank you, thanks for having me. >> You're welcome. Keep it right there, everybody. We'll be back with our next guest. We're live from Munich. This is DataWorks 2017. Right back. (techno music)

Published Date : Apr 5 2017
