Eric Herzog & Sam Werner, IBM | CUBEconversation
(upbeat music) >> Hello everyone, and welcome to this "Cube Conversation." My name is Dave Vellante and you know, containers, they used to be stateless and ephemeral but they're maturing very rapidly. As cloud native workloads become more functional and they go mainstream persisting, and protecting the data that lives inside of containers, is becoming more important to organizations. Enterprise capabilities such as high availability or reliability, scalability and other features are now more fundamental and important and containers are linchpin of hybrid cloud, cross-cloud and edge strategies. Now fusing these capabilities together across these regions in an abstraction layer that hides that underlying complexity of the infrastructure, is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions not to mention the complexities and costs of doing so. And with me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who's the Chief Marketing Officer and VP of Global Storage Channels. For the IBM Storage Division is Sam Werner is the vice president of offering management and the business line executive for IBM Storage. Guys, great to see you again, wish should, were face to face but thanks for coming on "theCUBE." >> Great to be here. >> Thanks Dave, as always. >> All right guys, you heard me my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point? >> Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually of course, it became the main state. Containers is going through exactly that right now. Brought in by the dev ops people, the software teams. 
And now it's becoming, again, persistent, real use: clients that want to deploy a million of them. Just the way they historically have deployed a million virtual machines, now they want a million containers, or 2 million. So now it's going mainstream, and the feature functions that you need once you take it out of the test, sort of play-with stage, to the real production phase really change the ball game on the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully container world. >> So Sam, how'd we get here? I mean, containers have been around forever. You look inside Linux, right? But then they did, as Eric said, go mainstream. But it started out kind of little, experimental. As I said, they're ephemeral, you didn't really need to persist them, but it's changed very quickly. Maybe you could talk to that evolution and how we got here. >> I mean, look, this is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers, and developers are constantly out running ahead. They've got to go faster and they have to deliver new applications. Business lines need to figure out new ways to engage with their customers. Especially now, the past year has even further accelerated this need to engage with customers in new ways. So it's about being agile. Containers promise, or provide, a lot of the capabilities you need to be agile. What enterprises are discovering is a lot of these initiatives are starting within the business lines, and they're building these applications or making these architectural decisions, building dev ops environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them.
And they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment to do things like dev ops, but they need to bring the infrastructure teams along. And that's what we're focused on now: how do you make that agile infrastructure to support these new container worlds? >> Got it. So Eric, you guys made an announcement to directly address these issues. It's kind of a fire hose of innovation. Maybe you could take us through, and then we can unpack that a little bit. >> Sure, so what we did is on April 27th, we announced IBM Spectrum Fusion. This is a fully container native software defined storage technology that integrates a number of proven, battle-hardened technologies that IBM has been deploying in the enterprise for many years. That includes a global scalable file system that can span edge, core and cloud seamlessly with a single copy of the data. So no more data silos and no more 12 copies of the data, which of course drive up CapEx and OpEx. Spectrum Fusion reduces that and makes it easier to manage. Cuts the cost from a CapEx perspective and cuts the cost from an OpEx perspective. By being fully container native, it's ready to go for the container centric world and can span all types of areas. So what we've done is create a storage foundation, which is what you need at the bottom. So things like the single global namespace, single accessibility; we have local caching. So with your edge, core, cloud, regardless of where the data is, you think the data's right with you, even if it physically is not. So that allows people to work on it. We have file locking and other technologies to ensure that the data is always good.
And then of course we've imbued it with the HA, Disaster Recovery, and the backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, makes them container native, and brings them together into a single piece of software. And we'll provide that both as a software defined storage technology, early in 2022, and our first pass will be as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That of course means it'll come with compute, it'll come with storage, come with a rack even, come with networking. And because we can preload everything for the end users or for our business partners, it will also include Kubernetes, Red Hat OpenShift and Red Hat's virtualization technology, all in one simple package, all ease of use, and a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system level technologies. >> So maybe you can help us understand the architecture, and maybe the prevailing ways in which people approach container storage. What's the stack look like? And how have you guys approached it? >> Yeah, that's a great question. Really, there's three layers that we look at when we talk about container native storage. It starts with the storage foundation, which is the layer that actually lays the data out onto media, does it in an efficient way, and makes that data available where it's needed. So that's the core of it. And the quality of your storage services above that depends on the quality of the foundation that you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take this for granted, I think, as they move to containers. We're talking about moving mission critical applications now into a container and hybrid cloud world.
How do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site, four-site replication of their data with HyperSwap, and they can ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talk about how only 20% of applications have really moved into a hybrid cloud world. The thing that's inhibiting the other 80% is these types of challenges, okay? So the storage services include HA, DR, data protection, data governance, data discovery. You talked about making multiple copies of data; that creates complexity, and it also creates risk and security exposures. If you have multiple copies of data, if you needed data to be available in the cloud, you're making a copy there. How do you keep track of that? How do you destroy the copy when you're done with it? How do you keep track of governance and GDPR, right? So if I have to delete data about a person, how do I delete it everywhere? So there's a lot of these different challenges. These are the storage services. So we talk about a storage services layer. So layer one, data foundation; layer two, storage services; and then there needs to be connection into the application runtime. There has to be application awareness to do things like high availability and application consistent backup and recovery. So then you have to create the connection. And so in our case, we're focused on OpenShift, right? When we talk about Kubernetes, how do you create the knowledge between layer two, the storage services, and layer three of the application services? >> And so this is your three layer cake. And then as far as the policies that I want to inject, you've got an API out and entries in, I can use whatever policy engine I want. How does that work? >> So we're creating consistent sets of APIs to bring those storage services up into the application runtime.
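The layering Sam walks through, an application runtime declaring what it needs while the storage services layer supplies capabilities like HA and DR underneath, can be sketched in miniature the way a Kubernetes application would consume it. This is a hypothetical illustration only: the class name, provisioner string and parameters below are invented for the sketch, not actual Spectrum Fusion or OpenShift API names.

```python
# Toy sketch of the three-layer connection: layer two (storage services)
# publishes a class of storage; layer three (the application) claims it
# declaratively, never touching the data foundation directly.

def storage_class(name, provisioner, replication_sites, snapshots=True):
    """Layer two: a class of storage with services like HA/DR baked in."""
    return {
        "kind": "StorageClass",
        "metadata": {"name": name},
        "provisioner": provisioner,  # hypothetical driver name
        "parameters": {
            # storage services the foundation must honor for this class
            "replicationSites": str(replication_sites),  # e.g. 3-site HA
            "snapshots": str(snapshots).lower(),
        },
    }

def volume_claim(app, class_name, size_gb):
    """Layer three: the application declares what it needs, not how."""
    return {
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": f"{app}-data"},
        "spec": {
            "storageClassName": class_name,
            "resources": {"requests": {"storage": f"{size_gb}Gi"}},
        },
    }

sc = storage_class("ha-dr-gold", "example.com/fusion", replication_sites=3)
pvc = volume_claim("orders", "ha-dr-gold", 100)
```

The point of the shape: the developer's claim references a class name only, so HA and DR policy can change in layer two without the application knowing.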
We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within an IBM Cloud Satellite. We're also working with Red Hat on their Advanced Cluster Management, also known as RHACM, to create multi-cluster management of your Kubernetes environment and give that consistent experience. Again, one common set of APIs. >> So the appliance comes first? Is that right? Okay, so is that just time to market, or is there a sort of enduring demand for appliances? Some customers, you know, they want that. Maybe you could explain that strategy. >> Yeah, so first let me take it back a second. Look at our existing portfolio. Our award-winning products are both software defined and system-based. So for example, Spectrum Virtualize comes on our FlashSystem. Spectrum Scale comes on our Elastic Storage System. And we've had this model where we provide the exact same software, both on an array or as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager, if you will, that's not what they'll try to sell you as software defined storage. And of course, many of them don't offer software defined storage in any way, shape or form. So we've done both. So with Spectrum Fusion, we'll have a hyperconverged configuration, which will be available in Q3, and we'll have a software defined configuration, which will be available at the very beginning of 2022. And we wanted to get out there first; based on feedback from our clients and from our business partners, by doing a container native HCI technology, we're way ahead. We're going to where the puck is going. We're throwing the ball ahead of the wide receiver.
If you're a soccer fan, we're making sure that the midfielder got it to the forward ahead of time, so you can kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Container is where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal. Guess what? We work fine with that. We work fine with virtual, as we have a tight integration with both Hyper-V and VMware. So some customers will still do that. And containers is the new wave. So with Spectrum Fusion, we are riding the wave, not fighting the wave, and that way we can meet all the needs, right? Bare metal, virtual environments, and container environments, in a way that is all based on the end users' applications, workloads, and use cases. What goes where, and IBM Storage can provide all of it. So we'll give them two methods of consumption by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization. We're leading with OpenShift and containers. We're the first full container-native, OpenShift ground-up based hyperconverged of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been. >> So just to follow up on that. You've got the sort of Switzerland DNA. And it's not just OpenShift and Red Hat and the open source ethos. I mean, it goes all the way back to SAN Volume Controller back in the day, where you could virtualize anybody's storage. How is that carrying through to this announcement? >> So Spectrum Fusion is doing the same thing.
Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports non-IBM storage, for example, EMC Isilon NFS. Fusion will support Spectrum Scale, Fusion will support our Elastic Storage System, Fusion will support NetApp filers as well. Fusion will support IBM Cloud Object Storage, both as software defined storage or as an array technology, and Amazon S3 object stores, and any other object storage vendor that's compliant with S3. All of those can be part of the global namespace, scalable file system. We can bring in, for example, object data without making a duplicate copy. The normal way to do that is you make a duplicate copy: you have a copy in the object store, and you make a copy to bring that into the file system. Well, guess what, we don't have to do that. So again, cutting CapEx and OpEx, and ease of management. But just as we do with our FlashSystem products and our Spectrum Virtualize and the SAN Volume Controller, where we support over 550 storage arrays that are not ours, that are our competitors', with Spectrum Fusion we've done the same thing: Fusion will support Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores, as well as other S3-compliant stores, EMC Isilon NFS, and NFS from NetApp. And by the way, we can do the discovery model as well, not just integration in the system. So we've made sure that we really do protect existing investments. And we try to eliminate the traversal, particularly with the discovery capability: you've got AI or analytics software connecting with the API into the discovery technology, so you don't have to traverse and try to find things, because the discovery will create real time metadata cataloging and indexing, not just of our storage but of the other storage I mentioned, which is the competition. So talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the global Fortune 1500, for sure.
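The discovery model Eric mentions, real-time metadata cataloging across heterogeneous storage so analytics tools query an index instead of traversing every array, can be sketched as a toy catalog. This is an invented illustration of the idea, not the actual discovery API; the backend names and record fields are hypothetical.

```python
class MetadataCatalog:
    """Toy sketch of a discovery catalog: it indexes metadata from many
    backends (ours or a competitor's) so queries hit the index, never
    a traversal of the storage itself."""

    def __init__(self):
        self.index = {}  # path -> metadata record

    def ingest(self, backend, listing):
        """Catalog metadata reported by any backend's listing."""
        for path, meta in listing.items():
            self.index[path] = {"backend": backend, **meta}

    def search(self, **filters):
        """Answer queries from the index, not the arrays themselves."""
        return [
            path for path, rec in self.index.items()
            if all(rec.get(k) == v for k, v in filters.items())
        ]

catalog = MetadataCatalog()
catalog.ingest("isilon-nfs", {
    "/genomics/run1.bam": {"type": "bam", "project": "genomics"},
})
catalog.ingest("s3-archive", {
    "/genomics/run2.bam": {"type": "bam", "project": "genomics"},
    "/retail/sales.parquet": {"type": "parquet", "project": "retail"},
})

# One query spans both backends without touching either one.
hits = catalog.search(project="genomics")
```

The design point is the same one Eric makes: once metadata is indexed centrally, heterogeneous storage stops mattering to the application asking the question.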
And so we're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products but for the other products as well that aren't ours. >> So Sam, we understand the downside of copies, but then, if you're not doing multiple copies, how do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here? >> Yeah, that's a great question, and I'll build a little bit off of what Eric said. But look, one of the really great and unique things about Spectrum Scale is its ability to consume any storage. And we can actually allow you to bring in data sets from where they are. It could have originated in object storage, we'll cache it into the file system. It can be on any block storage. It can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of the file system, so it naturally fits into your application stack. Spectrum Scale uniquely is a globally parallel file system. There's not very many of them in the world, and there's none that can achieve what Spectrum Scale can do. We have customers running in the exabytes of data, and the performance improves with scale. So you can actually deploy Spectrum Scale on-prem, build out an environment of it, consuming whatever storage you have. Then you can go into AWS or IBM Cloud or Azure, deploy an instance of it, and it will now extend your file system into that cloud. Or you can deploy it at the edge and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, and we'll cache in only what's needed. Normally you would have to make a copy of data into the other environment, and then you'd have to deal with that copy later. Let's say you were doing a cloud bursting use case.
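Sam's point, one global file system that caches rather than copies, can be made concrete with a toy model: the home site holds the single authoritative copy, and a site-local cache is populated only on first read. This is an invented sketch of the behavior he describes, not Spectrum Scale internals.

```python
class GlobalNamespace:
    """Toy model: one namespace spanning sites; data is cached into the
    remote site on first read instead of being duplicated wholesale."""

    def __init__(self, home_site_data):
        self.home = dict(home_site_data)  # the single authoritative copy
        self.cache = {}                   # site-local (cloud/edge) cache
        self.remote_reads = 0             # fetches from the home site

    def read(self, path):
        if path not in self.cache:
            self.remote_reads += 1        # fetch once from the home site
            self.cache[path] = self.home[path]
        return self.cache[path]           # repeat reads served locally

    def spin_down(self):
        """Burst is over: drop the cache; the home copy is untouched."""
        self.cache.clear()

ns = GlobalNamespace({"/train/batch1": b"images", "/train/batch2": b"more"})
ns.read("/train/batch1")
ns.read("/train/batch1")   # served from cache, no second remote fetch
```

Contrast with the copy model: here there is nothing to reconcile, sync or delete afterward, because the cache was never a second authoritative copy.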
Let's look at that as an example, to make this real. You're running an application on-prem. You want to spin up more compute in the cloud for your AI. Normally you'd have to make a copy of the data. You'd run your AI, and then have to figure out what to do with that data. Do you copy some of it back? Do you sync them? Do you delete it? What do you do? With Spectrum Scale, we just automatically cache in whatever you need. It'll run there, and when you're done you spin it down. Your copy is still on-prem; no data is lost. We can actually deal with all of those scenarios for you. And then if you look at what's happening at the edge, there's a lot of, say, video surveillance data pouring in, or looking at the manufacturing floor, looking for defects. You can run AI right at the edge, make it available in the cloud, make that data available in your data center. Again, one file system going across all of it. And that's something unique in our data foundation built on Spectrum Scale. >> So there's some metadata magic in there as well, and that intelligence based on location. Okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on, or is it across the board? >> Sure, so first let's talk about the industries. We see certain industries going more container, quicker, than other industries. So first is financial services. We see it happening there. Manufacturing, Sam already talked about AI based manufacturing platforms. We actually have a couple clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see public sector, of course, and healthcare, and in healthcare don't just think delivery. At IBM, that includes the research guys. So the genomic companies, the biotech companies, the drug companies are all included in that. And then of course, retail, both on-prem and off-prem.
So those are sort of the industries. Then we see, from an application workload perspective, that AI, analytics and big data applications or workloads are the key things that Spectrum Fusion helps, because of its file system. It's high performance. And those applications are tending to spread across core, edge and cloud. So those applications are spreading out. They're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine. Or, perfect example, we have a giant global auto manufacturer. They've got factories all over. And if you think there aren't compute resources in every factory, there are, because those factories, I just saw an article actually, those factories cost about a billion dollars to build. A billion. So they've got their own IT, and it's connected to their core data center as well. So that's a perfect example of that enterprise edge where Spectrum Fusion would be an ideal solution, whether they do it as software defined only, or of course the appliance, when you've got a billion dollar factory just to build it, let alone produce the autos or whatever you're producing. Silicon, for example, those fabs all cost a billion. That's where the enterprise edge fits in very well with Spectrum Fusion. >> So in those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing some of those workloads that you mentioned, or is it the edge? Like you mentioned manufacturing, I could see that potentially being an edge as the driver. >> Well, it's a little bit of all of those, Dave. For example, virtualization came out, and virtualization offered advantages over bare metal, okay? Now containerization has come out, and containerization is offering advantages over virtualization. The good thing at IBM is we know we can support all three.
And we know, again, in the global Fortune 2000, 1500, they're probably going to run all three, based on the application, workload or use case. And our storage is really good at bare metal, very good in virtualization environments, and now with Spectrum Fusion, our container native, outstanding for container based environments. So we see these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those various workload types. So that's why we see this as a huge advantage. And again, the market is going to containers. I'm a native Californian. You don't fight the wave, you ride the wave. And the wave is containers, and we're riding that wave. >> If you don't ride the wave you become driftwood, as Pat Gelsinger would say. >> And that is true, another native Californian. >> So okay, I wonder, Sam, I sort of hinted upfront in my little narrative there, but the way we see this, you've got on-prem, hybrid, you've got public clouds, cross-cloud, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, hides the underlying complexity of the infrastructure, which becomes kind of an implementation detail. Eric talked about skating to where the puck is going, or whatever sports analogy you want to use. Is that where the puck is headed? >> Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is all about agility. You asked why these industries are implementing containers? It's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers, delivering what they want, and improving their experience. So if it's all about agility, developers don't want to wait around for infrastructure. You need to automate it as much as possible.
So it's about building infrastructure that's automated, which requires consistent APIs. And it requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement that. You want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do, as you bring more and more of your data, of your information about your customers, into these container worlds. You've got to have security rock solid. You can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware. You don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these dev ops environments. And that's what we're doing with Spectrum Fusion. We're taking an, I think, extremely unique and one of a kind storage foundation with Spectrum Scale, which gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise class container applications. >> So what's the bottom line business impact? I mean, how does this change things? Sam, you, I think, articulated very well that it's all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there? >> I'll mention one other point that we talk about at IBM a lot, which is the AI ladder. It's about how do you take all of this information you have and be able to use it to build new insights, to give your company an advantage.
An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and be able to build it into the fabric of your business operations, so that all decisions you're making in your company, and all services you deliver to your customers, are built on that data foundation and information. And the only way to do that, and infuse it into your culture, is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real-time data. The ultimate outcome, sorry, I know you asked for business results, is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute business outcome is you will continue to gain market share in your industry and grow revenue. I mean, that's the outcome every business wants. >> Yeah, it's all about speed. Everybody last year was forced into digital transformation. It was sort of rushed into and compressed, and now they get some time to do it right. And so modernizing apps, containers, dev ops, developer led sort of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom line summary. Actually, we haven't talked about the 3200 yet. Maybe you could give us a little insight on that before we close. >> Sure, so in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200, and it's all flash. It gets 80 gigabytes a second sustained at the node level, and we can cluster them infinitely. So for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course AI, big data and analytic workloads are extremely, extremely susceptible to bandwidth and/or data transfer rate.
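The bandwidth figures Eric quotes scale linearly in his example: 80 gigabytes a second per node, clustered, so ten nodes deliver 800 gigabytes a second. A one-line check of that arithmetic, using only the numbers quoted in the conversation:

```python
# Quoted figures: 80 GB/s sustained per ESS 3200 node, clustered linearly.
per_node_gb_s = 80
nodes = 10
cluster_gb_s = per_node_gb_s * nodes  # linear scaling, as described
print(cluster_gb_s)  # 800
```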
That's what they need to deliver their application base properly. It comes with Spectrum Scale built in, so that comes with it. So you get the advantage of Spectrum Scale. We talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. So it's ideal with its highly parallel file system. It's used all over in high performance computing and supercomputing, in drug research, in healthcare, in finance. Probably about 80% of the largest banks in the world use Spectrum Scale already for AI, big data and analytics. So the new 3200 is an all flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you could also add a 3200 to it if you want, because of the capability of our global namespace and our single file system across edge, core and cloud. So that's the 3200 in a nutshell, Dave. >> All right, give us the bottom line, Eric. We've got to go. What's the bumper sticker? >> Yeah, the bumper sticker is: you've got to ride the wave of containers, and IBM Storage is the company that can take you there, so that you win the big surfing contest and get the big prize. >> Eric and Sam, thanks so much, guys. It's great to see you, and I miss you guys. Hopefully we'll get together soon. So get your jabs, and we'll have a beer. >> All right. >> All right, thanks, Dave. >> Nice talking to you. >> All right, thank you for watching everybody. This is Dave Vellante for "theCUBE." We'll see you next time. (upbeat music)
Eric Herzog, IBM | CUBEConversation, March 2019
(upbeat music) [Announcer] From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from our studios in beautiful Palo Alto, California. One of the biggest challenges that every user faces is how they're going to arrange the resources that are responsible for storing, managing, delivering, and protecting data. That's a significant challenge, but it gets even worse when we start talking about multi-cloud. So today we've got Eric Herzog, who's the CMO and VP of Worldwide Storage Channels at IBM Storage, to talk a bit about the evolving relationship of what constitutes a modern, comprehensive storage portfolio and multi-cloud. Eric, welcome to theCUBE. >> Peter, thank you, thank you. >> So, to start off, what's happening with IBM Storage these days? And let's get into how multi-cloud is affecting some of your decisions, and some of your customers' decisions. >> So what we've done is we started talking about multi-cloud over two years ago. When Ed Walsh joined the company as general manager, we went on an analyst roadshow, in fact, we came here to theCUBE and shot a video, and we talked about how the IBM Storage Division is all about multi-cloud. And we look at that in three ways. First of all, if you are creating a private cloud, we work with you: from containers, whether you're VMware based, whether you are doing a more traditional private cloud. Now the modern private cloud is all container based. Second is hybrid cloud, data on prem, out to a public cloud provider. And the third aspect, and in fact, you guys have written about it in one of your studies, is that no one is going to use one public cloud provider; they're going to use multiple cloud providers.
So whether that be IBM Cloud, which of course we love because we're IBM shareholders, but we work with Amazon, we work with Google, and in fact we work with any cloud provider. Our Spectrum Protect backup product, which is one of the most awarded enterprise backup packages, can back up to any cloud. In fact, for over 350 small to medium cloud providers, the engine for their backup as a service is Spectrum Protect. Again, completely heterogeneous, we don't care what cloud you use, we support everyone. And we started that mantra two and a half years ago, when Ed first joined the company. >> Now, I remember when you came on, we talked a lot about this notion of data first, and the idea that data driven was what we talked about. >> Right, data driven. >> And increasingly, we talked about, or we made the observation, that enterprises were going to take a look at the natural arrangement of their data, and that was going to influence a lot of their cloud, a lot of their architecture, and certainly a lot of their storage decisions. How is that playing out? Is that still obtaining? Are you still seeing more enterprises taking this kind of data driven approach to thinking about their overall cloud architectures? >> Well, the world is absolutely data-centric. Where does the data go? What are the security issues with that data? How close is it to the compute when I need it? How do I archive it, how do I back it up? How do I protect it? We're here in Silicon Valley. I'm a native Palo Altan, by the way, and we really do have earthquakes here, and they really do have earthquakes in Japan and China, and there are all kinds of natural disasters. And of course as you guys have pointed out, as have almost all of the analysts, the number one cause of data loss besides humans is actually still fire. Even with fire-suppressant data centers. >> And we have fires out here in Northern California too. >> That's true.
So, you've got to make sure that you're backing up that data, you're archiving the data. Cloud could be part of that strategy. When does it need to be on-prem, when does it need to be off-prem? So, it's all about being data-driven, and companies look at the data and profile it: what sort of storage do I need? Can I go high end, mid-range or entry? They profile that data and figure out what they need to do, and then do the same thing with on-prem and off-prem. For certain data sets, for security reasons or legal reasons, you probably are not going to put it out into a public cloud provider. But other data sets are ideal for that. And so all of those decisions are being made by asking: What's the security of the data? What's the legality of that data? What's the performance I need of that data? And, how often do I need the data? If you're going to constantly go back and forth, pulling data back in, going to a public cloud provider, which charges both for data in and data out, that actually may cost more than buying an Array on-prem. And so, everyone's using that data-centricity to figure out how they spend their money, and how they optimize the data to use it in their applications, workloads and use cases. >> So, if you think about it, the reality is, by application, workload, location, and regulatory issues, we're seeing enterprises start to recognize an increasing specialization of their data assets. And that's going to lead to a degree of specialization in the classes of data management and storage technologies that they utilize. Now, what is the challenge of choosing a specific solution versus looking at more of a portfolio of solutions that perhaps provides a little bit more commonality? How are customers, how is the IBM customer base, dealing with that question? >> Well, for us the good thing was to have a broad portfolio. When you look at the base storage Arrays, we have file, block and object, and they're all award winning.
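The in-and-out charge math Herzog describes, where a public cloud provider bills for data pulled back out, can be sketched quickly. The rates and sizes below are purely hypothetical examples, not IBM's or any provider's actual pricing:

```python
# Back-of-the-envelope egress math: frequent retrieval makes per-GB
# fees add up. All numbers here are hypothetical illustrations.
egress_per_gb = 0.09          # $/GB pulled back out of the cloud (assumed rate)
dataset_gb = 50_000           # a 50 TB working set
pulls_per_month = 4           # the whole set is re-read roughly weekly

monthly_egress = egress_per_gb * dataset_gb * pulls_per_month
print(f"${monthly_egress:,.0f}/month in egress alone")  # $18,000/month
```

Recurring charges like that, compounded over a multi-year life, are exactly why a frequently re-read data set can be cheaper to keep on an on-prem Array.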
We can go big, we can go medium, and we can go small. And because of what we do with our Array family, we have products that tend to be expensive because of what they do, products that are mid-price, and products that are perfect for Herzog's Bar and Grill. Or maybe for 5,000 different bank branches, 'cause that bank is not going to buy expensive storage for every branch. They have a small Array there in case the core goes down, of course. When you or I go in to cash a check or transact, if the core data center is down, that Wells Fargo, BofA, Bank of Tokyo... >> Still has to do business. >> They are all transacting. There's a small Array there. Well, you don't want to spend a lot of money for that, you need a good, reliable all flash Array with the right RAS capability, right? The availability capability, that's what you need, and we can do that. The other thing we do is, we have very much cloud-ified everything we do. We can tier to the cloud, we can backup to the cloud. With object storage we can place it in the cloud. So we've made the cloud, if you will, a seamless tier to the storage infrastructure for our customers, whether that be backup data, archive data, or primary data, and made it so it's very easy to do. Remember, with that downturn in '08 and '09, a lot of storage people left their jobs. And while IT headcount is back up to where it used to be, in fact it's actually exceeded it, if there were 50 storage guys at Company X and they had to let go 25 of them, they didn't hire 25 storage guys back, but they got 10 times the data. So they probably have 2 more storage guys, they went from 25 to 27, except they're managing 10 times the data. So automation, seamless integration with clouds, and being multi-cloud, supporting hybrid clouds, is a critical thing in today's storage world. >> So you've talked a little bit about how data format issues still impact storage decisions. You've talked about how disasters or availability still impact storage decisions, and certainly cost does.
But you've also talked about some of the innovative things that are happening, security, encryption, evolved backup and restore capabilities, AI and how that's going to play. What are some of the key things that your customer base is asking for that are really driving some of your portfolio decisions? >> Sure, well, when we look beyond making sure we integrate with every cloud and make it seamless, the other aspect is AI. AI has taken off, machine learning, big data, all of those. And there it's all about having the right platform from an Array perspective, but then marrying it with the right software. So for example, our scale-out file system, Spectrum Scale, can go to exabyte class, in fact the two fastest supercomputers on this planet have almost half an exabyte of IBM Spectrum Scale for big data, analytics, and machine learning workloads. At the same time you need to have object store. If you're generating that huge a data set in the AI world, you want to be able to put it out. We also now have Spectrum Discover, which allows you to use metadata, which is the data about the data, and allows an AI app, a machine learning app, or an analytics app to actually access the metadata through an API. So that's one area, so cloud, then AI, is a very important aspect. And of course, cyber resiliency and cyber security are critical. Everyone thinks, I've got to call a security company, so the IBM Security Division, RSA, Check Point, Symantec, McAfee, all of these things. But the reality is, as you guys have noted, 98% of all enterprises are going to get broken into. So while they're in your house, they can steal you blind. Before the cops show up, like the old movie, what are they doing? They're loading up the truck before the cops show up. Well guess what, what if that happened, the cops didn't show up for 20 minutes, but they couldn't steal anything, because the TV was tied to your fingerprint?
So guess what, they couldn't use the TV, so they couldn't steal it. That's what we've done. So, whether it be encryption everywhere, we can encrypt backup sets, we can encrypt data at rest, we can even encrypt Arrays that aren't ours with our Spectrum Virtualize family. Air gapping, so that if you have ransomware or malware you can air-gap to tape. We've actually created air gapping out to a cloud snapshot. We have a product called Safeguard Copy which creates what I'll call a faux air gap in the mainframe space, but allows that protection, so it's almost as if it was air gapped even though it's on an Array. So that's ransomware and malware, being able to detect that. Our backup products, when they see unusual activity, will flag the backup or restore job and say there is unusual activity. Why? Because ransomware and malware generate unusual activity on backup data sets in particular, so it gets flagged. Now we don't go out and say, "By the way, that's Herzog ransomware," or "Peter Burris ransomware." But we do say "something is wrong, you need to take a look." So, integrating that sort of cyber resiliency and cyber security into the entire storage portfolio doesn't mean we solve everything. Which is why, when you build an overall security strategy, you've got that Great Wall of China to keep the enemy out, and you've got what I call the chase software, the cops that are coming to get the bad guy once he's in the house. But you've also got to be able to lock everything down. So a comprehensive security and resiliency strategy involves not only your security vendor, but actually your storage vendor. And IBM's got the right cyber resiliency and security technology on the storage side to marry up, regardless of which security vendor they choose. >> Now you mentioned a number of things that are associated with how an enterprise is going to generate greater leverage, greater value, out of data that you already know.
So, you mentioned, you know, encryption end to end, you mentioned being able to look at metadata for AI applications. As we move to a software driven world of storage, where physical volumes can be made more virtual so you can move them around to different workloads. >> Right. >> And associate the data more easily, tell us a little bit about how data movement becomes an issue in the storage world, because storage has always been associated with "it's here." But increasingly, because of automation, because of AI, because of what businesses are trying to do, it's becoming more associated with intelligent, smart, secure, optimized movement of data. How is that starting to impact the portfolio? >> So we look at that really as data mobility. And data mobility can be a number of different things. For example, as we already mentioned, we treat clouds as transparent tiers. We can backup to cloud, that's data mobility. We also tier data. We can tier data within an Array, or with the Spectrum Virtualize product we can tier block data across 450 Arrays, most of which aren't IBM logo'd. We can tier from IBM to EMC, EMC can then tier to HDS, HDS can tier to Hitachi, and we do that on Arrays that aren't ours. So in that case what you're doing is looking for the optimal price point, whether it be- >> And feature set. >> And feature sets, and you move data around all transparently, so it's all got to be automated. That's another thing: in the old days we thought we had Nirvana when the tiering automatically moved the data once it was 30 days old. What if we automatically move data with our Easy Tier technology through AI: when the data is hot, it moves it to the hottest tier; when the data is cold, it puts it out to the lowest cost tier. That's real automation leveraging AI technology. Same thing with something simple: migration. How much money have all the storage companies made on migration services?
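The hot/cold placement idea behind Easy Tier can be sketched as a toy policy. This is purely illustrative: the real product works from I/O heat statistics rather than just last-access time, and the tier names and thresholds below are invented for the example:

```python
from datetime import datetime, timedelta

def place(extent_last_access, now, hot_days=1, warm_days=30, cold_days=180):
    """Pick a tier for a data extent from how recently it was touched.
    A toy stand-in for heat-map-driven tiering; hottest tier first."""
    age = now - extent_last_access
    if age <= timedelta(days=hot_days):
        return "flash"            # hot: all-flash tier
    if age <= timedelta(days=warm_days):
        return "10k_sas"          # warm: mid-tier disk
    if age <= timedelta(days=cold_days):
        return "7200_rpm"         # cold: cheap, slow disk
    return "cloud_archive"        # frozen: lowest-cost tier

now = datetime(2019, 3, 1)
print(place(datetime(2019, 2, 28, 23), now))  # touched hours ago -> flash
print(place(datetime(2018, 6, 1), now))       # untouched for months -> cloud_archive
```

The point of the transcript's "real automation" remark is that the thresholds and placement would be learned from access patterns rather than hard-coded like this.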
What if you could do transparent block migration in the background, on the fly, without ever taking your servers down? We can do that. And what we do is so intelligent, we always favor the data set, so when the data is being worked on, migration slows down. When the data set slows down, guess what? Migration picks up. But the point is, data mobility, in this case from an old Array to a new Array. So whether it be migrating data, whether it be tiering data, whether you're moving data out to the cloud, whether it be primary data or backup data or object data for archive, the bottom line is we've infused not only the cloudification of our storage portfolio, but the mobility aspects of the portfolio. Which does of course include cloud. But most tiering is actually on premises. You could tier to the cloud, but going from an all flash Array to a cheap 7200 RPM Array, you save a lot of money, and we can do that using AI technology with Easy Tier. All examples of moving data around transparently, quickly, efficiently, to save cost both in CapEx, using 7200 RPM Arrays of course to cut costs, and in OpEx, meaning the storage admins. There aren't a hundred storage admins at Burris Incorporated. You had to let them go, you've hired 100 of the people back, but you hired them all for DevOps, so you have 50 guys in storage. >> Actually there are, but I'm a lousy businessman so I'm not going to be in business long. (laughing) One more question, Eric. I mean, look, you're an old-style road warrior, you're out with customers a lot. Increasingly, and I know this because we've talked about it, you're finding yourself trying to explain to business people, not just IT people, how digital business, data and storage come together. When you're having these conversations with executives on the business side, how does this notion of data services get discussed? What are some of the conversations like? >> Well, I think the key thing you've got to point out is storage guys love to talk speeds and feeds.
I'm so old I can still talk TPI and BPI on hard drives, and no one does that anymore, right? But when you're talking to the CEO or the CFO or the business owner, it's all about delivering data at the right performance level you need for your applications, workloads and use cases, the right resiliency for applications, workloads and use cases, the right availability. So it's all about applications, workloads, and use cases. You don't talk the storage speeds and feeds that you would with a storage admin, or maybe with the VP of infrastructure in a Fortune 500. It's all about the data: keeping the data secure, keeping the data reliable, keeping it at the right performance. So if it's the type of workload that needs performance, for example, let's take the easy one, flash. Why do I need flash? Well, Mr. CEO, do you use logistics? Of course we do! Who do you use? SAP. Oh, how long does that logistics workload take? Oh, it takes like 24 hours to run. What if I told you you could run that every night, in an hour? That's the power of flash. So you translate what you and I are used to, storage nerdiness, into business terms; in this case, running that SAP workload in an hour vs. 24 has a real business impact. And that's the way you've got to talk about storage these days. When you're out talking to a storage admin, yes, you want to talk latency and IOPS and bandwidth. But the CEO is just going to turn his nose up. When you say I can run the MongoDB workload, or I can do this or do that, and what was 24 hours is now an hour, or half an hour, that translates to real value out of that data. And that's what they're looking for: how to extract value from the data. If the data isn't performant, you get less value. If the data isn't there, you clearly have no value. And if the data isn't available, so that it's down part of the time, you really feel it if you are doing truly digital business.
So, take Herzog's Bar and Grill, where everything is done digitally: before you get that pizza, or before you get that cigar, you have to order it online. My website has a database underneath, of course, so I can handle the transactions right, I've got to take the credit card, I've got to get the orders right. If that is down half the time, my business is down, and that's an example of taking IT and translating it to something as simple as a bar and grill. And everyone is doing it these days. So when you talk about, do you want that website up all the time? Do you need your order entry system up all the time? Do you need your this or that? Then they actually get it, and then obviously, you make sure that the applications run quickly, swiftly, and smoothly. And storage is, if you will, that critical foundation underneath everything. It's not the fancy windows, it's not the fancy paint. But if that foundation isn't right, what happens? The whole building falls down. And that's exactly what storage delivers, regardless of the application workload: that right critical foundation of performance, availability, reliability. That's what they need, and when you have that, all applications run better, and your business runs better. >> Yeah, and the one thing I'd add to that, Eric, is that increasingly the conversations we're having are about options. And one of the advantages of a large portfolio or a platform approach is that beyond the things you're doing today, you'll discover new things that you didn't anticipate, and you want the option to be able to do them quickly. >> Absolutely. >> Very, very important thing. So, applications, workloads, use cases, multi-cloud storage portfolio. Eric, thanks again for coming on theCUBE, always love having you. >> Great, thank you. >> And once again, I'm Peter Burris, talking with Eric Herzog, CMO and VP of Worldwide Storage Channels at IBM Storage. Thanks again for watching this CUBE conversation, until next time. (upbeat music)
Eric Herzog, IBM | DataWorks Summit 2018
>> Live from San Jose in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We have with us Eric Herzog. He is the Chief Marketing Officer and VP of Global Channels at the IBM Storage Division. Thanks so much for coming on theCUBE once again, Eric. >> Well, thank you. We always love to be on theCUBE and talk to all of theCUBE analysts about various topics, data, storage, multi-cloud, all the works. >> And before the cameras were rolling, we were talking about how you might be the biggest CUBE alum, in the sense that you've been on theCUBE more times than anyone else. >> I know I'm in the top five, but I may be number one, I have to check with Dave Vellante and crew and see. >> Exactly, and often wearing a Hawaiian shirt. >> Yes. >> Yes, I was on theCUBE last week from Cisco Live. I was not wearing a Hawaiian shirt. And Stu and John gave me a hard time about why I was not wearing a Hawaiian shirt. So I made sure I showed up to the DataWorks show- >> Stu, Dave, get a load. >> You're in California with a tan, so it fits, it's good. >> So we were talking a little bit before the cameras were rolling, and you were saying one of the points that is sort of central to your professional life is that it's not just about the storage, it's about the data. So riff on that a little bit. >> Sure, so at IBM we believe everything is data driven, and in fact we would argue that data is more valuable than oil or diamonds or plutonium or platinum or silver or anything else. It is the most valuable asset, whether you be a global Fortune 500, whether you be a midsize company, or whether you be Herzog's Bar and Grill. So data is what you use with your suppliers, with your customers, with your partners.
Literally everything around your company is really built around the data, so it's about most effectively managing it and making sure, A, it's always performant, because when it's not performant they go away. As you probably know, Google did a survey showing that after one or two seconds they go off your website, they click somewhere else, so it has to be performant. Obviously in today's 365, 7 by 24 company, it needs to always be resilient and reliable, and it always needs to be available, because otherwise if the storage goes down, guess what? Your AI doesn't work, your Cloud doesn't work, whatever the workload. And if you're more traditional, your Oracle, SQL, you know, SAP, none of those workloads work if you don't have a solid storage foundation underneath your data driven enterprise. >> So with that ethos in mind, talk about the products that you newly launched, and also your product roadmap going forward. >> Sure, so for us everything starts from storage being that critical foundation for the data driven, multi Cloud enterprise. And as I've said before on theCUBE, all of our storage software is now Cloud-ified, so if you need to automatically tier out to IBM Cloud or Amazon or Azure, we automatically will move the data placement around from on premises out to a Cloud. And for certain customers who may be multi Cloud, in this case using multiple private Cloud providers, which happens due to either legal reasons or procurement reasons or geographic reasons for the larger enterprises, we can handle that as well. That's part of it. The second thing is we just announced earlier today an artificial intelligence, an AI reference architecture, that incorporates a full stack from the very bottom, both servers and storage, all the way up through the top layer, then the applications on top, so we just launched that today. >> AI for storage management, or AI to run a range of applications? >>
So we announced that reference architecture today. Basically think of the reference architecture as your recipe, your blueprint, of how to put it all together. Some of the components are from IBM, such as Spectrum Scale and Spectrum Computing from my division, our servers from our Cloud division. Some are opensource, Tensor, Caffe, things like that. Basic gives you what the stack needs to be, and what you need to do in various AI workloads, applications and use cases. >> I believe you have distributed deep learning as an IBM capability, that's part of that stack, is that correct? >> That is part of the stack, it's like in the middle of the stack. >> Is it, correct me if I'm wrong, that's containerization of AI functionality? >> Right. >> For distributed deployment? >> Right. >> In an orchestrated Kubernetes fabric, is that correct? >> Yeah, so when you look at it from an IBM perspective, while we clearly support the virtualized world, the VM wares, the hyper V's, the KVMs and the OVMs, and we will continue to do that, we're also heavily invested in the container environment. For example, one of our other divisions, the IBM Cloud Private division, has announced a solution that's all about private Clouds, you can either get it hosted at IBM or literally buy our stack- >> Rob Thomas in fact demoed it this morning, here. >> Right, exactly. And you could create- >> At DataWorks. >> Private Cloud initiative, and there are companies that, whether it be for security purposes or whether it be for legal reasons or other reasons, don't want to use public Cloud providers, be it IBM, Amazon, Azure, Google or any of the big public Cloud providers, they want a private Cloud and IBM either A, will host it or B, with IBM Cloud Private. All of that infrastructure is built around a containerized environment. We support the older world, the virtualized world, and the newer world, the container world. 
In fact, our storage allows you to have persistent storage in a containers environment, Docker and Kubernetes, and that works on all of our block storage, and that's a freebie, by the way, we don't charge for that. >> You've worked in the data storage industry for a long time. Can you talk a little bit about how the marketing message has changed and evolved since you first began in this industry, in terms of what customers want to hear and what assuages their fears? >> Sure, so nobody cares about speeds and feeds, okay? Except me, because I've been doing storage for 32 years. >> And him, he might care. (laughs) >> But when you look at it, the decision makers today, the CIOs, in 32 years, including seven start-ups, IBM and EMC, I've never, ever, ever met a CIO who used to be a storage guy, ever. So, they don't care. They know that they need storage and the other infrastructure, including servers and networking, but think about it, when the app is slow, who do they blame? Usually they blame the storage guy first, secondarily they blame the server guy, and thirdly they blame the networking guy. They never look to see that their code stack is improperly done. Really what you have to do is talk applications, workloads and use cases, which is what the AI reference architecture does. What my team does in non-AI workloads, it's all about, again, the data driven, multi-Cloud infrastructure. They want to know how you're going to make a new AI workload fast. How you're going to make their Cloud resilient, whether it's private or hybrid. In fact, IBM storage sells a ton of technology to large public Cloud providers that do not have the initials IBM. We sell gobs of storage to other public Cloud providers, both big, medium and small. It's really all about the applications, workloads and use cases, and that's what gets people excited. You basically need a position, just like I talked about with the AI foundations: storage is the critical foundation.
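The persistent container storage Herzog mentions is typically requested through a Kubernetes persistent volume claim, which lets data outlive the pod that uses it. Here is a minimal sketch of such a claim built as a Python dict; the field names follow the standard Kubernetes v1 API, while the storage class name is a placeholder rather than a specific IBM offering:

```python
import json

# A PersistentVolumeClaim asks the cluster for storage that survives
# pod restarts. Field names are the standard Kubernetes v1 API shape;
# "block-storage" is a hypothetical storage class, not a real product name.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "block-storage",
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

print(json.dumps(pvc, indent=2))
```

Applied to a cluster, a claim like this binds to a volume carved from whatever backend the storage class points at, which is the mechanism a block Array plugs into.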
We happen to be, knocking on wood, let's hope there's no earthquake, since I've lived here my whole life and I've been in earthquakes, I was in the '89 quake. I literally fell down a bunch of stairs in the '89 quake. If there's an earthquake, as great as IBM storage is, or any other storage or servers, it's crushed. Boom, you're done! Okay, well, you need to make sure that your infrastructure, really your data, is covered by the right infrastructure, and that it's always resilient, it's always performant and it's always available. And that's what IBM storage is about, that's the message, not how many gigabytes per second of bandwidth or what's the- Not that we can't spew that stuff when we talk to the right person, but in general people don't care about it. What they want to know is, "Oh, that SAP workload took 30 hours and now it takes 30 minutes?" We have public references that will say that. "Oh, you mean I can use eight to ten times less storage for the same money?" Yes, and we have public references that will say that. So that's what it's really about. Storage has moved away from the speeds-and-feeds, number-cruncher sort of thing, and now all the number crunchers are doing AI and Caffe and TensorFlow and all of that, they're all hackers, right? It used to be storage guys who did that, and to a lesser extent server guys, and definitely networking guys. That's all shifted to the software side, so you've got to talk the languages. What can we do with Hortonworks? By the way, we were named in Q1 of 2018 as the Hortonworks infrastructure partner of the year. We work with Hortonworks all the time, at all levels, whether it be with our channel partners, whether it be with our direct end users, however the customer wants to consume. We work with Hortonworks very closely, and other providers as well, in that big data analytics and AI infrastructure world, that's what we do.
>> So the containerization side of the IBM AI stack, and the containerization capabilities in Hortonworks Data Platform 3.0, can you give us a sense for how you plan, at IBM, to work with Hortonworks to bring these capabilities, your reference architecture, or their environment for that matter, into more of an alignment with what you're offering? >> So we haven't made an exact decision on how we're going to do it, but we interface with Hortonworks on a continual basis. >> Yeah. >> We're working to figure out what's the right solution, whether that be an integrated solution of some type, whether that be something that we do through an adjunct to our reference architecture, or some reference architecture that they have. But we always make sure, again, we are their partner of the year for infrastructure, named in Q1, and that's because we work very tightly with Hortonworks and make sure that what we do ties out with them, hits the right applications, workloads and use cases, the big data world, the analytics world and the AI world, so that we're tied off, you know, together, to make sure that we deliver the right solutions to the end user. Because that's what matters most, it's not what gets Hortonworks or IBM fired up, it's what gets the end users fired up. >> When you're trying to get into the head space of the CIO and get your message out there, I mean, what is it, what would you say it is that keeps them up at night? What are their biggest pain points, and then how do you come in and solve them? >> I'd say the number one pain point for most CIOs is application delivery, okay? Whether that be to the line of business, put it this way, let's take an old workload, okay? Let's take that SAP example. That CIO was under pressure because, in this case, it was a giant retailer who was shipping stuff every night, all over the world. Well guess what?
The green undershirts in the wrong size went to Paducah, Kentucky, and one of the other stores, in Singapore, which needed those green shirts, ended up with shoes. And the reason is, they couldn't run that SAP workload in a couple of hours. Now they run it in 30 minutes. It used to take 30 hours. So since they're shipping every night, you're basically missing a cycle, and you're not delivering the right thing, from a retail infrastructure perspective, to each of their nodes, if you will, to their retail locations. So they care about what they need to do to deliver to the business the right applications, workloads and use cases on the right timeframe, and they can't go down. People get fired for that at the CIO level, right? If something goes down, the CIO is gone. And obviously there are companies that are more in the modern mode, okay? Companies whose primary transactional vehicle is the internet, not retail, not through partners, not through people like IBM, but a website. If that website is not resilient, performant and always reliable, then guess what? They are shut down and they're not selling anything to anybody. Which is not true if you're Nordstrom's, right? Someone can always go into the store and buy something, right, and figure it out? Almost all old retailers not only have a connection to the core, they literally have a server and storage in every retail location, so if the core goes down, guess what, they can still transact. In the era of the internet, you don't do that anymore, right? If you're shipping only on the internet, you're shipping on the internet. So it applies whether it be a new workload or an old workload, or if you're doing the whole IoT thing. For example, I know a company I was working with, a giant, private mining company. They have those giant, three-story dump trucks you see on the Discovery Channel.
Those things cost them a hundred million dollars, so they have five thousand sensors on every dump truck. It's a fricking dump truck, but guess what, they've got five thousand sensors on there so they can monitor it and take proactive action, because whether these be diamond mines or uranium mines or whatever it is, it costs them hundreds of millions of dollars to have a thing go down. That's, if you will, taking it out of the traditional high-tech area, which we all talk about, whether it be Apple or Google or IBM. Okay, great, now let's put it to some other workload. In this case, this is the use of IoT, in a big data analytics environment with AI-based infrastructure, to manage dump trucks. >> I think you're talking about what's called "digital twins" in a networked environment for materials management, supply chain management and so forth. Are those requirements growing, in terms of industrial IoT requirements of that sort, and how does that affect the amount of data that needs to be stored, the sophistication of the AI and the stream computing that needs to be provisioned? Can you talk to that? >> The amount of data is growing exponentially. It's growing at zettabytes, even yottabytes, a year now, not just exabytes anymore. In fact, everybody sees it on their iPhone or their laptop: I've got a 10GB phone, okay? My laptop, which happens to be a PowerBook, has two terabytes of flash, on a laptop. So just imagine how much data is being generated in a giant factory, whether you're in the warehouse space, in healthcare, in government, or in the financial sector, and now with all those additional regulations, such as GDPR in Europe and other regulations across the world about what you have to do with your healthcare data, what you have to do with your finance data, the amount of data being stored is exploding.
And then on top of it, quite honestly, from an AI big data analytics perspective, the more data you have, the more valuable it is, the more you can mine it. It's like oil: forget the pollution side, let's assume oil didn't cause pollution. Okay, great, then guess what? You would be using oil everywhere, you wouldn't be using solar, you'd be using oil, and you'd need more and more and more, and how much oil you have and how you control it would be the power. That, right now, is the power of data, and if anything it's getting more and more concentrated. So again, you always have to be resilient with that data, and you always have to interact with things, like we do with Hortonworks or other application workloads. Our AI reference architecture is another perfect example of what you need to do to provide, at the base infrastructure, the right foundation. If you have the wrong foundation to a building, it falls over. Whether it be your house, a hotel, this convention center, if it had the wrong foundation, it falls over. >> Actually, to follow the oil analogy just a little bit further: the more of this data you have, the more PII there is, and the more the workloads need to scale up, especially for things like data masking. >> Right. >> When you have compliance requirements like GDPR, you want to process the data but you need to mask it first. Therefore you need clusters that conceivably are optimized for high-volume, highly scalable masking in real time, to feed the downstream applications and to feed the data scientists' data lakes, whatever, and so forth and so on? >> That's why you need things like incredible compute, which IBM offers with the Power platform, and why you need storage that, again, can scale up. >> Yeah.
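The masking requirement raised here, tokenizing PII before records feed downstream analytics and data lakes, can be sketched roughly as follows. This is a minimal illustration, not IBM's or Hortonworks' implementation; the field names and hash-token scheme are assumptions for the example.

```python
import hashlib

# Illustrative assumption: which record fields count as PII.
PII_FIELDS = {"name", "card_number", "email"}

def mask_record(record: dict) -> dict:
    """Return a copy of the record with PII fields replaced by a
    deterministic short token; non-PII fields pass through unchanged."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            # One-way hash so the raw value never reaches downstream systems.
            masked[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            masked[key] = value
    return masked

records = [{"name": "Alice", "card_number": "4111-1111", "amount": 42.50}]
masked = [mask_record(r) for r in records]
```

Because the tokens are deterministic, masked records remain joinable across datasets for the data scientists while hiding the raw values; a production masker would add salting, key management and format-preserving options on top of this.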
>> Can get as big as you need it to be. For example, in our reference architecture we use both what we call Spectrum Scale, which is a big data analytics workload performance engine, multi-threaded, multi-tasking. In fact, one of the largest banks in the world, if you happen to bank with them, your credit card fraud detection is being done on our stuff, okay? But at the same time we have what's called IBM Cloud Object Storage, which is an object store. You want to take every one of those searches for fraud, and when they find out that no one stole your MasterCard or your Visa, you still want to put it in there, because then you can mine it later and see patterns of how people are trying to steal stuff, because it's all being done digitally anyway. You want to be able to do that. So you A, want to handle it very quickly and resiliently, but then you want to be able to mine it later, as you said, mining the data. >> Or do high-value anomaly detection in the moment, to be able to tag the more anomalous data that you can then sift through later, or act on in the moment for real-time mitigation. >> Well, that's highly compute intensive, it's AI intensive, and it's highly storage intensive on the performance side. And then what happens is you store it all for, let's say, further analysis, so you can tell people, "When you get your Amex card, do this and they won't steal it." Well, the only way to do that is to use AI on this ocean of data, where you're analyzing all the fraud that has happened, to look at patterns, and then you tell me, as a consumer, what to do. Whether it be in the financial business, in this case the credit card business, healthcare, government, manufacturing. One of our resellers actually developed an AI-based tool that can scan boxes and cans for faults on an assembly line, and has sold it to a beer company and to a soda company, so that instead of people looking at the cans, like you see on the Food Channel, to pull them off, guess what? It's all automatically done.
There's no people pulling the can off, "Oh, that can is damaged," and looking at it, and by the way, sometimes bad ones slip through. Now, using cameras and this AI-based infrastructure from IBM, with our storage underneath the hood, they're able to do this automatically. >> Great. Well, Eric, thank you so much for coming on theCUBE. It's always a lot of fun talking to you. >> Great, well, thank you very much. We love being on theCUBE, and we appreciate it, and hope everyone enjoys the DataWorks conference. >> We will have more from DataWorks just after this. (techno beat music)
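The fraud pattern discussed above, score each transaction in the moment, tag the anomalous ones for immediate action, and keep every record for later mining, can be sketched as a simple z-score pass. This is a minimal illustration under assumed fields and thresholds, not the bank's or IBM's actual pipeline.

```python
import statistics

def tag_anomalies(amounts, threshold=2.5):
    """Tag each amount whose z-score against the batch exceeds the
    threshold; every record is kept, tagged or not, for later mining."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    tagged = []
    for amount in amounts:
        z = (amount - mean) / stdev if stdev else 0.0
        tagged.append({"amount": amount, "anomalous": abs(z) > threshold})
    return tagged

# One wildly out-of-pattern charge among routine ones.
history = [20.0, 25.0, 22.0, 19.0, 21.0, 23.0, 24.0, 20.0, 5000.0]
tags = tag_anomalies(history)
```

In practice the scoring model would be far richer than a z-score, and the full tagged stream would land in an object store, as Eric describes, so patterns of attempted fraud can be mined after the fact.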