Bina Hallman, IBM | VMworld 2019
>> Presenter: Live from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE. Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> So good to have you here with us on the first day of three days of live coverage here in San Francisco, as theCUBE continues its 10th year of coverage at VMworld 2019. Along with John Troyer, I'm John Walls, glad to have you with us. We're joined now by Bina Hallman, who is the vice president of storage at IBM. Bina, good to have you with us this afternoon. >> Thanks for having me. >> You bet. You know, your everyday assignment is what's keeping so many people up at night, and that's cyber: how do we defend ourselves? How do we develop these resilient networks, resilient services? Let's take a step back for a second and try to paint the scope of the problem: what you're seeing at IBM in terms of cyber intrusions, the nature of those attacks, and the areas where those are happening. >> I'll tell you, from a client and industry perspective, I'll touch on that a little bit. But cyber resiliency, cyber security, it's a huge topic. This is something that every business is thinking about and talking about. It's not just a discussion in the different departments; it's at the C-suite level, the board level. Because if you think about it, cybercrimes, as frequent as they are and as impactful as they are, can really affect the overall company's revenue generation. The cost of recovering from them can be very expensive. >> We're talking about more than just breaches here. Every week we hear about ransomware; it's very prevalent, it's here. I honestly hear a lot about small town governments, or state governments, municipal governments, maybe because they have reporting requirements. I don't know what goes on underneath in the private sector, but does it seem like that is one thing? >> That's right, that's right. We hear it in the news a lot. We hear about ransomware quite a bit, and data breaches, and other types of things. When you look at some of the analyst statistics and what they say about the frequency of these types of events, the likelihood of a business getting affected by a cyber event is 1 in 3 over the next two years. It used to be 1 in 4 a couple of years ago. Ransomware itself is increasing in frequency. I think it was something like every 14 seconds there is a ransomware attack somewhere in the world. The cost of this is tremendous. It's in the trillions of dollars, both from recovering from that attack, the loss in business and revenue generation, and the impact to the company's reputation. Again, it's not just ransomware, and it's happening in many industries. You talked about government; it's in manufacturing, it's in financial, it's in health, it's in transportation. When you step back and ask how it got so broad: every organization to some extent is going through some level of transformation. There's digital transformation. They're leveraging capabilities like hybrid multicloud, having resources on prem, workloads on prem, some services in the cloud. They've got team members that are using mobile devices. Some companies, depending on their business, might have IoT. So when you look at all of those entry points, these are new ways that the bad guys can get into an organization. That creates the scale, and the complexity just gets very large. It used to be that you have a backup.
The traditional way for business resiliency used to be that you do a backup, you have the data on an external system, and you restore it if something happened. And then there was business continuity. You would have a secondary infrastructure so that in the case of an accident or some kind of natural disaster, which didn't happen very often, you would have somewhere to go, a secondary infrastructure. All of those were designed with the likelihood of occurrence being very low. The recovery times and the disruption to business were somewhat tolerable. These days, with all of the dynamics we're talking about and the potential areas of entry, you need more of an end-to-end solution. That's a cyber resiliency strategy that is really comprehensive, and that's what a lot of businesses are thinking about today. How do I make sure I have a complete solution and a strategy that allows me to survive through an attack and come back up very quickly after it happens? I think most people recognize that they're going to get impacted at some point. It's not if, but when, and when it does happen, how do I quickly recover? >> You said it with the statistic, that 1 in 3 every two years. So my math tells me that in six years' time, I'm going to get hit by that standard. But it tells me that it's not if, it is when. So in terms of the strategies that companies are adopting, what do you recommend? What do you suggest now? You paint a realistically grim picture, that there are so many different avenues, different opportunities, and it's hard to put your fingers in all those holes. >> There's a lot happening in this space, and there are different standards and a lot of regulations, but one that has been accepted and is being leveraged in the US is a framework and some guidance from NIST, the National Institute of Standards and Technology. It's a framework that they put in place, guidance on how do you plan for, how do you detect, and then how do you recover from these types of situations. I'll talk about it a little bit, but it's a very good approach. It starts with an organization identifying the critical business services that the business depends on. What are they, what are the systems, what are the workloads, what are the applications? You identify them, and then what's the tolerance level? How quickly do you need to come back up? What's the RPO, the RTO? Based on that, you develop and prioritize a plan. That plan has to be holistic. It involves everyone from the CIO to the CISO and the security office, to operations, to business continuity, to the data owners and the lines of business. And then in this environment, you've got partners, you've got services you're leveraging. All of that has to be encompassed for those key services that you identify and prioritize as a client, the ones you need up and running, and up and running very quickly. One example is a client, a financial institution. They determined they had 300 services they needed up and running within 24 hours in case there was an attack, or in case something happened to their data or their environment. That's what they defined as their requirement. Then you go about working with them to do a few things. You identify, and then there are other phases around that I can talk about as well.
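To make that identify-and-prioritize step concrete, here is a minimal sketch of ranking services for recovery by criticality and tolerance. The service names, the RTO/RPO numbers, and the ordering rule are illustrative assumptions, not an actual NIST tool or any client's real catalog.

```python
# Minimal sketch of the "identify and prioritize" phase described above.
# Service names, RTO/RPO values, and the scoring rule are illustrative
# assumptions, not a NIST standard or an IBM product.

from dataclasses import dataclass

@dataclass
class Service:
    name: str
    rto_hours: float   # max tolerable downtime (Recovery Time Objective)
    rpo_hours: float   # max tolerable data loss (Recovery Point Objective)
    critical: bool     # flagged by the business as essential

def recovery_order(services):
    """Rank services for recovery: critical first, then tightest RTO."""
    return sorted(services, key=lambda s: (not s.critical, s.rto_hours, s.rpo_hours))

catalog = [
    Service("payments-gateway", rto_hours=4,  rpo_hours=0.25, critical=True),
    Service("customer-portal",  rto_hours=24, rpo_hours=1.0,  critical=True),
    Service("marketing-cms",    rto_hours=72, rpo_hours=24.0, critical=False),
]

for svc in recovery_order(catalog):
    print(f"{svc.name}: restore within {svc.rto_hours}h, lose at most {svc.rpo_hours}h of data")
```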
>> I was going to go over to IBM a little bit, in that obviously you're with IBM and we're talking about storage. People may not realize how integral storage is now in security, but IBM brings to the table a lot more than just storage. >> Absolutely. >> So can you talk a little bit about that portfolio and IBM's approach? >> Sure. So when I talk about the NIST framework and I talk about the identify stage, there are also things around protection, protecting the environment and those services and those systems, the infrastructure. We do a lot in that space. And it's around detection. So now that you've got the protection, and protection might include things like having identity management, having access control, just making sure that the applications are at the latest code levels. Oftentimes that's where the vulnerability comes in, when you don't have those security patches installed. Data protection: when it comes to that segment, we've got a very rich portfolio of data protection capabilities with our Spectrum Protect offerings. From a protection perspective, going into encryption, having capabilities where the infrastructure is designed to offer multiple types of separation. You can have physical separation, an air gap; things like tape are ideal for that because it's physically separated. Tiering to the cloud. You can have technologies like write once, read many, where the copies are immutable; you can't change them. You can read them, but you can't change them. And we've done a lot of work and innovation around what we call safeguarded copies. This is making snapshots, but those snapshots are not deletable; they're access controlled, they're read-only. That allows you to very quickly bring up an environment. >> I think people don't realize that. I see patterns where sometimes these things hide. They'll be in there and they will seem innocuous, so you can't just restore the last backup. >> That's right. >> They may try to rewrite the backup, so you may have to go back and find a good one. >> Absolutely, and detection is very important. Detecting it as early as possible is the best way to reduce the cost of recovering from these kinds of events. But like you said, I want to say it's around 160 days that your environment might be exposed before you detect it. So we've built capabilities into our portfolio and our offerings, and we do a lot of work with our research team and our security team, on things like our data protection, where we have algorithms built in that look for patterns and look for anomalies. As soon as we see the patterns of malware or ransomware, we alert the operator, so you don't allow it to be resident for that period of time. You quickly try to identify it. Another example is in our infrastructure management software. You can see your whole heterogeneous storage environment. You typically start out by baselining a normal environment, similar to the backup piece, but then it looks for anomalies: are there certain things happening in the network or the storage? And it warns the operator.
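The baseline-then-flag idea described here can be illustrated with a small sketch. The metric (files changed per backup run), the seven-day baseline window, and the three-sigma threshold are illustrative assumptions for this sketch, not the actual analytics inside Spectrum Protect.

```python
# Illustrative sketch of baseline-and-alert detection as described above:
# flag a backup run whose change rate deviates sharply from the baseline.
# The data, window, and 3-sigma threshold are invented for illustration.

from statistics import mean, stdev

def detect_anomaly(history, today, sigmas=3.0):
    """Return True if today's changed-file count is an outlier vs. history."""
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(today - mu) > sigmas * sd

# Daily counts of files changed since the last backup (baseline period).
baseline = [1200, 1150, 1310, 1280, 1190, 1240, 1220]

# A ransomware outbreak typically rewrites huge numbers of files at once.
if detect_anomaly(baseline, today=48000):
    print("ALERT: abnormal change rate -- possible ransomware, notify operator")
```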
>> I almost get the feeling that sometimes it's almost like termites. You don't realize you have a problem until it's too late, because they haven't been visible. In a 160-day window, or whatever it might be, you might be past that, but because whatever that attack was, it was malicious and clandestine enough that you didn't find it, it does cause problems. So as we're wrapping up here, what kind of confidence do you want to share with the end users, with people, to let them know that there are tools they can deploy? That it's not all grim reaper. But it is difficult. >> It is difficult, it's very real. But it's absolutely something that every business can have under control and have a plan around. From an IBM perspective, we are the number one leader in security. Our focus is not just at the software level; it starts from the chips we design, to the servers we deliver, to the storage, with the FlashCore modules and FIPS 140 compliance, the storage software, the data protection, the storage management software, all the way through the stack, all the way through our cloud infrastructure. It's about having that comprehensive end-to-end security, and we have those capabilities. We also have services. Our services and security organization works with clients to evaluate the environment and establish these strategies and plans. It's really about creating the plan, prioritizing it, and implementing it, making sure the whole organization is aware of it and educated on it. >> You've got to prepare, no doubt about that. Thanks for the time, Bina, we appreciate that. And it's not all doom and gloom, but it is tough. Tough work and very necessary work. Back with more here on theCUBE. You're watching our coverage from VMworld 2019, here in San Francisco.
Alistair Symon, IBM & Bina Hallman, IBM | IBM Think 2019
>> Announcer: Live from San Francisco, it's theCUBE, covering IBM Think 2019. Brought to you by IBM. >> Welcome back to theCUBE's coverage of day one of IBM Think 2019. I'm Lisa Martin with Dave Vellante. We're in San Francisco, where IBM Think, the second IBM Think, is at this newly rejuvenated Moscone Center. We're welcoming back to theCUBE Bina Hallman, VP of offering management at IBM. Bina, it's great to have you back on the program. >> Good morning. >> And we're welcoming to theCUBE Alistair Symon, VP of storage development at IBM. Welcome. >> Yeah, thank you, good to be here. >> So we're going to be here for four days, a big event, Bina, and we were talking before we went live, expecting 25 to 30,000 people at the second annual IBM Think, which is this conglomeration of what, five, six, what used to be disparate shows. Talk to us about some of the exciting announcements coming out with respect to data protection, storage, cyber resiliency. >> Yeah, this is a great event. As you said, this is our second one, our first time in San Francisco, and a great venue. We have close to 30,000 clients and participants here; it's a big event, right? The topics and announcements you'll hear about are around cloud, multicloud solutions, AI, security, infrastructure, so in general quite a broad set of new topics and announcements at Think. But from a storage perspective, we've done, or are doing, a number of new announcements around modern data protection, and around solutions in general, whether it's blockchain, cyber resiliency, private cloud solutions, those types of things, and then of course around our FlashSystem offerings. So we have a great set of announcements occurring this week. >> I know you guys have to put on your binoculars and think about what's coming next, so I wonder if we could talk about some of the big drivers, Bina, that you're seeing in the marketplace, and Alistair, that you're driving in development. I mean, data, obviously; we talk about data, but we talk about data differently than we used to ten years ago. Cloud obviously is a megatrend, and you're mentioning some new technologies like blockchain and AI. What are the big drivers that you guys look at, and how does that affect your development roadmaps? >> Yes, certainly, from an industry perspective, and what clients are dealing with and looking to us for solutions for, you mentioned a few. AI: having that end-to-end data pipeline and set of capabilities. We made a number of announcements in the second half of last year around AI solutions that allow clients to start from the beginning all the way to the end and meet their data needs, whether it's high-performance storage and ingest, or capacity tiers able to hold large amounts of data, and having that complete end-to-end solution, whether it's with our PowerAI Enterprise or some of the things we did around our Spectrum Storage for AI with Nvidia. So a lot of focus around AI. But also, as clients are getting more and more into moving some of their workloads to the cloud, or leveraging multicloud: today clients are about 20% along on their cloud journey, and there's still that 80% that we need to help them with. A lot of the solutions today tend to be, from a cloud perspective, proprietary, with a potentially inconsistent set of management tools. So being able to help clients and focus on multicloud solutions, that's a big area for us as well. And then cyber resiliency is the other.
>> And I think, just talking about the multicloud aspect, clearly when we develop our products we're very focused on being able to connect to the different cloud protocols that are required to move the data from the storage out to the cloud, and to do it in a performant way. I think the other thing that's really important, from an analytics standpoint, is that we've been very focused on delivering the performance in the storage system that's required, both from a bandwidth and sheer IOPS perspective, with very low latency. You'll see that with some of the technologies we brought out very recently in our all-flash arrays, where we're all NVMe based, both connecting to the servers and to the storage, so there's really low latency for applications and you can get the data as fast as you can into the analytics engines. So we're very focused on these new technologies that enhance the new capabilities. >> Bina, you mentioned something interesting. I always love stats, I geek out, Dave knows this about me: customers are about 20% of the way into their cloud journey. We talk about it as a journey all the time, right Dave, digital transformation. That's an interesting number. You also mentioned something that IBM is really poised to help customers achieve, this AI journey from beginning to end. If a customer is in this process of digital transformation, and, what are the stats, the average enterprise has, you know, about five private and public clouds, what is that AI journey? Obviously it has to be concurrent with a cloud journey; there's no way to really do one and then the other. But I'm curious, what is the beginning of that AI journey for a customer who is going, alright, we're in this hybrid multicloud world, that's where we live, and we have to start preparing our data for AI, because we know on multiple levels there's a tremendous amount of opportunity. How do you help them start? >> Yeah, what we typically see for clients is that they'll start out on some small AI projects in different parts of their environment, and those can start in, you know, a server with internal storage or internal SSDs, et cetera. But pretty soon, as they want to move that to an enterprise level or more of a complete solution, that requires more of the enterprise capability. So, as Alistair talked about, for ingest, being able to have the right set of solutions, whether it's having the right performance and latency attributes, et cetera, and then the capacity tier. So it's really important, and we do this with our clients, to help them start with the initial footprint, but then make sure that from an architecture perspective they're set up to grow into that larger environment, because analytics is all about that volume of data and you're kind of mining it. So that's kind of the key there. >> The first time I ever went to Tucson, I was there on a tape mission; we had largely a tape facility. Lots has changed, I'm sure, since then. The development process, the environment: you hear a lot about two-pizza teams, you know, speed and agile. Can you talk a little bit about IBM's development process? >> Yeah, we're actually very much well down the road in our drive to agile development throughout all of our development teams worldwide, not just in Tucson. That brings a number of benefits to us. It allows us to quickly prototype new functions so that we can test them out with our clients very early in the development process.
We're not just waiting until the end of the cycle to try something, like a beta test, which we do to a large extent, but we want to engage with clients early in the cycle so we can get that initial feedback on designs to make sure that we've done the right thing. An example of that would be what we did with cyber resiliency and our safeguarded copy on our DS8000 enterprise array. We worked with a large financial institution early on to model the design we were going to provide for that, and then we worked with them through the introduction of it and through the early testing. We put that out at the end of last year and are seeing great demand for it. It allows you to take snapshots of your data and make those snapshots immutable; bad actors can't come in and delete that data, and if somebody does corrupt your production copy, you can do a quick restore from it. All done hand-in-hand with a client through the process. >> This is a ransomware play, is that right, or not necessarily? Maybe you could take us through a likely solution for a client. You hear about air gaps, but there's more to it there. >> Yeah, so a typical solution is really around being able to work with clients to plan, because these events are happening more and more frequently, and if you assume that the bad guys are going to get in, or they're already in and it's only a matter of time before you notice, then storage plays a huge role in the cyber resiliency plan. So it's really around planning, then detection and recovery; we talk about it in that way. From a planning perspective, we do a lot of things. We ensure clients' data is on infrastructure that can't be compromised. We ensure that they have things like air gapping. Air gapping is where, if a bad actor gets into one environment, they can't do something bad with the other environment; think of it as creating a physical separation. We have our tape solutions as a classical example, but there are also technologies like immutable, write once, read many; we have that on our cloud object storage and our Spectrum Scale software-defined storage offerings. And then also around data protection in general, making sure your copies, well, snapshots, are set up in a way that they are secure, that you create that separation. That's the planning phase. Another aspect we help clients with is to model that baseline operation. What does the environment look like under normal operations? What are the storage infrastructure patterns? What are the systems that are most critical for your business and operations, what's their day-to-day usage, where are they? Once you have that established, then it's all about monitoring and looking for abnormal activities, and if you do see some set of abnormal activities, being able to detect that. Our Spectrum Protect offering, that's data protection; we've built in analytics to look for things and patterns like malware and ransomware, and to be able to alert. Now, once you've detected something like that, being able to quickly recover from it is really important, to get the business up and running. And that's where a lot of our storage offerings are automated from a data restore perspective, being able to bring those copies back very quickly and get your business running very quickly. That's important. So all of these, plan, detect, recover, is where storage plays a huge role across all of that.
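A toy model can make the safeguarded-copy behavior described here concrete: copies that are read-only and refuse deletion until a retention period expires. The class, its retention rule, and the in-memory representation are invented for illustration; this is not the DS8000 implementation or its API.

```python
# Toy model of the safeguarded-copy behavior described above: snapshots
# that are read-only and cannot be deleted before their retention expires.
# The class, retention rule, and API are invented for illustration only.

import time

class SafeguardedCopies:
    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._copies = []              # list of (timestamp, frozen data)

    def snapshot(self, volume_bytes):
        # Store an immutable copy; bytes objects cannot be mutated in place.
        self._copies.append((time.time(), bytes(volume_bytes)))

    def delete(self, index):
        ts, _ = self._copies[index]
        if time.time() - ts < self.retention:
            raise PermissionError("copy is under retention; deletion refused")
        del self._copies[index]

    def restore_latest(self):
        return self._copies[-1][1]     # quick restore from the newest copy

vault = SafeguardedCopies(retention_seconds=30 * 24 * 3600)  # 30 days
vault.snapshot(b"production volume contents")
try:
    vault.delete(0)                    # a "bad actor" attempt
except PermissionError as e:
    print("blocked:", e)
```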
>> I'm curious, we know that security issues are unfortunately commonplace every day, and I saw a stat the other day that the average security breach will cost an organization upwards of 3.8 million dollars. One of the things I'm curious about is, in your customer conversations, we're talking about data protection at the storage level and infusing that technology with the intelligence and the automation to facilitate that recovery, where are your conversations with a customer? Are they at the business level? Because I imagine security and protection is at the C-suite. How are those business objectives helping to facilitate development of the actual technology? >> Yeah, these are definitely CIO types of conversations, but once we engage in that conversation and go down that journey, we work with the clients very closely. We do what we call design thinking workshops, so together with the client we work on what are some of the top three things that they see from a business need perspective, and then we work to ensure that we come to what we call Hills, these goals that we define jointly. Then Alistair and his team work to go refine those, and as they're developing, they work closely with the client to ensure that we're achieving what we both expected and delivering it, whether it's starting with a minimal viable product, to productizing, or full productization. >> And again, I would say engaging with the clients early in the process is really important, because we'll find out things like what their security requirements are within their own data centers, which can vary from client to client. It helps us understand how to build in things like how they want to manage their encryption keys, and in which particular ways they want that done to meet their own security requirements. It can drive different development strategies from that. >> You guys were talking about Spectrum Protect earlier, and data protection in general is a space that's heating up. I was talking about Tucson before, and tape, and tape used to be backup, that was it. Even the language is changing: it's called data protection now, and some people call it data management, which of course could mean a lot of things to a lot of different people; if you're talking to a database person it's different, maybe, from your storage person. But the parlance is evolving, and it fits into multicloud. People are trying to get more out of their backup than just insurance. So what are you seeing as some of the drivers there, how does it fit into your multicloud strategy, and what is ultimately IBM's data protection portfolio strategy? >> Yeah, so, tape in general: when you've got large amounts of data that you're looking to archive, tape is a great solution, and we are seeing more and more interest from cloud service providers leveraging tape as their archive tier. From an overall data protection and data management perspective, we think that that base, basic data protection, making sure that the data is available when you need it, is there. But we think that has also evolved to where you do things like snapshots, snapshots that are in the native format, so for operational recovery you can very quickly restore those, and over a period of time, if you no longer need the data, you can back it up to traditional data protection from that snapshot-based technology.
And of course you have the different cloud consumption models and cloud scale that are enabling clients to leverage other types of storage, whether it's a cloud tier or cloud object storage, in our portfolio. So you've got the consumption models and the scale that's driving some of that. Put on top of that some of the things we talked about, like cyber resiliency, ensuring security and protecting that data from things like malware and the bad actors; that's very important. And then what we see coming forward from a transformation perspective, client transformation, is really bringing all of that together. So you have your data protection, you've got your unstructured data, whether it's, as I talked about, cloud object storage or our Scale offerings, you've got your archive data, but also then being able to put it all together and get value out of that data by looking at the metadata. We introduced an offering in the second half of last year, in the fourth quarter, that we call Spectrum Discover. It allows clients to get a catalog of that metadata and very quickly get views and insights into their environment, but also to integrate that into their analytics workflow and customize that metadata. So you can see a holistic solution coming together, from not just data protection all the way up through complete AI, DevOps, and analytics. >> Exactly, and that's the recovery, really. If we think about this thematically from a transformation perspective, is this really what you're talking about, facilitating security transformation? >> Absolutely. I mean, security in all aspects, whether it's the basic encryption of data at rest and encryption of data in flight, to the higher-level detection of these types of security breaches or events, and also the protection. Even if somebody does breach you, you've still got the recovery point, in, say, a safeguarded copy, that you can go back to to make sure your data is restored. So it goes even beyond protecting against the breach itself; it's fully encompassing. >> And last question, in terms of that data protection, where's the people element? Because we all know that the common denominator of any sort of security issue is people. What's the human element in the conversation about what you guys are delivering? Are there maybe some human-error-proof components that are essential, that you're helping to develop based on all the history that we've seen with breaches? >> Yeah, I think overall it's helping the client ensure that they've got their environment set up properly from a role-based access control perspective, ensuring that separation, and that the overall solution is architected to include some of these capabilities, whether it's air gapping or the immutable technologies, those types of things. Look, whether the bad actors are outside the company getting in, or someone within the company, you have to have the right set of measures implemented, and it is around security, encryption, role-based access control, all of that. >> Well, Bina, Alistair, thank you so much for joining Dave and me on theCUBE this morning. We appreciate your time and look forward to hearing a lot more news coming out over the next four days. >> Great, thank you very much. >> Yeah, thank you. >> For Dave Vellante, I'm Lisa Martin. You're watching theCUBE, live at IBM Think 2019. Stick around, we'll be right back with our next guest. [Music]
Bina Hallman & Steven Eliuk, IBM | IBM Think 2018
>> Announcer: Live, from Las Vegas, it's theCUBE. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to IBM Think 2018. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm here with Peter Burris. Our wall-to-wall coverage, this is day two. Everything AI, blockchain, cognitive, quantum computing, smart ledger, storage, data. Bina Hallman is here, she's the Vice President of Offering Management for Storage and Software Defined. Welcome back to theCUBE, Bina. >> Bina: Thanks for having me back. >> Steven Eliuk is here. He's the Vice President of Deep Learning in the Global Chief Data Office at IBM. >> Thank you sir. >> Dave: Welcome to theCUBE, Steve. Thanks, you guys, for coming on. >> Pleasure to be here. >> That was a great introduction, Dave. >> Thank you, appreciate that. Yeah, so this has been quite an event, consolidating all of your events, bringing your customers together. 30,000, 40,000, too many people to count. >> Very large event, yes. >> Standing room only at all the sessions. It's been unbelievable, your thoughts? >> It's been fantastic. Lots of participation, lots of sessions. We brought, as you said, all of our conferences together and it's a great event. >> So, Steve, tell us more about your role. We were talking off camera, we've had Inderpal Bhandari on before, Chief Data Officer at IBM. You're in that office, but you've got other roles around deep learning, so explain that. >> Absolutely. >> Sort of a multi-tool star here. >> For sure. So, roles and responsibilities at IBM and the Chief Data Office, there are kind of two pillars. We focus in the Deep Learning group on foundation platform components: how to accelerate the infrastructure and platform behind the scenes, to accelerate the ideation-to-product phase. We want data scientists to be very effective, and for us to iterate our projects very, very quickly. That said, I mentioned projects; on the applied side, we have a number of internal use cases across IBM. And it's not just a handful, it's on the order of hundreds, and those applied use cases are part of the cognitive plan, per se, and each one of those is part of the transformation of IBM into a cognitive enterprise. >> Okay, now, we were talking to Ed Walsh this morning, Bina, about how you collaborate with colleagues in the storage business. We know you guys have been growing, >> Bina: That's right. >> It's the fourth straight quarter, and that doesn't even count some of the stuff that you guys ship in the cloud in storage, >> That's right, that's right. >> Dave: So talk about the collaboration across the company. >> Yeah, we've had some tremendous collaboration, you know, the broader IBM, and bringing all of that together. That's one of the things we're talking about here today with Steve and team: as they built out their cognitive architecture, being able to then leverage some of our capabilities and the strengths that we bring to the table as part of that overall architecture. It's been a great story, yeah. >> So what would you add to that, Steve? >> Yeah, absolutely refreshing. You know, I've built up supercomputers in the past, specifically for deep learning, and coming on board at IBM about a year ago, seeing the elastic storage solution, or server- >> Bina: Yeah, Elastic Storage Server, yep. >> It handles a number of different aspects of my pipeline very uniquely. So for starters, I don't want to worry about rolling out new infrastructure all the time.
I want to be able to grow my team, to grow my projects, and that's what's nice about ESS: it's extensible. I'm able to roll out more projects, more people, multi-tenancy, et cetera, and it supports us effectively. Especially, you know, it has very unique attributes, like the read-only performance and random access of data; that's very unique to the offering. >> Okay, so, you're a customer of Bina's, right? >> I am, 100%. >> What do you need from infrastructure for deep learning, AI? What is it, you mentioned some attributes before, but, take it down a little bit. >> Well, the reality is, there are many different aspects, and if anything kind of breaks down, then the data science experience breaks down. So, we want to make sure that everything from the interconnect of the pipelines is effective, that, you heard Jensen earlier today from Nvidia, we've got to make sure that we have compute devices that are effective for the computation that we're rolling out on them. But that said, if those GPUs are starved by data, if we don't have the data available, which we're drawing from ESS, then we're not making effective use of those GPUs. It means we have to roll out more of them, et cetera, et cetera. And more importantly, the time for experimentation is elongated, so that whole ideation-to-product timeline that I talked about is elongated. If anything breaks down, so, we've got to make sure that the storage doesn't break down, and that's why this is awesome for us. >> So let me, especially from a deep learning standpoint, let me throw out a little bit of history, and tell me what you think. So, years ago, the data was put as close to the application as possible. About 10, 15 years ago, we started breaking the data from the application, the storage from the application, and now we're moving the algorithm down as close to the data as possible. >> Steve: Yeah. >> At what point in time do we stop calling this storage, and start acknowledging that we're talking about a fabric that's actually quite different, because we put a lot more processing power as close to the data as possible? We're not just storing. We're really doing truly, deeply distributed computing. What do you think? >> There are a number of different areas where that's coming from. Everything from switches, to storage, to memory that's doing computing very close to where the data actually resides. Still, I think that, you know, you can look all the way back to the Google File System. Moving computation to where the data is, as close as possible, so you don't have to transfer that data. I think that as time goes on, we're going to get closer and closer to that, but still, we're limited by the capacity of very fast storage. NVMe, very interesting technology, still limited. You know, how much memory do we have on the GPUs? 16 gigs; 24 is interesting, 48 is interesting; the models that I want to train are in the 100s of gigabytes. >> Peter: But you can still parallelize that. >> You can parallelize it, but there's not really anything that's true model parallelism out there right now. There are some hacks and things that people are doing, but. I think we're getting there, it's still some time, but moving it closer and closer means we don't have to spend the power, the latency, et cetera, to move the data.
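The data-starvation point above lends itself to back-of-envelope arithmetic: compare the aggregate read bandwidth a training job demands with what the storage can sustain. Here is a small sketch; every number in it is a hypothetical workload assumption, not a measured figure for ESS or any particular GPU.

```python
# Back-of-envelope check for the "GPUs starved by data" problem described
# above. All numbers are hypothetical workload assumptions, not measured
# figures for ESS or any specific accelerator.

def required_throughput_gbps(gpus, samples_per_sec_per_gpu, sample_mb):
    """Aggregate read bandwidth (GB/s) the training job demands."""
    return gpus * samples_per_sec_per_gpu * sample_mb / 1000.0

demand = required_throughput_gbps(gpus=16,
                                  samples_per_sec_per_gpu=500,
                                  sample_mb=2.0)        # 2 MB per image
storage_gbps = 40.0   # e.g., the ~40 GB/s figure quoted for ESS below

print(f"demand: {demand:.1f} GB/s vs storage: {storage_gbps:.1f} GB/s")
if demand > storage_gbps:
    print("GPUs will stall waiting on data -- add bandwidth or cache locally")
else:
    print("storage keeps the GPUs fed")
```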
>> So, does that mean that the rate of increase of data, and the size of the objects we're going to be looking at, is still going to exceed the rate of our ability to bring algorithms and storage, or algorithms and data, together? What do you think? >> I think it's getting closer, but I can always just look at the bigger problem. I'm dealing with 30 terabytes of data for one of the problems that I'm solving. I would like to be using 60 terabytes of data, if I could, if I could do it in the same amount of time and I wasn't having to transfer it. With that said, if you gave me 60, I'd say, "I really wanted 120." So, it doesn't stop. >> David: (laughing) You're one of those kind of guys. >> I'm definitely one of those guys. I'm curious, what would it look like? Because what I see right now is that it would be advantageous, and I would like to do it, but I ran 40,000 experiments with 30 terabytes of data. It would be four times the amount of transfer if I had to run that many experiments at 120. >> Bina, what do you think? What is the fundamental, especially from a software-defined side, what does the fundamental value proposition of storage become, as we start pushing more of the intelligence close to the data? >> Yeah, but you know, the storage layer fundamentally is software defined; you still need that setup, the protocols, and the file system, the NFS, right? So some of that still becomes relevant, even as you kind of separate some of the physical storage or flash from the actual compute. I think there's still a relevance when you talk about software-defined storage there, yeah. >> So you don't expect that there's going to be any particular architectural change? I mean, NVMe is going to have a real impact. >> NVMe will have a real impact, and there will be this notion of composable systems, and we will see some level of advancement there, of course, and that's around the corner, actually, right? So I do see it progressing from that perspective. >> So what's underneath it all? What actually, what products? >> Yeah, let me share a little bit about the product. So, what Steve and team are using is our Elastic Storage Server. I talked about software-defined storage; as you know, we have a very complete set of software-defined storage offerings, and within that, our strategy has always been to allow the clients to consume the capabilities the way they want: as software only on their own hardware, or as a service, or as an integrated solution. And what Steve and team are using is an integrated solution with our Spectrum Scale software, along with our flash systems and POWER9 servers from Power Systems. On the software side, Spectrum Scale is a very rich offering that we've had in our portfolio: a highly scalable file system, it's one of the solutions that powers a lot of our supercomputers, a project that we are still in the process of delivering on around CORAL, for our national labs. So, the same file system, combined with a set of servers and flash systems, right? Highly scalable, erasure coding, high availability, as well as throughput, right, 40 gigabytes per second. So that's the solution, that's the storage and system underneath what Steve and team are leveraging. >> Steve, you talk about "you want more"; what else is on Bina's to-do list from your standpoint? >> Specifically targeted at storage, or? >> Dave: Yeah, what do you want from the products?
>> Well, I think long stretch goals are multi-tenancy and the wide array of dimensions that we're dealing with, especially in the Chief Data Office. We have so many different business units, so many of those enterprise problems, on the order of hundreds: how do you effectively use that storage medium driving so many different users? I think it's still hard. I think we're doing it a hell of a lot better than we ever have, but it's still an open research area. How do you do that? And especially, there are unique attributes to deep learning, like, most of the data is read-only to a certain degree. When data changes there are some consistency checks that could be done, but really, for my experiment that's running right now, it doesn't really matter that it's changed. So there are a lot of nuances specific to deep learning that I would like exploited if I could, and those are some of the interactions that we're working on, to kind of alleviate those pains. >> I was at a CDO conference in Boston last October, and Inderpal Bhandari was there and he presented this enterprise data architecture, and there were probably about three or four hundred CDOs, chief data officers, in the room, to sort of explain that. Can you sort of summarize what that is, and how it relates to what you do on a day-to-day basis, and how customers are using it? >> Yeah, for sure. So the architecture is kind of like the backbone and rules that govern how we work with the data, right? So, the reality is, there's no sort of blueprint out there. What works at Google, what works at Microsoft, what works at Amazon, that's very unique to what they're doing. Now, IBM has a very unique offering as well. We're a composition of many, many different businesses put together. And now, with the Chief Data Office, that's come to light across many organizations; like you said, at the conference, three to 400 people. The requirements are different across the board. So, bringing the data together is kind of one of the big attributes of it: decreasing the number of silos, making a monolithic, kind of reliable, accessible entity that various business units can trust, and that is governed behind the scenes to make sure that it's adhering to everyone's policies, whatever each specific business unit has deemed to be its policy. We have to adhere to that, or the data won't come. And the beauty of the data is, we've moved into this cognitive era: data is valuable, but only if we can link it. If the data is there, but there are no linkages there, what do I do with it? I can't really draw new insights. All those hundreds of enterprise use cases, I can't build new value in them, because I don't have any more data. It's all about linking the data, and then looking for alternative data sources, or additional data sources, bringing that data together, and then looking at the new insights that come from it. So, in a nutshell, we're doing that internally at IBM to help our transformation, but at the same time creating a blueprint that we're making accessible to CDOs around the world, and our enterprise customers around the world, so they can follow us on this new adventure. New adventure being, you know, two years old, but. >> Yeah, sure, but it seems like, if you're going to apply AI, you've got to have your data house in order to do that. So this sounds like a logical first step, is that right? >> Absolutely, 100%.
And the reality is, there are a lot of people that are kicking the tires and trying to figure out the right way to do that, and it's a big investment. Drawing out large sums of money to build this hypothetical better area for data, you need to have a reference design, and once you have that you can actually approach the C-suite and say, "Hey, this is what we've seen, this is the potential, and we have an architecture now, and they've already gone down all the hard paths, so now we don't have to go down as many hard paths." So, it's incredibly empowering for them to have that reference design and to learn from our mistakes. >> Already proven internally, now bringing it to our enterprise clients. >> Well, and so we heard Ginni this morning talk about incumbent disruptors, so I'm kind of curious as to what, any learnings you have there? It's early days, I realize that, but when you think about the discussions: are banks going to lose control of the payment systems? Are retail stores going to go away? Is owning and driving your own vehicle going to be the exception, not the norm? Et cetera, et cetera, et cetera, you know, big questions. How far can we take machine intelligence? Have you seen your clients begin to apply this in their businesses, incumbents? We saw three examples today, good examples, I thought. I don't think it's widespread yet, but what are you guys seeing? What are you learning, and how are you applying that with clients? >> Yeah, so, certainly for us, for these new AI workloads, we have a number of clients and a number of different types of solutions. Whether it's in genomics, or AI deep learning in analyzing financial data, you know, a variety of different types of use cases where we do see clients leveraging the capabilities, like Spectrum Scale, ESS, and other flash system solutions, to address some of those problems. We're seeing it now. Autonomous driving as well, right, to analyze data. >> How about a little roadmap, to end this segment? Where do you want to take this initiative? What should we be looking for as observers from the outside looking in? >> Well, I think, drawing from the endeavors that we have within the CDO, what we want to do is take some of those ideas and look at some of the derivative products that we can take out of there, and how do we kind of move those into products? Because we want to make it as simple as possible for the enterprise customer. Because although you see these big-scale companies and all the wonderful things that they're doing, the feedback we've had, which is similar to our own experiences, is that those use cases aren't directly applicable for most of the enterprise customers. Some of them are, right, some of the stuff in vision and brand targeting and speech recognition and all that type of stuff are, but at the same time the majority, the 90% area, are not. So we have to be able to bring down- sorry, just the echoes, very distracting. >> It gets loud here sometimes, big party going on. >> Exactly. So, we have to be able to bring that technology to them in a simpler form so they can make it more accessible to their internal data scientists, and get better outcomes for themselves. And we find that they're on a wide spectrum. Some of them are quite advanced. It doesn't mean just because you have a big name you're quite advanced; some of the smaller players have a smaller name, but are quite advanced, right?
So, there's a wide array, and we want to make that accessible to these various enterprises. So I think that's what you can expect: the reference architecture for the cognitive enterprise data architecture, and you can expect to see some of the products from those internal use cases come out into some of our offerings, like, maybe IGC or Information Analyzer, things like that, or maybe Watson Studio. You'll see it trickle out there. >> Okay, alright, Bina, we'll give you the final word. You guys, business is good, four straight quarters of growth, you've got some tailwinds, currency is actually a tailwind for a change. Customers seem to be happy here. Final word. >> Yeah, no, we've got great momentum, and I think in 2018 we've got a great set of roadmap items and new capabilities coming out, so we feel like we've got a real strong future for IBM storage here. >> Great, well, Bina, Steve, thanks for coming on theCUBE. We appreciate your time. >> Thank you. >> Nice meeting you. >> Alright, keep it right there everybody. We'll be back with our next guest right after this. This is day two, IBM Think 2018. You're watching theCUBE. (techno jingle)
Bina Hallman, IBM & Tahir Ali | IBM Interconnect 2017
>> Narrator: Live from Las Vegas, it's theCUBE, covering InterConnect 2017. Brought to you by IBM. >> Welcome back to InterConnect 2017 from Las Vegas everybody, this is theCUBE, the leader in live tech coverage. Bina Hallman is here, she's a Cube alum and the vice president of offering management for storage and software defined at IBM, and she's joined by Tahir Ali, who's the director of Enterprise Architecture at the City of Hope Medical Center. Folks, welcome to theCUBE- >> Tahir: Thank you very much. >> Thanks so much for coming on. >> Bina: Thanks for having us. >> So Bina, we'll start with you; you've been on theCUBE a number of times. >> Yes. >> Give us the update on what's happening with IBM and InterConnect. >> Yeah, it's a great show. Lots of exciting announcements and such. From an IBM storage perspective, we've been very busy: filling out our whole flash portfolio, adding a complete set of hybrid cloud capabilities to our software-defined storage. It's been a great 2016 and we're off to a great start in 2017 as well. >> Yeah, [Inaudible] going to be here tomorrow. >> That's right. >> So everybody's looking forward to that. So Tahir, let's get into City of Hope. Tell us about the organization and your role. >> Sure, so City of Hope is one of the forty-seven comprehensive cancer centers in the nation. We deal with cancer, of course, HIV, diabetes, and other life-threatening diseases. We are maybe 15 to 17 miles east of Los Angeles. My role in particular: I'm the Director of Enterprise Architecture, so all new technologies, all new applications that land at City of Hope, we go through all the background. We see how the security is going to be, how it's going to be implemented in our environment, if it's even possible to implement it. We make sure we talk to our business owners, figure out if there's a disaster recovery requirement, if they have an HA requirement, if it's a clinical versus a non-clinical application. So we look at the whole stack and see how a new application fits into the infrastructure of City of Hope. >> So you guys do a lot of research there as well, or? >> Absolutely. >> Yeah. >> So we are research, we are a small EDU, and we are the medical center, so- >> So a lot of data. >> A whole lot of data. Data just keeps coming and keeps coming, and it's almost like a never-ending stream of data. Now with the data, it's not only just the volume: individual data is also growing. So a lot of the imaging that happens for cancer research, or a cancer medical center, gets bigger and bigger per patient as three-dimensional imaging is here. We look at resolution that is so much more today than it used to be five years ago. So every single image itself is so much bigger today than it was five years ago, just in the sheer difference in the resolution and the dimensions of the data. >> So what are the big drivers in your industry, and how is it affecting the architecture that you put forward? >> Right, so I think there are a couple of huge things, maybe two or three big inflection points, or the pivot points, that we see today. One of them is just the data stream, as I mentioned earlier. The second is, because of all the PHI and HIPAA data that we have today, security is a huge concern in a lot of the healthcare environment. So those two things, and it's almost like a catch-22: more data is coming in, and you have to figure out where you're going to put that data, but at the same time you've got to make sure every single bit is secured.
So there's a catch-22, where you have to make sure the data keeps coming and you keep securing that same data. Right, so those are the two things that we see pivoting the way we strategize around our infrastructure. >> It's hard, they're in conflict in a way, >> Tahir: Absolutely. >> Because you've got to lock the data up but then you want to provide accessibility... >> Tahir: Absolutely. >> as well. So paint a picture of your infrastructure and the applications that it's supporting. >> Right, so our infrastructure is mainly in-house, and our EMR is currently off-prem. A lot of clinical and non-clinical applications also stay in-house with us, in our data center on-prem. Now we are starting to migrate to cloud technologies more and more as things are ballooning. So we are in that middle piece where some of our infrastructure is in-house and slowly we are migrating to the cloud; we are at a hybrid currently. And as things progress I think more and more is going to go to the cloud. But for a medical center, security is everything, so we have to be very careful where our data sits. >> So Bina, when you hear that from a client >> Bina: Mm-hmm (affirmative) >> how do you respond? And you know, what do you propose? >> Bina: Yeah. >> How does it all... >> Yeah well- >> come about. >> You know, as we see clients like Tahir, and some of the requirements in these spaces, security is definitely a key factor. So as we develop our products, as we develop capabilities, we ensure that security is a number one focus area for us. Whether it's for the on-prem storage, or for the data that's in motion, moving from on-prem into the cloud, it's secured completely all the way through, where the client has control of the security, the keys, et cetera. So a lot goes into making sure, as we architect these solutions for our clients, that we focus on security. And of course some of the other industry-specific requirements are also very important, and we focus in on those as well, whether they're regulatory or compliance requirements, right. >> So from a sort of portfolio standpoint, what do you guys do when there have been all kinds of innovations over the last four or five years, coming in with flash, we heard about object stores this morning, you've got cloud, you've got block, you've got file, what are you guys doing? >> So we do a lot of different things, from having filers in-house to doing block storage. And the thing these days with big data is, as the data is growing, the security needs are growing, but the end result for the researchers and our physicians is that data availability needs to be fast. So now comes a bigger catch-22: the data is so huge, but at the same time they want all of that very quickly, at their fingertips. So now what do you do? That's where we bring in a lot of the flash to front it. 10 to 12 percent of our infrastructure has flash in the front; this way all the rendering, all the writes that happen, first land on the flash. So everybody who writes feels like it's a very quick write. But there are petabytes and petabytes behind the scenes that could be on-prem, could be in the cloud, and they don't need to know that. Everything lands so fast that it looks like it's just local and fast. So there's a lot of crisscross happening. It started maybe four or five years ago: the expectation that data access is not going to be slow, the size of data increasing like crazy, and security becoming a bigger and bigger concern, as you know.
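Tahir's description of the write path, where everything lands on a thin flash tier up front while petabytes of cheaper capacity sit behind it, is essentially a write-back cache. Here is a minimal sketch of that pattern; the class and tier names are invented for illustration and are not City of Hope's actual system (locking and failure handling are omitted for brevity):

```python
import queue
import threading

class FlashLandingZone:
    """Acknowledge writes as soon as they hit the small flash tier,
    then destage them in the background to bulk capacity (disk or cloud)."""

    def __init__(self, capacity_store):
        self.flash = {}                       # 10-12% of the footprint, very fast
        self.capacity_store = capacity_store  # petabytes behind the scenes
        self.destage_q = queue.Queue()
        threading.Thread(target=self._destage_loop, daemon=True).start()

    def write(self, key, data):
        self.flash[key] = data   # fast path: land on flash
        self.destage_q.put(key)  # schedule background destage
        return "ack"             # the writer sees a very quick write

    def read(self, key):
        if key in self.flash:            # still hot on flash
            return self.flash[key]
        return self.capacity_store[key]  # already destaged

    def _destage_loop(self):
        while True:
            key = self.destage_q.get()
            if key in self.flash:
                self.capacity_store[key] = self.flash.pop(key)
```

The point of the sketch is the asymmetry Tahir describes: the writer only ever waits on the flash tier, while the size of the capacity tier behind it stays invisible to them.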
>> Tahir: Maybe every month or month and a half there's a breach somewhere that people have to deal with. So we have to handle all of that in one shot, and you know, it's more than just the infrastructure itself. There are policies, there are procedures; there's a lot that goes around it. >> So when you think about architecting, obviously you think about workloads and- >> Tahir: Of course. >> what the workload requirement is; it's not a one-size-fits-all. >> Tahir: Right right. >> So where do you start, do you start with- >> Tahir: Sure. >> Sort of, you know, a conversation with the business? >> Sure, sure. >> How much money do you got? >> So we don't really deal with the money at all. We provide the best possible solution for that business requirement. So the conversation happens: "Tell us what you're looking for." "We're looking for a very fast XYZ." "Okay, tell us what exactly you need." "Here's the application, we want it available all the time, and this is what it's going to look like; it can't be down because our patients are depending on it." So on and so forth. We take that, we talk to our vendors, and we look at exactly how it's architected. Let's just say it's three-tiered: there's a web tier, there's an app tier, and then there's a database. You already know by default that if it's a database, it's going to go on high-transactional IO, either flash or very fast spinning disk with a lot of spindles. From there you get to the application, which could be a virtual machine or not. From there you get to a web tier; web tiers are usually always on a virtual infrastructure. Then you decide whether you want to put it in a DMZ so people from outside can get to it, or whether it's only for internal use. Then you draw the entire architecture diagram out. Then you price it out. You say, "Okay, if you want this to be always on, maybe you need a database that is always on, or a database that replicates 24/7; that has a cost associated with it." If you want two application servers, maybe it's a costlier application; it could be HA or not HA, and there's a cost to that. Web servers are, you know, a cheaper tier of virtual machines. And then there's an architecture diagram where all the requirements are met, and there's a cost associated with it, saying: business unit, here is how much it's going to cost, and this is what you will have. >> Okay so that's where the economics, >> Exactly >> comes into play. Okay, this is what your requirements are >> Yep. >> This is, based on that, what we would advise. >> Exactly, yeah. >> And then essentially it's, can you afford it. >> Right right. (laughs) If you want to buy a house with three bedrooms and three bathrooms in Palo Alto, versus six bedrooms and seven bathrooms in Palo Alto, it's going to have a financial impact that you might not like. (laughs) So it's one of those, right. What you want has a financial impact on your end solution, and that's what we provide. We don't force somebody to get something. We just ask them: hey, how many kids do you have? Four kids, then maybe you need a five-bedroom house. Right, so we kind of do that. >> Is it a common discussion? >> Yeah it is, it is. And that's, as you know, one of the things we do focus on. Right, in addition to the security aspect of it, of course, it's around the automation, around driving in the efficiencies. Because at the end of the day, you know, whether it's capital expense or operational expense, you want to optimize for both of those.
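Tahir's sizing walk-through, map each tier of a three-tiered application to a storage class and then attach a price, can be sketched as a small script. All storage classes and dollar figures below are invented for illustration; only the shape of the exercise comes from his description:

```python
# Invented per-tier monthly rates.
RATES_USD_PER_GB_MONTH = {
    "database": 0.60,  # flash or fast spindles, high-transactional IO
    "app":      0.25,  # virtual machine pool
    "web":      0.10,  # cheaper tier of virtual machines
}

def price_three_tier(sizes_gb, always_on_db=False):
    """Rough monthly estimate for a web / app / database stack."""
    total = sum(sizes_gb[tier] * RATES_USD_PER_GB_MONTH[tier]
                for tier in sizes_gb)
    if always_on_db:
        # A database that replicates 24/7 roughly doubles its own line item.
        total += sizes_gb["database"] * RATES_USD_PER_GB_MONTH["database"]
    return round(total, 2)

# "Business unit, here is how much it's going to cost":
print(price_three_tier({"database": 2000, "app": 500, "web": 100},
                       always_on_db=True))
```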
>> Bina: And that's where, as we architect the solutions and develop the offerings, we ensure that we build in capabilities, whether it's storage efficiency capabilities like virtualization, dedupe, or compression, as well as this automated tiering. Tiering off from flash to a lower tier, whether it's an on-prem lower, slower- >> Tahir: Could be a disk. >> speed disk, or tape, or even off to the cloud, right. And being able to do that, to provide that, I think addresses many of our clients' needs. That's a common requirement that we do hear. >> And as mentioned, 10 to 12 percent of it is flash. >> Tahir: Right. >> The rest, you know, ninety percent or so, is something else. That's economics, correct? >> Right so- >> And how do you see that changing? >> So I think the percentage won't really change; I think the data size will change. You have to just think about things in generality, just what you do today. You know, when you take a picture, maybe you look at it the first three days, even if you have a phone. After three days, maybe you look at it once every two months. After three months, guess what? You will almost never look at them. They've kind of moved away even from the memory banks in your head. Then maybe once in a while you say, "Oh, I was looking through it," and you look at it. So you have to look at the behavior. A lot of applications have the same behavior, where the new data is required right away. The older the data gets, the more of an archival state it gets into: it gets warmer and then it gets colder. Now, as a healthcare institute, we have to devise something that is great financially, also has the security, and is put away in a way where we can pull it back without pain. That's where the tiering comes into play; it doesn't matter how we do it. >> And your planning assumption is that the cost disparity between flash and other forms of storage will remain. That other- >> So- >> forms will remain cheaper. >> Right, so we are hoping, but I think the hybrid model of flash- So once you do a hybrid with flash and disk, it becomes a little more economically suitable for a lot of people. They do the same thing, they do tiering, but they make it look like a bigger platform. So it's like, "We can give you a petabyte, but it's going to look like flash." It doesn't quite work like that; they might have 300 terabytes of flash, 700- but it's integrated so tightly that they can pull it and push it, and there are read-aheads and write-aheads that take advantage of that to make it look like it. That will drop your pricing. The special sauce is what transfers the data between the slower disks and the flash. >> Two questions for you. >> Sure. >> What do you look for in a supplier? And what drives you nuts about a supplier, that you don't want a supplier to do? >> Sure. So personally speaking, this is just my personal opinion: a stable environment, a tried-and-true vendor, is important. Somebody who has the core competency of doing this for the longer term is what I personally look at. There are a lot of new players who come in, stay for a couple of years, explode, get taken over, or just kind of vanish. Or certain vendors go outside of their core competency. So if Toyota started to make- because they wanted to save money, they said, "Hey, Toyota from now on will make the tires, called Toyota tires." But Toyota is not a tire company. Other companies, Bridgestone and Michelin, have been making tires for a very long time.
So the core competency of Toyota is building the cars and not the tires. So when I see these vendors saying, "Okay, I can give you this and this and this, and this and that, and the security and that," and maybe three out of those five things are not their core competency, I start to wonder if the whole stack is worth it, because there's going to be some weakness where they don't have the core competency. That's what I look at. What drives me crazy is, every single time somebody comes to meet with me, they want to sell me everything and the kitchen sink under one umbrella, and the answer is one single pane of glass to manage everything. Life is not that easy; I wish it was, but it really is not. (laughs) So those two things are- >> Selling the fantasy, right. Now Bina, we'll give you the last word. Interconnect, give us your final thoughts. What should we know about what's going on in software-defined and IBM storage? >> Yeah, you know, lots of announcements at Interconnect. You heard, as you talked about, cloud object storage; we've got great new pricing models and capabilities, and overall software-defined storage. We're continuing to innovate, continuing to add capabilities like analytics, and you'll see us doing more and more on cognitive. Cognitive storage management to get more out of the data, to help clients get more and more information and value out of their data. >> What's the gist of the new pricing models, just um- >> Flexible pricing models, for both hybrid as well as tiered on-prem and everything in between, including cold storage: a flexible pricing model where, depending on how you use the data, you get consistent pricing between on-prem and the cloud. >> So more cloud-like pricing >> Yes, exactly. >> Great. >> Yep. >> Easier consumption, excellent. Well Bina, Tahir, thanks very much for coming to the Cube. >> Yes yes, thank you. >> Dave: Pleasure having you. >> Thank you. >> Thank you for having us. >> Dave: You're welcome. Alright, keep it right there everybody, we'll be back with our next guest and a wrap, right after this short break. Right back. (upbeat music)
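Tahir's photo analogy, data that is looked at constantly for a few days, occasionally for a few months, and then almost never, is the textbook motivation for age-based tiering. A minimal sketch of such a placement rule; the thresholds and tier names are invented to mirror the analogy, not taken from any IBM product:

```python
from datetime import datetime, timedelta

# Invented thresholds: hot for days, warm for months, then cold.
TIER_RULES = [
    (timedelta(days=3),  "flash"),   # new data, accessed constantly
    (timedelta(days=90), "disk"),    # warm data, occasional access
]
COLD_TIER = "cloud or tape"          # long-term archive

def place(last_access, now=None):
    """Pick a tier from the age of the data's last access."""
    age = (now or datetime.utcnow()) - last_access
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return COLD_TIER

print(place(datetime.utcnow() - timedelta(days=1)))    # flash
print(place(datetime.utcnow() - timedelta(days=400)))  # cloud or tape
```

A real tiering engine would also weigh access frequency and recall cost, since, as Tahir notes, you have to be able to pull the data back without pain.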
IBM Flash System 9100 Digital Launch
(bright music) >> Hi, I'm Peter Burris, and welcome to another special digital community event, brought to you by theCUBE and Wikibon. We've got a great session planned for the next hour or so. Specifically, we're gonna talk about the journey to the data-driven multi-cloud, sponsored by IBM, with a lot of great thought leadership content from IBM guests. Now, what we'll do is, we'll introduce some of these topics, we'll have these conversations, and at the end, this is gonna be an opportunity for you to participate, as a community, in a crowd chat, so that you can ask questions, voice your opinions, and hear what others have to say about this crucial issue. Now why is this so important? Well, Wikibon believes very strongly that one of the seminal features of the transition to digital business, driving new AI classes of applications, et cetera, is the ability to use flash-based storage systems and related software to do a better job of delivering data to more complex, richer applications, faster, and that's catalyzing a lot of the transformation that we're talking about. So let me introduce our first guest. Eric Herzog is the CMO and VP Worldwide Storage Channels at IBM. Eric, thanks for coming on theCUBE. >> Great, well thank you, Peter. We love coming to theCUBE, and most importantly, it's what you guys can do to help educate all the end-users and the resellers that sell to them; that's very, very valuable, and we've had good feedback from clients and partners that, hey, we heard you guys on theCUBE, and very interesting, so I really appreciate all the work you guys do. >> Oh, thank you very much. We've got a lot of great things to talk about today. First, I want to start it off, kick off the proceedings for the next hour or so, by addressing the most important issue here: data-driven. Now, Wikibon believes that digital transformation means something: it's the process by which a business treats data as an asset, re-institutionalizes its work, and changes the way it engages with customers, et cetera. But this notion of data-driven is especially important because it elevates the role that storage is gonna play within an organization. Sometimes I think maybe we shouldn't even call it storage. Talk to us a little bit about data-driven and how that concept is driving some of the innovation that's represented in this and future IBM products. >> Sure. So I think the first thing, it is all about the data, and it doesn't matter whether you're a small company, like Herzog's Bar and Grill, or the largest Fortune 500 in the world. The bottom line is, your most valuable asset is your data, whether that's customer data, supply chain data, partner data that comes to you, that you use, services data, the data you guys sell, right? You're an analysis firm, so you've got data, and you use that data to create your analysis, and then you use that as a product. So, data is the most critical asset. At the same time, data always goes onto storage. So if that foundation of storage is not resilient, is not available, is not performant, then either A, it's totally unavailable, right, you can't get to the customer data; or B, there's a problem with the data, okay? So you're doing supply chain, and if the storage corrupts the data, then guess what? You can't send out the T-shirts to the right retail location, or have them available online if you're an online retailer. >> Or you sent 200,000 instead of 20, and you get stuck with the bill. >> Right, exactly.
So data is that incredible asset, and then underneath, think of storage as the foundation of a building. Data is your building, okay, and all the various aspects of that data, customer data, your data, internal data, everything you're doing, that's the building. If the foundation of the building isn't rock solid, the building falls down, whether your building is big or small. That's what storage does, and storage can also optimize the building above it. So think of it as more than just the foundation; it's a foundation that, if you will, almost works like a tree, with things that come up from the bottom, and storage can help you out. For example, metadata. Metadata, which is data about data, could be used by analytics packages, and guess what? That metadata about the data can be exposed by the storage. So that's why data-driven is so important from an end-user perspective, and why storage is the foundation underneath a data-driven enterprise. >> Now we've seen a lot of folks talk about how cloud is the centerpiece of thinking about infrastructure. You're suggesting that data is the centerpiece of infrastructure, and cloud is gonna be an implementation decision: where do I put the workloads, the costs, all the other elements associated with it. But it suggests ultimately that data is not gonna end up in one place. We have to think about data as being where it needs to be to perform the work. That suggests multi-cloud, multi-premise. Talk to us a little bit about the role that storage and multi-cloud play together. >> So let's take multi-cloud first and peel that away. So with multi-cloud, we see a couple of different things. First of all, certain companies don't want to use a public cloud, whether it's a security issue, or because some people have found out that public cloud providers, no matter who the vendor is, sort of run a razor-and-razor-blades model: very cheap to put the storage out there, but if you want certain SLAs, guess what? The cloud vendors charge more. If you move data around a lot, in and out, as you were describing, because it's really that valuable, guess what? The cloud provider charges you for that ingress and egress. So it's almost the razor and the razor blades. So A, there's a cost factor in public only; B, you've got people that have security issues; and C, what we've seen in many cases is hybrid, where certain datasets go out to the cloud and other datasets stay on the premises. So you've got that aspect of multi, which is public, private or hybrid. The second aspect, which is very common in bigger companies that are either divisionalized or large geographically, is literally the usage, in a hybrid or a public cloud environment, of multiple cloud vendors. So for example, in several countries the data has to physically stay within the confines of that country. So if you're a big enterprise and you've got offices in 200 different, well not 200, but 100 different countries, and 20 of 'em require you by law to keep the data in that country, then if your cloud provider doesn't have a data center there, you need to use a different cloud provider. So you've got that. And I would also argue that the cloud is not new anymore. The internet is the original cloud. So it's really old. >> Cloud in many respects is the programming model, or the mature programming model, for internet-based applications. >> I'd agree with that. So what that means is, as it gets more mature, from the mid-sized company up, all of a sudden procurement's involved.
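Eric's point about data residency, that some countries require data to stay physically in-country, which can force a second or third cloud provider, reduces to a simple placement check. A minimal sketch; the provider names and region sets are invented for illustration:

```python
# Invented provider-to-region maps; real providers' footprints differ.
PROVIDER_REGIONS = {
    "cloud_a": {"US", "DE", "JP"},
    "cloud_b": {"US", "UK", "IN"},
}

def eligible_providers(dataset_country, must_stay_in_country):
    """Return the providers that can legally host this dataset."""
    if not must_stay_in_country:
        return sorted(PROVIDER_REGIONS)
    return sorted(p for p, regions in PROVIDER_REGIONS.items()
                  if dataset_country in regions)

# A dataset that must stay in Germany can only go where there is
# an in-country data center:
print(eligible_providers("DE", must_stay_in_country=True))  # ['cloud_a']
print(eligible_providers("IN", must_stay_in_country=True))  # ['cloud_b']
```

Run this over an enterprise's full dataset inventory and the multi-vendor outcome Eric describes falls out naturally: no single provider covers every constraint.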
>> Eric: So think about the way networking, storage and servers, and sometimes even software, were bought. The IT guy, the CIO, the line of business might specify, I want to use it, but then it goes to procurement. In a mid-sized to big company it's like, great, are we getting three bids on that? So we've also seen that happen, particularly with larger enterprises: well, you were using IBM Cloud, that's great, but are you getting a quote from Microsoft or Amazon, right? So those are the two aspects we see in multi-cloud, and by the way, that can be a very complex situation dealing with big companies. So the key thing that we do at IBM is make sure that whichever model you take, public, private or hybrid, or multiple public clouds, or multiple public cloud providers used in a hybrid configuration, we can support it. So, things like our transparent cloud tiering; we've also recently created some solution blueprints for multi-clouds. These things allow you to deploy simply and easily. Storage has to be viewed as transparent to a cloud. You've gotta be able to move the data back and forth, whether that be backing the data up, or archiving the data, or secondary data usage, or whatever that may be. And so storage really has gotta be multi-cloud, and we've been doing those solutions already. In fact, on the software side of the IBM storage portfolio, we have hundreds of cloud providers, mid, big and small, that use our storage software to offer backup as a service or storage as a service, and we're again the software foundation underneath what an end-user would buy as a service from those cloud providers. >> So I want to pick up on a word you used: simplicity. So, you and I are old infrastructure hacks, and for many years I used to tell my management, infrastructure must do no harm. That's the best way to think about infrastructure. Simplicity is the new value proposition; complexity remains the killer. Talk to us a little bit about the role that simplicity, in packaging and service delivery and everything else, plays in shaping the way you guys at IBM think about what products, what systems, and when. >> So I think there are a couple of things. First of all, it's all about the right tool for the right job. So you don't want to over-sell, and sell a big, giant, high-end all-flash array, for example, to a small company. They're not gonna buy that. So we have created a portfolio, of which our FlashSystem 9100 is the newest product, but we've got a whole set of products from the entry space to the mid-range to the high end. We also have offerings that are tuned for applications; so for example, our Elastic Storage Server, which comes in an all-flash configuration, is ideal for big data analytics workloads. Our DS8000 family of flash is ideal for mainframe attach, and in fact close to 65% of all mainframe-attached storage is from IBM. So you have the right tool for the right job; that's item number one. The second thing is to be easier and easier to use, whether that be configuring the physical entity itself, so how do you cable it, how do you rack and stack it, and making sure that it easily integrates into whatever else they're putting together in their data center, be it a cloud data center or a traditional on-premises data center, it doesn't matter. The third thing is all about the software: how do you have software that makes the array easier and easier to use, and is heavily automated based on AI.
So the old automation way, and we've both been in that era, was you set policies. Policy-based management, when it came out 10 years ago, was a transformational event. Now it's all about using AI in your infrastructure. Not only does your storage need to be right to enable AI at the server workload level, but we're saying we've actually deployed AI inside of our storage, making it easier for the storage manager or the IT manager, and in some cases even the app owner, to configure the storage, 'cause it's automated. >> Going back to that notion that the storage knows something about the metadata, too. >> Right, exactly, exactly. So the last thing is our multi-cloud blueprints. So in those cases, what we've done is create these multi-cloud blueprints. For example, disaster recovery and business continuity using a public cloud, or secondary data use in a public cloud: how do you go ahead and take a snapshot, a replica or a backup, and use it for dev-ops or test or analytics? And by the way, our Spectrum Copy Data Management software allows you to do that, but you need a blueprint so that it's easy for the end user, or for those end users who buy through our partners; our partners then have this recipe book, these blueprints. You put them together, use the software that happens to come embedded in our new FlashSystem 9100, and then they use that to create all these various different recipes. Almost, I hate to say it, like a baker would do. They use some base ingredients in baking, but you can make cookies, candies, all kinds of stuff; like a donut is essentially a baked good that's fried. So all these things use the same base ingredients, and the software that comes with the FlashSystem 9100 is those base ingredients, reformulated in different models to give all these multi-cloud blueprints. >> And we've gotta learn more about vegetables so we can talk about salad in that metaphor, (Eric laughing) you and I. Eric, once again. >> Great, thank you. >> Thank you so much for joining us here on theCUBE. >> Great, thank you. >> Alright, so let's hear this come to life in the form of a product video from IBM on the FlashSystem 9100. >> Some things change so quickly, it's impossible to track with the naked eye. The speed of change in your business can be just as sudden, and requires the ability to rapidly analyze the details of your data. The new IBM FlashSystem 9100 accelerates your ability to obtain real-time value from that information and rapidly evolve to a multi-cloud infrastructure, fueled by NVMe technology, in one powerful platform. IBM FlashSystem 9100 combines the performance of IBM FlashCore technology, the efficiency of IBM Spectrum Virtualize, and IBM software solutions to speed your multi-cloud deployments, reduce overall costs, plan for performance and capacity, simplify support using cloud-based IBM Storage Insights to provide AI-powered predictive analytics, and simplify data protection with a storage solution that's flexible, modern, and agile. It's time to re-think your data infrastructure. (upbeat music) >> Great to hear about the IBM FlashSystem 9100, but let's get some more details. To help us with that, we've got Bina Hallman, who's the Vice President of Offering Management at IBM Storage. Bina, welcome to theCUBE. >> Well, thanks for having me. It's an exciting event; we're looking forward to it. >> So Bina, I want to build on some of the stuff that we talked to Eric about. Eric did a good job of articulating the overall customer challenge.
As IBM conceives how it's going to approach customers and help them solve these challenges, let's talk about some of the core values that IBM brings to bear. What would you say, say three, what are the three things that IBM really focuses on as it thinks about its core values in approaching these challenges? >> Sure, sure. It's really around helping the client, providing a simple one-stop-shopping approach, ensuring that we're doing all the right things to bring the capabilities together, so that clients don't have to take different component technologies and put them together themselves; they can focus on providing business value. And it's really around delivering the economic benefits, around CapEx and OpEx, and delivering a set of capabilities that help them move on their journey to a data-driven multi-cloud. Make it easier and make it simpler. >> So, making sure that it's one place they can go where they can get the solution. But IBM has a long history of engineering. Are you doing anything special in terms of pre-testing and pre-packaging some of these things to make it easier? >> Yeah, over the years we have worked with many of our clients around the world in helping them achieve their vision and their strategy around multi-cloud, and in that journey and that set of experiences, we've identified some key solutions that really do make it easier. And so we're leveraging the breadth of IBM, the power of IBM, making those investments to deliver a set of solutions that are pre-tested and supported at the solution level. We're really focusing on underpinning the solutions with blueprints: step-by-step documentation, and when clients run into challenges as they deploy these solutions, having IBM support to assist. Really bringing it all together. It's this notion of a multi-cloud architecture, delivering modern infrastructure capabilities, NVMe acceleration, but also some of the really core differentiation that we deliver through FlashCore, data reduction capabilities, along with things like modern data protection. That segment is changing, and we really want to enable clients, their IT, and their lines of business, to free them up to focus on business value, versus putting these components together. So it's really around taking those complex things and making them easier for clients: get improved RPO and RTO, get improved performance, get improved costs, but also flexibility and agility, which are very critical. >> So that sounds like, I mean, the history of storage has been trade-offs: this disk can only go that fast, that tape can only go that fast. But now, when we start thinking about flash and NVMe, the trade-offs are not as acute as they used to be. Are IBM's engineering chops capable of showing how you can in fact have almost all of this at one time? >> Oh, absolutely. The breadth of capabilities in our R and D and our research, plus the experiences that I talked about, the engagements: we put all of that together to deliver some key solutions and capabilities. Like, look, everybody needs backup and archive: backup to recover your data in case a disaster occurs, archive for long-term retention. That data management, data protection segment is going through a transformation. New emerging capabilities, new ways to do backup.
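Since Bina keeps returning to RPO and RTO, it's worth pinning down what the RPO side of that check actually is: the worst-case data loss is the gap between protection points, so the snapshot or backup interval must not exceed the RPO target. A minimal sketch with invented figures:

```python
def meets_rpo(snapshot_interval_hours, rpo_target_hours):
    """Worst-case data loss equals the gap between snapshots,
    so the interval must not exceed the RPO target."""
    return snapshot_interval_hours <= rpo_target_hours

# Invented example: 4-hour snapshots against a 24-hour RPO target.
print(meets_rpo(4, 24))   # True: at most 4 hours of data at risk
print(meets_rpo(48, 24))  # False: a failure could lose up to 48 hours
```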
>> Bina: And what we're doing is pulling all of that together with things that we introduced, for example our Protect Plus, in the fourth quarter, along with this FS 9100 and the cloud capabilities, to deliver a solution around data protection and data reuse, so that you have a modern backup approach for both virtual and physical environments that is really based on things like snapshots and mountable copies. So you're not using that traditional approach of recovering your copy from a backup by bringing it back. Instead, all you're doing is mounting one of those copies and instantly getting your application back up and running for operational recovery. >> So to summarize some of those values: one stop, pre-tested, advanced technologies, smartly engineered. You guys did something interesting on July 10th. Why don't you talk about how those values, and the understanding of the problem, manifested so fast. Kind of an exciting set of new products that you guys introduced on July 10th. >> Absolutely. On July 10th we not only introduced our flagship FlashSystem, the FS 9100, which delivers some amazing client value around the economic benefits of CapEx and OpEx reduction, but also seamless data mobility, data reuse, and security: all the things that are important for a client on their cloud journey. In addition to that, we infused that offering with AI-based predictive analytics, and of course that performance and NVMe acceleration is really key. But in addition to doing that, we've also introduced some very exciting solutions; really, three key solutions. One is around data protection and data reuse, to enable clients to get that agility. The second is around business continuity and data reuse, to really reduce the expense of having business continuity in today's environment. It's a high-risk environment; it's inevitable to have disruptions, but being prepared to mitigate some of those risks and having operational continuity is important, by doing things like leveraging the public cloud for your DR capabilities. That's very important, so we introduced a solution around that. And the third is around private cloud: taking your IBM storage, your FS 9100, along with the heterogeneous environment you have, and making it cloud-ready. Getting the cloud efficiencies, making it to where you can use it to create things like cloud-native applications that are portable from on-prem into the cloud. So those are some of the key ways that we brought this together to really deliver on client value. >> So could you give us just one quick use case of your clients that are applying these technologies to solve their problems? >> Yeah, so let me use the first one that I talked about, the data protection and data reuse. To be able to take your on-premises environment, apply an abstraction layer, set up catalogs, set up SLAs and access control, but then be able to step away and manage that storage entirely through APIs. We have a lot of clients that are doing that, and then taking that, making the snapshots, and using those copies for things like disaster recovery, or secondary use cases like analytics and dev-ops.
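The recovery-time win Bina describes comes from not copying data at all: a traditional restore moves every byte back before the application can start, while the snapshot approach just mounts an existing copy. A minimal sketch of the two paths; the function names and catalog are invented for illustration, not Protect Plus's actual API:

```python
import shutil

def traditional_restore(backup_path, target_path):
    """Copy the whole backup back; hours for large datasets."""
    shutil.copytree(backup_path, target_path)
    return target_path  # the application can only start now

def snapshot_recovery(snapshot_catalog, name):
    """'Mount' an existing snapshot; no bulk copy, near-instant.
    A real system would expose the snapshot over NFS or iSCSI;
    here the mount is just a catalog lookup."""
    return snapshot_catalog[name]  # the application points here and runs
```

The same mounted copies double as the dev-ops and analytics data sources discussed next, which is why Bina treats data protection and data reuse as one solution.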
>> Bina: You know, dev-ops is a really important use case, and our clients are really leveraging some of these capabilities for it, because you want to make sure that, as application developers are developing their applications, they're working with the latest data, and that the testing they're doing is meaningful in finding the maximum number of defects, so you get the highest quality of code coming out of them. And being able to do that in a self-service-driven way means they're not having to slow down their innovation. We have clients leveraging our capabilities for those kinds of use cases. >> It's great to hear about the FlashSystem 9100, but let's hear what customers have to say about it. Not too long ago, IBM convened a customer panel to discuss many aspects of this announcement. So let's hear what some of the customers had to say about the FlashSystem 9100. >> Now Owen, you've used just about every flash system that IBM has made. Tell us, what excites you about this announcement of our new FlashSystem 9100? >> Well, let's start with the hardware. The fact that they took the big modules from the older systems and collapsed that down to a two-and-a-half-inch form-factor NVMe drive is mind-blowing. And to do it with the full-speed compression as well. When the compression was first announced for the last FlashSystem 900, I didn't think it was possible. We tested it; I was proven wrong. (laughing) It's entirely possible. And to do that on a small form-factor NVMe drive is just astounding. Now to layer on the full software stack, get all those features and the possibilities for your business, and what we can do to leverage those systems and technologies, and take the snapshots and the replication and the insights into what our system's doing, it is really mind-blowing what's coming out today, and I cannot wait to just kick those tires. There's more. With that real-world compression ratio, which we can validate on the new 900, and it's the same in this new system, which is astounding, we can get more, and just the amount of storage you get in this really small footprint. Like, two rack units is nothing. Half our servers are two rack units, which is absolutely astounding; to get that much data in such a very small package, like 460 terabytes, is phenomenal, with all these features. The full solution is amazing, but what else can we do with it? And especially, as they've said, if it's for a comparable price to what we've bought before, and we're getting the full solution with the software, the hardware, the extremely small form factor, what else can you do? What workloads can you pull forward? Where our backup systems weren't on the super-fast storage like our production systems are, now we can pull those forward and they can give the same performance as production to run the back-end of the company, which I can't wait to test. >> It's great to hear from customers, the centerpiece of the Wikibon community. But let's also get the analyst's perspective. Let's hear from Eric Burgener, who's a Research Vice President for Storage at IDC. >> Thanks very much, Peter, good to be back. >> So we've heard a lot from a number of folks today about some of the changes that are happening in the industry, and I want to amplify some things and get the analyst's perspective. Wikibon, as a fellow analyst firm, believes pretty strongly that the emergence of flash-based storage systems is one of the catalyst technologies that's driving a lot of the changes.
If only because old storage technologies are focused on persisting data: disk, slow, but at least it was there. Flash systems allow a bit flip; they allow you to think about delivering data to anywhere in your organization, to different applications, without a lot of complexity. But it's gotta be more than that. What else is crucial to making sure that these systems in fact are enabling the types of applications that customers are trying to deliver today? >> Yeah, so actually there's an emerging technology that provides the perfect answer to that, which is NVMe. If you look at most of the all-flash systems that have shipped so far, they've been based around SCSI. SCSI was a protocol designed for hard disk drives, not flash, even though you can use it with flash. NVMe is specifically designed for flash, and that's really gonna open up the ability to get the full value of the performance, the capacity utilization, and the efficiencies that all-flash arrays can bring to the market. And in this era of big data, more than ever, we need to unlock that performance capability. >> So as we think about big data and AI, that's gonna have a significant impact overall on the market and on how a lot of different vendors are jockeying for position. When IDC looks at the impact of flash, NVMe, and the reemergence of some traditional big vendors, how do you think the market landscape's gonna change over the next few years? >> Yeah, the way this market has developed, the NVMe-based all-flash arrays are really gonna be a carve-out from the primary storage market, which is SCSI-based AFAs today. So we're gonna see that start to grow over time; it's just emerging. We had startups begin to ship NVMe-based arrays back in 2016. This year we've actually got several of the majors with products based around their flagship platforms that are optimized for NVMe. So very quickly we're gonna move to a situation where we've got a number of options from both startups and major players available, with the NVMe technology at the core. >> And as you think about NVMe at the core, it also means that we can do more with software, closer to the data. So that's gotta be another feature of how the market's gonna evolve over the next couple of years, wouldn't you say? >> Yeah, absolutely. A lot of the data services that generate latencies, like in-line data reduction, encryption and that type of thing; we can run those with less impact on the application side when we have much more performant storage on the back end. But I have to mention one other thing. To really get all that NVMe performance all the way to the application side, you've gotta have an NVMe over Fabrics connection. So it's not enough to just have NVMe in the back-end array; you need that RDMA connection to the hosts, and that's what NVMe over Fabrics provides for you. >> Great, so that's what's happening on the technology-product-vendor side, but ultimately the goal here is to enable enterprises to do something different. So what's gonna be the impact on the enterprise over the next few years? >> Yeah, so we believe that SCSI clearly will get replaced in the primary storage space by NVMe over time. In fact, we've predicted that by 2021, over 50% of all the external primary storage revenue will be generated by these end-to-end NVMe-based systems. So we see that transition happening over the course of the next two to three years.
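The structural reason NVMe "opens up" flash performance, as Burgener puts it, is parallelism: SCSI funnels commands through a single, shallow queue, while NVMe allows thousands of deep queues per device. A back-of-the-envelope comparison using the commonly cited protocol limits (treat the exact figures as illustrative):

```python
# Commonly cited per-device command limits.
sata_in_flight = 1 * 32            # AHCI/SATA: 1 queue, 32 commands
sas_in_flight  = 1 * 254           # SAS/SCSI: 1 queue, ~254 commands
nvme_in_flight = 65_535 * 65_536   # NVMe: up to 64K queues x 64K entries

print(f"NVMe vs SAS: {nvme_in_flight / sas_in_flight:,.0f}x more "
      f"commands in flight")
```

Flash media can service many operations concurrently, so under SCSI the protocol's ceiling on in-flight commands, not the media, becomes the bottleneck; that is the headroom NVMe, and end to end NVMe over Fabrics, unlocks.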
>> Eric Burgener: Probably by the end of this year, we'll have NVMe-based offerings with NVMe over Fabrics front ends available from six of the established storage providers, as well as a number of smaller startups. >> We've come a long way from the brown, spinning stuff, haven't we? >> (laughing) Absolutely. >> Alright, Eric Burgener, thank you very much. IDC Research Vice President, great once again to have you on theCUBE. >> Thanks, Peter. >> Always great to get the analyst's perspective, but let's get back to the customer perspective. Again, from that same panel that we saw before, here are some highlights of what customers had to say about IBM's Spectrum family of software. (upbeat music) We love hearing those customer highlights, but let's get into some of the overall storage trends, and to do that we've asked Eric Herzog and Bina Hallman back to theCUBE. Eric, Bina, thanks again for coming back. So, what I want to do now is talk a little bit about some trends within the storage world and what the next few years are gonna mean, but Eric, I want to start with you. I was recently at IBM Think, and Ginni Rometty talked about the idea of putting smart to work. Now, I can tell you that means something to me, because of the whole notion of how data gets used and how work gets institutionalized around your data. What does storage do in that context, to put smart to work? >> Well, I think there's a couple of things. First, we've gotta realize that it's not about storage; it's about the data and the information that happens to sit on the storage. So you have to have storage that's always available, always resilient, is incredibly fast, and, as I said earlier, transparently moves things in and out of the cloud, automatically, so that the user doesn't have to do it. The second thing that's critical is the integration of AI, artificial intelligence, into the storage solution itself: what the storage does, how you do it, and how it plays with the data. But also, if you're gonna do AI on a broad scale; for example, we're working with a customer right now whose AI configuration is 100 petabytes, leveraging our storage underneath the hood of that big, giant AI analytics workload. So you have to think of AI both in the storage, to make the storage better and more productive with the data and the information that it has, but then also as the undercurrent for any AI solution that anyone wants to employ, big, medium or small. >> So Bina, I want to pick up on that, because there are some advanced technologies that are being exploited within storage right now to achieve what Eric's talking about, but there's gonna be a lot more, and there's gonna be more intensive application utilization of some of those technologies. What are some of the technologies that are becoming increasingly important, from a storage standpoint, that people have to think about as they try to achieve their digital transformation objectives? >> That's right. I mean, Peter, in addition to some of the basics around making sure your infrastructure is enabled to handle the SLAs and the level of performance that's required by these AI workloads, when you think about what Eric said, this data's gonna reside on-premises, behind a firewall, potentially in the cloud, or in multiple public clouds. How do you manage that data? How do you get visibility to that data? And then, how do you leverage that data for your analytics?
And so data management is going to be very important, but also being able to understand what that data contains, to run the analytics, and to do things like tagging the metadata and then doing some specialized analytics around that, is going to be very important. The fabric to move that data, data portability from on-prem into the cloud and back, bidirectionally, is gonna be very important as you look into the future. >> And obviously things like IoT are gonna mean bigger, more, more available. So a lot of technologies, in the big picture, are gonna become more closely associated with storage. I like to say that at some point in time we've gotta stop calling this stuff storage, because it's gonna be so central to the fabric of how data works within a business. But Eric, I want to come back to you and say, those are some of the big-picture technologies, but what are some of the little-picture technologies that are nonetheless really central to being able to build up this vision over the course of the next few years? >> Well, a couple of things. One is the move to NVMe. So we've integrated NVMe into our FlashSystem 9100; we have fabric support, and we already announced, back in February actually, fabric support for NVMe over an InfiniBand infrastructure with our FlashSystem 900, and we're extending that to all of the other interconnects from a fabric perspective for NVMe, whether that be Ethernet or fibre channel, and we've put NVMe in the system. We also have integrated our custom flash modules; our FlashCore technology allows us to take raw flash and create, if you will, a custom SSD. Why does that matter? We can get better resiliency, and we can get incredibly better performance, which is very tied to your applications, workloads and use cases, especially in a data-driven multi-cloud environment. It's critical that the flash is incredibly fast, and it really matters. And resilient, because what do you do if you try to move data to the cloud and you lose it? If you don't have that resiliency and availability, that's a big issue. I think the third thing is what I call the cloud-ification of software. All of IBM's storage software is cloud-ified. We can move things simultaneously into the cloud, it's all automated, and we can move data around all over the place. Not only our data, not only to our boxes: we can actually move other people's arrays' data around for them, and we can do it with our storage software. So it's really critical to have this cloud-ification. It's really cool to have this new technology, NVMe, from an end-to-end perspective, for the fabric and then inside the system, to get the right resiliency, the right availability, and the right performance for your applications, workloads and use cases, and you've gotta make sure that everything is cloud-ified, portable, and mobile, and we've done that with the solutions that are wrapped into our FlashSystem 9100, which we launched a couple of weeks ago. >> So you are both thought leaders in the storage industry. I think that's very clear, and in the whole notion of storage technology, you work with a lot of customers, you see a lot of use cases. So I want to ask you one quick question, to close here. And that is: if there were one thing that you would tell a storage leader, a CIO or someone who thinks about storage in a broad way, one mindset change that they have to make, to start this journey and get it going so that it's gonna be successful, what would that one mindset change be? Bina, what do you think?
>> You know, I think it's really that there are a lot of capabilities out there. It's really around simplifying your environment, and making sure that, as you're deploying these new solutions or new capabilities, you've got a partnership with a vendor that's gonna help you make it easier: take those complex tasks, make them easier, deliver those step-by-step instructions and documentation, and be right there when you need their assistance. So I think that's gonna be really important. >> So look at it from a portfolio perspective, where best of breed is still important, but it's gotta work together because it leverages itself. >> It's gotta work together, absolutely. >> Eric, what would you say? >> Well, I think the key thing is, people think storage is storage. All storage is not the same, and one of the central tenets at IBM Storage is to make sure that we're integrated with the cloud. We can move data around transparently, easily, simply; Bina pointed out the simplicity. If you can't support the cloud, then you're really just a storage box, and that's not what IBM does. Over 40% of what we sell is actually storage software, and all that software works with all of our competitors' gear. And in fact, our Spectrum Virtualize for Public Cloud, for example, can simultaneously have datasets sitting in a cloud instantiation and sitting on premises, and then we can use our copy data management to take advantage of that secondary copy. That's all because we're so cloud-ified from a software perspective. So all storage is not the same, and you can't think of storage as, I need the cheapest storage. It's gotta be: how does it drive business value for my oceans of data? That's what matters most, and by the way, we're very cost-effective anyway, especially because of our custom flash modules, which give us a real price advantage. >> You ain't doing business at a level of 100 petabytes if you're not cost-effective. >> Right, so those are the things that we see as really critical: storage is not storage. Storage is about data and information. >> So let me summarize your point then, if I can, really quickly: we have to think about storage as the first step to great data management. >> Absolutely, absolutely, Peter. >> Eric, Bina, great conversation. >> Thank you. >> So we've heard a lot of great thought leadership comments on the data-driven journey with multi-cloud, and some great product announcements. But now, let's do the crowd chat. This is your opportunity to participate in these proceedings. It's the centerpiece of the digital community event. What questions do you have? What comments do you have? What answers might you provide to your peers? This is an opportunity for all of us collectively to engage and have those crucial conversations that are gonna allow you, from a storage perspective, to drive business value in your digital business transformations. So, let's get straight to the crowd chat. (bright music)
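As a coda to Bina's data-management point and Eric's cloud-ification point: the precondition for moving data transparently between on-prem and clouds is knowing what you have and where it lives. A minimal sketch of that kind of catalog; every entry and tag below is invented for illustration:

```python
# Invented catalog: where each dataset lives and how it is tagged.
CATALOG = [
    {"name": "orders_db",      "location": "on-prem", "tags": {"hot", "primary"}},
    {"name": "claims_archive", "location": "cloud_a", "tags": {"cold"}},
    {"name": "test_copies",    "location": "cloud_b", "tags": {"dev", "masked"}},
]

def find(tag=None, location=None):
    """Answer 'what data do we have, and where is it?' questions."""
    return [d["name"] for d in CATALOG
            if (tag is None or tag in d["tags"])
            and (location is None or d["location"] == location)]

print(find(tag="cold"))           # candidates for cheaper tiers
print(find(location="cloud_a"))   # everything sitting in that cloud
```

With metadata like this exposed, as Eric suggested storage can do, placement, tiering, and secondary-use decisions become queries instead of guesswork.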