Bina Hallman, IBM | VMworld 2019
>> Presenter: Live from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE. Covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> So good to have you here with us on the first day of three days of live coverage here in San Francisco, as theCUBE continues its 10th year of coverage at VMworld 2019. Along with John Troyer, I'm John Walls, glad to have you with us. We're joined now by Bina Hallman, who is the vice president of storage at IBM. Bina, good to have you with us this afternoon. >> Thanks for having me. >> You bet. You know, your everyday assignment is what's keeping so many people up at night, and that's how do we defend ourselves, cyber. How do we develop these resilient networks, resilient services. Let's take a step back for a second and try to paint the scope of the problem in terms of what you're seeing at IBM, in terms of cyber intrusions, the nature of those attacks, and the areas where those are happening. >> I'll tell you, from a client industry perspective, right, I'll touch on that a little bit. But cyber resiliency, cyber security, it's a huge topic. This is something that every business is thinking about, is talking about. It's not just a discussion in the different departments, it's at the C-suite level, the board level. Because if you think about it, cyber crimes, as frequent as they are and as impactful as they are, they can really affect the overall company's revenue generation. The cost of recovering from them can be very expensive. >> We're talking about more than just breaches here. Every week we hear about ransomware; it's very prevalent, it's here. I honestly hear a lot about governments, small town governments, or state governments, municipal governments, maybe because they have reporting requirements. I don't know what goes on underneath in the private sector, but does it seem like that is one of the things? >> That's right, that's right. We hear it in the news a lot. We hear about ransomware quite a bit, as well as data breaches, as other types of things. When you look at some of the analyst statistics and what they say about the frequency of these types of events, and the likelihood of a business getting affected, the likelihood of a business getting affected by a cyber event is 1 in 3. It used to be 1 in 4 a couple years ago, now it's 1 in 3 over the next two years. Ransomware itself is increasing in frequency. I think it was like every 14 seconds there is a ransomware attack somewhere in the world. The cost of this is tremendous. It's in the trillions of dollars, both from recovering from that attack, the loss in business and revenue generation, and actually the impact to the company's reputation. Again, it's not just ransomware, and it's happening in many industries. You talked about government; it's in manufacturing, it's in financial, it's in health, it's in transportation. When you step back and ask how it is so broad: every organization to some extent is going through some level of transformation. There's digital transformation. They're leveraging capabilities like hybrid multi-cloud, having resources on prem, workloads on prem, some services in the cloud. They've got team members that are using mobile devices. Some companies, depending on their business, might have IoT. So when you look at all of those entry points, these are new ways that the bad guys can get into an organization. That creates the scale, and the complexity just gets very large. It used to be that you had a backup.
The traditional way for business resiliency used to be you do a backup, you have the data on an external system, you restore it if something happened. And then there was business continuity. You would have a secondary infrastructure so that in the case of an accident or some kind of a natural disaster, which didn't happen very often, you would have somewhere to go, a secondary infrastructure. All of those were designed with the likelihood of that happening being very low. Then the recovery times and the disruption to business were somewhat tolerable. These days, with all of the dynamics we're talking about, and the potential areas of entry, you need more of an end-to-end solution. That's a cyber resiliency strategy that is really comprehensive, and that's what a lot of the businesses are thinking about today. How do I make sure I have a complete solution and a strategy that allows me to survive through, and come up very quickly after, an attack happens. I think most people recognize that they're going to get impacted at some point. It's not if, but when, and when it does happen, how do I quickly recover. >> You said it with the statistic, that 1 in 3 every two years. So my math tells me in six years' time, I'm going to get hit by that standard. But it tells me that it's not if, it is when. So in terms of the strategies that companies are adopting, what do you recommend? What do you suggest now? You paint a realistically grim picture, that there are so many different avenues, different opportunities, and it's hard to put your fingers in all those holes. >> There's a lot happening in this space, and I think that, you know, there are different standards, a lot of regulations, but one that has been accepted and is being leveraged in the US is a framework and some guidance from the NIST organization, the National Institute of Standards and Technology. It's a framework that they put in place, guidance on how do you plan for, how do you detect, and then recover from these types of situations. I'll talk about it a little bit, but it's a very good approach. It starts with an organization identifying what are some of the critical business services that their business is dependent on. What are they, what are the systems, what are the workloads, what are the applications. They identify those, and then what's the tolerance level. How quickly do you need to come up. What's the RPO, the RTO. Based on that, develop and prioritize a plan. That plan has to be holistic. It involves everyone from the CIO to the CISO, the security office, to the operations, to the business continuity, to the data owners, the line of business. And then in this environment, you've got partners, you've got services you're leveraging. All of that has to be encompassed for those key services that you identify and prioritize as a client, that you need up and running, and up and running very quickly. One example is a client, a financial institution. They determined they had 300 services they needed up and running within 24 hours in case there was an attack, or in case something happened to their data or their environment. That is what they defined as their requirement. Then you go about working with them to do a few things. You identify, and then there are other phases around that I can talk about as well. >> I was going to go over to IBM a little bit, in that obviously you're with IBM and we're talking about storage. People may not realize how integral storage is now in security, but IBM brings to the table a lot more than just storage. >> Absolutely.
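As a rough illustration of the "identify and prioritize" step Hallman describes, the sketch below models a small service catalog with recovery objectives and sorts it into a recovery order. It is a hypothetical example, not an IBM or NIST tool; the service names, fields, and priority rule are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str              # critical business service (hypothetical names)
    rpo_minutes: int       # recovery point objective: max tolerable data loss
    rto_minutes: int       # recovery time objective: max tolerable downtime
    revenue_critical: bool

# Output of the "identify" phase: what the business depends on.
catalog = [
    Service("payments-api", rpo_minutes=5, rto_minutes=60, revenue_critical=True),
    Service("hr-portal", rpo_minutes=1440, rto_minutes=2880, revenue_critical=False),
    Service("trade-ledger", rpo_minutes=0, rto_minutes=30, revenue_critical=True),
]

def recovery_priority(svc: Service):
    # Revenue-critical services first, then tightest RTO, then tightest RPO.
    return (not svc.revenue_critical, svc.rto_minutes, svc.rpo_minutes)

for rank, svc in enumerate(sorted(catalog, key=recovery_priority), start=1):
    print(f"{rank}. {svc.name}: up within {svc.rto_minutes} min, "
          f"data loss <= {svc.rpo_minutes} min")
```

In the financial institution example Hallman cites, the equivalent catalog held 300 services, all with a 24-hour recovery requirement.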
>> So can you talk a little bit about that portfolio and IBM's approach? >> Sure. So when I talk about the NIST framework and I talk about the identify stage, there are also things around protection, protecting the environment and those services and those systems, the infrastructure. We do a lot in that space. It's around detection, too. So now that you've got the protection, and protection might include things like having identity management, having access control, just making sure that the applications are at the latest code levels. Oftentimes that's when the vulnerability comes in, when you don't have those security patches installed. Data protection, and when it comes to that segment, we've got a very rich portfolio of data protection capabilities with our Spectrum Protect offerings. From a protection perspective, going into encryption, having capabilities where the infrastructure is designed to support multiple types of separation. You can have physical separation, so you can have an air gap; things like tape are ideal for that because it's physically separated. Tiering to the cloud. You can have technologies like write once, read many, where the copies are immutable; you can't change those. You can read them, but you can't change them. We've done a lot of work and innovation around what we call safeguarded copies. This is making snapshots, but those snapshots are not deletable, they're access controlled, they're read-only. That allows you to very quickly bring up an environment. >> I think people don't realize that. I see some patterns where sometimes these things hide. They'll be in there and they will look innocuous, so you can't just restore the last backup. >> That's right. >> They may try to rewrite the backup, so you may have to go back and find a good one. >> Absolutely, and detection is very important. Detecting that as early as possible is the best way to reduce the cost of recovering from these kinds of events. But like you said, I think, I want to say 160 days, your environment might be exposed for 160 days before you detect it. So having capabilities in a portfolio, in our offerings, and we do a lot working with our research team, our security team, on things like our data protection, where we have algorithms built in, where we look for patterns and we look for anomalies. As soon as we see the patterns for malware, ransomware, we alert the operator, so you don't allow it to be resident for that period of time. You quickly try to identify it. Another example is in our infrastructure management software. You can see your whole heterogeneous storage environment. You typically start out by baselining a normal environment, similar to the backup piece, but then it looks for anomalies: are there certain things happening in the network or the storage that should warn the operator. >> I almost get the feeling that sometimes it's almost like termites. You don't realize you have a problem until it's too late, because they haven't been visible. In a 160-day window, whatever it might be, you might be past that, but because that attack was malicious, and clandestine enough that you didn't find it, it does cause problems. So as we're wrapping up here, what kind of confidence do you want to share with the end users, with people, to let them know that there are tools that they can deploy? That it's not all grim reaper. But it is difficult. >> It is difficult, it's very real. But it's absolutely something that every business can have under control, have a plan around.
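The detect-by-baselining pattern Hallman describes, establish what normal looks like, then alert on deviations, can be sketched in a few lines of statistics. This is a minimal illustration of the idea only, not the actual algorithm inside Spectrum Protect or IBM's management software; the signal (bytes changed per backup run) and the alert threshold are assumptions.

```python
import statistics

# Daily "bytes changed" observed by a backup job (hypothetical history, in GB).
# Ransomware that encrypts files in place tends to show up as a sudden spike,
# because every touched file looks brand new to the backup engine.
history = [12.1, 11.8, 12.4, 13.0, 11.9, 12.2, 12.6]
today = 97.5

mean = statistics.mean(history)
stdev = statistics.stdev(history)
z_score = (today - mean) / stdev

THRESHOLD = 4.0  # assumed alerting threshold, in standard deviations
if z_score > THRESHOLD:
    print(f"ALERT: change rate {today} GB is {z_score:.1f} sigma above the "
          f"{mean:.1f} GB baseline -- possible ransomware activity")
```

The same shape of logic, baseline first, then watch for anomalies, applies to the network and storage patterns Hallman mentions in the infrastructure management software.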
From an IBM perspective, we are the number one leader in security. Our focus is not just at a software level; it starts from the chips we design, to the servers we deliver, to the storage, the FlashCore modules, FIPS 140 compliance, the storage software, the data protection, the storage management software, all the way through the stack, all the way through our cloud infrastructure. Having that comprehensive end-to-end security, and we have those capabilities, we also have services. Our services and security organization work with clients to evaluate the environment and establish these strategies and implementation plans. It's really about creating the plan, prioritizing it, and implementing it, making sure the whole organization is aware of and educated on it. >> You've got to prepare, no doubt about that. Thanks for the time, Bina, we appreciate that. And it's not all doom and gloom, but it is tough. Tough work and very necessary work. Back with more here on theCUBE. You're watching our coverage from VMworld 2019, here in San Francisco.
Alistair Symon, IBM & Bina Hallman, IBM | IBM Think 2019
>> Announcer: Live from San Francisco, it's theCUBE, covering IBM Think 2019. Brought to you by IBM. >> Welcome back to theCUBE's coverage of day one of IBM Think 2019. I'm Lisa Martin with Dave Vellante. We're in San Francisco, where IBM Think, the second IBM Think, is at this newly rejuvenated Moscone Center. We're welcoming back to theCUBE Bina Hallman, VP of offering management from IBM. Bina, it's great to have you back on the program. >> Good morning. >> And we're welcoming to theCUBE Alistair Symon, VP of storage development at IBM. Welcome. >> Yeah, thank you, good to be here. >> So we're going to be here for four days, big event, Bina, and we were talking before we went live, expecting 25 to 30,000 people at the second annual IBM Think, which is this conglomeration of, what, five, six, what used to be disparate shows. Talk to us about some of the exciting announcements coming out with respect to data protection, storage, cyber resiliency. >> Yeah, no, this is a great event. As you said, this is our second one, first time in San Francisco, here in a great venue. We have close to 30,000 clients and participants here. It's a big event, right? You know, the topics and announcements you'll hear about are around, you know, cloud, multi-cloud solutions, AI, security, infrastructure, right? So in general, quite a broad set of new topics and announcements at Think. But from a storage perspective, you know, we've done a number of new announcements, or are doing a number of new announcements, around things we're doing around modern data protection, around solutions in general, whether it's blockchain, cyber resiliency, private cloud solutions, those types of things, and then of course around our FlashSystem offerings. So we have a great set of announcements occurring this week. >> I know you guys have to, you know, put on your binoculars and think about what's coming next. So I wonder if we could talk about some of the big drivers, Bina, that you're seeing in the marketplace, and Alistair, that you're driving in development. I mean, data, obviously, we always talk about data, but we talk about data differently than we used to ten years ago. Cloud obviously is a megatrend. You're mentioning some new technologies like blockchain and AI. What are the big drivers that you guys look at, and how does that affect your development roadmaps? >> Yes, certainly, from an industry perspective, and what clients are dealing with and looking to us for solutions for, you mentioned a few. You know, AI, having that end-to-end data pipeline and set of capabilities. We made a number of announcements in the second half of last year around AI solutions that allow clients to start from the beginning all the way to the end and meet their data needs, whether it's high-performance, you know, storage and ingest, to capacity tiers, being able to hold large amounts of data, and having that complete end-to-end solution, whether it's with our PowerAI Enterprise or some of the things we did around our Spectrum Storage for AI with Nvidia. So, you know, a lot of focus around AI, but also, as clients are getting more and more into moving some of their workloads to the cloud or leveraging multi-cloud, you know, today clients are about 20% into their cloud journey. There's still that 80% that's there that we need to help them with. And a lot of the solutions today tend to be, from a cloud perspective, proprietary, with potentially, you know, an inconsistent set of management tools. So being able to help clients and focus on multi-cloud solutions, that's a big area for us as well, and then cyber resiliency is the other.
>> Yeah, and I think, just talking about the multi-cloud aspect, clearly when we develop our products we're very focused on being able to connect to the different cloud protocols that are required to move the data from the storage out there to the cloud, and to do it in a performant way. I think the other thing, from an analytics standpoint, that is really important is we've been very focused on delivering the performance in the storage system that's required, both from a bandwidth and sheer IOPS perspective, with very low latency. And you'll see that with some of the technologies we brought out very recently in our all-flash arrays, where we're all NVMe-based, both connected to the servers and to the storage. So really low latency for applications, so you can get the data as fast as you can into the analytics engines. So we're very focused on these new technologies that enhance the new capabilities. >> Bina, you mentioned something interesting. I always love stats, I geek out, Dave knows this about me: customers are about 20% of the way into their cloud journey. We talk about it as a journey all the time, right, Dave, digital transformation. That's an interesting number. You also mentioned something that IBM is really poised to help customers achieve, this AI journey from beginning to end. If a customer is in this process of digital transformation, and has, what are the stats, the average enterprise has, you know, about five private and public clouds, what is that AI journey? Obviously it has to be concurrent with a cloud journey; there's no way to actually do one and then the other. But I'm curious, what is the beginning of that AI journey for a customer who is going, all right, we're in this hybrid multi-cloud world, that's where we live, we have to start preparing our data for AI, because we know on multiple levels there's a tremendous amount of opportunity. How do you help them start? >> Yeah, you know, what we typically see with clients is they'll start out on some small AI projects in different parts of their, you know, environment, and those can start in a server with internal storage or internal SSDs, etc. But pretty soon, as they want to move that to an enterprise or more of a complete solution, that requires more of the enterprise capability. So, as Alistair talked about, right, from ingest, to be able to have the right set of solutions, whether it's, you know, having the right set of performance and latency attributes, etc., and then the capacity tier. So it's really important, and we do this with our clients, to help them start with the initial footprint, but then make sure that, from an architecture perspective, they're set up to be able to grow into that larger environment, because analytics is all about, you know, that volume of data, and you're kind of mining it. So that's kind of the key there. >> The first time I ever went to Tucson, I was there on a tape mission; we had largely a tape facility. Lots has changed, I'm sure, since then, the development protocol, the environment. We hear a lot about two-pizza teams, you know, speed and agile. Can you talk a little bit about IBM's development process? >> Yeah, we're actually very much well down the road on a drive to agile development throughout all of our development teams worldwide, not just in Tucson, and that brings a number of benefits to us. It allows us to quickly prototype new functions so that we can test them out with our clients very early in the development process.
We're not just waiting till the end of the cycle to try something, just like a beta test, which we do to a large extent, but we want to engage with clients early in the cycle so we can get that initial feedback on designs, to make sure that we've done the right thing. An example of that would be what we did with cyber resiliency and our Safeguarded Copy on our DS8000 enterprise array. We worked with a large financial institution early on to model the design we were going to provide for that, and then we worked with them through the introduction of it and through the early testing. We put that out at the end of last year and are seeing great demand for it. So that allows you to take snapshots of your data and make those snapshots immutable; bad actors can't come in and delete that data, and if somebody does corrupt your production copy, you can do a quick restore from it. All done hand in hand with a client through the process. >> This is a ransomware play, is that right, or not necessarily? Maybe you could take us through a likely solution for a client. You hear about air gap, but there's more to it there. >> Yeah, so, you know, a typical solution, it's really around being able to work with clients to plan, because these events are happening more and more frequently, and if you assume that the bad guys are going to get in, or they're already in and you just haven't noticed, it's a matter of time, then storage plays a huge role in the cyber resiliency plan, right? So it's really around planning, then detection and recovery; we talk about it in that way. From a planning perspective, we do a lot of things. We ensure clients' data is on infrastructure that can't be compromised. We ensure that they have things like air gapping; air gapping is where, you know, if a bad actor gets into one environment, they can't do something bad with the other environment. Think of it as creating a physical separation. We have our tape solutions as a classic example, but there are also technologies like immutable, write once, read many; we have that on our Cloud Object Storage and our Spectrum Scale software-defined storage offerings. And then it's also around data protection in general, making sure, you know, your copies, well, snapshots, essentially that you're setting up the snapshots in a way that they are secure, that you create that separation. But that's the planning phase. Another aspect of it that we help clients with is, you know, modeling that baseline operation. What does the environment look like under normal operations? What are the storage, you know, infrastructure patterns? What are the systems that are the most critical for your business and, you know, operations? What's their day-to-day usage, where are they? Once you have that established, then it's all about monitoring and looking for abnormal activities, and if you do see some set of abnormal activities, being able to detect that. Our Spectrum Protect offering, that's data protection, we've built in analytics to look for things and patterns like malware, ransomware, right, and be able to alert. Now, once you've detected something like that, being able to quickly recover from it is really important, to get the business up and running, and that's where, you know, a lot of our storage offerings are automated from a data-restore perspective, being able to bring those copies back very quickly, get your business running very quickly. That's important. So all of these, you know, plan, detect, recover, is where storage plays a huge role across all of that.
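To make the Safeguarded Copy idea concrete, here is a toy model of an immutable snapshot catalog: copies cannot be deleted through the normal path, and recovery rolls back to the newest copy taken before the corruption was detected. This is a sketch of the concept only; the real DS8000 feature enforces immutability in the storage system itself, not in application code, and the schedule below is invented.

```python
from datetime import datetime, timedelta

class SafeguardedCatalog:
    """Toy model of an append-only, non-deletable snapshot catalog."""

    def __init__(self):
        self._copies = []  # list of (timestamp, label) tuples

    def take_copy(self, label, when):
        self._copies.append((when, label))

    def delete_copy(self, label):
        # Safeguarded copies are immutable: deletion is always refused.
        raise PermissionError(f"safeguarded copy '{label}' cannot be deleted")

    def restore_point(self, corruption_detected_at):
        # Newest copy strictly older than the detected corruption.
        clean = [c for c in self._copies if c[0] < corruption_detected_at]
        if not clean:
            raise RuntimeError("no clean copy predates the corruption")
        return max(clean)

cat = SafeguardedCatalog()
start = datetime(2019, 2, 11)
for hour in range(0, 24, 4):                 # hypothetical 4-hour schedule
    cat.take_copy(f"copy-{hour:02d}h", start + timedelta(hours=hour))

when, label = cat.restore_point(start + timedelta(hours=14))
print(f"restore from {label} taken at {when}")   # -> copy-12h
```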
>> I'm curious, we know that security issues are unfortunately commonplace every day, and I saw a stat the other day that the average security breach will cost an organization upwards of 3.8 million dollars. One of the things I'm curious about is, in your customer conversations, we're talking about data protection at the storage level and infusing that technology with the intelligence and the automation to facilitate that recovery. Where are your conversations within a customer? Are they at the business level? Because I imagine, you know, security and protection is at the C-suite. How are those business objectives helping to facilitate development of the actual technology? >> Yeah, these are definitely CIO types of conversations, but also, you know, once we engage in that conversation and go down that journey, we work with the clients very closely. We do what we call design-thinking workshops, so together with the client, we work on what are some of the, you know, top three things that, from a business-need perspective, they see, and then we work to ensure that we come to what we call these hills, these goals that we define jointly. And then Alistair and his team work to go refine those, and as they're developing, work closely with the client to ensure that we're achieving what we both expected and delivering it, whether it's starting with a minimal viable product, to productizing, or full productization. >> And again, I would say engaging with the clients early in the process is really important, because we'll find out things like what are their, you know, security requirements within their own data centers, which can vary from client to client, and it helps us understand how to build in things like how they want to manage their encryption keys, and in which particular ways they want that done to meet their own security requirements, and it can drive different development strategies from that. >> You guys were talking about Spectrum Protect earlier, and just data protection in general; it's a space that's heating up. I was talking about Tucson before, and tape, and tape used to be backup, that was it. Even the language is changing; it's called data protection now. Some people call it data management, which of course could mean a lot of things to a lot of different people; if you're talking to a database person, it's different maybe from your storage person. But the parlance is evolving, and it fits into multi-cloud. People are trying to get more out of their backup than just insurance. So what are you seeing as some of the drivers there, how does it fit into your multi-cloud strategy, and what is ultimately IBM's data protection portfolio strategy? >> Yeah, so, you know, tape in general, when you've got large amounts of data that you're looking to archive, tape is a great solution, and we are seeing more and more interest from, you know, cloud service providers leveraging tape as their archive tier. From an overall data protection and data management perspective, we think that the base, you know, basic data protection, making sure that the data is available when you need it, is there, but we think that has also evolved to where you do things like snapshots, right, snapshots that are in the native format, so for operational recovery you can very quickly restore those, and over a period of time, if you no longer need it, you can back it up to traditional data protection from that snapshot-based technology.
Of course, you have the different cloud consumption models and cloud scale that are enabling, you know, clients to leverage other types of storage, whether it's a cloud tier or, you know, cloud object storage, in our portfolio. So you've got the consumption models and the scale that are driving some of that. Put on top of that some of the things we talked about, like cyber resiliency, right, ensuring security and protecting that data from things like malware and the bad actors, right, that's very important. And then, you know, what we see coming forward from a transformation perspective, client transformation, is really bringing all of that together. So you have your data protection, you've got your unstructured data, whether, you know, I talked about Cloud Object Storage, our Scale offerings, you've got, you know, your archive data, but also then being able to put it all together and get value out of that data by looking at the metadata. We introduced an offering in the second half of last year, in the fourth quarter, called Spectrum Discover, that allows clients to, you know, get a catalog of that metadata, very quickly be able to get views and insights into their environment, but also be able to integrate that into their analytics workflow and be able to customize that metadata. So you can see a holistic solution coming together, from not just data protection, all the way up through complete AI, DevOps, analytics. >> Exactly, and that's the discovery and recovery piece, really. If we think about this thematically from a transformation perspective, is this really what you're talking about, facilitating security transformation? >> Absolutely. I mean, you know, security in all aspects, whether it's, you know, the basic encryption of data at rest, encryption of data in flight, to the higher level, you know, detection of these types of security breaches or events, and also the protection. Even if somebody does breach you, you've still got the recovery point in, say, a safeguarded copy that you can go back to, to make sure your data is restored. So even going beyond protecting against the breach itself, it's fully encompassing. >> And the last question, in terms of that data protection, where's the people element? Because we all know that that's the common denominator of any sort of security issue, is people. What's the human element in the conversation about what you guys are delivering? Are there maybe some human-error-proof components that are essential, that you're helping to develop based on all the history that we've seen with breaches? >> Yeah, I think, you know, overall it's about helping the client ensure that they've got their environment set up properly from a role-based access control perspective, ensuring that separation, and that the overall solution is architected to include some of these capabilities, whether it's air gapping or, you know, the immutable technologies, those types of things. Look, you know, whether the bad actors are outside the company getting in, or someone, you know, within the company, you have to have the right set of measures implemented, and it is around security, encryption, you know, role-based access control, all of that. >> Well, Bina, Alistair, thank you so much for joining Dave and me on theCUBE this morning. We appreciate your time and look forward to hearing a lot more news coming out over the next four days. >> Great, thank you very much. >> Yeah, thank you. >> For Dave Vellante, I'm Lisa Martin. You're watching theCUBE, live at IBM Think 2019. Stick around, we'll be right back with our next guest. [Music]
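As a footnote to the Spectrum Discover discussion above, scan unstructured data, attach custom metadata tags, and make the catalog queryable by analytics workflows, the general technique can be sketched as follows. This is an illustration of the pattern, not Spectrum Discover's actual interface; the tagging rules and record fields are invented.

```python
import fnmatch
import os

# Hypothetical tagging rules: filename pattern -> custom metadata tag.
RULES = [
    ("*.dcm", "medical-imaging"),
    ("*.csv", "tabular"),
    ("*.log", "machine-data"),
]

def build_catalog(root):
    """Walk a file tree and build a minimal, queryable metadata catalog."""
    records = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            tags = [tag for pattern, tag in RULES if fnmatch.fnmatch(name, pattern)]
            records.append({"path": path,
                            "size": os.path.getsize(path),
                            "tags": tags or ["untagged"]})
    return records

# The kind of query an analytics workflow might run against the catalog:
# all tabular data larger than 100 MB, as candidates for an AI pipeline.
catalog = build_catalog("/data")                     # hypothetical mount point
big_tables = [r for r in catalog
              if "tabular" in r["tags"] and r["size"] > 100 * 2**20]
```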
Bina Khimani, Amazon Web Services | Splunk .conf18
>> Announcer: Live from Orlando, Florida, it's theCUBE, covering .conf2018. Brought to you by Splunk. >> Welcome back to .conf2018, everybody, this is theCUBE, the leader in live tech coverage. I'm Dave Vellante with Stu Miniman, wrapping up day one, and we're pleased to have Bina Khimani, who's the global head of Partner Ecosystem for the infrastructure segments at AWS. Bina, it's great to see you, thanks for coming on theCUBE. >> Thank you for having me. >> You're very welcome. >> Pleasure to be here. >> It's an awesome show, everybody's talking data, we love data. >> Yes. >> You guys, you know, you're the heart of data and transformation. Talk about your role. What does it mean to be the global head of Partner Ecosystems, infrastructure segments? A lot going on in your title. >> Yes. >> Dave: You're busy. (laughing) >> So, in the infrastructure segment, we cover dev apps, security, networking, as well as cloud migration programs, different types of cloud migration programs, and we've got segment leaders who really own the strategy and figure out where are the best opportunities for us to work with the partners, as well as partner development managers and solution architects who drive adoption of the strategy. That's the team we have for this segment. >> So everybody wants to work with AWS, with maybe one or two exceptions. And so Splunk, obviously, you guys have gotten together and formed an alliance. I think AWS has blessed a lot of the Splunk technology, and vice versa. What's the partnership like, how has it evolved? >> So Splunk has been an excellent partner. We have really joined hands together on many fronts. They are a fantastic AWS Marketplace partner. We have many integrations of Splunk and AWS services, whether it is Kinesis Data Firehose, or Macie, or WAF. So many services; Splunk and AWS really are well integrated together, they work together. In addition, we have joint go-to-market programs. We have field engagement, we have demand generation campaigns. We join hands together to make sure that our joint customers are really getting the best value out of it. So speaking of partnership, we recently launched a migration program for getting Splunk on-prem, Splunk Enterprise customers to Splunk Cloud while, you know, they are on their journey to cloud anyway. >> Yeah, Bina, let's dig into that some. We know AWS loves talking about migrations, we dig into all the databases that are going, and we talk at this conference, you know, Splunk started out very much on premises, but we've talked to lots of users that are using the cloud, and it's always that question, right? How much do they migrate, how much do they start there? Bring us inside, you know, what led to this and what are the workings of it. >> So, you know, if you look at the common problems customers have on prem, they are the same problems that customers have with Splunk Enterprise on prem, which is, you know, they are looking for resiliency. Their administrator goes on vacation; they want to keep it up and running all the time. They have people making some changes that shouldn't have been made. They want the experts to run their infrastructure. So Splunk Cloud is run by Splunk, which is, you know, they are the best at running that. Also, you know, I just heard a term called lottery proof. So Splunk Cloud is lottery proof. What that means, the funny thing is, that, you know, if your administrator wins the lottery, you're not out of business. (laughs) At the same time, look at the time to value.
I was talking to a customer last night over dinner, and they were saying that if they wanted to get on Splunk Enterprise, for the volume of data that they needed ingested into Splunk, it would take them six months just to get the hardware in place. With Splunk Cloud they were running in 15 minutes. So, just the time to value is very important. Other things: you know, you don't need to plan for your peak performance. You can stretch it, you can get all the advantages of scalability, flexibility, security, everything you need. As well, running Splunk Cloud, you know, you are truly cost-optimized. Splunk Cloud is built for AWS, so it's really cost-optimized in terms of infrastructure costs as well as the Splunk licensing cost. >> Yeah, it's funny you mentioned the joke, you know, you go to Splunk Cloud, you're not out of a job. I mean, what we've heard, the Splunk admins are in such high demand. Kind of running their instances probably isn't, you know, a major thing that they'd want to be worrying about. >> Yes, yes. So Splunk administrators are in such high demand, and because of that, you know, not only are customers struggling with having the right administrators in place, but also with retaining them. And when they go to cloud, you know, this is a SaaS version; they don't need administrators, nor do they need hardware. They can just trust the experts who are really good at doing that. >> So migrations are a tricky thing, and I wonder if we can get some examples, because it's like moving a house. You don't want to move, or you actually do want to move, but you have to be planful, it's a bit of a pain, but the benefits, a new life, so. In your world, it's got to be better, so the world that you just described of elastic, you don't have to plan for peaks, or performance, the cost, capex, the opex, all that stuff. It's 10X better, no debate there. But still there's a barrier that you have to go through. So, how does AWS make it easier? Or maybe you could give us some examples of successful migrations and the business impact that you saw. >> Definitely. So like you said, right, migration is a journey, and it's not always an easy one. So I'll talk about different kinds of migrations, but let me talk about Splunk migration first. So Splunk migration, unlike many other migrations, is actually fairly easy, because the Splunk data is transient data, so customers can just point all their data sources to Splunk Cloud instead of Splunk Enterprise, and it will start pumping data into Splunk Cloud, which is productive from day one. Now, if some customers want to retain 60 to 90 days of data, then they can run their Splunk Enterprise on prem for 60 more days, and then they can move on to Splunk Cloud. So in this case there is no actual data migration involved. And because this is log data that people want to see only for 60 to 90 days, and then it's not valuable anymore, they don't really need to do a large migration. In this case it's practically just configure your data sources and you are done. That's the simplest kind of migration, which is Splunk migration to Splunk Cloud. Let's talk about different migrations. So, you have heard many customers, you know, like Capital One or many others, Dow Jones, they are saying that we are going all in on AWS, and they are shutting down their data centers. They are, you know, migrating hundreds of thousands of applications and servers, which is not as simple as Splunk Cloud, right? So, you know, AWS does this day in and day out.
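A rough sketch of the cutover arithmetic Khimani describes above: forwarders are repointed to Splunk Cloud on day one, and the on-prem environment stays up just until its retained events age out. The dates and the 90-day retention below are hypothetical; in practice the repointing happens in the forwarders' outputs configuration, not in code like this.

```python
from datetime import date, timedelta

RETENTION_DAYS = 90                  # how long searches still need old events
cutover = date(2018, 10, 1)          # forwarders start sending to Splunk Cloud

# From the cutover onward, new events land only in Splunk Cloud; the on-prem
# indexers serve historical searches while their data expires day by day.
decommission = cutover + timedelta(days=RETENTION_DAYS)

for elapsed in range(0, RETENTION_DAYS + 1, 30):
    print(f"day {elapsed:3}: on-prem still holds "
          f"{RETENTION_DAYS - elapsed} days of unexpired events")
print(f"safe to decommission on-prem on {decommission}")
```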
So we have figured it out again and again and again. In all of our customer interactions and migrations we are acquiring a ton of knowledge that we are building into our migration programs. We want to make sure that our customers are not reinventing the wheel every time. So we have migration programs like the migration acceleration program, which is for custom, large-scale migrations for larger customers. We have the partner migration program, which is entirely focused on working with SI partners, consulting partners, to lead the migrations. As well, we have the workload migration program, where we are standardizing migrations of standard applications like Splunk or Atlassian, or many other such standard applications, and how we can provide kind of an easy button to migrate. Now, when customers are going through this migration journey, you know, it's going to be 10X better like you said, but initially there is a hump. They are probably needing to run two parallel environments; there is a cost element to that. They are also optimizing their business processes; there is some delay there. They are doing some technical work, you know, discovery, prioritization, landing zone creation, security and networking aspects. There are many elements to this. So if you look at the graph, their cost is going to go down, but before that it goes up and then comes down. So what we try to do is really provide all the resources to take that hump out, in terms of technical support, technical enablement, you know, partner support, funding elements, marketing. There are all types of elements, as well as lots of technical integrations and quick starts, to take that hump out and make it really easy for our customers. >> And that was our experience; we're an Amazon customer, and we went through a migration, I don't know, five or six years ago. We had, you know, server racks and a cage, and we were like, you know, moving wires over, and you'd get an alert, you'd have to go down and fix things. And so it took us some time to get there, but it is 10X better now. >> It is. >> The developers were so excited, and I wanted to ask you about the dev-ops piece of it, because that's really, we just completely eliminated all the operational pieces of it and integrated it and let the developers take care of it. It truly became infrastructure as code. So the dev-ops culture has permeated our small organization; I can't imagine the impact on a larger company. Wonder if you could talk about that a little bit. >> Definitely. So, as customers are going through this cloud migration journey, they are looking at their entire landscape of applications, and they're discovering things that they never did. When they discover, they are trying to figure out, should I go ahead and migrate everything to AWS right now, or should I refactor and optimize some of my applications? And there I'm seeing both types of decisions, where some customers are taking most of their applications, shifting them to cloud, and then pausing and thinking, now it is phase two, where I am on cloud, I want to take advantage of the best-of-breed, whatever technology is there, and I want to transform my applications and really be more agile. At the same time, there are customers who are saying, I'm going to discover all my workloads and applications, and I'm going to prioritize a small set of applications which we are going to take through transformation right now.
And for the rest of it, we will lift and shift, and then we will transform. But as they go through this transformation, they are changing the way they do business. They are changing the way they are utilizing different technology. Their core focus is on, how do I really compete with my competition in the industry, and for that, how can IT provide me the agility that I need to roll out changes in my business day in, day out. And for that, you know, Lambda, the entire code portfolio, CodeBuild, CodeCommit, CodeDeploy, as well as CloudTrail, and, you know, all the services we have, as well as our partners have, they provide them truly that edge in their industry and market. >> Bina, how has the security discussion changed? When Stu and I were at the AWS public sector summit in June, the CIO of the CIA stood up on stage in front of 10,000 people and said, "The cloud on my worst day from a security perspective is better than my client-server infrastructure on a best day." That's quite an endorsement from the CIA, who's got some chops in security. How has that discussion changed? Obviously it's still fundamental, critical; it's something that you guys emphasize. But how have the perception and reality changed over the last five years? >> Security in the cloud is a shared responsibility. So, Amazon is really, really good at providing all the very, very secure infrastructure. At the same time, we are also really good at providing customers and business partners all of the tools, and hand-holding them, so that they can make their applications secure. Like you said, you know, many of the analysts are saying that AWS is far more secure than anything customers can have within their own data centers. And as you can see, in this journey customers are no longer asking whether it is secure or not. We are seeing the conversation change. In fact, speaking of Splunk, right, one customer that I talked to, I was asking them why they chose Splunk Cloud on AWS, and his take was, "I wanted near-instantaneous SOC compliance, and by moving to Splunk Cloud on AWS I got that right away." Even when I'm talking to public sector customers, they are saying, you know, I want FedRAMP, or in the healthcare industry, I want HIPAA compliance. Everywhere we are seeing that we are able to keep up with security and compliance requirements much faster than what customers can do on their own. >> So you take care of, certainly from the infrastructure standpoint, those certifications and that piece of the compliance, so the customer can worry about maybe some of the things that you don't cover, maybe some of their business processes and other documentation, ITIL stuff that they have to do, whatever. But now they have more time to do that, presumably, 'cause that's a check box; AWS has that covered for me, right? Is that the right thinking? >> Yes, plus we provide them all the tools and support and knowledge, and even partner support from partners who are really good at it, so that not only do they understand that the application and infrastructure will come together as an entire secure environment, but also they have everything they need to be able to make their applications secure. And Splunk is another great example, right? Splunk helps customers get application-level security, and AWS is providing them infrastructure, and together we are working to make sure our customers' applications and infrastructure together are secure.
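Returning to the cost "hump" Khimani described a moment ago, the period of running two parallel environments, a toy model makes the shape of the curve explicit. Every number here is invented for illustration; real migrations vary widely.

```python
ONPREM = 100    # monthly on-prem run rate (arbitrary units, hypothetical)
CLOUD = 60      # steady-state monthly cloud cost after optimization
OVERLAP = 6     # months of paying for both environments at once

months = range(1, 25)
migrate = [ONPREM + CLOUD if m <= OVERLAP else CLOUD for m in months]  # the hump
stay_put = [ONPREM for _ in months]                                    # never migrate

def cumulative(costs, month):
    return sum(costs[:month])

breakeven = next(m for m in months
                 if cumulative(stay_put, m) >= cumulative(migrate, m))
print(f"breakeven at month {breakeven}; "
      f"24-month savings: {sum(stay_put) - sum(migrate)}")
```

Programs like the ones Khimani lists (funding, technical enablement, quick starts) are aimed at shrinking both the height and the width of that hump.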
>> So speaking about database migrations, a hot topic at a high level anyway, I wonder if you could talk about database migrations. Andy Jassy obviously talks a lot about it; well, let's see, we saw RDS on prem at VMworld, big announcement. Certainly Aurora; DynamoDB is one of the databases we use. Redshift, obviously. How are database migrations going, and what are you doing to make those easier? >> So what we do, in a nutshell, right, for everything we try to build a programmatic, repeatable, scalable approach. That's what Amazon does. And what we do is that for each of these standard migrations, for databases, we try to figure out, let's take a few examples, and let's figure out playbooks, let's figure out runbooks, let's make sure technical integrations are in place. We have quick starts in place. We have consulting partners who are really good at doing this again and again and again. And we have all the knowledge built into tools and services and support, so that whenever customers want to do it, they don't run into hiccups and they have a really pleasant experience. >> Excellent. Well, I know you're super busy; thanks for making some time to come on theCUBE. I always love to have AWS on. So thanks for your time, Bina. >> Thank you, very nice to meet you both. >> Alright, you're very welcome. Alright, so that's a wrap for day one here at Splunk .conf2018. Stu and I will be back tomorrow, day two, more customers; we've got senior executives coming on tomorrow, of course Doug Merritt, always excited to see Doug. Go to siliconangle.com, you'll see all the news; theCUBE.net is where all these videos live, and wikibon.com for all the research. We're out, day one, Splunk .conf2018. You're watching theCUBE, we'll see you tomorrow. Thanks for watching. >> Bina: Thank you. (electronic music)
EMBARGOED DO NOT PUBLISH Eric Herzog, Bina Hallman 06 15 18 CUBEConversation
(upbeat music) >> (faintly) Three, two, one. >> Eric, Bina, thanks again for coming back. So, what I want to do now is talk a little bit about some trends within the storage world, and what the next few years are going to mean. Eric, I want to start with you. I was recently at IBM Think, and Ginni Rometty talked about the idea of "putting smart to work". Now, I can tell you that means something to me, because of the whole notion of how data gets used, how work gets institutionalized around your data. What does storage do in that context of "put smart to work"? >> Well, I think there's a couple things. First, we've got to realize that it's not about storage. It's about the data and the information that happens to sit on the storage. So, you have to have storage that's always available, always resilient, is incredibly fast, and, as I said earlier, transparently moves things in and out of the cloud automatically, so that the user doesn't have to do it. The second thing that's critical is the integration of AI, artificial intelligence, both into the storage solution itself, in what the storage does, how you do it, and how it plays with the data, but also if you're going to do AI on a broad scale. For example, we're working with a customer right now, and their AI configuration is 100 petabytes, leveraging our storage underneath the hood of that big giant AI analytics workload. So that's AI both ways: think of it in the storage, to make the storage better and more productive with the data and the information that it has, but then also as the undercurrent for any AI solution that anyone's deployed, big, medium, or small. >> So, Bina, I want to pick up on that, because there are some advanced technologies being exploited within storage right now to achieve what Eric's talking about, but there's going to be a lot more. >> Absolutely. >> There's going to be more intensive application utilization of some of those technologies. What are some of the technologies that are becoming increasingly important, from a storage standpoint, that people have to think about as they try to achieve their digital transformation objectives? >> That's right, Peter. In addition to some of the basics, around making sure your infrastructure is enabled to handle the SLAs and the level of performance that's required by these AI workloads, when you think about what Eric said, this data is going to reside on premises, it's going to be behind a firewall, potentially in the cloud or multiple public clouds. How do you manage that data? How do you get visibility into that data? And then, how do you leverage that data for your analytics? So, data management is going to be very important, but also being able to understand what that data contains, be able to run the analytics, be able to do things like tagging the metadata, and then doing some specialized analytics around that is going to be very important. The fabric to move that data, data portability from on premises into the cloud and back and forth, bidirectionally, is going to be very important as you look into the future. >> Obviously, things like IoT are going to mean bigger, more, more available. So a lot of technologies, in a big picture, are going to become more closely associated with storage. In fact, I like to say that, at some point in time, we've got to stop calling this stuff storage, because it's going to be so central to the fabric of how data works within a business.
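Herzog's point about storage that "transparently moves things in and out of the cloud automatically" is, at its core, a tiering policy. The sketch below shows the general shape of such a policy, demote data untouched for N days to a cloud tier, recall it transparently on access. It is illustrative only, not how IBM's cloud tiering is actually implemented, and the 30-day threshold is an assumption.

```python
from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=30)      # assumed demotion threshold

class TieredObject:
    def __init__(self, name, last_access):
        self.name, self.last_access = name, last_access
        self.tier = "flash"          # everything starts on the fast tier

    def read(self, now):
        if self.tier == "cloud":
            self.tier = "flash"      # transparent recall: the caller never notices
        self.last_access = now
        return f"data for {self.name}"

def tiering_sweep(objects, now):
    """Background job: demote anything cold for longer than COLD_AFTER."""
    for obj in objects:
        if obj.tier == "flash" and now - obj.last_access > COLD_AFTER:
            obj.tier = "cloud"

now = datetime(2018, 6, 15)
objects = [TieredObject("q1-report", now - timedelta(days=90)),
           TieredObject("live-db", now - timedelta(hours=1))]
tiering_sweep(objects, now)
print([(o.name, o.tier) for o in objects])  # q1-report -> cloud, live-db stays on flash
```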
Eric, I want to come back to you and say, those are some of the big-picture technologies, but where do some of the smaller-picture technologies fit, the ones that nonetheless are really central to being able to build out this vision over the course of the next few years? >> Well, a couple things. One is the move to NVMe. So we've integrated NVMe into our FlashSystem 9100. We have fabric support. We already announced, back in February actually, fabric support for NVMe over an InfiniBand infrastructure with our FlashSystem 900. We're extending that to all of the other interconnects from a fabric perspective for NVMe, whether that be Ethernet or whether that be Fibre Channel. We put NVMe in the system. We also have integrated our custom flash modules. Our FlashCore technology allows us to take raw flash and create, if you will, a custom SSD. Why does that matter? We can get better resiliency. We can get incredibly better performance, which is very tied into your applications, workloads, and use cases, especially in a data-driven, multi-cloud environment. It's critical that the flash is incredibly fast. It really matters. And resiliency: what do you do if you try to move it to the cloud and you lose your data? If you don't have that resiliency and availability, that's a big issue. I think the third thing is what I call the "cloudification" of software. All of IBM storage software is cloudified. We can move things simultaneously into the cloud. It's all automated. We can move data around all over the place. Not only our data, not only to our boxes; we can actually move other people's arrays' data around for them, and we can do it with our storage software. So, it's really critical to have this cloudification. It's really critical to have this new technology, NVMe from an end-to-end perspective for fabric, and then inside the system, to get the right resiliency, the right availability, the right performance for your applications, workloads, and use cases, and you've got to make sure that everything is cloudified, and portable, and mobile. We've done that with the solutions that are wrapped into our FlashSystem 9100 that we launched a couple weeks ago. >> So you are both thought leaders in the storage industry, I think that's very clear, the whole notion of storage technology. You work with a lot of customers, you see a lot of use cases. So I want to ask you kind of one quick question to close here, and that is, if there were one thing that you would tell a storage leader, a CIO, or someone who thinks about storage in a broad way, one mindset change that they have to make to start this journey and get it going so that it's going to be successful, what would that one mindset change be? Bina, what do you think? >> You know, I think it's really around simplifying your environment, and making sure that, as you're deploying these new solutions or new capabilities, you've really got a partnership with a vendor that's going to help you make it easier. Take those complex tasks, make them easier, deliver those step-by-step instructions and documentation, and be right there when you need their assistance. I think that's going to be really important. >> So looking at it from a portfolio perspective, where best-of-breed is still important, but it's got to work together, because it leverages itself. >> Got to work together, absolutely. >> Eric, what would you say? >> Well, I think the key thing is people think storage is storage.
All storage is not the same. One of the central tenets at IBM storage is to make sure that we're integrated with the cloud. We can move data around transparently, easily, simply. Bina pointed out the simplicity. If you can't support the cloud, then you're really just a storage box. That's not what IBM does. Over 40 percent of what we sell is actually storage software. All that software works with all of our competitors' gear. In fact, our Spectrum Virtualize for Public Cloud, for example, can simultaneously have data sets sitting in a cloud instantiation and sitting on premises, and then we can use our Copy Data Management to take advantage of that secondary copy. That's all because we're so cloudified from a software perspective. So, all storage is not the same. You can't think of storage as, "I need the cheapest storage." It's got to be, "How does it drive business value for my oceans of data?" That's what matters most. By the way, we're very cost-effective anyway, especially because our custom flash modules allow us to have a real price advantage. >> You ain't doing business at a level of 100 petabytes if you're not cost-effective. >> Right. That's what we see as really critical: storage is not storage. Storage is really about data and information. >> Let me summarize your point, if I can, really quickly. In other words, we have to think about storage as the first step to great data management. >> Absolutely, absolutely, Peter. >> Eric, Bina, great conversation. >> Thank you. >> Alrighty. >> Thank you.
Bina Hallman, IBM | CUBE Conversation, 06.15.18
(upbeat music) >> Bina, it's great to see you again. Thanks for coming back to theCUBE and participating in this digital community event. >> Oh, thanks for having me. It's an exciting event. I'm looking forward to it. >> So, Bina, I want to build on some of the stuff that we talked to Eric about. Eric did a good job of articulating the overall customer challenge. As IBM conceives how it's going to approach customers and help them solve these challenges, let's talk about some of the core values that IBM brings to bear. What are the three things that IBM really focuses on as it thinks about its core values to approach these challenges? >> Sure, sure. It's really around helping the client, providing a simple, one-stop-shopping kind of approach, ensuring that we're doing all the right things to bring the capabilities together so that clients don't have to take different component technologies and put them together themselves. They can focus on providing business value. And it's really around delivering the economic benefits around capex and opex, delivering a set of capabilities that help them move on their journey to a data-driven multi-cloud, to make it easier and make it simpler. >> So making sure that it's one place they can go where they can get the solution. But IBM has a long history of engineering. Are you doing anything special in terms of pretesting, prepackaging some of these things to make it easier? >> Yeah; we, over the years, have worked with many of our clients around the world on helping them achieve their vision, their strategy around multi-cloud. And in that journey and that set of experiences, we've identified some key solutions that really do make it easier, and so we're leveraging the breadth of IBM, the power of IBM, making those investments to deliver a set of solutions that are pretested, that are supported at the solutions level, really focusing on delivering and underpinning those solutions with blueprints, step-by-step documentation. And as clients deploy these solutions, if they run into challenges, having IBM support to assist, really bringing it all together. This notion of a multi-cloud architecture is around delivering modern infrastructure capabilities, NVMe acceleration, but also some of our really core differentiation that we deliver through FlashCore, data reduction capabilities, along with things like modern data protection. That segment is changing, and we really want to enable clients, their IT and their line of business, to really free them up to focus on business value versus putting these components together. So it's really around taking those complex things and making them easier for clients. Get improved RPO and RTO, get improved performance, get improved cost, but also flexibility and agility, which are very critical. >> Well, the history of storage has been tradeoffs. This disk can only go that fast, and that tape can only go that fast. But now, when we start to think about flash and NVMe, the tradeoffs are not as acute as they used to be. Are IBM's engineering chops capable of showing how you can, in fact, have almost all of this at one time? >> Oh, absolutely. The breadth and the capabilities in our R&D, and the research capabilities, also the experiences that I talked about, the engagements, putting all of that together to deliver some key solutions and capabilities like...
Look, everybody needs backup and archive: backup to recover your data in case a disaster occurs, archive for long-term retention. That data management, the data protection segment, is going through a transformation. New emerging capabilities, new ways to do backup. And what we're doing is pulling all of that together. The things that we introduced, for example, our Protect Plus in the fourth quarter, along with the FS-9100 and the cloud capabilities, deliver a solution around data protection and data reuse, so that you have a modern backup approach for both virtual and physical environments that is really based on things like snapshots and mountable copies. So you're not using that traditional approach of recovering your copy from a backup by bringing it back. Instead, all you're doing is mounting one of those copies, and instantly getting your application back and running for operational recovery. >> So, to summarize some of those values: one-stop, pretested, advanced technologies smartly engineered. You guys did something interesting on July 10th. Why don't you talk about how those values and the understanding of the problem manifest themselves as kind of an exciting set of new products that you introduced on July 10th? >> Absolutely. On July 10th, we not only introduced our flagship FlashSystem FS-9100, which delivers some amazing client value around the economic benefits of capex and opex reduction, but also seamless data mobility, data reuse, security, all the things that are important for a client on their cloud journey. In addition to that, we infused that offering with AI-based predictive analytics, and, of course, that performance and NVMe acceleration is really key. But in addition to doing that, we've also introduced some very exciting solutions, really three key solutions: one around data protection and data reuse, to enable clients to get that agility; the second around business continuity and data reuse, to be able to really reduce the expense of having business continuity. Today's environment is a high-risk environment; it's inevitable to have disruptions, but being prepared to mitigate some of those risks and having operational continuity is important, and doing things like leveraging the public cloud for your DR capabilities is very important, so we introduced a solution around that. And the third is around private cloud: taking your IBM storage, the FS-9100, along with the heterogeneous environment you have, and making it cloud-ready, getting the cloud efficiencies, making it to where you can use it for environments to create things like cloud-native applications that are portable from on-prem into the cloud. So those are some of the key ways that we've brought this together to really deliver on client value. >> So can you give us just one quick use case of some of your clients that are applying these technologies to solve their problems? >> Yeah, so let me use the first one that I talked about, the data protection and data reuse. So, to be able to take your on-premises environment, really apply an abstraction layer, set up catalogs, set up SLAs and access control, but then be able to step away and manage that storage all through APIs. We have a lot of clients that are doing that, and then taking that, making the snapshots, using those copies for things like, well, there's disaster recovery, or secondary use cases like analytics and DevOps.
DevOps is a really important use case, and our clients are really leveraging some of these capabilities for it, because you want to make sure that, as application developers are developing their applications, they're working with the latest data, making sure that the testing they're doing is meaningful and finding the maximum number of defects, so you get the highest quality of code coming out of them. And being able to do that in a self-service-driven way, so that they're not having to slow down their innovation. We have clients leveraging our capabilities for those kinds of use cases. >> Great conversation! (upbeat music)
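As a rough illustration of the recovery model Bina describes above, here is a minimal, hypothetical Python sketch. None of these class or function names come from a real IBM Protect Plus API; they only contrast a traditional copy-back restore, whose recovery time grows with data size, with mounting a catalogued snapshot copy, whose recovery time stays roughly constant.

```python
class Copy:
    """Stand-in for a catalogued, mountable snapshot copy (hypothetical)."""
    def __init__(self, app_name, taken_at):
        self.app_name, self.taken_at = app_name, taken_at

    def mount(self):
        # A real product would expose the copy in place; here we just
        # return a pretend mount point. No bulk data is copied back.
        return f"/mnt/recovery/{self.app_name}@{self.taken_at}"

class Catalog:
    """Stand-in for the copy catalog (SLAs, access control) Bina mentions."""
    def __init__(self, copies):
        self.copies = copies

    def find_copy(self, app_name, point_in_time):
        # Newest copy taken at or before the requested point in time.
        eligible = [c for c in self.copies
                    if c.app_name == app_name and c.taken_at <= point_in_time]
        return max(eligible, key=lambda c: c.taken_at)

def recover_by_mount(catalog, app_name, point_in_time):
    """Snapshot recovery: the application repoints at the mounted copy."""
    snap = catalog.find_copy(app_name, point_in_time)
    return snap.mount()

# Example: recover the orders database to its state as of hour 15.
catalog = Catalog([Copy("orders-db", 10), Copy("orders-db", 20)])
print(recover_by_mount(catalog, "orders-db", point_in_time=15))
# -> /mnt/recovery/orders-db@10
```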
Bina Hallman & Steven Eliuk, IBM | IBM Think 2018
>> Announcer: Live, from Las Vegas, it's theCUBE. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to IBM Think 2018. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm here with Peter Burris. Our wall-to-wall coverage, this is day two. Everything AI, blockchain, cognitive, quantum computing, smart ledger, storage, data. Bina Hallman is here, she's the Vice President of Offering Management for Storage and Software Defined. Welcome back to theCUBE, Bina. >> Bina: Thanks for having me back. >> Steve Eliuk is here. He's the Vice President of Deep Learning in the Global Chief Data Office at IBM. >> Thank you sir. >> Dave: Welcome to theCUBE, Steve. Thanks, you guys, for coming on. >> Pleasure to be here. >> That was a great introduction, Dave. >> Thank you, appreciate that. Yeah, so this has been quite an event, consolidating all of your events, bringing your customers together. 30,000, 40,000, too many people to count. >> Very large event, yes. >> Standing room only at all the sessions. It's been unbelievable, your thoughts? >> It's been fantastic. Lots of participation, lots of sessions. We brought, as you said, all of our conferences together and it's a great event. >> So, Steve, tell us more about your role. We were talking off camera; we've had Inderpal Bhandari on before, the Chief Data Officer at IBM. You're in that office, but you've got other roles around deep learning, so explain that. >> Absolutely. >> Sort of a multi-tool star here. >> For sure. So, roles and responsibility at IBM and the Chief Data Office, kind of two pillars. We focus in the Deep Learning group on foundation platform components. So, how to accelerate the infrastructure and platform behind the scenes, to accelerate the ideation-to-product phase. We want data scientists to be very effective, and for us to deliver our projects very, very quickly. That said, I mentioned projects; on the applied side, we have a number of internal use cases across IBM. And it's not just a handful, it's in the order of hundreds, and those applied use cases are part of the cognitive plan, per se, and each one of those is part of the transformation of IBM into a cognitive enterprise. >> Okay, now, we were talking to Ed Walsh this morning, Bina, about how you collaborate with colleagues in the storage business. We know you guys have been growing, >> Bina: That's right. >> It's the fourth straight quarter, and that doesn't even count, >> That's right, that's right. >> Dave: some of the stuff that you guys ship on the cloud in storage. So talk about the collaboration across the company. >> Yeah, we've had some tremendous collaboration, you know, across the broader IBM, bringing all of that together, and that's one of the things that we're talking about here today with Steve and team: as they built out their cognitive architecture, being able to then leverage some of our capabilities and the strengths that we bring to the table as part of that overall architecture. And it's been a great story, yeah. >> So what would you add to that, Steve? >> Yeah, absolutely refreshing. You know, I've built supercomputers in the past, specifically for deep learning, and coming on board at IBM about a year ago, seeing the elastic storage solution, or server, >> Bina: Yeah, Elastic Storage Server, yep. >> It handles a number of different aspects of my pipeline very uniquely. So for starters, I don't want to worry about rolling out new infrastructure all the time.
I want to be able to grow my team, to grow my projects, and that's what's nice about ESS: it's extensible. I'm able to roll out more projects, more people, multi-tenancy, et cetera, and it supports us effectively. Especially, you know, it has very unique attributes, like the read-only performance speed and the random access of data, which are very unique to the offering. >> Okay, so you're a customer of Bina's, right? >> I am, 100%. >> What do you need from infrastructure for deep learning and AI? You mentioned some attributes before, but take it down a little bit. >> Well, the reality is, there are many different aspects, and if anything kind of breaks down, then the data science experience breaks down. So, we want to make sure that everything from the interconnect of the pipelines is effective. You heard Jensen earlier today from Nvidia; we've got to make sure that we have compute devices that, you know, are effective for the computation that we're rolling out on them. But that said, if those GPUs are starved by data, if we don't have the data available, which we're drawing from ESS, then we're not making effective use of those GPUs. It means we have to roll out more of them, et cetera, et cetera. And more importantly, the time for experimentation is elongated, so that whole ideation-to-product timeline that I talked about is elongated. If anything breaks down, so, we've got to make sure that the storage doesn't break down, and that's why this is awesome for us. >> So let me, especially from a deep learning standpoint, throw out a little bit of history, and let me hear your thoughts. So, years ago, the data was put as close to the application as possible. About 10, 15 years ago, we started separating the data from the application, the storage from the application, and now we're moving the algorithm down as close to the data as possible. >> Steve: Yeah. >> At what point in time do we stop calling this storage, and start acknowledging that we're talking about a fabric that's actually quite different, because we put a lot more processing power as close to the data as possible? We're not just storing. We're really doing truly, deeply distributed computing. What do you think? >> There are a number of different areas where that's coming from. Everything from switches, to storage, to memory that's doing computing very close to where the data actually resides. Still, I think that, you know, you can look all the way back to the Google File System: moving computation to where the data is, as close as possible, so you don't have to transfer that data. I think that as time goes on, we're going to get closer and closer to that, but still, we're limited by the capacity of very fast storage. NVMe, very interesting technology, still limited. You know, how much memory do we have on the GPUs? 16 gigs; 24 is interesting, 48 is interesting; the models that I want to train are in the hundreds of gigabytes. >> Peter: But you can still parallelize that. >> You can parallelize it, but there's not really anything that's true model parallelism out there right now. There are some hacks and things that people are doing, but I think we're getting there. It's still some time out, but moving it closer and closer means we don't have to spend the power, the latency, et cetera, to move the data.
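Steve's point about GPUs being starved by data is usually attacked by overlapping storage reads with compute, so the accelerator always has its next batch staged. Below is a minimal, hypothetical Python sketch of that double-buffering idea; the `read_batch` callable and the training loop are assumptions for illustration, not anything from IBM's or the Chief Data Office's actual stack (a real pipeline would typically use a framework's data loader, but the logic is the same).

```python
import queue
import threading

def prefetching_loader(read_batch, num_batches, depth=2):
    """Yield batches while a background thread reads ahead.

    read_batch: callable taking a batch index and returning that batch,
                e.g. a read from a parallel file system such as ESS.
    depth:      how many batches to stage ahead of the consumer.
    """
    staged = queue.Queue(maxsize=depth)  # bounded, so we don't exhaust memory

    def producer():
        for i in range(num_batches):
            staged.put(read_batch(i))    # blocks once `depth` batches are staged
        staged.put(None)                 # sentinel: no more data

    threading.Thread(target=producer, daemon=True).start()
    while (batch := staged.get()) is not None:
        # The consumer computes on this batch while the producer is already
        # reading the next one, so the GPU spends less time waiting on storage.
        yield batch

# Usage sketch: train_step is whatever consumes a batch on the GPU.
# for batch in prefetching_loader(load_fn, num_batches=1000):
#     train_step(batch)
```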
>> So, does that mean that the rate of increase of data, and the size of the objects we're going to be looking at, is still going to exceed the rate of our ability to bring algorithms and storage, or algorithms and data, together? What do you think? >> I think it's getting closer, but I can always just look at the bigger problem. I'm dealing with 30 terabytes of data for one of the problems that I'm solving. I would like to be using 60 terabytes of data, if I could do it in the same amount of time and I wasn't having to transfer it. With that said, if you gave me 60, I'd say, "I really wanted 120." So, it doesn't stop. >> David: (laughing) You're one of those kind of guys. >> I'm definitely one of those guys. I'm curious what it would look like. Because what I see right now is that it would be advantageous, and I would like to do it, but I ran 40,000 experiments with 30 terabytes of data. It would be four times the amount of transfer if I had to run that many experiments with 120. >> Bina, what do you think? What is the fundamental, especially from a software-defined side, what does the fundamental value proposition of storage become, as we start pushing more of the intelligence close to the data? >> Yeah, but you know the storage layer fundamentally is software defined. You still need that setup, the protocols, and the file system, the NFS, right? And so some of that still stays relevant, even as you kind of separate some of the physical storage or flash from the actual compute. I think there's still a relevance when you talk about software-defined storage there, yeah. >> So you don't expect that there's going to be any particular architectural change? I mean, NVMe is going to have a real impact. >> NVMe will have a real impact, and there will be this notion of composable systems, and we will see some level of advancement there, of course, and that's around the corner, actually, right? So I do see it progressing from that perspective. >> So what's underneath it all, what actually, what products? >> Yeah, let me share a little bit about the product. So, what Steve and team are using is our Elastic Storage Server. So, I talked about software-defined storage. As you know, we have a very complete set of software-defined storage offerings, and within that, our strategy has always been to allow the clients to consume the capabilities the way they want: as software only on their own hardware, as a service, or as an integrated solution. And what Steve and team are using is an integrated solution with our Spectrum Scale software, along with our flash and POWER9-based Power Systems servers. And on the software side, Spectrum Scale is a very rich offering that we've had in our portfolio: a highly scalable file system, one of the solutions that powers a lot of our supercomputers, including the project that we are still in the process of delivering on around CORAL, the national labs. So it's the same file system, combined with a set of servers and flash systems, right? Highly scalable, erasure coding, high availability as well as throughput, right? 40 gigabytes per second. So that's the solution, that's the storage and system underneath what Steve and team are leveraging. >> Steve, you talk about "you want more"; what else is on Bina's to-do list from your standpoint? >> Specifically targeted at storage, or? >> Dave: Yeah, what do you want from the products?
>> Well, I think the long-stretch goals are multi-tenancy and the wide array of dimensions that we're dealing with, especially in the Chief Data Office. We have so many different business units, so many of those enterprise problems, in the order of hundreds. How do you effectively use that storage medium while driving so many different users? I think it's still hard. I think we're doing it a hell of a lot better than we ever have, but it's still an open research area. How do you do that? And especially, there are unique attributes to deep learning, like, most of the data is read-only to a certain degree. When data changes there are some consistency checks that could be done, but really, for my experiment that's running right now, it doesn't really matter that it's changed. So there are a lot of nuances specific to deep learning that I would like exploited if I could, and that's some of the interactions that we're working on, to kind of alleviate those pains. >> I was at a CDO conference in Boston last October, and Inderpal Bhandari was there, and he presented this enterprise data architecture to probably about three or four hundred CDOs, chief data officers, in the room. Can you sort of summarize what that is, how it relates to what you do on a day-to-day basis, and how customers are using it? >> Yeah, for sure. So the architecture is kind of like the backbone and rules that govern how we work with the data, right? So, the realities are, there's no sort of blueprint out there. What works at Google, or works at Microsoft, what works at Amazon, that's very unique to what they're doing. Now, IBM is very unique as well. We're a composition of many, many different businesses put together. And now, with the Chief Data Office concept that's come to light across many organizations, like you said, at the conference, three to four hundred people, the requirements are different across the board. So, bringing the data together is one of the big attributes of it: decreasing the number of silos, making a monolithic, reliable, accessible entity that various business units can trust, and that is governed behind the scenes to make sure it's adhering to everyone's policies, whatever their own specific business unit has deemed to be their policy. We have to adhere to that, or the data won't come. And the beauty of the data is, we've moved into this cognitive era; data is valuable, but only if we can link it. If the data is there, but there are no linkages there, what do I do with it? I can't really draw new insights. For all those hundreds of enterprise use cases, I can't build new value in them, because I don't have any more data. It's all about linking the data, and then looking for alternative data sources, or additional data sources, bringing that data together, and then looking at the new insights that come from it. So, in a nutshell, we're doing that internally at IBM to help our transformation, but at the same time creating a blueprint that we're making accessible to CDOs around the world, and our enterprise customers around the world, so they can follow us on this new adventure. A new adventure that is, you know, only two years old. >> Yeah, sure, but it seems like, if you're going to apply AI, you've got to have your data house in order to do that. So this sounds like a logical first step, is that right? >> Absolutely, 100%.
And the realities are, there are a lot of people that are kicking the tires and trying to figure out the right way to do that, and it's a big investment. If you're laying out large sums of money to build this hypothetical better area for data, you need to have a reference design, and once you have that you can actually approach the C-suite and say, "Hey, this is what we've seen, this is the potential, and we have an architecture now, and they've already gone down all the hard paths, so now we don't have to go down as many hard paths." So, it's incredibly empowering for them to have that reference design and to learn from our mistakes. >> Already proven internally, now bringing it to our enterprise clients. >> Well, so we heard Ginni this morning talk about incumbent disruptors, so I'm kind of curious as to any learnings you have there. It's early days, I realize that, but when you think about the discussions: are banks going to lose control of the payment systems? Are retail stores going to go away? Is owning and driving your own vehicle going to be the exception, not the norm? Et cetera, et cetera, et cetera, you know, big questions. How far can we take machine intelligence? Have you seen your clients begin to apply this in their businesses, the incumbents? We saw three examples today, good examples, I thought. I don't think it's widespread yet, but what are you guys seeing? What are you learning, and how are you applying that to clients? >> Yeah, so, I mean certainly for us, with these new AI workloads, we have a number of clients and a number of different types of solutions, whether it's in genomics, or AI deep learning in analyzing financial data, you know, a variety of different types of use cases where we do see clients leveraging the capabilities, like Spectrum Scale, ESS, and other FlashSystem solutions, to address some of those problems. We're seeing it now. Autonomous driving as well, right, to analyze data. >> How about a little roadmap, to end this segment? Where do you want to take this initiative? What should we be looking for as observers from the outside looking in? >> Well, I think, drawing from the endeavors that we have within the CDO, what we want to do is take some of those ideas, look at some of the derivative products that we can take out of there, and figure out how we move those into products. Because we want to make it as simple as possible for the enterprise customer. Because although you see these big-scale companies and all the wonderful things that they're doing, the feedback we've had, which is similar to our own experience, is that those use cases aren't directly applicable for most of the enterprise customers. Some of them are, right? Some of the stuff in vision and brand targeting and speech recognition and all that type of stuff is, but at the same time, the majority, the 90% area, are not. So we have to be able to bring down- sorry, just the echoes, very distracting.
So, there's a wide array, and we want to make that accessible to these various enterprises. So I think that's what you can expect: you know, the reference architecture for the cognitive enterprise data architecture, and you can expect to see some of the products from those internal use cases come out to some of our offerings, like maybe IGC or Information Analyzer, things like that, or maybe Watson Studio. You'll see it trickle out there. >> Okay, alright, Bina, we'll give you the final word. You guys, business is good, four straight quarters of growth, you've got some tailwinds, currency is actually a tailwind for a change. Customers seem to be happy here. Final word. >> Yeah, no, we've got great momentum, and I think in 2018 we've got a great set of roadmap items and new capabilities coming out, so we feel like we've got a real strong future for IBM storage here. >> Great, well, Bina, Steve, thanks for coming on theCUBE. We appreciate your time. >> Thank you. >> Nice meeting you. >> Alright, keep it right there everybody. We'll be back with our next guest right after this. This is day two, IBM Think 2018. You're watching theCUBE. (techno jingle)
Bina Hallman, IBM & Tahir Ali | IBM Interconnect 2017
>> Narrator: Live from Las Vegas, it's theCUBE, covering Interconnect 2017, brought to you by IBM. >> Welcome back to Interconnect 2017 from Las Vegas everybody, this is theCUBE, the leader in live tech coverage. Bina Hallman is here, she's a Cube alum and the vice president of offering management for storage and software defined at IBM, and she's joined by Tahir Ali, who's the director of Enterprise Architecture at the City of Hope Medical Center. Folks, welcome to theCUBE. >> Tahir: Thank you very much. >> Thanks so much for coming on. >> Bina: Thanks for having us. >> So Bina, we'll start with you; you've been on theCUBE a number of times. >> Yes. >> Give us the update on what's happening with IBM and Interconnect. >> Yeah, no, it's a great show. Lots of exciting announcements and such. From an IBM storage perspective, we've been very busy, filling out our all-flash portfolio and adding a complete set of hybrid cloud capabilities to our software-defined storage. It's been a great 2016 and we're off to a great start in 2017 as well. >> Yeah, [Inaudible] going to be here tomorrow. >> That's right. >> So everybody's looking forward to that. So Tahir, let's get into City of Hope. Tell us about the organization and your role. >> Sure, so City of Hope is one of the forty-seven comprehensive cancer centers in the nation. We deal with cancer, of course, HIV, diabetes and other life-threatening diseases. We are maybe 15 to 17 miles east of Los Angeles. My role in particular: I'm the Director of Enterprise Architecture, so all new technologies, all new applications that land at City of Hope, we go through all the background. We see how the security is going to be, how it's going to be implemented in our environment, if it's even possible to implement it. Making sure we talk to our business owners, figure out if there's a disaster recovery requirement, if they have an HA requirement, if it's a clinical versus a non-clinical application. So we look at the whole stack and see how a new application fits into the infrastructure of City of Hope. >> So you guys do a lot of research there as well, or? >> Absolutely. >> Yeah. >> So we are research, we are the small EDU, and we are the medical center, so- >> So a lot of data. >> A whole lot of data. Data just keeps coming and keeps coming, and it's almost like a never-ending stream of data. Now with the data, it's not only just data; the individual data is also growing. So a lot of the imaging that happens for cancer research, or a cancer medical center, gets bigger and bigger per patient now that three-dimensional imaging is here. We look at resolution that is so much more today than it used to be five years ago. So every single image itself is so much bigger today than it used to be five years ago; there's just a sheer difference in the resolution and the dimensions of the data. >> So what are the big drivers in your industry, and how is it affecting the architecture that you put forward? >> Right, so I think there are maybe two or three huge inflection points, or pivot points, that we see today. One of them is just the data stream, as I mentioned earlier. The second is, because of a lot of the PHI and HIPAA data that we have today, security is a huge concern in a lot of the healthcare environment. So those two things, and it's almost like a catch-22: more data is coming in, you have to figure out where you're going to put that data, but at the same time you've got to make sure every single bit is secured enough.
So there's a catch-22, where you have to make sure that data keeps coming and you keep securing the same data. Right, so those are the two things that we see pivoting the way we strategize around our infrastructure. >> It's hard, they're in conflict in a way, >> Tahir: Absolutely. >> Because you've got to lock the data up but then you want to provide accessibility... >> Tahir: Absolutely. >> as well. So paint a picture of your infrastructure and the applications that it's supporting. >> Right, so our infrastructure is mainly in-house, and our EMR is currently off-prem. A lot of clinical and non-clinical also stays in-house with us in our data center, on-prem. Now we are starting to migrate to cloud technologies more and more, as things are just ballooning. So we are in that middle piece where some of our infrastructure is in-house, and slowly we are migrating to the cloud. So we are at a hybrid currently, and as things progress, I think more and more is going to go to the cloud. But for a medical center, security is everything, so we have to be very careful where our data sits. >> So Bina, when you hear that from a client, >> Bina: Mm-hmm (affirmative) >> how do you respond? And you know, what do you propose? >> Bina: Yeah. >> How does it all... >> Yeah well- >> come about. >> You know, as we see clients like Tahir and some of the requirements in these spaces, security is definitely a key factor. So as we develop our products, as we develop capabilities, we ensure that security is a number-one focus area for us. Whether it's for the on-prem storage, or whether it's for the data that's in motion, moving from on-prem into the cloud, it's secured completely all the way through, where the client has the control on the security, the keys, et cetera. So a lot goes into making sure, as we architect these solutions for our clients, that we focus on security. And of course some of the other requirements, industry-specific requirements, are also very important, and we focus in on those as well, whether it's regulatory or compliance requirements, right. >> So from a sort of portfolio standpoint, what do you guys do when there have been all kinds of innovations over the last four or five years coming in with flash, we heard about object stores this morning, we've got cloud, you've got block, you've got file, what are you guys doing? >> So we do a lot of different things, from having filers in-house to doing block storage. And the biggest thing these days with big data is, as the data is growing, the security needs are growing, but the end result for the researchers and our physicians is that the data availability needs to be fast. So now comes a bigger catch-22, where the data is so huge but at the same time they want all of that very quickly at their fingertips. So now what do you do? That's where we bring in a lot of the flash to front it. 10 to 12 percent of our infrastructure has flash in the front; this way all the rendering, all the writes that happen, first land on the flash. So everybody who writes feels like it's a very quick write. But there are petabytes and petabytes behind the scenes that could be on-prem, could be on the cloud, but they don't need to know that. Everything lands so fast that it looks like it's just local and fast. So there's a lot of crisscross that is happening. It started maybe four or five years ago, and the speed of data is not going to slow down. The size of data is increasing like crazy, and then security is becoming a bigger and bigger concern, as you know.
Maybe every month or month and a half there's a breach somewhere that people have to deal with. So we have to handle all of that in one shot. So you know, it's more than just the infrastructure itself. There are policies, there are procedures, there's a lot that goes around it. >> So when you think about architecting, obviously you think about workloads and- >> Tahir: Of course. >> what the workload requirement is; it's not one-size-fits-all. >> Tahir: Right, right. >> So where do you start, do you start with- >> Tahir: Sure. >> Sort of, you know, a conversation with the business? >> Sure, sure. >> How much money do you got? >> So we don't really deal with the money at all. We provide the best possible solution for that business requirement. So the conversation happens: "Tell us what you're looking for." "We're looking for a very fast XYZ." "Okay, tell us what exactly you need." "Here's the application, we want it available all the time, and this is how it's going to look; it can't be down because our patients are depending on it." So on and so forth. We take that, we talk to our vendors. We look at exactly how it's architected. Let's just say it's three-tiered: there's a web tier, there's an app tier, and then there's a database. You already know by default that if it's a database, it's going to go on high-transactional IO, either flash or a very fast spinning disk with a lot of spindles. From there you get the application; it could be a virtual machine, it could not be a virtual machine. From there you get to a web tier. Web tiers are usually always on a virtual infrastructure. Then you decide if you want to put it in a DMZ so people from outside can get to it, or if it's only for internal use. Then you draw the entire architecture diagram out. Then you price it out, and you say, "Okay, if you want this to be always on, maybe you need a database that is always on." Right, or you need a database that replicates 24/7. That has a cost associated with it. If you want an HA application, maybe it's a costlier application; it could be HA, it could not be HA, so there's a cost to that. Web servers are, you know, a cheaper tier of virtual machines. And then there's an architecture diagram where all the requirements are met. And there's a cost associated with it, saying: business unit, here is how much it's going to cost, and this is what you will have. >> Okay, so that's where the economics >> Exactly
And that's where as we architect the solutions, develop the offerings, we ensure that we build-in capabilities, whether it's storage efficiency capabilities like virtualization, or de-dupe or compression. But as well as this automated tiering. Tiering off from flash to lower tier, whether it's on-prem lower, slower- >> Tahir: Could be a disc. >> speed disc or tape or even off to the cloud, right. And being able to do that, provide that I think addresses many of our clients' needs. That's a common requirement that we do hear. >> And as mentioned 10 to 12 percent of it if flash. >> Tahir: Right. >> The rest, you know ninety percent or so is something else. That's economics, correct? >> Right so- >> And how do you see that changing? >> So I think the percentage won't really change. I think the data size will change. So you have to just think about things, just in generality. Just what you do today. You know when you take a picture, maybe you look at it the first three days, even if you have a phone. After three days, maybe you look at it maybe once every two months. After three months, guess what? You will always never look at them. They're kind of moved away from even your memory banks in your head. Then you say, "Oh I was looking through it". And then maybe once in awhile you look at it. So you have to look at the behavior. A lot of the applications have the same behavior, where the new data is required right away. The older the data gets, the more archival state it gets. It gets warmer and then it gets colder. Now, as a healthcare institute we have to devise something that is great financially, also has the security, and put away in a way where we can pull it without having pain to put it back. So that's where the tiering comes to play. Doesn't matter how we do it. >> And your planning assumption is that the cost disparity between flash and other forms of storage will remain. That other- >> So- >> forms will remain cheaper. >> Right, so we are hoping, but I think the hybrid model of flash- So once you do a hybrid with flash and disc, then it becomes a little more economically suitable for a lot of the people. They do the same thing, they do tiering, but they make it look like a bigger platform. So it's like, "We can give you a petabyte "but it's going to look like flash." It doesn't work like that. They might have 300 terabyte of flash, 700- but it's so integrated quickly, that they can pull it and push it. Then there's a read-aheads write-aheads that takes that advantage to make it look like it. That will drop your pricing. The special sauce that transfer the data between slower and flash discs. >> Two questions for you. >> Sure. >> What do you look for in a supplier? And what drives you nuts about a supplier, that you don't want a supplier to do? >> Sure. So personally speaking, this is just my personal opinion. A stable environment a tried and true vendor is important. Somebody who has a core competency of doing this for a longer term is what I personally look at. There's a lot of new players who come in, they stay for a couple of years, they explode, somebody takes them over or they just kind of vanish. Or certain people outside of their core competency. So if Toyota started to make- Because they wanted to save money they said, "Hey Toyota from now on will make "the tires that are called Toyota." But Toyota is not a tire company. Other companies, Bridgestone and Michelin's have been making tires for a very long time. 
So the core competency of Toyota is building the cars, and not the tires. So when I see these people, or the vendors, saying, "Okay, I can give you this, this, this, and that, and the security and that," maybe three out of those five things are not their core competency. So I start to wonder if the whole stack is worth it, because there's going to be some weakness where they don't have the core competency. That's what I look at. What drives me crazy is, every single time somebody comes to meet with me, they want to sell me everything and the kitchen sink under one umbrella. And the answer is one single pane of glass to manage everything. Life is not that easy; I wish it was, but it really is not. (laughs) So those two things are- >> Selling the fantasy, right. Now Bina, we'll give you the last word. Interconnect, give us your final thoughts. What should we know about what's going on in software-defined and IBM storage? >> Yeah, you know, lots of announcements at Interconnect. You heard, as you talked about, cloud object storage; we've got great new pricing models and capabilities there, and in overall software-defined storage we're continuing to innovate, continuing to add capabilities like analytics, and you'll see us doing more and more on cognitive. Cognitive storage management to get more out of the data, to help clients get more and more information and value out of their data. >> What's the gist of the new pricing models, just um- >> Flexible pricing models, for both hybrid as well as tiered on-prem and in between. But really, cold as well as flexible pricing models where, depending on how you use the data, you get consistent pricing between on-prem and in the cloud. >> So more cloud-like pricing. >> Yes, exactly. >> Great. >> Yep. >> Easier consumption, excellent. Well Bina, Tahir, thanks very much for coming to theCUBE. >> Yes, yes, thank you. >> Dave: Pleasure having you. >> Thank you. >> Thank you for having us. >> Dave: You're welcome. Alright, keep it right there everybody, we'll be back with our next guest and a wrap right after this short break. Right back. (upbeat music)
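Tahir's picture of data cooling with age (hot for a few days, warm for a few months, then effectively archival) is exactly the behavior an automated tiering policy encodes. Here is a rough, hypothetical Python sketch of such an age-based policy; the tier names and thresholds are illustrative assumptions, not City of Hope's or IBM's actual rules.

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumed thresholds, loosely mirroring the photo analogy in the interview.
TIER_RULES = [
    (timedelta(days=3), "flash"),           # hot: recent writes and reads
    (timedelta(days=90), "nearline_disk"),  # warm: occasional access
]
COLD_TIER = "tape_or_cloud"                 # cold: archival

def target_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    """Return the tier a file should live on, given its last access time."""
    age = (now or datetime.utcnow()) - last_access
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return COLD_TIER

# Example: a file last touched 10 days ago belongs on nearline disk;
# a background scrubber would compare this to the current tier and migrate.
print(target_tier(datetime.utcnow() - timedelta(days=10)))  # -> nearline_disk
```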
Evolving Your Analytics Center of Excellence | Beyond.2020 Digital
>> Hello, everyone, and welcome to track three of Beyond. My name is Bina, and I am an account executive here at ThoughtSpot, based out of our London office. If the accent's throwing you off, and I don't quite sound as British as you're expecting, it's because the background's Australian. So you can look forward to seeing my face as we go through these next few sessions; I'm going to be introducing the guests as well as facilitating some of the Q&A. So make sure you come and say hi in the chat with any comments, questions, or thoughts that you have. So with that in mind, this whole track, as the title somewhat gives away, is really about everything that you need to know, and all the tips and tricks, when it comes to adoption and making sure that your ThoughtSpot deployment is really, really successful. We're going to be covering everything from user training and onboarding to picking the right use cases, as well as hearing from our customers who have been really successful in doing this before. So with that, though, I'm really excited to introduce our first guest, Kathleen Maley. She is a senior analytics executive with over 15 years of experience in the space, and she's going to be talking to us about all her tips and tricks when it comes to making the most out of your center of excellence, obviously from an analytics perspective. So with that, I'm going to pass the mic to her. But I look forward to continuing the chat with you all in the chat. Come say hi.
We're at this amazing intersection of statistics and technology that effectively eliminates getting the data as a competitive advantage, and this is just as true for analysts who are thinking in terms of career progression as it is for business leaders who have to deliver results for clients and shareholders. So the definition is action-oriented. It's purposeful. It's not about getting the data. It's about influencing and enabling effective decision making. Now, if you're an analyst, this can be scary because it's likely what you spend a huge amount of your time doing, so much so that it probably feels like getting the data is your job. If that's the case, then the emergence of these new automated tools might feel like your job is at risk of becoming obsolete. If you're a business leader, this should be scary because it means that other companies are shooting out in front of you, not because they have better ideas, necessarily, but because they can move so much faster. According to new research from Harvard Business Review, nearly 90% of businesses say they're more successful when they equip those at the front lines with the ability to make decisions in the moment, and organizations who are leading their industries in embracing these decision makers are delivering substantial business value: nearly 50% report increased customer satisfaction, employee engagement, improved product and service quality. So there is no doubt that speed matters, and it matters more and more. But if you're feeling a little bit nervous, I want you to think of it a little differently. Think about the movie Hidden Figures. The job of the women in Hidden Figures was to calculate orbital trajectories, to get men into space and then get them home again. And at the start of the movie, they did all the required mathematical calculations by hand. At the end of the movie, when technology eliminated the need to do those calculations by hand, the hidden figures faced essentially the same decision many of you are facing now: do I become obsolete, or do I develop a new set of, in their case, computer science skills required to keep doing the job of getting them into space and getting them home again? The hidden figures embraced the latter. They stayed relevant and they increased their value because they were able to do more of what really mattered. So what we're talking about here is, how do we embrace the new technology that unburdens us? And how do we upskill and change our ways of working to create a step-function increase in data-enabled value? And the first step, really, in evolving your analytics COE is redefining the role of analytics from getting the data to influencing and enabling effective decision making. So if this is the role of the modern analyst, a strategic thought partner who harnesses the power of data and directs it toward achieving specific business outcomes, then let's talk about how the COEs in which they operate need to change to support this new purpose. First, historical COEs have primarily been about fulfilling data requests. In this scenario, COEs were often formed primarily as an efficiency measure. This efficiency might have come in the form of consistency, fungibility of resources, breaking down silos, creating and building multipurpose data assets.
And under the getting-the-data scenario, that's actually made a lot of sense. For modern COEs, however, the objective is to create an organization that supports strategic business decisioning for individuals and for the enterprise as a whole. So let's talk about how we do that while maintaining the progress made by historical COEs. It's about really extending what we've already done, the progress we've already made. So here I'll cover six primary best practices. None is a silver bullet. Each needs to fit within your own company culture. But these are major areas to consider as you evolve your analytics capabilities. First and foremost, always agree on the purpose and approach of your COE. Successfully evolving your COE starts with developing strategic partnerships with the business leaders that the analytics COE will support. Both parties need to explicitly buy in to the objective and agree on a set of operating principles. And I think the only way to do that is just bringing people to the table, having an open and honest conversation about where you are today, where you wanna be, and then agreeing on how you will move forward together. It's not about your organization or my organization. How do we help the business solve problems that go beyond what we've been able to do today? So moving on. While there's no single organizational model that works for everyone, I generally favor a hybrid model that includes some level of fully dedicated support. This is where I distinguish between to whom the analyst reports and for whom the analyst works. It's another concept that is important to embrace in spirit, because all of the work the analyst does actually comes from the business partner; at least, it shouldn't come from the head of the analytics center of excellence. And analysts who are fully dedicated to a line of business have the time and the practice to develop stronger partnerships, to develop domain knowledge and history, and those are key ingredients to effectively solving business problems. You know, how can you solve a problem when you don't really understand what it is? So as the head of an analytics COE, I'm responsible for making sure that I hire the right mix of skills, that I can effectively manage the quality of my team's work product, that there's a career path that matters to analysts, and all of the other things that go along with talent management. I've got a specialized skill set that allows me to do that. But when it comes to doing the work, the analysts who report to me actually work for the business, and creating some consistency and stability there will make them much more productive. Okay, so getting a bit more tactical: the engagement model answers the question, who do I go to when? And this is often a question that business partners ask of a centralized analytics function or even the hybrid model. Who do I go to when? My recommendation: make it easy for them. Create a single primary point of contact whose job is to build relationships with a specific partner or set of partners, to become deeply embedded in their business and strategies so they know why the business is solving the problems they need to solve, to manage the portfolio of analytical work that's being done on behalf of the partner, and, again, to make it easy for the partner to access the entire analytics ecosystem. Think about the growing complexity of the current analytics ecosystem.
We've got automated insights, business analytics, predictive modeling, machine learning; sometimes the AI is emerging. You also then have the functional business questions to contend with. This was a big one for me in my experience in retail banking. You know, if I'm a deposits pricing executive, which was the line of business role that I ran, and I had a question about acquisitions through the digital channel, do I talk to the checking analyst, or do I talk to the digital analyst? Who owns that question? Who do I go to? So having dedicated POCs, on the flip side, also helps the head of the center of excellence actually manage the team holistically. It reduces the number of entry points and the complexity coming in, so that there is some efficiency. So it really is a win-win. It helps on both sides, significantly. There are several specific operating rhythms I recommend, each acting as a different gear in an integrated system, and this is important: it's an integrated decision system. All of these four operating rhythms serve a specific purpose and work together. So I recommend a business strategy session first, a portfolio management routine, an internal portfolio review, and periodic leadership updates, and I'll say a little bit more about each of those. So the business strategy session is used to set top-level priorities on an annual or semiannual basis. I've typically done this by running half-day sessions that would include a business-led deep dive on their strategy and current priorities. Again, always remembering that if I'm going to try and solve the business problem, I need to know what the business is trying to achieve. Sometimes new requests are added through this process; oftentimes previous requests are de-prioritized or dropped from the list entirely. One thing I wanna point out, however, is that it's the partner who decides priorities. The analyst or I can guide and make recommendations, but at the end of the day, it's up to the business leader to decide what his or her short-term and long-term needs and priorities are. The portfolio management routine is run by the POC, generally on a biweekly or possibly monthly basis. This is where new requests are prioritized. It's great, and critical, that we come together once or twice a year to really think about the big rocks. But then we all go back to work, and every day new requests are coming up. That pipeline has to be managed in an intelligent way. So this is where the key people, both the analyst and the business partners, come together to sort of manage what's coming in, checking it against top priorities: are priorities changing? It's important to recognize that this routine is not a report-out. This routine is really for the POC, who uses it to clarify questions, raise risks, and facilitate decisions with his or her partner so that the work continues. So it should be exactly as long as it needs to be, and, you know, as soon as the POC has the information he or she needs to get back to work, that's what happens. An internal portfolio review is a little bit different. This review is internal to the analytics team and has two main functions. First, it's where the analytics team can continue to break down silos for themselves and for their partners by talking to each other about the questions they're getting and the work that they're doing.
But it's also the forum in which I start to challenge my team to develop a new approach of asking why the request was made. So we're evolving. We're evolving from getting the data to enabling effective business decisioning. And that's new. That's new for a lot of analysts. So the internal portfolio review is a safe space to ask the people who work for me, who report to me, why the partner made this request. What is the partner trying to solve? Okay, senior leadership updates, the last of these four routines: less important for the day-to-day, but significantly important for maintaining the overall health of the COE. I've usually done this through some combination of email summaries, but also standing agenda items on a leadership routine. For me, it is always a shared update that my partner and I present together. We both have our names on it. I typically talk briefly about what we learned in the data; my partner will talk about what she is going to do with it, and, very, very importantly, what it is worth. Okay, a couple more here. Prioritization happens at several levels, and I've alluded to this. It happens within a business unit in the internal portfolio review. It has to happen at times across business units. It also can and should happen enterprise-wide on some frequency. Within business units is the easiest; it happens most frequently. Across business units, it usually comes up as a need when one business leader has a significant opportunity but no available baseline analytical support, for whatever reason. In that case, we might jointly approach another business leader and have an ROI-based discussion about maybe borrowing a resource for some period of time. Again, it's not my decision. I don't in isolation say, oh, project A is worth more than project B, so owner of project B, sorry, you lose, I'm taking those resources. That's not good practice. It's not a good way of building partnerships. That collaboration, what is really best for the business, what is best for the enterprise, is an enterprise decision. It's not a me decision. Lastly, enterprise-level prioritization is probably the least frequent and is aided significantly by the semiannual business strategy sessions. This is the time to look enterprise-wide at all of the business opportunities in play, the potential ROI of each, and jointly decide where to align resources on a more permanent basis, if you will, to make sure that the most important initiatives are properly staffed with analytical support. Okay, so, on funding, briefly: I favor a hybrid model, which I don't hear talked about in a lot of other places. So first, I think it's really critical to provide each business unit with some baseline level of analytical support that is centrally funded as part of a shared-service center of excellence. And if a business leader needs additional support that can't otherwise be provided, that leader can absolutely choose to fund an incremental resource from her own budget that is fully dedicated to the initiative that is important to her business. There are times when that prioritization happens at an enterprise level, and the collective decision is, we are not going to staff this potentially worthwhile initiative. Even though we know it's worthwhile, a business leader might say, you know what? I get it. I want to do it anyway.
And I'm gonna find budget to make that happen. And we create that position, still reporting to the center of excellence for all of the other reasons: the right hire, managing the work product. But that resource, as all resources do, works for the business leader. So, thinking again about the value of having these resources report centrally but work for the business leader: it's very common to hear from a business leader, I can't get what I need from the analytics team. They're too busy. My work falls by the wayside. So I have to hire my own people. My first response is, have we tried putting some of these routines into place? And my second is, you might be right. So fund a resource that's 100% dedicated to you, but let me use my expertise to help you find the right person and manage that person successfully. So at this point, I hope you see, or are starting to see, how these routines really work together and how these principles work together to create a higher level of operational partnership. We collectively know the purpose of a centralized COE. Everyone knows his or her role in doing the work, managing the work, prioritizing the use of this very valuable analytical talent. And we know where higher-order trade-offs need to be made across the enterprise, and we make sure that those decision makers have the information and connectivity to the work and to each other to make those trade-offs. All right, now that we've established the purpose of the modern analyst and the functional framework in which they operate, I want to talk a little bit about the hard part: getting from where many individual analysts and business leaders are today to where we have the opportunity to grow in order to maintain and/or regain that competitive advantage. There's no judgment here. How we operate today is simply an artifact of our historical training, the technology constraints we've been under, and the overall newness of applied analytics as a distinct discipline. But now is the time to start breaking away from some of that and really upping our game. It is hard not because any of these new skills is particularly difficult in and of itself, but because any time you do something for the first time, it's uncomfortable, and you're probably not gonna be great at it the first time or the second time you try. Keep practicing. Again, this is for the analyst and for the business leader to think differently. It gets easier, you know. So as a business leader, when you're tempted to say, hey, so-and-so, I just need this data real quick, and you shoot off that email: pause. You know it's going to help them, and I'll get the answer quicker, if I give them a little context and we have a 10-minute conversation. So if you start practicing these things, I promise you will not look back. It makes a huge difference. For the analyst: become a consultant. This is the new set of skills. It isn't as simple as using layman's terms. You have to have a different conversation. You have to be willing to meet your business partner as an equal at the table. So when they say, hey, so-and-so, can you get me this data, you're not allowed to say yes. You're definitely not allowed to say no. Your reply has to be, help me understand what you're trying to achieve, so I can better meet your needs. And if you don't know what the business is trying to achieve, you will never be able to help them get there.
This is a must-have: develop project management skills. All of a sudden, you're a POC. You're in charge of keeping track of everything that's coming in. You're in charge of understanding why it's happening. You're responsible for making sure that your partner is connected across the rest of the analytics team and ecosystem. That takes some project management skills. Be business-focused, not data-focused. Nobody cares what your algorithm is, I hate to break it to you. We love that stuff. We love talking about, oh my gosh, look, I did this analysis, and I didn't think this is the way I was gonna approach it, and I did, and I found this thing, isn't it amazing? Those are the things you talk about internally with your team, because when you're doing that, what you're doing is justifying and sort of proving the rightness of your answer. It's not valuable to your business partner. They're not going to know what you're talking about anyway. Your job is to tell them what you found. Drawing conclusions: historically, analysts spent so much of their time just getting data into a PowerPoint, 50 pages of summarized data. Now the job is to study that summarized data and draw a conclusion. Summarized data doesn't explain what's happening; it's just clues to what's happening. And it's your job as the analyst to puzzle out that mystery. If a partner asks you a question stated in words, your answer should be stated in words, not summarized data. That is a new skill for some; again, it takes practice, but it changes your ability to create value. So think about that. Your job is to put the answer on the page with supporting evidence. Everything else falls on the cutting room floor. Everything. Everything has to be tied to ROI. You're a cost center, and, you know, once you become integrated with your business partner, once you're working on business initiatives, all of a sudden this actually becomes very easy to do, because you will know the business case that was put forth for that business initiative. You're part of that business case. So with these routines in place, with this new way of working, with this new way of thinking, it's actually pretty easy to justify and to demonstrate the value that analytics brings to an organization. And I think that's important whether or not the organization is asking for it through a formalized reporting routine. Now for the business partner: understand that this is a transformation and be prepared to support it. It's ultimately about providing a higher level of support to you, but the analysts can't do it unless you agree to this new way of working. So include your partner as a member of your team. Talk to them about the problems you're trying to solve. Go beyond asking for the data. Be willing and able to tie every request to an overarching business initiative, and be poised for action before a solution is commissioned. This is about preserving the precious resources you have at your disposal. Often, an exploratory analysis is required to determine the value of a solution, but the solution itself should only be built if there's a plan, staffing and funding in place to implement it. So in closing: transformation is hard. It requires learning new things. It also requires overriding deeply embedded muscle memory.
The more you can approach these changes as a team, knowing you won't always get it right and that you'll have to hold each other accountable for growth, the better off you'll be and the faster you will make progress together. Thanks. >> Thank you so much, Kathleen, for that great content, and thank you all for joining us. Let's take a quick stretch and get ready for the next session. Starting in a few minutes, you'll be hearing from ThoughtSpot's David Coby, director of Business Value Consulting, and Blake Daniel, customer success manager, as they discuss putting use cases to work for your business.
SUMMARY :
But look forward to continuing the chat with you all in the chat. This is for the analyst and for the business leader to think differently. Get ready for the next session.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Kathleen | PERSON | 0.99+ |
Kathleen Maley | PERSON | 0.99+ |
David Coby | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
10 minute | QUANTITY | 0.99+ |
Blake Daniel | PERSON | 0.99+ |
second | QUANTITY | 0.99+ |
Bank of America | ORGANIZATION | 0.99+ |
London | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
7 | QUANTITY | 0.99+ |
Each | QUANTITY | 0.99+ |
Both parties | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
10 years | QUANTITY | 0.99+ |
nearly 50% | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
Hidden figures | TITLE | 0.99+ |
over 15 years | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
second time | QUANTITY | 0.98+ |
first guest | QUANTITY | 0.98+ |
once | QUANTITY | 0.98+ |
nearly 90% | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Bina | PERSON | 0.98+ |
single | QUANTITY | 0.98+ |
first time | QUANTITY | 0.98+ |
Midwest | LOCATION | 0.97+ |
three analysts | QUANTITY | 0.97+ |
one | QUANTITY | 0.96+ |
first response | QUANTITY | 0.96+ |
two sided | QUANTITY | 0.94+ |
first step | QUANTITY | 0.92+ |
half day | QUANTITY | 0.91+ |
Business Value Consulting | ORGANIZATION | 0.9+ |
POC | ORGANIZATION | 0.9+ |
two main functions | QUANTITY | 0.89+ |
each business unit | QUANTITY | 0.88+ |
twice a year | QUANTITY | 0.86+ |
couple | QUANTITY | 0.81+ |
ThoughtSpot | ORGANIZATION | 0.77+ |
Andi | PERSON | 0.76+ |
six primary best | QUANTITY | 0.76+ |
one leader | QUANTITY | 0.7+ |
three | QUANTITY | 0.68+ |
Review | ORGANIZATION | 0.66+ |
biweekly | QUANTITY | 0.65+ |
Australian | OTHER | 0.63+ |
four routines | QUANTITY | 0.61+ |
Harvard | ORGANIZATION | 0.54+ |
Business | TITLE | 0.51+ |
British | LOCATION | 0.5+ |
Beyond.2020 | OTHER | 0.5+ |
IBM Flash System 9100 Digital Launch
(bright music) >> Hi, I'm Peter Burris, and welcome to another special digital community event, brought to you by theCUBE and Wikibon. We've got a great session planned for the next hour or so. Specifically, we're gonna talk about the journey to the data-driven multi-cloud. Sponsored by IBM, with a lot of great thought leadership content from IBM guests. Now, what we'll do is, we'll introduce some of these topics, we'll have these conversations, and at the end, this is gonna be an opportunity for you to participate, as a community, in a crowd chat, so that you can ask questions, voice your opinions, hear what others have to say about this crucial issue. Now why is this so important? Well Wikibon believes very strongly that one of the seminal features of the transition to digital business, driving new-type AI classes of applications, et cetera, is the ability to use flash-based storage systems and related software to do a better job of delivering data to more complex, richer applications, faster, and that's catalyzing a lot of the transformation that we're talking about. So let me introduce our first guest. Eric Herzog is the CMO and VP Worldwide Storage Channels at IBM. Eric, thanks for coming on theCUBE. >> Great, well thank you Peter. We love coming to theCUBE, and most importantly, it's what you guys can do to help educate all the end-users and the resellers that sell to them, and that's very, very valuable, and we've had good feedback from clients and partners, that, hey, we heard you guys on theCUBE, and very interesting, so I really appreciate all the work you guys do. >> Oh, thank you very much. We've got a lot of great things to talk about today. First, and I want to start it off, kick off the proceedings for the next hour or so by addressing the most important issue here. Data-driven. Now Wikibon believes that digital transformation means something, it's the process by which a business treats data as an asset, and re-institutionalizes its work and changes the way it engages with customers, et cetera. But this notion of data-driven is especially important because it elevates the role that storage is gonna play within an organization. Sometimes I think maybe we shouldn't even call it storage. Talk to us a little bit about data-driven and how that concept is driving some of the concepts in innovation that are represented in this and future IBM products. >> Sure. So I think the first thing, it is all about the data, and it doesn't matter whether you're a small company, like Herzog's Bar and Grill, or the largest Fortune 500 in the world. The bottom line is, your most valuable asset is your data, whether that's customer data, supply chain data, partner data that comes to you, that you use, services data, the data you guys sell, right? You're an analysis firm, so you've got data, and you use that data to create your analysis, and then you use that as a product. So, data is the most critical asset. At the same time, data always goes onto storage. So if that foundation of storage is not resilient, is not available, is not performant, then either A, it's totally unavailable, right, you can't get to the customer data. B, there's a problem with the data, okay, so you're doing supply chain and if the storage corrupts the data, then guess what? You can't send out the T-shirts to the right retail location, or have it available online if you're an online retailer. >> Or you sent 200,000 instead of 20, and you get stuck with the bill. >> Right, exactly.
So data is that incredible asset, and then underneath, think of storage as the foundation of a building. Data is your building, okay, and all the various aspects of that data, customer data, your data, internal data, everything you're doing, that's the building. If the foundation of the building isn't rock solid, the building falls down. Whether your building is big or small, that's what storage does, and then storage can also optimize the building above it. So think of it as more than just the foundation, but the foundation, if you will, that almost has like a tree, and has got things that come up from the bottom, and have that beautiful image, and storage can help you out. For example, metadata. Metadata, which is data about data, could be used by analytics packages, well guess what? The metadata about data could be exposed by the storage company. So that's why data-driven is so important from an end-user perspective, and why storage is that foundation underneath a data-driven enterprise. >> Now we've seen a lot of folks talk about how cloud is the centerpiece of thinking about infrastructure. You're suggesting that data is the centerpiece of infrastructure, and cloud is gonna be an implementation decision: where do I put the workloads, costs, all the other elements associated with it. But it suggests ultimately that data is not gonna end up in one place. We have to think about data as being where it needs to be to perform the work. That suggests multi-cloud, multi-premise. Talk to us a little bit about the role that storage and multi-cloud play together. >> So let's take multi-cloud first and peel that away. So first of all, certain companies don't want to use a public cloud. Whether it's a security issue, and actually some people have found out that public cloud providers, no matter who the vendor is, are sort of a razor-and-razor-blades model. Very cheap to put the storage out there, but you want certain SLAs, guess what? The cloud vendors charge more. If you move data around a lot, in and out as you were describing, because it's really that valuable, guess what? The cloud provider charges you for that ingress and egress. So it's almost the razor and the razor blades. So A, there's a cost factor in public only. B, you've got people that have security issues. C, what we've seen is, in many cases, hybrid. So certain datasets go out to the cloud and other datasets stay on the premises. So you've got that aspect of multi, which is public, private or hybrid. The second aspect, which is very common in bigger companies that are either divisionalized or large geographically, is literally the usage, in a hybrid or a public cloud environment, of multiple cloud vendors. So for example, in several countries the data has to physically stay within the confines of that country. So if you're a big enterprise and you've got offices in 200 different, well not 200, but 100 different countries, and 20 of 'em you have to keep in that country by law. If your cloud provider doesn't have a data center there, you need to use a different cloud provider. So you've got that. And you also have, I would argue, that the cloud is not new anymore. The internet is the original cloud. So it's really old. >> Cloud in many respects is the programming model, or the mature programming model, for internet-based applications. >> I'd agree with that. So what that means is, as it gets more mature, from the mid-sized company up, all of a sudden procurement's involved.
So think about the way networking, storage and servers, and sometimes even software, were bought. The IT guy, the CIO, the line of business might specify, I want to use it, but then it goes to procurement. In a mid-size to big company it's like, great, are we getting three bids on that? So we've also seen that happen, particularly with larger enterprises, where, well, you were using IBM Cloud, that's great, but are you getting a quote from Microsoft or Amazon, right? So those are the two aspects we see in multi-cloud, and by the way, that can be a very complex situation dealing with big companies. So the key thing that we do at IBM is make sure that whichever model you take, public, private or hybrid, or multiple public clouds, or multiple public cloud providers using a hybrid configuration, we can support that. So things like our transparent cloud tiering; we've also recently created some solution blueprints for multi-clouds. These things allow you to simply and easily deploy. Storage has to be viewed as transparent to a cloud. You've gotta be able to move the data back and forth, whether that be backing the data up, or archiving the data, or secondary data usage, or whatever that may be. And so storage really has gotta be multi-cloud, and we've been doing those solutions already. In fact, on the software side of the IBM portfolio for storage, we have hundreds of cloud providers, mid, big and small, that use our storage software to offer backup as a service or storage as a service, and we're again the software foundation underneath what an end-user would buy as a service from those cloud providers. >> So I want to pick up on a word you used, simplicity. So, you and I are old infrastructure hacks, and for many years I used to tell my management, infrastructure must do no harm. That's the best way to think about infrastructure. Simplicity is the new value proposition; complexity remains the killer. Talk to us a little bit about the role that simplicity, in packaging and service delivery and everything else, is playing in shaping the way you guys, IBM, think about what products, what systems, and when. >> So I think there's a couple of things. First of all, it's all about the right tool for the right job. So you don't want to over-sell and sell a big, giant, high-end all-flash array, for example, to a small company. They're not gonna buy that. So we have created a portfolio, of which our FlashSystem 9100 is our newest product, but we've got a whole set of portfolios from the entry space to the mid range to the high end. We also have stuff that's tuned for applications, so for example, our Elastic Storage Server, which comes in an all-flash configuration, is ideal for big data analytics workloads. Our DS8000 family of flash is ideal for mainframe attach, and in fact close to 65% of all mainframe-attached storage is from IBM. But you have the right tool for the right job, so that's item number one. The second thing you want to do is make it easier and easier to use. Whether that be configuring the physical entity itself, so how do you cable it, how do you rack and stack it, make sure that it easily integrates into whatever else they're putting together in their data center, be it a cloud data center or a traditional on-premises data center, it doesn't matter. The third thing is all about the software. So how do you have software that makes the array easier and easier to use, and is heavily automated based on AI.
So the old automation way, and we've both been in that era, was you set policies. Policy-based management, when it came out 10 years ago, was a transformational event. Now it's all about using AI in your infrastructure. Not only does your storage need to be right to enable AI at the server workload level, but we're saying we've actually deployed AI inside of our storage, making it easier for the storage manager or the IT manager, and in some cases even the app owner, to configure the storage 'cause it's automated. >> Going back to that notion that the storage knows something about the metadata, too. >> Right, exactly, exactly. So the last thing is our multi-cloud blueprints. So in those cases, what we've done is create these multi-cloud blueprints. For example, disaster recovery and business continuity using a public cloud. Or secondary data use in a public cloud: how do you go ahead and take a snapshot, a replica or a backup, and use it for dev-ops or test or analytics? And by the way, our Spectrum Copy Data Management software allows you to do that, but you need a blueprint so that it's easy for the end user, or for those end users who buy through our partners; our partners then have this recipe book, these blueprints. You put them together, use the software that happens to come embedded in our new FlashSystem 9100, and then they use that and create all these various different recipes. Almost, I hate to say it, like a baker would do. They use some base ingredients in baking, but you can make cookies, candies, all kinds of stuff; like a donut is essentially a baked good that's fried. So all these things use the same base ingredients, and that software that comes with the FlashSystem 9100 are those base ingredients, reformulated in different models to give all these multi-cloud blueprints. >> And we've gotta learn more about vegetables so we can talk about salad in that metaphor, (Eric laughing) you and I. Eric, once again. >> Great, thank you. >> Thank you so much for joining us here on theCUBE. >> Great, thank you. >> Alright, so let's hear this come to life in the form of a product video from IBM on the FlashSystem 9100. >> Some things change so quickly, it's impossible to track with the naked eye. The speed of change in your business can be just as sudden, and requires the ability to rapidly analyze the details of your data. The new IBM FlashSystem 9100 accelerates your ability to obtain real-time value from that information and rapidly evolve to a multi-cloud infrastructure, fueled by NVMe technology, in one powerful platform. IBM FlashSystem 9100 combines the performance of IBM FlashCore technology, the efficiency of IBM Spectrum Virtualize, and IBM software solutions to speed your multi-cloud deployments, reduce overall costs, plan for performance and capacity, and simplify support, using cloud-based IBM Storage Insights to provide AI-powered predictive analytics, and simplify data protection with a storage solution that's flexible, modern, and agile. It's time to re-think your data infrastructure. (upbeat music) >> Great to hear about the IBM FlashSystem 9100, but let's get some more details. To help us with that, we've got Bina Hallman, who's the Vice President of Offering Management at IBM Storage. Bina, welcome to theCUBE. >> Well, thanks for having me. It's an exciting event, we're looking forward to it. >> So Bina, I want to build on some of the stuff that we talked to Eric about. Eric did a good job of articulating the overall customer challenge.
As IBM conceives how it's going to approach customers and help them solve these challenges, let's talk about some of the core values that IBM brings to bear. What would you say are the, say, three things that IBM really focuses on as it thinks about its core values to approach these challenges? >> Sure, sure. It's really around helping the client, providing a simple one-stop shopping approach, ensuring that we're doing all the right things to bring the capabilities together so that clients don't have to take different component technologies and put them together themselves. They can focus on providing business value. And it's really around delivering the economic benefits around CapEx and OpEx, delivering a set of capabilities that help them move on their journey to a data-driven multi-cloud. Make it easier and make it simpler. >> So, making sure that it's one place they can go where they can get the solution. But IBM has a long history of engineering. Are you doing anything special in terms of pre-testing, pre-packaging some of these things to make it easier? >> Yeah, over the years we have worked with many of our clients around the world, helping them achieve their vision and their strategy around multi-cloud, and in that journey and that set of experiences, we've identified some key solutions that really do make it easier. And so we're leveraging the breadth of IBM, the power of IBM, making those investments to deliver a set of solutions that are pre-tested and supported at the solution level. Really focusing on delivering and underpinning the solutions with blueprints: step-by-step documentation, and, as clients deploy these solutions and run into challenges, having IBM support to assist. Really bringing it all together. This notion of a multi-cloud architecture is around delivering modern infrastructure capabilities, NVMe acceleration, but also some of our really core differentiation that we deliver through FlashCore data reduction capabilities, along with things like modern data protection. That segment is changing, and we really want to enable clients, their IT, and their line of business, to really free them up and focus on business value versus putting these components together. So it's really around taking those complex things and making them easier for clients. Get improved RPO, RTO, get improved performance, get improved costs, but also flexibility and agility, which are very critical. >> So it sounds like, therefore, I mean, the history of storage has been trade-offs: this disk can only go that fast, and that tape can only go that fast. But now, when we start thinking about flash and NVMe, the trade-offs are not as acute as they used to be. Are IBM's engineering chops capable of showing how you can in fact have almost all of this at one time? >> Oh, absolutely. The breadth and the capabilities in our R and D and the research capabilities, also our experiences that I talked about, engagements, putting all of that together to deliver some key solutions and capabilities. Like, look, everybody needs backup and archive. Backup to recover your data in case a disaster occurs, archive for long-term retention. That data management, data protection segment is going through a transformation. New emerging capabilities, new ways to do backup.
And what we're doing is pulling all of that together, with things that we introduced, for example, our Spectrum Protect Plus in the fourth quarter, along with this FS 9100 and the cloud capabilities, to deliver a solution around data protection and data reuse, so that you have a modern backup approach for both virtual and physical environments that is really based on things like snapshots and mountable copies. So you're not using that traditional approach of recovering your copy from a backup by bringing it back. Instead, all you're doing is mounting one of those copies and instantly getting your application back up and running for operational recovery. >> So to summarize some of those values: one stop, pre-tested, advanced technologies, smartly engineered. You guys did something interesting on July 10th. Why don't you talk about how those values, and the understanding of the problem, manifested so fast, in kind of an exciting set of new products that you guys introduced on July 10th. >> Absolutely. On July 10th we not only introduced our flagship FlashSystem, the FS 9100, which delivers some amazing client value around the economic benefits of CapEx and OpEx reduction, but also seamless data mobility, data reuse, security, all the things that are important for a client on their cloud journey. In addition to that, we infused that offering with AI-based predictive analytics, and of course that performance and NVMe acceleration is really key. But in addition to doing that, we've also introduced some very exciting solutions. Really three key solutions. One is around data protection and data reuse, to enable clients to get that agility. The second is around business continuity and data reuse, to be able to really reduce the expense of having business continuity in today's environment. It's a high-risk environment; it's inevitable to have disruptions, but being prepared to mitigate some of those risks and have operational continuity is important, by doing things like leveraging the public cloud for your DR capabilities. That's very important, so we introduced a solution around that. And the third is around private cloud: taking your IBM storage, your FS 9100, along with the heterogeneous environment you have, and making it cloud-ready. Getting the cloud efficiencies, making it to where you can use it to create things like native cloud applications that are portable from on-prem into the cloud. So those are some of the key ways that we brought this together to really deliver on client value. >> So could you give us just one quick use case of your clients that are applying these technologies to solve their problems? >> Yeah, so let me use the first one that I talked about, the data protection and data reuse. So, to be able to take your on-premises environment, really apply an abstraction layer, set up catalogs, set up SLAs and access control, but then be able to step away and manage that storage all through APIs. We have a lot of clients that are doing that, and then taking that, making the snapshots, using those copies for things like disaster recovery, or secondary use cases like analytics and dev-ops.
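Bina's description here, apply an abstraction layer, catalog the copies, and drive everything through APIs, maps to a simple pattern worth sketching. What follows is a minimal illustrative sketch in Python; the catalog and mount calls are hypothetical stand-ins, not the actual API of IBM Spectrum Protect Plus or Spectrum Copy Data Management.

```python
# Illustrative sketch of an API-driven copy-data workflow.
# All class and function names here are hypothetical stand-ins,
# not the actual IBM Spectrum Protect Plus / Copy Data Management API.
import datetime

class SnapshotCatalog:
    """Tracks point-in-time copies so they can be reused without a full restore."""

    def __init__(self):
        self.snapshots = []  # each entry: {"volume", "taken_at", "purpose"}

    def take_snapshot(self, volume, purpose):
        snap = {
            "volume": volume,
            "taken_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "purpose": purpose,
        }
        self.snapshots.append(snap)
        return snap

    def latest(self, volume):
        # ISO-8601 timestamps sort lexicographically, so max() finds the newest.
        candidates = [s for s in self.snapshots if s["volume"] == volume]
        return max(candidates, key=lambda s: s["taken_at"])

def mount_copy(snapshot, host):
    """Present an existing snapshot to a host instead of copying data back.

    Recovery time is dominated by mount time, not by data transfer."""
    return f"{snapshot['volume']}@{snapshot['taken_at']} mounted on {host}"

catalog = SnapshotCatalog()
catalog.take_snapshot("prod-db-vol", purpose="operational recovery")

# Operational recovery: mount the latest copy rather than restoring it.
print(mount_copy(catalog.latest("prod-db-vol"), host="recovery-host"))

# Secondary reuse: the same copy can back a dev/test or analytics host.
print(mount_copy(catalog.latest("prod-db-vol"), host="devtest-host"))
```

The design point shows up in the shape of the code: operational recovery and secondary reuse, whether for dev/test or analytics, are both just mounts of an existing copy, so recovery time is dominated by mount time rather than by moving data back.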
You know, dev-ops is a really important use case, and our clients are really leveraging some of these capabilities for it, because you want to make sure that, as application developers are developing their applications, they're working with the latest data, and making sure that the testing they're doing is meaningful in finding the maximum number of defects, so you get the highest quality of code coming out of them, and being able to do that in a self-service-driven way so that they're not having to slow down their innovation. We have clients leveraging our capabilities for those kinds of use cases. >> It's great to hear about the FlashSystem 9100, but let's hear what customers have to say about it. Not too long ago, IBM convened a customer panel to discuss many aspects of this announcement. So let's hear what some of the customers had to say about the FlashSystem 9100. >> Now Owen, you've used just about every flash system that IBM has made. Tell us, what excites you about this announcement of our new FlashSystem 9100? >> Well, let's start with the hardware. The fact that they took the big modules from the older systems and collapsed that down to a two-and-a-half inch form-factor NVMe drive is mind-blowing. And to do it with the full-speed compression as well. When the compression was first announced for the last FlashSystem 900, I didn't think it was possible. We tested it, I was proven wrong. (laughing) It's entirely possible. And to do that on a small form-factor NVMe drive is just astounding. Now to layer on the full software stack, get all those features, and the possibilities for your business, and what we can do, and leverage those systems and technologies, and take the snapshots and the replication and the insights into what our system's doing, it is really mind-blowing what's coming out today, and I cannot wait to just kick those tires. There's more. So with that real-world compression ratio that we can validate on the new 900, and it's the same in this new system, which is astounding, we can get more, and just the amount of storage you get in this really small footprint. Like, two rack units is nothing. Half our servers are two rack units, which is absolutely astounding, to get that much data in such a very small package; like, 460 terabytes is phenomenal, with all these features. The full solution is amazing, but what else can we do with it? And especially, as they've said, if it's for a comparable price as what we've bought before, and we're getting the full solution with the software, the hardware, the extremely small form-factor, what else can you do? What workloads can you pull forward? So where our backup systems weren't on the super fast storage like our production systems are, now we can pull those forward and they can give the same performance as production to run the back-end of the company, which I can't wait to test. >> It's great to hear from customers, the centerpiece of the Wikibon community. But let's also get the analyst's perspective. Let's hear from Eric Burgener, who's the Research Vice President for Storage at IDC. >> Thanks very much Peter, good to be back. >> So we've heard a lot from a number of folks today about some of the changes that are happening in the industry, and I want to amplify some things and get the analyst's perspective. So Wikibon, as a fellow analyst firm, believes pretty strongly that the emergence of flash-based storage systems is one of the catalyst technologies that's driving a lot of the changes.
If only because old storage technologies were focused on persisting data. Disk: slow, but at least it was there. Flash systems allow a bit flip; they allow you to think about delivering data to anywhere in your organization, to different applications, without a lot of complexity. But it's gotta be more than that. What else is crucial to making sure that these systems in fact are enabling the types of applications that customers are trying to deliver today? >> Yeah, so actually there's an emerging technology that provides the perfect answer to that, which is NVMe. If you look at most of the all-flash systems that have shipped so far, they've been based around SCSI. SCSI was a protocol designed for hard disk drives, not flash, even though you can use it with flash. NVMe is specifically designed for flash, and that's really gonna open up the ability to get the full value of the performance, the capacity utilization, and the efficiencies that all-flash arrays can bring to the market. And in this era of big data, more than ever, we need to unlock that performance capability. >> So as we think about big data and AI, that's gonna have a significant impact overall on the market and how a lot of different vendors are jockeying for position. When IDC looks at the impact of flash, NVMe, and the reemergence of some traditional big vendors, how do you think the market landscape's gonna be changing over the next few years? >> Yeah, the way this market has developed, really the NVMe-based all-flash arrays are gonna be a carve-out from the primary storage market, which is SCSI-based AFAs today. So we're gonna see that start to grow over time; it's just emerging. We had startups begin to ship NVMe-based arrays back in 2016. This year we've actually got several of the majors who've got products based around their flagship platforms that are optimized for NVMe. So very quickly we're gonna move to a situation where we've got a number of options from both startups and major players available, with the NVMe technology as the core. >> And as you think about NVMe at the core, it also means that we can do more with software, closer to the data. So that's gotta be another feature of how the market's gonna evolve over the next couple of years, wouldn't you say? >> Yeah, absolutely. A lot of the data services that generate latencies, like in-line data reduction, encryption and that type of thing; we can run those with less impact on the application side when we have much more performant storage on the back-end. But I have to mention one other thing. To really get all that NVMe performance all the way to the application side, you've gotta have an NVMe over Fabrics connection. So it's not enough to just have NVMe in the back-end array; you need that RDMA connection to the hosts, and that's what NVMe over Fabrics provides for you. >> Great, so that's what's happening on the technology-product-vendor side, but ultimately the goal here is to enable enterprises to do something different. So what's gonna be the impact on the enterprise over the next few years? >> Yeah, so we believe that SCSI clearly will get replaced in the primary storage space by NVMe over time. In fact, we've predicted that by 2021, over 50% of all the external primary storage revenue will be generated by these end-to-end NVMe-based systems. So we see that transition happening over the course of the next two to three years.
Probably by the end of this year, we'll have NVMe-based offerings, with NVMe over Fabrics front ends, available from six of the established storage providers, as well as a number of smaller startups. >> We've come a long way from the brown, spinning stuff, haven't we? >> (laughing) Absolutely. >> Alright, Eric Burgener, thank you very much. IDC Research Vice President, great once again to have you in theCUBE. >> Thanks Peter. >> Always great to get the analyst's perspective, but let's get back to the customer perspective. Again, from that same panel that we saw before, here's some highlights of what customers had to say about IBM's Spectrum family of software. (upbeat music) We love hearing those customer highlights, but let's get into some of the overall storage trends, and to do that we've asked Eric Herzog and Bina Hallman back to theCUBE. Eric, Bina, thanks again for coming back. So, what I want to do now is talk a little bit about some trends within the storage world and what the next few years are gonna mean, but Eric, I want to start with you. I was recently at IBM Think, and Ginni Rometty talked about the idea of putting smart to work. Now, I can tell you, that means something to me, because of the whole notion of how data gets used, how work gets institutionalized around your data. What does storage do in that context, to put smart to work? >> Well I think there's a couple of things. First, we've gotta realize that it's not about storage; it's about the data and the information that happens to sit on the storage. So you have to have storage that's always available, always resilient, is incredibly fast, and as I said earlier, transparently moves things in and out of the cloud, automatically, so that the user doesn't have to do it. The second thing that's critical is the integration of AI, artificial intelligence, both into the storage solution itself, what the storage does, how you do it, and how it plays with the data, but also if you're gonna do AI on a broad scale. For example, we're working with a customer right now whose AI configuration is 100 petabytes, leveraging our storage underneath the hood of that big, giant AI analytics workload. So you have to think of AI both in the storage, to make the storage better and more productive with the data and the information that it has, but then also as the undercurrent for any AI solution that anyone wants to employ, big, medium or small. >> So Bina, I want to pick up on that, because there are some advanced technologies that are being exploited within storage right now to achieve what Eric's talking about, but there's gonna be a lot more. And there's gonna be more intensive application utilization of some of those technologies. What are some of the technologies that are becoming increasingly important, from a storage standpoint, that people have to think about as they try to achieve their digital transformation objectives? >> That's right, I mean Peter, in addition to some of the basics around making sure your infrastructure is enabled to handle the SLAs and the level of performance that's required by these AI workloads, when you think about what Eric said, this data's gonna reside on-premises, behind a firewall, potentially in the cloud, or in multiple public clouds. How do you manage that data? How do you get visibility to that data? And then be able to leverage that data for your analytics.
And so data management is going to be very important, but also being able to understand what that data contains, and being able to run the analytics and do things like tagging the metadata and then doing some specialized analytics around that, is going to be very important. The fabric to move that data, data portability from on-prem into the cloud and back and forth, bidirectionally, is gonna be very important as you look into the future. >> And obviously things like IoT are gonna mean bigger, more available data. So a lot of technologies, in a big picture, are gonna become more closely associated with storage. I like to say that at some point in time we've gotta stop thinking about calling stuff storage, because it's gonna be so central to the fabric of how data works within a business. But Eric, I want to come back to you and say, those are some of the big-picture technologies, but what are some of the little-picture technologies that nonetheless are really central to being able to build up this vision over the course of the next few years? >> Well, a couple of things. One is the move to NVMe. So we've integrated NVMe into our FlashSystem 9100, and we have fabric support; we already announced, back in February actually, fabric support for NVMe over an InfiniBand infrastructure with our FlashSystem 900, and we're extending that to all of the other interconnects from a fabric perspective for NVMe, whether that be Ethernet or whether that be Fibre Channel, and we put NVMe in the system. We also have integrated our custom flash modules; our FlashCore technology allows us to take raw flash and create, if you will, a custom SSD. Why does that matter? We can get better resiliency, we can get incredibly better performance, which is very tied in to your applications, workloads and use cases, especially in a data-driven multi-cloud environment. It's critical that the flash is incredibly fast, and it really matters. And resiliency: what do you do if you try to move it to the cloud and you lose your data? If you don't have that resiliency and availability, that's a big issue. I think the third thing is what I call the cloud-ification of software. All of IBM's storage software is cloud-ified. We can move things simultaneously into the cloud. It's all automated. We can move data around all over the place; not only our data, not only to our boxes, we could actually move other people's arrays' data around for them, and we can do it with our storage software. So it's really critical to have this cloud-ification. It's really cool to have this new technology, NVMe, from an end-to-end perspective, for fabric and then inside the system, to get the right resiliency, the right availability, the right performance for your applications, workloads and use cases, and you've gotta make sure that everything is cloud-ified, portable and mobile, and we've done that with the solutions that are wrapped into our FlashSystem 9100 that we launched a couple of weeks ago. >> So you are both thought leaders in the storage industry. I think that's very clear, in the whole notion of storage technology, and you work with a lot of customers, you see a lot of use cases. So I want to ask you one quick question, to close here. And that is, if there was one thing that you would tell a storage leader, a CIO or someone who thinks about storage in a broad way, one mindset change that they have to make to start this journey and get it going so that it's gonna be successful, what would that one mindset change be? Bina, what do you think?
>> So, you are both thought leaders in the storage industry, that's very clear, and you work with a lot of customers and see a lot of use cases. So I want to ask you one quick question to close. If there were one thing you would tell a storage leader, a CIO, or someone who thinks about storage in a broad way, one mindset change they have to make to start this journey and get it going successfully, what would that one mindset change be? Bina, what do you think? >> You know, there are a lot of capabilities out there. I think it's really around simplifying your environment and making sure that, as you deploy these new solutions and capabilities, you've got a partnership with a vendor that's gonna make it easier: take those complex tasks and simplify them, deliver step-by-step instructions and documentation, and be right there when you need assistance. I think that's gonna be really important. >> So look at it from a portfolio perspective, where best of breed is still important, but it's gotta work together, because each piece leverages the others. >> It's gotta work together, absolutely. >> Eric, what would you say? >> Well, I think the key thing is, people think storage is storage. All storage is not the same, and one of the central tenets of IBM storage is making sure we're integrated with the cloud. We can move data around transparently, easily, simply; Bina pointed out the simplicity. If you can't support the cloud, then you're really just a storage box, and that's not what IBM does. Over 40% of what we sell is actually storage software, and all of that software works with our competitors' gear too. In fact, our Spectrum Virtualize for Public Cloud can simultaneously have datasets sitting in a cloud instantiation and sitting on premises, and then we can use our copy data management to take advantage of that secondary copy. That's all because we're so cloud-ified from a software perspective. So all storage is not the same, and you can't think of storage as "I need the cheapest storage." It's gotta be: how does it drive business value for my oceans of data? That's what matters most, and by the way, we're very cost-effective anyway, especially because our custom flash modules give us a real price advantage. >> You ain't doing business at the level of 100 petabytes if you're not cost-effective. >> Right. So that's what we see as really critical: storage is not storage. Storage is about data and information. >> So let me summarize your point, if I can, really quickly: we have to think about storage as the first step to great data management. >> Absolutely, absolutely Peter. >> Eric, Bina, great conversation. >> Thank you. >> So, we've heard a lot of great thought leadership comments on the data-driven journey with multi-cloud, and some great product announcements. But now, let's do the crowd chat. This is your opportunity to participate in these proceedings; it's the centerpiece of the digital community event. What questions do you have? What comments do you have? What answers might you provide to your peers? This is an opportunity for all of us, collectively, to engage and have those crucial conversations that are gonna allow you, from a storage perspective, to drive business value in your digital business transformations. So, let's get straight to the crowd chat. (bright music)
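As a closing illustration of Eric's copy-data-management point: the pattern is a primary volume on-premises, a point-in-time secondary copy in the public cloud, and secondary workloads such as dev/test or analytics pointed at the copy so production I/O is never disturbed. The sketch below uses hypothetical names throughout; it is not Spectrum Virtualize's actual API.

# Minimal sketch of the copy-data-management pattern: primary on-prem,
# secondary copy in the cloud, secondary workloads use the copy.
from copy import deepcopy

class Volume:
    def __init__(self, name, site, blocks=None):
        self.name, self.site = name, site
        self.blocks = blocks or {}

def snapshot_to_cloud(primary: Volume) -> Volume:
    # Point-in-time copy; a real product would track and ship only
    # changed blocks rather than copying everything.
    return Volume(primary.name + "-copy", "public-cloud", deepcopy(primary.blocks))

primary = Volume("orders-db", "onprem", {0: "hdr", 1: "row-data"})
secondary = snapshot_to_cloud(primary)

# Dev/test and analytics hit the cloud copy; production I/O stays on-prem.
secondary.blocks[1] = "masked-row-data"        # safe to mutate the copy
assert primary.blocks[1] == "row-data"         # primary is untouched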