

IBM and Brocade: Architecting Storage Solutions for an Uncertain Future | CUBE Conversation


 

>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with our leaders all around the world, this is a CUBE Conversation.

>> Welcome to theCUBE and this special IBM-Brocade panel. I'm Lisa Martin, and I have a great opportunity here to sit down for the next 20 minutes with three gentlemen. Please welcome Brian Sherman, a Distinguished Engineer from IBM. Brian, great to have you joining us.

>> Thanks for having me.

>> And Matt Key is here, FlashSystem SME from IBM. Matt, happy Friday.

>> Happy Friday, Lisa. Thanks for having us.

>> Our pleasure. And AJ, customer solutioneer from Brocade, is here. AJ, welcome.

>> Thanks for having me along.

>> AJ, we're going to stick with you. IBM and Brocade have had a very long, you said about 22-year, strategic partnership. There's some new news. In terms of the evolution of that, talk to us about what's going on with Brocade and IBM, and what is new in the storage industry.

>> Yeah, so the newest thing for us at the moment is that IBM, just in mid-October, launched our Gen 7 platforms. Think about the stresses that are going on in IT environments: this is our attempt to keep pace with the performance levels that the IBM teams are now putting into their storage environments, the all-flash data centers, and the new technologies around non-volatile memory express, NVMe. So that's really what's driving this, along with the desire to say, "You know what, people aren't allowed to be in the data center." And if they can't be in the data center, then the fabrics actually have to be able to figure out what's going on and basically provide a lot of the automation pieces. So, something we're referring to as the autonomous SAN.

>> And we're going to dig into NVMe over Fabrics in a second, but I do want to continue with you, AJ, in terms of industries. Financial services, healthcare, airlines: who are the biggest users, with the biggest need?

>> Pretty much across the board. If you look at the Global 2000 as an example, something on the order of 96, 97% of the Global 2000 make use of Fibre Channel environments in portions of their world. It generally tends to be a lot of the high-end financial guys, a lot of the pharmaceutical guys, the automotive, the telcos. Pretty much, if the data matters and it's critical, whether we're talking about payment card information or healthcare environments, data that absolutely has to be retained, has to get there, has to perform, then it's this combination we're bringing together today around the new storage elements and the functionality they have, and then our ability in the fabric. So, the concept of a 64-gig environment helps us not be the bottleneck for the application demands, 'cause one thing I can promise you after 40 years in this industry is the software guys always figure out how to use all the performance that the hardware guys put on the shelf, right? Every single time.

>> Well, there's a gauntlet thrown down there. Matt, let's go to you; I want to get IBM's perspective on this. Again, as we said, a 22-year strategic partnership. As we look at things like not being able to get into the data center during these unprecedented times, and also the need to remove some of those bottlenecks, how does IBM view this?

>> Yeah, totally. It's certainly a case of raising the bar, right?
So we have to, as a vendor, continue to evolve in terms of performance, in terms of capacity, cost density, escalating simplicity. Because it's not just a case of not being able to touch the arrays; there are also fewer people available to adjust the arrays, right? Our operational density has to keep evolving: raising the bar on the network, still saturating those line rates, and providing the simplicity and cost efficiency that takes our per-admin ratio from 200, 300 terabytes per admin to beyond petabyte scale per admin. And we can't do that unless people have access to the data. We have to provide the resiliency, we have to provide the simplicity of presentation and automation from our side, and this collaboration with our network brethren like Brocade lets us stay out of the finger-pointing discussion when it comes to networks and who dropped the ball. So we truly appreciate this Gen 7 launch that they're doing; we're happy to come in and fill that pipe on the flash side for them.

>> Excellent. And Brian, as a Distinguished Engineer, let me get your perspectives on the evolution of the technology over this 22-year partnership.

>> Thanks, Lisa. It certainly has been a longstanding, great relationship, a great partnership, all the way from inventing things jointly, to developing, testing, and deploying different technologies through the course of time. And it's brought us to where we are today, as AJ talked about: being able to sustain what the applications require in this always-on type of environment. And as Matt said, bringing together the density and the operational simplicity to make that happen, 'cause we have to make it easier from the storage side for operations to manage the volume of data we have coming at us. Our due diligence is to serve the data up as fast as we can and as resiliently as we can.

>> And sticking with you, Brian, that simplicity is key, because as we get more and more advances in technology, the IT environment is only becoming more complex. So truly enabling organizations in any industry to simplify is absolute table stakes.

>> Yeah, it definitely is, and that's core to what we're focused on: how do we make the storage environment simple? Historically, we've had, us and the industry as a whole, an entry-level product, mid-range products, and high-end products. And earlier this year we said, enough of that; it's one product portfolio. It's the same software stack, just small, medium, and large in terms of the appliances that get delivered. Again, building on what Matt said from a density perspective, we can have a petabyte of uncompressed, data-reduced storage in a 2U enclosure. So from an overall administration perspective it's one software stack, one automation stack, one way to do point-in-time copies and replication, focusing on making that as simple for operations as we possibly can.

>> I think we'd all take a little bit of that right now. Matt, let's go to you, and then AJ, you. Let's talk a little bit more and dig into the IBM storage arrays. I mean, we're talking about advances in flash; we're talking about NVMe as a forcing function for applications to change and evolve with the storage.
Matt, give us your thoughts on that.

>> We saw a monumental leap in how we deliver simplicity, both in how we deliver our arrays and in the technology within the arrays. About nine months ago, in February, we launched into the latest generation of NVMe technology, and with that was born a story of simplicity. One of the value props we've been happily negating is storage-level tiering. We still support the idea of going down to nearline SAS and enterprise disk, and different flavors of solid state, whether it's tier one short-usage, tier zero high-performance, high-usage, all the way up to storage class memory. But while we support those technologies and the automated tiering, the elegance of this latest generation is that it essentially homogenizes the environment: we're able to deliver that petabyte-per-rack-unit ratio Brian was mentioning as an all-tier-zero solution that doesn't have to go through the woes of software-managed data reduction or any kind of software-managed tiering just to be always fast and always available, with a 100% data availability guarantee that we offer through a technology called HyperSwap. It really highlights what we take from that simplicity story by going the extra mile and meeting the market in a technology refresh. I mean, if you say the words IBM over the Thanksgiving table, you're kind of thinking, oh, Big Blue, big mainframe, old iron stuff. But I'm very happy to say that in distributed systems we are in fact leading the pack by multiple months. Not just in the sense that, "Hey, we announced sooner," but in actually delivering the solution on-prem nine, ten months prior to anybody else. And that gets us into new density flavors, new efficiency offerings. It's not just, "Hey, I can do petabyte scale in a couple of rack units." With the likes of Brocade, that actually equates to a terabyte per second in a floor tile. What does that do for your analytics story? And the fact that we're now leveraging NVMe to undercut the value prop of spinning disk in your HPC analytics environments by 5x, that's huge. So now let's take nearline SAS off the table for any data that actually holds value to us. On the simplicity side, what we're doing now builds on our ability to make our own flash, which we've been deriving from the Texas Memory Systems acquisition eight years ago, and integrating that into essentially industry-proven software solutions. That appliance form factor has been absolutely monumental for us in distributed systems.

>> And thanks for giving us a topic to discuss at our socially distant Thanksgiving tables; we'll talk about IBM. AJ, over to you. There are a lot of advances here, also in such dynamic times. I want to get Brocade's perspective on how you're taking advantage of these latest technologies with IBM, and also, from a customer's perspective, what are they feeling and really able to embrace and utilize in that simplicity Matt talked about?

>> So there are a couple of things that fall into that, to be honest. One of which is that, similar to what you heard Brian describe across the IBM portfolio for storage, in our SAN infrastructure it's a single operating system up and down the line.
From the most entry-level platform we have to the largest platform we have, it's a single software stack up and down, a single management environment up and down. And it's also intended to be extremely reliable and extremely performant, because here's part of the challenge. When Matt's talking about multiple petabytes in a 2U rack height, the conversation you want to flip on its head a little bit is, "Okay, exactly how many virtual machines and how many applications are you going to be driving out of that?" Because it's going to be thousands, potentially between 6,000 and 10,000, right? So imagine if you have some sort of little hiccup in the connectivity to the data store for 6,000 to 10,000 applications. That's not the kind of thing people get forgiving about when we're all home like this. When your healthcare, your finance, your entertainment, when everything is coming to you across the network, remotely, and it's all application driven, the one thing you want to make sure of is that the network doesn't hiccup. Because humans have a lot of really good characteristics; patience would not be one of them. So you want to make sure that everything is in fact in play and running, and that's one of the things we work very hard with our friends at IBM to ensure: that the kinds of analytics Matt was just describing are things you can readily get done. "Speed is the new currency of business" is a quote you hear from Marc Benioff at Salesforce, right? And he's right. If you can get intelligence out of the data you've been collecting, that's really cool. But the other flip side of people not being able to be in the data center, and to Matt's point, not as many people around either, is: how are humans fast enough? Honestly, when you look at the performance of the platforms these folks are putting up, how is human response time going to be good enough? We all have this mindset of a network operations center, where you've got a couple dozen people in a half-lit room staring at massive screens, waiting for something to pop. If the human only begins the investigation the first time a red light pops, at what point is that no longer good enough? And so our argument for the autonomy piece of what we're doing in the fabrics is: you can't wait on the humans; you need to augment them. I get that people still want to be in charge, and that's good. Humans are still smarter than the silicon. We're not as repeatable, but we're still far smarter. So we need to be able to do that measurement. We need to be able to figure out what normal looks like. We need to be able to highlight to the storage platform and to the application admins when things go sideways, because the demand from the applications isn't going to slow down. The demands from your environment, whether you want to think about taking the next steps with not just your home entertainment systems, but with augmented reality, right? Virtual reality environments for kids, right? How do you make them feel like they're part and parcel of the classroom for as long as we have to continue living in a modified world, and perhaps past it? If you can take a grade school from your local area and give them a virtual walkthrough of the Louvre, where everybody's got a perfect view and it all looks incredibly real to them, those are cool things, right?
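To make AJ's "figure out what normal looks like" point concrete, here is a minimal sketch of the kind of baseline-and-outlier check an autonomous fabric could run against port telemetry. It's an illustration only: the rolling window, z-score threshold, and metric are invented for the example and are not Brocade's actual implementation.

```python
from collections import deque
import statistics

class PortBaseline:
    """Tracks a rolling baseline of one port metric (e.g., MB/s)
    and flags samples that deviate sharply from 'normal'."""

    def __init__(self, window=288, z_threshold=4.0):
        # window: samples kept, e.g., 24 hours of 5-minute samples
        self.samples = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value):
        """Record a sample; return True if it is an outlier vs. the baseline."""
        if len(self.samples) >= 30:  # need enough history to judge "normal"
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9  # guard zero stdev
            is_outlier = abs(value - mean) / stdev > self.z_threshold
        else:
            is_outlier = False  # still learning the baseline
        self.samples.append(value)
        return is_outlier

# Hypothetical usage: flag a port whose throughput suddenly spikes,
# the "traffic pattern that's new" AJ describes.
baseline = PortBaseline()
for mbps in [120, 115, 130, 125] * 10 + [980]:
    if baseline.observe(mbps):
        print(f"Alert: port throughput {mbps} MB/s deviates from baseline")
```

A real fabric would track many metrics per port and learn daily patterns, but the core loop, learn "normal" and then surface the deviations to the humans, is the same.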
Those are cool applications, right? If you can figure out a new vaccine faster, not a bad thing. If we can model better, not a bad thing. So we need to enable those things; we need to not be the bottleneck. Get Matt and Brian over an adult beverage at some point and ask them about the cycle time for the silicon they're playing with. We've never had Moore's law applied to external storage before; never in the history of external storage has that been true, until now. And so their cycle times... Matt, right?

>> Yeah, you struck a nerve there, AJ, 'cause it's pretty simple for us to follow the linear increase in capacity and computational horsepower, right? We just ride the x86 bandwagon, ride the silicon bandwagon. But what we have to do in order to maintain the simplicity story is follow the more important one, the resiliency factor. 'Cause as we increase the capacity, as we increase the amount of data each admin is responsible for, we have to logarithmically increase the resiliency of these boxes. Because we're talking about petabyte-scale systems hosting nearly 10,000 virtual machines in a 2U form factor, I need to be able to accommodate that and make sure things don't blip. I need resilient networks, right? I need redundancy in access. I need protection schemes at every single layer of the stack. And so we're quite happy to provide that as we leapfrog the industry, going into situations that are literally three times the competitive density you see out there in other distributed systems still bound by commercial offerings. And hey, we also have to own that risk from the vendor side: we have to make these things effectively RAID 6 protection-scheme equivalent from a drive standpoint, with active-active controllers everywhere, and be able to supply the performance and consistency of that service even through the bad situations.

>> And to that point, one of the things that you talked about that's interesting to me, that I'd like you to highlight, is your recovery times. Because bad things will happen, and you guys do something very, very different about that. That's critical to a lot of my customers, because they know that Murphy will show up one day. 'Cause it happens, so then what?

>> Well, speaking of that "then what," Brian, I want to go over to you. You mentioned, Matt mentioned, resiliency. If we think of the situation that we're in in 2020, many companies are used to DR and BC plans for natural disasters and pandemics. So as we look at the shift, and the volume of ransomware going up, one ransomware attack every 11 seconds this year, right now: Brian, what's the change that businesses need to make, from cybersecurity to cyber resiliency?

>> Yeah, it's a good point, and I try to hammer that home with our clients: you're used to having your business continuity and disaster recovery, but this whole cyber resiliency thing is a completely separate practice that we have to set up and think about, going through the same thought process that you did for your DR. What are you going to do? What are you going to pre-test? How are you going to test it? How are you going to detect whether or not you've got ransomware? So I spend a lot of time with our clients on that theme: you have to think about and build your cyber resiliency plan, 'cause it's going to happen.
It's not like a DR plan, where it's a pure insurance policy. Like you said, every 11 seconds there's an event that takes place; it's going to be a when, not an if. So we have to work with our customers to put a plan in place for cyber resiliency, and then we spend a lot of discussion on, okay, what does that mean for my critical applications in terms of restore time and backup immutability? What do we need from those types of services? In terms of quick restore, which are my tier zero applications that I need to get back as fast as possible, and which other ones can I stick out on tape or virtual tape? So again, there's a wide range of technology available in the portfolio for helping our clients with cyber resiliency. And then we try to distinguish cyber resiliency from cybersecurity. Cybersecurity is how we help keep everybody out; cyber resiliency, from a storage perspective, is how we help people once it gets to us, because that's a bad thing. How can we help our folks recover?

>> Well, and that's the point that you're making, Brian: now it's not a matter of could this happen to us; it's going to. How much can we tolerate? Ultimately we have to be able to recover, to restore that data. And when you talk about ransomware, we go to people as the weakest link in security. AJ talked about that: there's the people, and there's probably quite a bit of lack of patience going on right now. But AJ, I want to go back over to you to look, from a data center perspective and these storage solutions, at being able to utilize things that help the people: AI and machine learning. You talked about AR and VR. Talk to me a little bit more about that as you see it, say, over the next 12 months or so: these trends, these new simplified solutions.

>> Yeah, so a couple of things around that. One of which is the iteration of technology: the storage platforms, the silicon they're making use of. Matt, I think you told me 14 months is roughly the silicon cycle you guys are seeing, right? So performance levels are going to continue to go up. The speeds are going to continue to go up. The scale is going to continue to shift. And one of the things that does for a lot of the application owners is it lets them think broader; it lets them think bigger. I wish I could tell you that I knew what the next big application was going to be, but then we'd be having a conversation about which island in the Pacific I was going to be retiring to. But they're going to come, and they're going to consume this performance, because if you look at the applications you're dealing with in your everyday life, they continue to get broader; the scope of them continues to scale out. I saw, I think it was an MIT development recently, where they were originally doing it for Alzheimer's and dementia, but they're talking about being able to use the microphones in your smartphone to listen to the way you cough and use that as a predictor for people who have COVID but are not symptomatic yet. Asymptomatic COVID people, right? So when we start talking about where this kind of technology can go and where it can lead us, there's sort of this unending possibility to it.
But what that depends on, in part, is that the infrastructure has to be extremely sound. The foundation has to be there. We have to have the resilience and the reliability, and one of the points Brian was just making is extremely key. We talk about disaster tolerance and business continuance; business continuance is how you recover. Cyber resilience is the same conversation, right? You have the protection side of it, here are my defenses, and then: what happens when they actually get in? And let's be honest, humans are frequently that weak link, for a variety of behaviors that humans have. So when that happens, where's the software in the storage that tells you, "Hey, wait, there's an odd traffic behavior here, where data is being copied at rates, and to locations, that are not normal"? That's part of what we're doing on our side of the automation: how do you know what normal looks like? And once you know what normal looks like, you can figure out where the outliers are. That's one of the things people use a lot to determine whether or not ransomware is going on: "Hey, this is a traffic pattern that's new. This is a traffic pattern that's different." Are they doing this because they're copying the dataset from here to here and encrypting it as they go? 'Cause that's one of the challenges you've got to watch for. So I think you're going to see a lot of advancement in the application space, and not just the MIT stuff, which is great. Or I may have misspoken; maybe it was Johns Hopkins, and I apologize to the Johns Hopkins folks. But that kind of scenario, right? There's no knowing what they can make use of in terms of the data sets, because we're gathering so much data. The internet of things is an overused phrase, but the sheer volume of data generated outside the data center is manipulated, analyzed, and stored internally, 'cause you've got to have it someplace secure. And that's one of the things we look at from our side: we've got to be as close to unbreakable as we can be. And then, when things do break, we have to be able to figure out exactly what happened as rapidly as possible, and then handle the recovery cycle as well.

>> Excellent. And Matt, I want to finish with you; we just have a few seconds left. As AJ was talking about this massive evolution in applications, for example, when we talk about simplicity, and about resiliency and being able to recover when something happens, how do these new technologies that we've been unpacking today help the admin folks deal with all of the dynamics that are happening?

>> Yeah, so I think the biggest drop-the-mic thing we can say right now is that we're delivering 100% tier zero NVMe, even without data reduction value props on top of it, at a cost that undercuts off-prem S3 storage. If you look at what you can do from an off-prem solution for air gap and cyber resiliency, you can put your data somewhere else, but it's going to take however long to transfer that data back on-prem to get back to your recovery point. When you look at the economics we're delivering right now in distributed systems, hey, your DR side, your copies of data, do not have to wait for that off-prem bandwidth to restore. You can literally restore in place.
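To put rough numbers on that off-prem transfer time, here is a back-of-the-envelope sketch; the 100 TB capacity and 10 Gb/s link speed are assumptions chosen for the illustration, not figures cited by the panel.

```python
# Hypothetical example: time to pull 100 TB back from off-prem object
# storage over a 10 Gb/s link, assuming ideal sustained throughput.
capacity_bytes = 100e12             # 100 TB to restore
link_bits_per_sec = 10e9            # 10 Gb/s WAN link
seconds = capacity_bytes * 8 / link_bits_per_sec
print(f"{seconds / 3600:.1f} hours")  # ~22.2 hours, before any protocol overhead
```

Real-world restores tend to run slower still, once protocol overhead, throttling, and retrieval queues are factored in, which is exactly the gap an in-place recovery point avoids.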
And you couple that with all of the technology on the software side that integrates with it: I get incremental point-in-time recovery, whether it's on the primary side or the DR side, wherever. The fact that we get to approach this from a cost-value angle means I can naturally absorb a lot of the cyber resiliency value in it too. And because it's all the same orchestrated capabilities, regardless of big, small, medium, all that stuff, it's the same skill sets. So I don't really need to learn new platforms or new solutions to provide cyber resiliency; it's just part of my day-to-day activity, because fundamentally all of us have to wear that cyber resiliency hat. Our job as a vendor is to make that simple, make it cost-elegant, and provide essentially a homogeneous solution overall. So hey, as your business grows, your risk gets averted, and your recovery needs also get addressed, essentially, by your incumbent solutions and architecture. So it's pretty cool stuff that we're doing, right?

>> It is pretty cool. And I'd say a lot of folks would call that the Nirvana, but I think the message the three of you have given in the last 20 minutes or so is that with IBM and Brocade together, this is a reality. You guys are a cornucopia of knowledge. Brian, Matt, AJ, thank you so much for joining me on this panel; I really enjoyed our conversation.

>> Thank you.

>> Thank you again, Lisa.

>> My pleasure. For my guests, I'm Lisa Martin. You've been watching this IBM-Brocade panel on theCUBE.

Published Date: Dec 9, 2020
