Taylor Holloway, Advent One | IBM Think 2021
>> From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM.
>> Welcome back everyone to theCUBE's coverage of IBM Think 2021 virtual. I'm John Furrier, your host of theCUBE. Our next guest is Taylor Holloway, chief technology officer at Advent One. Taylor, welcome to theCUBE from down under in Australia, and we're in Palo Alto, California. How are you?
>> Well, thanks John, thanks very much. Glad to be on here.
>> Love the virtual CUBE and the virtual events, we get to talk to people really quickly with a click. Um, great conversation here around hybrid cloud, multicloud and all things software enterprise. Before we get started, I want to take a minute to explain what you guys do at Advent One. What's the main focus?
>> Yeah, so look, we have a lot of customers in different verticals. Um, so you know, generally what we provide depends on the particular industry the customer's in. But generally speaking we see a lot of demand for operational efficiency, helping our clients tackle cybersecurity risks, adopt cloud, and set them up to modernize their applications.
>> And this has been a big wave coming in for sure with, you know, cloud and scale. So I've got to ask you, what are the main challenges that you guys are solving for your customers, and how are you helping them overcome them in a transformative, innovative way?
>> Yeah, look, I think helping our clients improve their security posture is a big one. We're finding as well that our customers are gaining a lot of operational efficiency by adopting sort of open source technology. Red Hat's an important partner of ours, as is IBM, and we're seeing them sort of move away from some more proprietary solutions. Automation is a big focus for us as well. We've had some great outcomes with our clients helping them automate and, you know, deliver the stand-up and data operations of environments a lot more quickly and a lot more easily, and to be able to apply some standards across multiple areas of their IT estate.
>> What are some of the solutions that you guys are doing with IBM's portfolio on the infrastructure side? You've got Red Hat, you've got a lot of open source stuff to meet the needs of clients. What's the mix?
>> Uh, yeah, I think on the storage side we'll probably help our clients tackle the expanding data, structured and particularly unstructured, that they're trying to take control of. So, you know, looking at Spectrum Scale and those types of products for unstructured data is a good example, and then FlashSystem for more block storage and more run-of-the-mill sorts of environments. We have helped our clients consolidate and modernize on IBM Power Systems. Having Red Hat as both a Linux operating system and having OpenShift as a container platform really helps there. And Red Hat also provides a management overlay, which has been great on what we do with IBM Power Systems. We've been working on a few different sorts of use cases on Power in particular. More recently, SAP HANA is a big one, where we've had some success with our clients migrating HANA onto IBM Power Systems. And we've also helped our customers, you know, improve some environments on the other end of the scale, such as IBM i. We still have a large number of customers with IBM i, and, you know, how do we help them? You know, some of them are moving to cloud in one way or another, others are consuming some kind of IaaS, and we can wrap a managed service around it to help them through.
>> So I've got to ask you the question, you know, you're a CTO, you've played with a lot of technologies. Kubernetes has just become this lingua franca for this kind of, I'll call it middleware, kind of orchestration layer. Uh, containers obviously are awesome. But I've got to ask you, when you walk into a client's environment, you don't have to name names, but you know, usually you see kind of two pictures: man, they need some serious help, or they've got their act together. So either way they're both opportunities for hybrid cloud. How do you evaluate the environment when you go in, when you walk into those two scenarios? What goes through your mind? What are some of the conversations that you guys have with those clients? Can you take me through a kind of day in the life of both scenarios - the ones that are like, I can't get the job done, I'm so close but don't have the right team, and the other ones, like, we're grooving, we're kicking butt?
>> Yeah, so look, I suppose to start off with, you try and take somewhat of a technology-agnostic view and just sit down and listen to what they're trying to achieve and how they're going. For customers who have got it, you know, as you say, all nailed down, and things are going really well, it's just really understanding what can we do to help. Is there an opportunity for us to help at all? Um, you know, generally speaking there's always going to be something, and if someone is going really well, they might just want someone to help with a bespoke use case or something very specific where they need help. On the other end of the scale, where a customer is pretty early on and starting to struggle, we generally try and help them not boil the ocean at once. Just try and get some wins, pick some key use cases, you know, deliver some value back, and then sort of grow from there. Rather than trying to go into a customer and do everything at once, which tends to be a challenge, just understand what the priorities are and help them get going.
>> What's the impact been for Red Hat in your customer base? A lot of overlap, some overlap, no overlap, coming together? What's the general trend that you're seeing? What's the reaction been?
>> Yeah, I think it's been really good. Obviously IBM have a lot of focus on Cloud Paks, where they're bringing their software onto Red Hat OpenShift so it will run on multiple clouds, so I think that's one we'll see a lot more of over time. Um, also helping customers automate their IT operations with Ansible is one we do quite a lot of, and there are some really bespoke use cases we've done with that as well as some standardized ones. So helping with day-two operations and all that sort of thing. But there are also some really out-there things customers have needed to automate that have been a challenge for them, and being able to use open source tools to do it has worked really well. We've had some good wins there.
>> You know, I want to ask you about the architecture, and I'll just simplify it real quick, just for the sake of it: DevOps, um, you know, segmentation - you've got hybrid cloud, take a programmable infrastructure, and then you've got modern applications that need to have AI. Some have said, I've even said on theCUBE and other broadcasts, that if you don't have AI you're going to be at a handicap - some machine learning, some data has to be in there. You can probably see AI in mostly everything. As you go in and try to architect that out for customers and help them get to a hybrid cloud infrastructure, with a real modern application front end using data - what's the playbook? Do you have any best practices or examples you can share, or scenarios or visions that you see playing out?
>> I think the first one is obviously making sure customers' data is in the right place. So if they might be wanting to use some machine learning in one particular cloud provider, and they've got a lot of their applications and data in another, you know, how do we help them make it mobile and able to move data from one cloud to another or back into their core data center? So there's a lot of that. I think we spend a lot of time with customers to try and get the right architecture, and also how do we make sure it's secure from end to end. So if they're moving things into one or more public clouds, as well as maybe their own data center, making sure connectivity is all set up properly and all the security requirements are met. So we look at it from a high-level design point of view - we look at what the target state is going to be versus the current state, and really take into account security, performance, connectivity, all those sorts of things, to make sure they're going to have a good result.
>> You know, one of the things you mentioned, and this comes up a lot in my interviews with partners of IBM, is they always comment about their credibility, beyond all the normal stuff. But one of the things that comes out pretty much consistently is their experience in verticals. They have such a track record in verticals, and this is where AI and machine learning data has to be very much scoped in on the vertical. You can't generalize and have a general-purpose data plane inside a vertically specialized kind of focus. How do you see that evolving? How does IBM play there with this kind of horizontally scalable mindset of a hybrid model, both on premises and in the cloud, but still being able to provide that intimacy with the data to fuel the machine learning or NLP or power that AI, which seems to be critical?
>> Yeah, I think there are a lot of services where, you know, public cloud providers are bringing out new services all the time, and some of it is pre-canned and easy to consume. I think what IBM, from what I've observed, is really good at is handling some of those really bespoke use cases. So if you have a particular vertical with a challenge, um, you know, there are going to be pre-canned things that you can go and consume, but if you need to do something custom, that could be quite challenging. How do they build something that could be quite specific for a particular industry and then obviously be able to repeat that afterwards? For us, that's obviously something we're very interested in.
>> Yeah, Taylor, I love chatting with you, love getting the lowdown. Also, people might not know you're a co-author of a performance book on IBM Power Systems. So I've got to ask you, since I've got you here, and I don't mean to put you on the spot, but if you can just share your vision or any kind of anecdotal observation as people start to put together their architecture - and again, you know, beauty's in the eye of the beholder, every environment is different, but still, hybrid is a distributed concept, it's distributed computing - is there a KPI, is there a best practice, as a manager or systems architect, to kind of keep an eye on what good is and how good becomes better? Because day-two operations becomes a super important concept. We're seeing some call it AIOps, where okay, I'm provisioning stuff out on a hybrid cloud operational environment, but now day two hits and things happen as more stuff enters into the equation. What's your vision on KPIs and management? What do you keep track of?
>> Yeah, I think obviously attention to detail is really important to be able to build things properly. A good KPI, particularly in a managed services area, that I'm curious to understand is how often do you actually have to log into the systems that you're managing? If you're logging in and remoting into servers and all this sort of stuff all the time, all of your automation and configuration management is not set up properly. So really a good KPI, an interesting one, is how often do you log into things. And the other one: if something went wrong, would you sooner go and build another one and shoot the one that failed, or go and restore from backup? So thinking about how well things are automated, whether things are immutable, using infrastructure as code - those are the things I think are really important when you look at how something is going to be scalable and easy to manage going forward. What I hate to see is where, you know, someone builds something and automates it all in the first place, and then they're too scared to run it again afterwards in case it breaks something.
>> It's funny, the next generation of leaders probably won't even know - like, hey, yeah, Taylor and John, they had to log into systems back in the day. You know, it could be like a story they tell their kids. Uh, but no, that's a good metric. So it's on to the next level. Let's go to the next level of automation. Um, what's the low-hanging fruit for automation? Because you're getting at really the kind of killer app there, which is, you know, self-healing systems, good networks that are programmable, but automation will define more value. What's your take?
>> I think the main thing is where you start to move from a model of being able to start small and automate individual things, which could be patching or system provisioning or anything like that. What you really want to get to is to be able to drive everything through Git. So instead of having a written-up paper change request - I'm going to change your system and all the rest of it - it really should be driven through a pull request, with build pipelines that go and make the change running in development, make sure it's successful, and then it goes and gets pushed into production. That's really where I think you want to get to, and you can start to have a lot of people collaborating really well on a particular project or customer, but also have some guardrails around what happens and some level of governance, rather than it being a free-for-all.
>> Okay, final question. Where do you see Advent One headed? What are your future plans to continue to be a leader, an IT services leader, for this kind of IBM infrastructure portfolio?
>> I think it comes down to people in the end, so really making sure that we partner with our clients and are well positioned to understand what they want to achieve, and have the expertise in our team to bring to the table to help them do it. I think open source is a key enabler to help our clients adopt a hybrid cloud model, as I touched on earlier, as well as be able to make use of multiple clouds where it makes sense. From a managed service perspective, I think everyone is really considering themselves a next-generation managed service provider, but what that means for us is to provide a differentiated managed service and also have the strong technical expertise to back it up.
>> Taylor Holloway, chief technology officer at Advent One, remote videoing in from down under in Australia. I'm John Furrier in Palo Alto with theCUBE's coverage of IBM Think. Taylor, thanks for joining me today on theCUBE.
>> Thank you very much.
>> Okay, theCUBE coverage of IBM Think 2021. Thanks for watching.
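The "how often do you log in" measure Taylor describes is easy to make concrete. Below is a minimal, hypothetical sketch of that KPI - it assumes syslog-style sshd entries in /var/log/auth.log, which is an assumption about the environment rather than anything from the interview - counting interactive logins per user as a rough signal of how much hands-on access the automation still requires.

```python
import re
from collections import Counter
from pathlib import Path

# Assumption: sshd writes "Accepted password/publickey for <user>" lines here.
AUTH_LOG = Path("/var/log/auth.log")

login_pattern = re.compile(r"Accepted (?:password|publickey) for (\S+)")
logins_per_user = Counter()

for line in AUTH_LOG.read_text(errors="ignore").splitlines():
    match = login_pattern.search(line)
    if match:
        logins_per_user[match.group(1)] += 1

# A downward trend here suggests the automation, not people, is doing the
# day-two work; a flat or rising count points at gaps in configuration management.
for user, count in logins_per_user.most_common():
    print(f"{user}: {count} interactive logins in this log window")
```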
SUMMARY :
It's the Welcome back everyone to the cube coverage of IBM Think 2021 Glad to be glad to be on here. I wanna take a minute to explain what you guys do at advent one. Um so you know generally And this is this has been a big wave coming in for sure with you know, cloud and scale. We've had some great outcomes with our clients or helping them automate um and you know, What are some of the solutions that you guys are doing with IBM's portfolio on the infrastructure side, control of so you know, looking at spectrum scale and those type of products from an audio perspective for What some of the conversations that you guys have with those clients. there's always going to be something and it may be, you know, we don't try and if someone is going really well, What's the general trend that you're seeing? and there's some really bespoke use cases we've done with that as well as some standardized one. you know, I want to ask you about the architecture and I'm just some simplify it real. and they've got a lot of their applications and data in another, you know, how do we help them make it mobile and You know, one of the things you mentioned and this comes up a lot of my interviews with partners of IBM is they Yeah, I think there's a lot of services where you know, public cloud providers are bringing out new services all the time and since I got you here and I don't mean to put you on the spot, but if you can just share your vision or is where, you know, someone build something and automates it all in the first place and they're too scared to run it So it's on the next level. I think the main thing is where you start to move from a model of being able to start small Where do you see event one headed? I think it comes down to people in the end, so really making sure that we partner with our clients and I'm john ferrier and Palo alto with cube coverage of IBM Thanks for watching ever.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Australia | LOCATION | 0.99+ |
Taylor Holloway | PERSON | 0.99+ |
today | DATE | 0.99+ |
taylor | PERSON | 0.99+ |
Talor Holloway | PERSON | 0.99+ |
Tyler | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Taylor | PERSON | 0.99+ |
two scenarios | QUANTITY | 0.99+ |
taylor Holloway | PERSON | 0.99+ |
Think 2021 | COMMERCIAL_ITEM | 0.99+ |
john | PERSON | 0.99+ |
next year | DATE | 0.99+ |
both scenarios | QUANTITY | 0.99+ |
IBM Power Systems | ORGANIZATION | 0.98+ |
two pictures | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Palo alto California | LOCATION | 0.97+ |
Red Hat | TITLE | 0.96+ |
first one | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
Palo alto | ORGANIZATION | 0.92+ |
both opportunities | QUANTITY | 0.92+ |
two hits | QUANTITY | 0.9+ |
red hat | TITLE | 0.88+ |
Think | COMMERCIAL_ITEM | 0.83+ |
john ferrier | PERSON | 0.82+ |
advent one | ORGANIZATION | 0.82+ |
one cloud | QUANTITY | 0.79+ |
one way | QUANTITY | 0.78+ |
Lynx | TITLE | 0.75+ |
two operations | QUANTITY | 0.69+ |
BMS | ORGANIZATION | 0.68+ |
Chief | PERSON | 0.67+ |
2021 | DATE | 0.63+ |
SAP Hana | TITLE | 0.63+ |
Muhanna | TITLE | 0.58+ |
cloud | QUANTITY | 0.54+ |
Advent One | ORGANIZATION | 0.53+ |
Tim Vincent & Steve Roberts, IBM | DataWorks Summit 2018
>> Live from San Jose, in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018. Brought to you by Hortonworks.
>> Welcome back everyone to day two of theCUBE's live coverage of DataWorks, here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host James Kobielus. We have two guests on this panel today. We have Tim Vincent, he is the VP of Cognitive Systems Software at IBM, and Steve Roberts, who is the Offering Manager for Big Data on IBM Power Systems. Thanks so much for coming on theCUBE.
>> Oh, thank you very much.
>> Thanks for having us.
>> So we're now in this new era, this Cognitive Systems era. Can you set the scene for our viewers and tell our viewers a little bit about what you do and why it's so important?
>> Okay, I'll give a bit of background first, because James knows me from my previous role, and you know, I spent a lot of time in the data and analytics space. I was the CTO for Bob, running the analytics group up 'til about a year and a half ago, and we spent a lot of time looking at what we needed to do from a data perspective and an AI perspective. And Bob, when he moved over to Cognitive Systems - Bob Picciano, who's my current boss - Bob asked me to move over and really start helping to build out more of a software and more of an AI focus, and a workload focus, on how we're thinking of the Power brand. So we spent a lot of time on that. So when you talk about cognitive systems or AI, what we're really trying to do is think about how you actually couple a combination of software and hardware - co-optimize the software space and the hardware space - specific to what's needed for AI systems. Because the act of processing, the data processing, the algorithmic processing for AI is very, very different than what you would have for a traditional data workload. So we're spending a lot of time thinking about how you actually co-optimize those systems, so you can build a system that's really optimized for the demands of AI.
>> And is this driven by customers, is this driven by just a trend that IBM is seeing? I mean, how are you-
>> It's a combination of both.
>> So a lot of this is, you know, there was a lot of thought put into this before I joined the team. So there was a lot of good thinking from the Power brand, but it was really foresight on things like Moore's Law coming to the end of its lifecycle, right, and the ramifications of that. And at the same time, as you start getting into things like neural nets and the floating point operations that you need to drive a neural net, it was clear that we were hitting the boundaries. And then there are new technologies, such as what Nvidia produces with their GPUs, that are clearly advantageous. So there were a lot of trends coming together that the technical team saw, and at the same time we were seeing customers struggling with specific things - you know, how to actually build a model if the training time is going to be weeks and months, let alone hours. And one of the scenarios I like to think about - I'm probably showing my age a bit - but I went to a school called the University of Waterloo, and when I went to school, in my early years, they had a batch-based system for compilation and system runs. You'd sit in the lab at night and submit a compile job, and the compile job would say, okay, it's going to take three hours to compile the application, and you think of the productivity hit that has on you. And now you start thinking about, okay, you've got this new skill in data scientists, which is really, really hard to find, and they're very, very valuable. And you're giving them systems that take hours and weeks to do what they need to do. And you know, they're trying to drive these models and get a high degree of accuracy in their predictions, and they just can't do it. So there's foresight on the technology side and there's clear demand on the customer side as well.
>> Before the cameras were rolling, you were talking about how the terms data scientist and app developer are used interchangeably, and that's just wrong.
>> And actually, let's hear it, 'cause I'm in the same position, I agree with it. I think it's the right framework. Data science is a team sport, but application development is an even larger team sport in which data scientists and data engineers play a role. So, yeah, we want to hear your ideas on the broader application development ecosystem, and where data scientists and data engineers and so forth fall into that broader spectrum, and then how IBM is supporting that entire new paradigm of application development with your solution portfolio, including, you know, PowerAI on Power.
>> So I think you used the words collaboration and team sport, and data science is a collaborative team sport. But you're 100% correct, there's also a piece, and I think it's missing to a great degree today and it's probably limiting the actual value of AI in the industry, and that's how the data scientists and the application developers interact with each other. Because if you think about it, one of the models I like to think about is a consumer-producer model: who consumes things and who produces things? And basically the data scientists are producing a specific thing, which is, you know, simply an AI model-
>> Machine models, deep learning models.
>> Machine learning and deep learning - and the application developers are consuming those things and then producing something else, which is the application logic that is driving your business processes, in this view. So they've got to work together. But there's a lot of confusion about who does what. You know, you see people who want data scientists to build application logic, and the number of data scientists who can do that - it exists, but it's not where the value is, the value they bring to the equation. And application developers developing AI models - you know, they exist, but it's not the most prevalent form.
>> But you know, it's kind of unbalanced, Tim, in the industry discussion of these role definitions. Quite often the traditional, you know, definition or sculpting of a data scientist is that they know statistical modeling, plus data management, plus coding, right? But you never hear the opposite, that coders somehow need to understand how to build statistical models and so forth. Do you think that the coders of the future will, at least on some level, need to be conversant with the practices of building, and tuning, or training the machine learning models, or no?
>> I think it'll absolutely happen. And I will actually take it a step further, because again, the data scientist skill is hard for a lot of people to find.
>> Yeah.
>> And as such is a very valuable skill. And what we're seeing - and actually one of the offerings that we're putting out is something called PowerAI Vision - it takes it up another level above the application developer, which is how do you actually really unlock the capabilities of AI for the business persona, the subject matter expert. So in the case of vision, how do you allow somebody to build a model without really knowing what a deep learning algorithm is, what kind of neural nets to use, how to do data preparation. So we built a tool set which is, you know, effectively an SME tool set, which allows you to automatically label - it actually allows you to tag and label images, and then as you're tagging and labeling images it learns from that and it actually helps automate the labeling of the images.
>> Is this distinct from Data Science Experience on the one hand, which is geared towards the data scientists - and I think Watson Analytics, among your tools, is geared towards the SME - is this a third tool, or an overlap?
>> Yeah, this is a third tool, which is really, again, one of the co-optimized capabilities that I talked about. It's a tool we built out that really is leveraging the combination of what we do in Power - the interconnect which we have with the GPUs, which is the NVLink interconnect, which gives us basically a 10X improvement in bandwidth between the CPU and GPU. That allows you to actually train your models much more quickly, so we're seeing about a 4X improvement over competitive technologies that are also using GPUs. And if we're looking at machine learning algorithms, we've recently come out with some technology we call Snap ML, which allows you to push machine learning-
>> Snap ML.
>> Yeah, it allows you to push machine learning algorithms down into the GPUs, and with this we're seeing about a 40 to 50X improvement over traditional processing. So it's coupling all these capabilities, but really allowing a business persona to do something specific, which is to build out AI models to do recognition on either images or videos.
>> Is there a pre-existing library of models in the solution that they can tap into?
>> Basically it allows, it has a-
>> Are they pre-trained?
>> No, they're not pre-trained models, that's one of the differences in it. It actually has a set of models that it picks for you.
>> Oh yes, okay.
>> So this is why it helps the business persona, because it's helping them with labeling the data, it's also helping select the best model, it's doing things under the covers to optimize things like hyperparameter tuning, but you know, the end user doesn't have to know about all these things, right? So you're trying to lift - and it comes back to your point on application developers - it allows you to lift the barrier for people to do these tasks.
>> Even for professional data scientists, there may be a vast library of models where they don't necessarily know what is the best fit for the particular task. Ideally the infrastructure should recommend and choose, under various circumstances, the models, the algorithms, the libraries, whatever, for you, for the task. Great.
>> One extra feature of PowerAI Enterprise is that it does include a way to do a quick visual inspection of a model's accuracy with a small data sample, before you invest in scaling over a cluster or a large data set. So you can get a visual indicator as to whether the model is moving towards accuracy or you need to go and test an alternate model.
>> So it's like a dashboard of, like, Gini coefficients and all that stuff, okay.
>> Exactly, it gives you a snapshot view. And the other thing I was going to mention - you guys talked about application development, data scientists, and of course a big message here at the conference is, you know, data science meets big data, and the work that Hortonworks is doing involving the notion of container support in YARN, GPU awareness in YARN, bringing Data Science Experience - which can include the PowerAI capability that Tim was talking about - as a workload tightly coupled with Hadoop. And this is where our Power servers are really built: not for just a monolithic building block that always has the same ratio of compute and storage, but fit-for-purpose servers that can address either GPU-optimized workloads, providing the bandwidth enhancements that Tim talked about with the GPU, but also big data servers that can now support two terabytes of memory, double the overall memory bandwidth on the box, 44 cores that can support up to 176 threads for parallelization of Spark workloads, SQL workloads, distributed data science workloads. So it's really about choosing the combination of servers that can meet this evolving workload need, 'cause Hadoop isn't now just MapReduce, it's a multitude of workloads that you need to be able to mix and match, and bring various capabilities to the table for compute. And that's where Power8, now Power9, has really been built for these kinds of combination workloads, where you can add acceleration where it makes sense, add big data - smaller core, smaller memory - where it makes sense, pick and choose.
>> So Steve, at this show, at DataWorks 2018 here in San Jose, the prime announcement, the partnership announced between IBM and Hortonworks, was IHAH, which I believe is IBM Hosted Analytics with Hortonworks. What I want to know is, that solution runs on top of HDP 3.0 and so forth - is there any tie-in from an offering management standpoint between that and PowerAI, so you can build models in the PowerAI environment and then deploy them out in conjunction with IHAH? Going forward, I just wanted to get a sense of whether those kinds of integrations-
>> Well, the same data science capability, Data Science Experience, whether you choose to run it in the public cloud or run it in a private cloud on prem, it's the same data science package. You know, PowerAI has a set of optimized deep learning libraries that can provide advantage on Power, which apply when you choose to run those deployments on our Power systems, all right? So we can provide additional value in terms of these optimized libraries and these memory bandwidth improvements. So really it depends upon the customer requirements and whether a Power foundation would make sense in some of those deployment models. I mean, for us, with Power9 we've recently announced a whole series of Linux Power9 servers. That's our latest family, including, as I mentioned, storage-dense servers - the one we're showcasing on the floor here today - along with GPU-rich servers. We're releasing fresh reference architectures. It's really to support combinations of clustered models that are, as I mentioned, fit for purpose for the workload, to bring data science and big data together in the right combination. And we're working towards cloud models as well that can support mixing Power in ICP with big data solutions.
>> And before we wrap - in the reference architecture you describe, I'm excited about the fact that you've commercialized distributed deep learning, for the growing number of instances where you're going to build containerized AI and distribute pieces of it across this multi-cloud; you need the underlying middleware fabric to allow all those pieces to play together into some larger applications. So I've been following DDL because your research lab has been posting information about that, you know, for quite a while. So I'm excited that you guys have finally commercialized it. I think you do a really good job of commercializing what comes out of the lab, like with Watson.
>> Great, well, a good note to end on. Thanks so much for joining us.
>> Oh, thank you. Thank you for the-
>> Thank you.
>> We will have more from theCUBE's live coverage of DataWorks coming up just after this. (bright electronic music)
SUMMARY :
in the heart of Silicon he is the VP of Cognitive little bit about what you do and you know I spent a lot of time And at the same time as you how the term data scientists on the broader application one of the models I like to think about and the application developers in the industry discussion because again the data scientist skill So in the case of vision, on the one hand, which is geared that really is leveraging the combination down into the GPU's, and this is, that's one of the differences in it. it allows you to lift the barrier for the particular task. So you can get a visual and all that stuff, okay. and the work that Hortonworks is doing in the PowerAI environment, in the right combination. So I'm excited that you guys Thanks so much for joining us. Thank you for the, of DataWorks coming up just after this.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
James Kobielus | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Bob | PERSON | 0.99+ |
Steve Roberts | PERSON | 0.99+ |
Tim Vincent | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
James | PERSON | 0.99+ |
Hortonworks | ORGANIZATION | 0.99+ |
Bob Picciano | PERSON | 0.99+ |
Steve | PERSON | 0.99+ |
San Jose | LOCATION | 0.99+ |
100% | QUANTITY | 0.99+ |
44 cores | QUANTITY | 0.99+ |
two guests | QUANTITY | 0.99+ |
Tim | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
10X | QUANTITY | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
San Jose, California | LOCATION | 0.99+ |
IBM Power Systems | ORGANIZATION | 0.99+ |
Cognitive Systems Software | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
three hours | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Cognitive Systems | ORGANIZATION | 0.99+ |
University of Waterloo | ORGANIZATION | 0.98+ |
third tool | QUANTITY | 0.98+ |
DataWorks Summit 2018 | EVENT | 0.97+ |
50X | QUANTITY | 0.96+ |
PowerAI | TITLE | 0.96+ |
DataWorks 2018 | EVENT | 0.93+ |
theCUBE | ORGANIZATION | 0.93+ |
two terrabytes | QUANTITY | 0.93+ |
up to 176 threads | QUANTITY | 0.92+ |
40 | QUANTITY | 0.91+ |
about | DATE | 0.91+ |
Power9 | COMMERCIAL_ITEM | 0.89+ |
a year and a half ago | DATE | 0.89+ |
IHAH | ORGANIZATION | 0.88+ |
4X | QUANTITY | 0.88+ |
IHAH | TITLE | 0.86+ |
DataWorks | TITLE | 0.85+ |
Watson | ORGANIZATION | 0.84+ |
Linux Power9 | TITLE | 0.83+ |
Snap ML | OTHER | 0.78+ |
Power8 | COMMERCIAL_ITEM | 0.77+ |
Spark | TITLE | 0.76+ |
first | QUANTITY | 0.73+ |
PowerAI | ORGANIZATION | 0.73+ |
One extra | QUANTITY | 0.71+ |
DataWorks | ORGANIZATION | 0.7+ |
day two | QUANTITY | 0.69+ |
HDP 3.0 | TITLE | 0.68+ |
Watson Analytics | ORGANIZATION | 0.65+ |
Power | ORGANIZATION | 0.58+ |
NVLink | OTHER | 0.57+ |
YARN | ORGANIZATION | 0.55+ |
Hadoop | TITLE | 0.55+ |
theCUBE | EVENT | 0.53+ |
Moore | ORGANIZATION | 0.45+ |
Analytics | ORGANIZATION | 0.43+ |
Power9 | ORGANIZATION | 0.41+ |
Host | TITLE | 0.36+ |
Sumit Gupta & Steven Eliuk, IBM | IBM CDO Summit Spring 2018
(music playing) >> Narrator: Live, from downtown San Francisco It's the Cube. Covering IBM Chief Data Officer Startegy Summit 2018. Brought to you by: IBM >> Welcome back to San Francisco everybody we're at the Parc 55 in Union Square. My name is Dave Vellante, and you're watching the Cube. The leader in live tech coverage and this is our exclusive coverage of IBM's Chief Data Officer Strategy Summit. They hold these both in San Francisco and in Boston. It's an intimate event, about 150 Chief Data Officers really absorbing what IBM has done internally and IBM transferring knowledge to its clients. Steven Eluk is here. He is one of those internal practitioners at IBM. He's the Vice President of Deep Learning and the Global Chief Data Office at IBM. We just heard from him and some of his strategies and used cases. He's joined by Sumit Gupta, a Cube alum. Who is the Vice President of Machine Learning and deep learning within IBM's cognitive systems group. Sumit. >> Thank you. >> Good to see you, welcome back Steven, lets get into it. So, I was um paying close attention when Bob Picciano took over the cognitive systems group. I said, "Hmm, that's interesting". Recently a software guy, of course I know he's got some hardware expertise. But bringing in someone who's deep into software and machine learning, and deep learning, and AI, and cognitive systems into a systems organization. So you guys specifically set out to develop solutions to solve problems like Steven's trying to solve. Right, explain that. >> Yeah, so I think ugh there's a revolution going on in the market the computing market where we have all these new machine learning, and deep learning technologies that are having meaningful impact or promise of having meaningful impact. But these new technologies, are actually significantly I would say complex and they require very complex and high performance computing systems. You know I think Bob and I think in particular IBM saw the opportunity and realized that we really need to architect a new class of infrastructure. Both software and hardware to address what data scientist like Steve are trying to do in the space, right? The open source software that's out there: Denzoflo, Cafe, Torch - These things are truly game changing. But they also require GPU accelerators. They also require multiple systems like... In fact interestingly enough you know some of the super computers that we've been building for the scientific computing world, those same technologies are now coming into the AI world and the enterprise. >> So, the infrastructure for AI, if I can use that term? It's got to be flexible, Steven we were sort of talking about that elastic versus I'm even extending it to plastic. As Sumit you just said, it's got to have that tooling, got to have that modern tooling, you've got to accommodate alternative processor capabilities um, and so, that forms what you've used Steven to sort of create new capabilities new business capabilities within IBM. I wanted to, we didn't touch upon this before, but we touched upon your data strategy before but tie it back to the line of business. You essentially are a presume a liaison between the line of business and the chief data office >> Steven: Yeah. >> Officer office. How did that all work out, and shake out? Did you defining the business outcomes, the requirements, how did you go about that? >> Well, actually, surprisingly, we have very little new use cases that we're generating internally from my organization. 
Because there are so many to pick from already throughout the organization, right? There are all these business units coming to us and saying, "Hey, now the data is in the data lake, and now we know there's more data, now we want to do this. How do we do it?" So that's where we come in, that's where we start touching and massaging and enabling them. And that's the main effort that we have. We do have some derivative works that have come out, that have become new offerings that you'll see here. But mostly we already have so many use cases from those business units that we're really trying to heighten and bring extra value to those domains first.
>> So it sounds like a lot of organizations were similar to IBM: you created the data lake — things like Hadoop made it lower cost to just put stuff in the data lake. But then it's like, "okay, now what?"
>> Steven: Yeah.
>> So is that right? You've got the data, this bog of data, and you're trying to make more sense out of it and get more value out of it?
>> Steven: Absolutely.
>> That's what they were pushing you to do?
>> Yeah, absolutely. And with that, with more data you need more computational power. Actually, Sumit and I go pretty far back, and I can tell you from my previous roles that I highlighted to him, many years ago, some of the deficiencies in the current architecture on x86 and so on, and I said, "If you hit these points, I will buy these products." And what they went back and did is address all of the issues that I had. Like, there are certain issues...
>> That's when you were — sorry to interrupt — that's when you were a customer, right?
>> Steven: That's when I was...
>> An external customer.
>> Outside. I'm still an internal customer, so I've always been a customer, I guess, in that role, right?
>> Yep, yep.
>> But I need to get data to the computational device as quickly as possible. And with certain older-generation technologies, like PCIe Gen3, and certain issues around x86, I couldn't get that data there for things like high-fidelity imaging for autonomous vehicles, high-fidelity image analysis. But with certain technologies in Power we have NVLink directly to the CPU, and we also have PCIe Gen4, right? So these are big enablers for me, so that I can really keep the utilization of those very expensive compute devices higher, because they're not starved for data.
>> And you've also put a lot of emphasis on IO, right? I mean that's...
>> Yeah, if I may break it down, there are actually, I would say, three different pieces to the puzzle here, right? The highest level, from Steven's team's perspective or any data scientist's perspective, is that they need to just do their data science and not worry about the infrastructure, right? They actually don't want to know that there's an infrastructure. They want to say "launch job," right? That's the level of simplicity we want, right? In the background, they want our schedulers, our software, our hardware to just seamlessly use either one system or scale to 100 systems, right? To use one GPU or to use 1,000 GPUs, right? So that's where our offerings come in. We went and built this offering called PowerAI, and PowerAI is essentially open source software — like TensorFlow, like Caffe, like Torch — but with performance and capabilities added to make it much easier to use. So for example, we have a terrific scheduling software that manages jobs, called Spectrum Conductor with Spark.
So as the name suggests, it uses Apache Spark. But again, the data scientist doesn't know that. They say "launch job," and the software actually goes and scales that job across tens of servers or hundreds of servers. The IT team can determine how many servers they're going to allocate for data scientists, and they can have all kinds of user management, data management, and model management software. We take the open source software and we package it. You know, surprisingly, most people don't realize this: open source software like TensorFlow has primarily been built on a different Linux distribution, and most of our enterprise clients, including Steven, are on Red Hat. So we engineered Red Hat to be able to manage TensorFlow. And I chose those words carefully — there was a little bit of engineering both on Red Hat and on TensorFlow to make that whole thing work together. It sounds trivial; it took several months, and it's a huge value proposition to the enterprise clients. And then the last piece, I think, that Steven was referencing is that we're also trying to make the AI more accessible for non data scientists, or I would say even data engineers. So we, for example, have a software called PowerAI Vision. This takes images and videos and automatically creates a trained deep learning model for them, right? We analyze the images — you of course have to tell us, in these images, say a hundred images, what the most important things are. For example, you've identified: here are people, here are cars, here are traffic signs. If you give us some of that labeled data, we automatically do the work that a data scientist would have done, and create this pre-trained AI model for you. This really enables rapid prototyping for a lot of clients who either struggle to have data scientists or don't want to have data scientists.
>> So just to summarize that, the three pieces: it's making it simpler for the data scientists, just run the job; the backend piece, which is the schedulers, the hardware, the software doing its thing; and then it's making that data science capability more accessible.
>> Right, right, right.
>> Those are the three layers.
>> So, you know, I'll say it in my own words maybe.
>> Yeah, please.
>> Ease of use, right; hardware and software optimized for performance and capability; and point-and-click AI — AI for non data scientists, right. Those are the three levels that I think of when I'm engaging with data scientists and clients.
>> And essentially it's embedded AI, right? I've been making the point today that a lot of the AI is going to be purchased from companies like IBM, and I'm just going to apply it. I'm not going to try to go build my own AI, right? I mean, is that...
>> No, absolutely.
>> Is that the right way to think about it as a practitioner?
>> I think we talked about it a little bit on the panel earlier, but if we can leverage these pre-built models and just apply a little bit of training data, it makes it so much easier for the organizations, and so much cheaper. They don't have to invest in a crazy amount of infrastructure and all the labeling of data; they don't have to do that. So I think it's definitely steering that way. It's going to take a little bit of time — we have some of them there — but as we iterate, we are going to get more and more of these types of commodity-type models that people can utilize.
>> I'll give you an example. We have a software called Intelligent Video Analytics at IBM.
It's very good at taking any surveillance data and, for example, recognizing anomalies, or whether people are in a zone they aren't supposed to be in. And we had a client who wanted to do worker safety compliance. They want to make sure workers are wearing their safety jackets and their helmets when they're on a construction site. So we used surveillance data and created a new AI model using PowerAI Vision, and we were then able to plug it into this IVA, the Intelligent Video Analytics software. So they have the nice GUI-based software for the dashboards and the alerts, yet we were able to do incremental training on their specific use case — with their specific equipment and jackets and so on — and create a new AI model very quickly, for them to be able to apply and make sure their workers are actually compliant with all of the safety requirements they have on the construction site.
>> Hmm, interesting. So sometimes it's like a new form of CAPTCHA that says identify "all the pictures with bridges" — that's the kind of thing you're capable of doing with these video analytics.
>> That's exactly right. Clients will have all kinds of uses. I was talking to a client who's a major car manufacturer, and he was saying it would be great if I could identify the make and model of the cars people are driving into my dealership, because I bet I can draw a correlation between what they drive in with and what they're going to drive out with, right? Marketing insights, right? So there are a lot of things that people want to do which would really be bespoke to their use cases, and build on top of existing AI models that we have already.
>> And you mentioned x86 before. And not to start a food fight, but...
>> Steven: And we use both internally too, right.
>> So let's talk about that a little bit. I mean, where do you use x86, and where do you use IBM Cognitive and Power Systems?
>> I have a mix of both.
>> Why? How do you decide?
>> There are certain workloads I will delegate over to Power, just because they're data starved and we're noticing the computation is being impacted by it. But because we deal with so many different organizations, certain organizations optimize for x86 and some of them optimize for Power, and I can't pick — I have to have everything. Just like I mentioned earlier, I also have to support cloud and on-prem; I can't pick just to be on-prem, right? So.
>> I imagine the big cloud providers are in the same boat, and I know some are your customers. You're betting on data, you're betting on digital, and it's a good bet.
>> Steven: Yeah, 100 percent.
>> We're betting on data and AI, right? So I think with data, you've got to do something with the data, right? And analytics and AI are what people are doing with that data. We have an advantage both at the hardware level and at the software level in these two, I would say, workloads or segments — data and AI. And we've fundamentally invested in the processor architecture to improve the performance and capabilities, right? You can run much larger AI models on a Power system than you can on an x86 system — that's one advantage. You can train an AI model four times faster on a Power system than you can on an Intel-based system. So the clients who have a lot of data, who care about how fast their training runs, are the ones who are committing to Power Systems today.
>> Mmm-hmm.
>> Latency requirements, things like that — a really, really big deal.
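For readers who want a sense of what the incremental-training workflow Sumit describes looks like under the hood, here is a minimal, hand-rolled sketch of the same idea: start from a vision model pre-trained on a large public dataset, freeze it, and train only a small classification head on a client's labeled frames. PowerAI Vision automates this behind a UI, so this is illustrative only, not IBM's implementation; the directory name, class count, and hyperparameters below are assumptions made up for the example.

```python
import tensorflow as tf

IMAGE_SIZE = (224, 224)

# Hypothetical folder of labeled frames, e.g. "compliant/" and "non_compliant/".
train_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled_frames", image_size=IMAGE_SIZE, batch_size=32)

# Reuse a network pre-trained on ImageNet and freeze it; only the small
# classification head below is trained on the client's labeled data.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

inputs = tf.keras.Input(shape=IMAGE_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

With a small labeled set — on the order of the hundred-image example mentioned above — a sketch like this can already produce a usable first model, which is exactly the point of the point-and-click tooling.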
>> So what that means for you as a practitioner is you can do more with less, or is it... I mean...
>> I can definitely do more with less, but the real value is that I'm able to get an outcome quicker. Everyone says, "Okay, you can just roll out more GPUs, more GPUs, and run more experiments, run more experiments." No, no — that's not actually it. I want to reduce the time for an experiment, get it done as quickly as possible, so I get that insight. Because then I can possibly cancel a bunch of those jobs that are already running, because I already have the insight and I know that that model is not doing anything. All right, so it's very important to get the time down. Jeff Dean said it a few years ago — he uses the same slide often — but when things are taking months, that's what happened basically from the '80s up until about 2010.
>> Right.
>> We didn't have the computation, we didn't have the data. Once we were able to get that experimentation time down, we were able to iterate very, very quickly on this.
>> And throwing GPUs at the problem doesn't solve it, because it's too much complexity, or?
>> It helps the problem, there's no question. But when my GPU utilization goes from 95% down to 60%, I'm getting only a two-thirds return on investment there. It's a really, really big deal, yeah.
>> Sumit: I mean, the key here, I think, Steven — and I'll draw it out again — is this time to insight. Because time to insight is actually time to dollars, right? People are using AI either to make more money, by providing better products to their customers and giving better recommendations, or they're saving on their operational costs and improving their efficiencies. Maybe they're routing their trucks the right way, they're routing their inventory to the right place, they're reducing the amount of inventory they need. So in all cases you can actually correlate AI to a revenue outcome or a dollar outcome. So the faster you can do that — you know, I tell most people I engage with that the hardware and software they get from us pays for itself very quickly, because they make that much more money, or save that much more money, using Power Systems.
>> We even see this internally. I've heard stories — Sumit kind of commented on this — but there are actually salespeople who take this software and hardware out and are able to get an outcome, sometimes in situations where they just take the client's data, and they're salespeople, not data scientists. They train it — it's that simple to use — then they present the client with the outcomes the next day, and the client is just blown away. This isn't a one-time occurrence; salespeople are actually using this, right? So it's getting to the point where it's so simple to use, and you're able to get those outcomes, that we're even seeing deals close quicker.
>> Yeah, that's powerful. And Sumit, to your point, the business case is actually really easy to make. You can say, "Okay, this initiative that you're driving — what's your forecast for how much revenue?" Now let's make an assumption for how much faster we're going to be able to deliver it. And if I can show them a one-day turnaround on a corpus of data versus, let's say, two months, whatever my time to value is, I can run the business case very easily and communicate it to the CFO, or whomever the line-of-business head is.
>> That's right.
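The utilization drop Steven describes — a GPU idling from 95% down to 60% while it waits on data — is typically attacked at two levels: the storage and interconnect discussed above, and an asynchronous input pipeline inside the framework. The sketch below is a generic illustration of the second using TensorFlow's tf.data API; it is not specific to IBM's stack, and the file pattern, image size, and batch size are made-up placeholders.

```python
import tensorflow as tf

TRAIN_FILES = "data/train-*.tfrecord"   # hypothetical shard pattern
IMAGE_SIZE = (224, 224)
BATCH_SIZE = 256

def parse_example(serialized):
    # Decode one TFRecord into an (image, label) pair.
    features = tf.io.parse_single_example(
        serialized,
        {"image": tf.io.FixedLenFeature([], tf.string),
         "label": tf.io.FixedLenFeature([], tf.int64)})
    image = tf.io.decode_jpeg(features["image"], channels=3)
    image = tf.image.resize(image, IMAGE_SIZE) / 255.0
    return image, features["label"]

def make_dataset():
    # Read many shards in parallel and overlap decoding with GPU compute,
    # so the accelerator is not left waiting on storage.
    files = tf.data.Dataset.list_files(TRAIN_FILES, shuffle=True)
    ds = files.interleave(tf.data.TFRecordDataset,
                          num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    ds = ds.shuffle(10_000).batch(BATCH_SIZE)
    return ds.prefetch(tf.data.AUTOTUNE)   # stage batches ahead of the GPU
```

A pipeline like this only helps when the underlying storage can sustain the parallel reads, which is why the conversation keeps coming back to throughput and interconnects rather than to the framework alone.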
I mean, I was just at a retailer, a local grocery store in the Bay Area, recently, and he was telling me how in California we've passed legislation that doesn't allow plastic bags anymore — you have to pay for them. So people are bringing their own bags, but that's actually increased theft for them, because people bring their own bag, put stuff in it, and walk out. And he wanted to have an analytic system that can detect if someone puts something in a bag and then does not buy it at purchase. So in many ways they want to use the existing camera systems they have, but automatically be able to detect fraudulent behavior or anomalies. And it's actually quite easy to do with a lot of the software we have around PowerAI Vision and around video analytics from IBM, right? And that's what we were talking about, right? Take existing trained AI models for vision and enhance them for your specific use case and the scenarios you're looking for.
>> Excellent. Guys, we've got to go. Thanks Steven, thanks Sumit, for coming back on — appreciate the insights.
>> Thank you.
>> Glad to be here.
>> You're welcome. All right, keep it right there, everybody — we'll be back with our next guest. You're watching theCUBE at IBM's CDO Strategy Summit from San Francisco. We'll be right back.
(music playing)
Bina Hallman & Steven Eliuk, IBM | IBM Think 2018
>> Announcer: Live from Las Vegas, it's theCUBE, covering IBM Think 2018. Brought to you by IBM.
>> Welcome back to IBM Think 2018. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante and I'm here with Peter Burris. Our wall-to-wall coverage — this is day two. Everything AI, blockchain, cognitive, quantum computing, smart ledger, storage, data. Bina Hallman is here; she's the Vice President of Offering Management for Storage and Software Defined. Welcome back to theCUBE, Bina.
>> Bina: Thanks for having me back.
>> Steven Eliuk is here. He's the Vice President of Deep Learning in the Global Chief Data Office at IBM.
>> Thank you, sir.
>> Dave: Welcome to theCUBE, Steve. Thanks, you guys, for coming on.
>> Pleasure to be here.
>> That was a great introduction, Dave.
>> Thank you, appreciate that. So this has been quite an event — consolidating all of your events, bringing your customers together. 30,000, 40,000, too many people to count.
>> Very large event, yes.
>> Standing room only at all the sessions. It's been unbelievable — your thoughts?
>> It's been fantastic. Lots of participation, lots of sessions. We brought, as you said, all of our conferences together, and it's a great event.
>> So, Steve, tell us more about your role. We were talking off camera — we've had Inderpal Bhandari on before, the Chief Data Officer at IBM. You're in that office, but you've got other roles around deep learning, so explain that.
>> Absolutely.
>> Sort of a multi-tool star here.
>> For sure. So, roles and responsibilities at IBM in the Chief Data Office — kind of two pillars. We focus, in the deep learning group, on foundational platform components: how to accelerate the infrastructure and platform behind the scenes, to accelerate the ideation-to-product phase. We want data scientists to be very effective, and for us to execute our projects very, very quickly. That said, I mentioned projects — on the applied side, we have a number of internal use cases across IBM. And it's not just a handful; it's on the order of hundreds, and those applied use cases are part of the cognitive plan, per se. Each one of them is part of the transformation of IBM into a cognitive enterprise.
>> Okay. Now, we were talking to Ed Walsh this morning, Bina, about how you collaborate with colleagues in the storage business. We know you guys have been growing.
>> Bina: That's right.
>> It's the fourth straight quarter, and that doesn't even count some of the stuff that you guys ship on the cloud in storage.
>> That's right, that's right.
>> Dave: So talk about the collaboration across the company.
>> Yeah, we've had some tremendous collaboration across the broader IBM, bringing all of that together. And that's one of the things we're talking about here today with Steve and team: as they built out their cognitive architecture, they were able to leverage some of our capabilities and the strengths we bring to the table as part of that overall architecture. And it's been a great story, yeah.
>> So what would you add to that, Steve?
>> Yeah, it's absolutely refreshing. I've built supercomputers in the past, specifically for deep learning, and coming on board at IBM about a year ago, I saw the elastic storage solution — or server.
>> Bina: Yeah, Elastic Storage Server, yep.
>> It handles a number of different aspects of my pipeline very uniquely. So for starters, I don't want to worry about rolling out new infrastructure all the time.
I want to be able to grow my team, to grow my projects, and what's nice about ESS is that it's extensible: I'm able to roll out more projects, more people, multi-tenancy, et cetera, and it supports us effectively. It also has very unique attributes, like the read-only performance and random access of data, that are very unique to the offering.
>> Okay, so if you're a customer of Bina's, right?
>> I am, 100%.
>> What do you need from infrastructure for deep learning and AI? What is it? You mentioned some attributes before, but take it down a little bit.
>> Well, the reality is there are many different aspects, and if anything breaks down, then the data science experience breaks down. So we want to make sure that everything from the interconnects to the pipelines is effective. You heard Jensen earlier today from Nvidia — we've got to make sure we have compute devices that are effective for the computation we're rolling out on them. But that said, if those GPUs are starved of data — if we don't have the data available, which we're drawing from ESS — then we're not making effective use of those GPUs. It means we have to roll out more of them, et cetera, et cetera. And more importantly, the time for experimentation is elongated, so the whole ideation-to-product timeline that I talked about is elongated. So we've got to make sure that the storage doesn't break down, and that's why this is awesome for us.
>> So let me, especially from a deep learning standpoint, throw in a little bit of history, and tell me what you think. Years ago, the data was put as close to the application as possible. About 10, 15 years ago, we started separating the data from the application, the storage from the application, and now we're moving the algorithm back down as close to the data as possible.
>> Steve: Yeah.
>> At what point do we stop calling this storage and start acknowledging that we're talking about a fabric that's actually quite different, because we put a lot more processing power as close to the data as possible? We're not just storing; we're really doing truly, deeply distributed computing. What do you think?
>> There are a number of different areas where that's coming from — everything from switches, to storage, to memory that's doing computing very close to where the data actually resides. Still, you can look all the way back to the Google File System: move computation to where the data is, as close as possible, so you don't have to transfer that data. I think that as time goes on we're going to get closer and closer to that, but we're still limited by the capacity of very fast storage. NVMe is a very interesting technology, but still limited. You know, how much memory do we have on the GPUs? 16 gigs; 24 is interesting; 48 is interesting; but the models that I want to train are in the hundreds of gigabytes.
>> Peter: But you can still parallelize that.
>> You can parallelize it, but there isn't really anything that's true model parallelism out there right now. There are some hacks and things that people are doing. I think we're getting there — it's still some time away — but moving it closer and closer means we don't have to spend the power, the latency, et cetera, to move the data.
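To make Steven's point about "hacks" concrete: the manual approach people use today is to pin different parts of a network to different GPUs and move the activations between them by hand. The snippet below is a toy, purely illustrative sketch of that idea in PyTorch — not how IBM's systems handle it — and it assumes a machine with two visible GPUs; the layer sizes and data are invented for the example.

```python
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Toy manual model parallelism: the first half of the network lives on
    cuda:0, the second half on cuda:1, and activations hop between them."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(4096, 8192), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Linear(8192, 10).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        return self.part2(x.to("cuda:1"))   # move activations to the second device

model = TwoGPUModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch, just to show the mechanics of one forward/backward pass.
x = torch.randn(32, 4096)
y = torch.randint(0, 10, (32,), device="cuda:1")

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The obvious cost is that each device sits idle while the other one works, which is why this kind of splitting stays a stopgap rather than "true" model parallelism.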
>> So does that mean that the rate of increase of data, and the size of the objects we're going to be looking at, is still going to exceed the rate of our ability to bring algorithms and storage — or algorithms and data — together? What do you think?
>> I think it's getting closer, but I can always just look at a bigger problem. I'm dealing with 30 terabytes of data for one of the problems I'm solving. I would like to be using 60 terabytes of data, if I could do it in the same amount of time and I wasn't having to transfer it. With that said, if you gave me 60, I'd say, "I really wanted 120." So it doesn't stop.
>> David: (laughing) You're one of those kind of guys.
>> I'm definitely one of those guys. I'm curious — what would it look like? Because what I see right now is that it would be advantageous, and I would like to do it, but I ran 40,000 experiments with 30 terabytes of data. It would be four times the amount of transfer if I had to run that many experiments with 120.
>> Bina, what do you think? What is the fundamental — especially from a software-defined side — what does the fundamental value proposition of storage become as we start pushing more of the intelligence close to the data?
>> Yeah, but you know, the storage layer fundamentally is software defined. You still need that setup, the protocols, and the file system — the NFS, right? So some of that still remains relevant, even as you separate some of the physical storage or flash from the actual compute. I think there's still a relevance when you talk about software-defined storage there, yeah.
>> So you don't expect that there's going to be any particular architectural change? I mean, NVMe is going to have a real impact.
>> NVMe will have a real impact, and there will be this notion of composable systems, and we will see some level of advancement there, of course — and that's around the corner, actually, right? So I do see it progressing from that perspective.
>> So what's underneath it all? What actually — what products?
>> Yeah, let me share a little bit about the product. What Steve and team are using is our Elastic Storage Server. I talked about software-defined storage: as you know, we have a very complete set of software-defined storage offerings, and within that, our strategy has always been to let clients consume the capabilities the way they want — as software only on their own hardware, or as a service, or as an integrated solution. And what Steve and team are using is an integrated solution with our Spectrum Scale software, along with our flash and our POWER9 Power Systems servers. On the software side, Spectrum Scale is a very rich offering that we've had in our portfolio: a highly scalable file system, and one of the solutions that powers a lot of our supercomputers — a project we are still in the process of delivering on around CORAL, for the national labs. So it's the same file system, combined with a set of servers and flash systems, right? Highly scalable, erasure coding, high availability, as well as throughput — 40 gigabytes per second. So that's the solution; that's the storage and system underneath what Steve and team are leveraging.
>> Steve, you talked about wanting more — what else is on Bina's to-do list from your standpoint?
>> Dave: Yeah, what do you want from the products?
>> Well, I think long stretch goals are multi-tenancy and the wide array of dimensions that, especially in the chief data office, that we're dealing with. We have so many different business units, so many different of those enterprise problems in the orders of hundreds how do you effectively use that storage medium driving so many different users? I think it's still hard, I think we're doing it a hell of a lot better than we ever have, but it's still, it's an open research area. How do you do that? And especially, there's unique attributes towards deep learning, like, most of the data is read only to a certain degree. When data changes there's some consistency checks that could be done, but really, for my experiment that's running right now, it doesn't really matter that it's changed. So there's a lot of nuances specific to deep learning that I would like exploited if I could, and that's some of the interactions that we're working on to kind of alleviate those pains. >> I was at a CDO conference in Boston last October, and Indra Pal was there and he presented this enterprise data architecture, and there were probably about three or four hundred CDOs, chief data officers, in the room, to sort of explain that. Can you, sort of summarize what that is, and how it relates to sort of what you do on a day to day basis, and how customers are using it? >> Yeah, for sure, so the architecture is kind of like the backbone and rules that kind of govern how we work with the data, right? So, the realities are, there's no sort of blueprint out there. What works at Google, or works at Microsoft, what works at Amazon, that's very unique to what they're doing. Now, IBM has a very unique offering as well. We have so many, we're a composition of many, many different businesses put together. And now, with the Chief Data Office that's come to light across many organizations like you said, at the conference, three to 400 people, the requirements are different across the orders. So, bringing the data together is kind of one of the big attributes of it, decreasing the number of silos, making a monolithic kind of reliable, accessible entity that various business units can trust, and that it's governed behind the scenes to make sure that it's adhering to everyone's policies, that their own specific business unit has deemed to be their policy. We have to adhere to that, or the data won't come. And the beauty of the data is, we've moved into this cognitive era, data is valuable but only if we can link it. If the data is there, but there's no linkages there, what do I do with it? I can't really draw new insights. I can't draw, all those hundreds of enterprise use cases, I can't build new value in them, because I don't have any more data. It's all about linking the data, and then looking for alternative data sources, or additional data sources, and bringing that data together, and then looking at the new insights that come from it. So, in a nutshell, we're doing that internally at IBM to help our transformation. But at the same time creating a blueprint that we're making accessible to CDOs around the world, and our enterprise customers around the world, so they can follow us on this new adventure. New adventure being, you know, two years old, but. >> Yeah, sure, but it seems like, if you're going to apply AI, you've got to have your data house in order to do that. So this sounds like a logical first step, is that right? >> Absolutely, 100%. 
And, the realities are, there's a lot of people that are kicking the tires and trying to figure out the right way to do that, and it's a big investment. Drawing out large sums of money to kind of build this hypothetical better area for data, you need to have a reference design, and once you have that you can actually approach the C-level suite and say, "Hey, this is what we've seen, this is the potential, "and we have an architecture now, "and they've already gone down all the hard paths, "so now we don't have to go down as many hard paths." So, it's incredibly empowering for them to have that reference design and learning from our mistakes. >> Already proven internally now, bringing it to our enterprise alliance. >> Well, and so we heard Jenny this morning talk about incumbent disruptors, so I'm kind of curious as to what, any learnings you have there? It's early days, I realize that, but when you think about, the discussions, are banks going to lose control of the payment systems? Are retail stores going to go away? Is owning and driving your own vehicle going to be the exception, not the norm? Et cetera, et cetera, et cetera, you know, big questions, how far can we take machine intelligence? Have you seen your clients begin to apply this in their businesses, incumbents, we saw three examples today, good examples, I thought. I don't think it's widespread yet, but what are you guys seeing? What are you learning, and how are you applying that to clients? >> Yeah, so, I mean certainly for us, from these new AI workloads, we have a number of clients and a number of different types of solutions. Whether it's in genomics, or it's AI deep learning in analyzing financial data, you know, a variety of different types of use cases where we do see clients leveraging the capabilities, like spectrum scale, ESS, and other flash system solutions, to address some of those problems. We're seeing it now. Autonomous driving as well, right, to analyze data. >> How about a little road map, to end this segment? Where do you want to take this initiative? What should we be looking for as observers from the outside looking in? >> Well, I think drawing from the endeavors that we have within the CDO, what we want to do is take some of those ideas and look at some of the derivative products that we can take out of there, and how do we kind of move those in to products? Because we want to make it as simple as possible for the enterprise customer. Because although, you see these big scale companies, and all the wonderful things that they're doing, what we've had the feedback from, which is similar to our own experiences, is that those use cases aren't directly applicable for most of the enterprise customers. Some of them are, right, some of the stuff in vision and brand targeting and speech recognition and all that type of stuff are, but at the same time the majority and the 90% area are not. So we have to be able to bring down sorry, just the echoes, very distracting. >> It gets loud here sometimes, big party going on. >> Exactly, so, we have to be able to bring that technology to them in a simpler form so they can make it more accessible to their internal data scientists, and get better outcomes for themselves. And we find that they're on a wide spectrum. Some of them are quite advanced. It doesn't mean just because you have a big name you're quite advanced, some of the smaller players have a smaller name, but quite advanced, right? 
So, there's a wide array, so we want to make that accessible to these various enterprises. So I think that's what you can expect, you know, the reference architecture for the cognitive enterprise data architecture, and you can expect to see some of the products from those internal use cases come out to some of our offerings, like, maybe IGC or information analyzer, things like that, or maybe the Watson studio, things like that. You'll see it trickle out there. >> Okay, alright Bina, we'll give you the final word. You guys, business is good, four straight quarters of growth, you've got some tailwinds, currency is actually a tailwind for a change. Customers seem to be happy here, final word. >> Yeah, no, we've got great momentum, and I think 2018 we've got a great set of roadmap items, and new capabilities coming out, so, we feel like we've got a real strong set of future for our IBM storage here. >> Great, well, Bina, Steve, thanks for coming on theCUBE. We appreciate your time. >> Thank you. >> Nice meeting you. >> Alright, keep it right there everybody. We'll be back with our next guest right after this. This is day two, IBM Think 2018. You're watching theCUBE. (techno jingle)
Steven Kenniston, The Storage Alchemist & Eric Herzog, IBM | VMworld 2017
>> Announcer: Live from Las Vegas it's theCUBE covering VM World 2017, brought to you by VMWare and its ecosystem partners. (upbeat techno music) >> Hey, welcome back to day two of VM World 2017 theCUBE's continuing coverage, I am Lisa Martin with my co-host Dave Vellante and we have a kind of a cute mafia going on here. We have Eric Herzog the CMO of IBM Storage back with us, and we also have Steve Kenniston, another CUBE alumni, Global Spectrum Software Distance Development Executive at IBM, welcome guys! >> Thank you. >> Thank you, great to be here. >> So lots of stuff going on, IBM Storage business health, first question, Steve, to you, what's going on there, tell us about that. >> Steve: What's going on in IBM Storage? >> Yes. >> All kinds of great things. I mean, first of all, I think we were walking the show floor just talking about how VMWare, VM World use to be a storage show and then it wasn't for a long time. Now you're walking around there, you see all kinds of storage. Now IBM, really stepping up its game. We've got two booths. We're talking all about not just, you know, the technologies, cognitive, IOT, that sort of thing but also where do those bits and bytes live? That's your, that's your assets. You got to store that information someplace and then you got to protect that information. We got all we're showcasing all kinds of solutions on the show floor including Versus Stack and that sort of thing where you you know make your copies of your data, store your data, reissue your data, protect your data, it's a great show. >> Lisa: Go ahead. >> Please. So I ever want to get into it, right, I mean, we've watched, this is our eighth year doing theCUBE at VM World, and doing theCUBE in general, but to see the evolution of this ecosystem in this community, you're right, it was storage world, and part of the reason was, and you know this well Eric, it was such a problem, you know. And all the APIs that VMWare released to really solve that storage problem, Flash obviously has changed the game a little bit but I want to talk about data protection if you're my backup specifically. Steve you and I have talked over the years about the ascendancy of VMWare coincided with a reduction in the physical capacity that was allocated to things like the applications like Backup. That was a real problem so the industry had their re-architect its backup and then companies like VM exploded on the scene, simplicity was a theme, and now we're seeing a sort of a similar scene change around cloud. So what's your perspective on that sort of journey in the evolution of data protection and where we are today, especially in the context of cloud? >> Yeah, I think there's been a couple, a couple big trends. I think you talked about it correctly, Dave, from the standpoint of when you think about your data protection capacity being four X at a minimum greater than your primary storage capacity, the next thing you start understanding is now with a growth in data, I need to be able to leverage and use that data. The number one thing, the number one driver to putting data in the cloud is data protection, right? And then it's now how can I reuse that data that's in the cloud and you look at things like AWS and that sort of thing the ability to spin up applications, and now what I need to do is I need to connect to with the data to be able to run those applications and if I'm going to do a test development environment if I'm going to run an analytics report or I'm going to do something, I want to connect to my data. 
So we have solutions that help you promote that data into the cloud, leverage that data, take advantage of that data and it it's just continually growing and continually shifting. >> So you guys are really leaning in to to VM World this year, got a big presence. What's going on there? You know, one would think, okay you know VMWare it's you know clearly grabbing a big piece of the market. You got them doing more storage. What's going on Eric, is it just, "Hey, we're a good partner." "Hey we're not going to let them you know elbow us out." "We're going to be competitive with the evil machine company." What's the dynamic in the VMWare ecosystem with you guys? >> Well I think the big thing for us is IBM has had a powerful partnership with VMWare since day one. Way back when IBM use to have an Intel Server Division everything was worked with VMWare, been a VMR partner, years and years ago on the server world as that division transferred away to Lenovo the storage division became front and center so all kinds of integration with our all Flash arrays our Versus Stack which you do jointly with Cisco and VMWare providing a conversion for structured solution, the products that Steve's team just brought out Spectrum Protect Plus installs in 30 minutes, recovers instantly off of a VM, can handle multiple VMs, can recover VMs or files, can be used to back up hundreds and thousands of virtual machines if that's what you've got in your infrastructure. So, the world has gone virtual and cloud. IBM is there with virtual and cloud. You need to move data out to IBM cloud, or exams on our azure Spectrum Protect Plus, Spectrum Scale, Spectrum Virtualize all members of our software family, and the arrays that they ship on all can transparently move data to a cloud; move it back and forth at the blink of an eye. With VMWare you need that same sort of level of integration we've had it on the array side we've now brought that out with Spectrum Protect Plus to make sure that backups are, in fact Spectrum Protect Plus is so easily, even me with my masters degree in Chinese history can back up and protect my data in a VMWare environment. So it's designed to be used by the VMWare admin or the app owner, not by the backup guy or the storage admin. Not that they won't love it too, but it's designed for the guys who don't know much about storage. >> I'll tell you Dave I saw it, I watched him get a demo and then I watched him turn around and present it, it was impressive. (laughing) >> I want to ask you a quick question. Long time partners IBM and VMWare as you as you've just said. You were an EMC guy that's where I first met you, from a marketing and a positioning perspective what have you guys done in the last year since the combination has completed to continue to differentiate the IBM VMWare strengths as now VMWare's part of Dell EMC. >> So I think the key thing is VMWare always has been the switch under the storage business. When I was at EMC, we owned 81 percent of the company and you walked into Palo Alto and Pat Gelsinger who I used to work for at EMC is now the CEO of VMWare you walk into the data center and there's IBM arrays, EMC arrays, HP arrays, Dell arrays and then app arrays. And a bunch of all small guys so the good thing is they've always been the switch under the storage industry, IBM because of it's old history and the server industry has always had tight integration with them, and we've just made sure we've done. I think they key difference we've done is it's all about the data. 
CEO, CIO they hate talking about storage. It's all about the data and that's what we're doing. Spectrum Protect Plus is all about keeping the data safe protected, and as Steve talked about using it in the cloud using real data sets for test and dev for dev opps, that's unique. Not everyone's doing that we're one of the few guys that do that. It's all about the data and you sell the storage as a foundation of that data. >> Well I mean IBM's always been good about not selling speeds and feeds, but selling at the boardroom level, the C level I mean you're IBM. That's your brand. Having said that, there's a lot of knife fights going on tactically in the business and you guys are knife fighters I know you both you're both startup guys, you're not afraid to get you know down and dirty. So Steve, how do you address the skepticism that somebody might have and say, "Alright, you know I hear you, this all sounds great but, you know I need simplicity." You guys you talk simplicity your Chinese History background, but I'm still skeptical. What can you tell me, proof points share with us to convince us that you really are from a from a simplicity standpoint competitive with the pack? >> I think, I think you seen a pretty big transformation over the last 18 months with what the some of the stuff that we've done with the software portfolio. So, a lot of folks can talk a good game about a software defined strategy. The fact that we put the entire Spectrum suite now under one portfolio now things are starting to really gel and come together. We done things like interesting skunkworks project with Spectrum Protect Plus and now we even had business partners in our booth who are backup architects talking about the solution who sell everybody else's solution on the floor saying, "This is, the I can't believe it, "I can't believe this is IBM. "They're putting together solutions that "are just unbelievably easy to use." They need that and I think you're exactly right, Dave. It used to be where you have a lot of technical technicians in the field and people wanted to architect things and put things together. Those days are gone, right? Now what you're finding is the younger generation coming in they're iPhone type people they want click simplicity just want to use it that sort of thing. We've started to recognize that and we've had to build that into our product. We were, we are a humbler IBM now. We are listening to our business partners. We are asking them what do we need to be doing to help you be successful in the field, not just from a product set, but also a selling you know, a selling motion. The Spectrum suite all the products under one thing now working and they're operating together, the ability to buy them more easily, the ability to leverage them use them, put it in a sandbox, test it out, not get charged for it, okay I like it, now I want to deploy it. We've really made it a lot easier to consume technology in a much easier way, right, software defined, and we're making the products easier to use. >> How've you been able to achieve that transformation is it cultural, is it somebody came down and said you thou shalt simplify and I mean you've been there a couple years now. >> Yeah so I think, I think the real thing is IBM has brought into the division a bunch of people from outside the division. So Ed Walsh our general manager who's going to be on shortly five startups. Steve, five startups. Me, seven startups. Our new VP of Offering Manager in the Solutions said not only Net Up, four startups. 
Our new VP of North American sales, HDS, three startups. So we've brought in a bunch of guys who A, use to work at the big competition, EMC, Net Up, Hitachi, et cetera. And we've also brought in a bunch of people who are startup guys who are used to turning on dime, it's all about ease of use, it's all about simplicity, it's all about automation. So between the infusion of this intellectual capital from a number of us who've been outside the company, particularly in the startup world, and the incredible technical depth of IBM storage teams and our test teams and all the other teams that we leverage, we've just sort of pointed them in the direction like it needs to be installed in 30 minutes. Well guess what, they knew how to do that 30 years ago. They just never did because they were, you know stuck in the IBM silo if you will, and now the big model we have at IBM is outside in, not inside out, outside in. And the engineering teams have responded to that and made things that are easy to use, incredibly automated, work with everyone's gear not just ours. There're other guys that sell storage software. But other than in the protection space all the other guys, it only works with EMC or it only works with Net Up or it only works HP. Our software works with everyone's stuff including every one of our major competitors, and we're fine with that. So that's come from this infusion and combination of the incredible technical depth and DNA of IBM with a bunch of group of people about 10 of us who've all for come from either A, from the big competitors, but also from a bunch of startups. And we've just merged that over the last two years into something that's fortunate and incredibly powerful. We are now the number one storage software company in the world, and in overall storage, both systems and software, we're number two. >> Dave: So where's that data, is that IDC data? >> Yeah, that's the IDC data. >> And what are they what are they, when they count that what are they counting? Are they sort of eliminating any hardware you know, associated with that or? >> Well no, storage systems would be external systems our all Flash arrays, our Ver, that's all the systems side. Software's purely software only, so. >> Dave: No appliances >> Yeah, yeah, yeah. >> Dave: The value of those licenses associated with that >> Well as our CFO pointed out, so if you take a look at our track record of last at the beginning of this year, we grew seven percent in Q1, one of the only storage companies to grow certainly of the majors, we grew eight percent in Q2, again one of the only storage companies to grow of the major players, and as our CFO pointed out in his call in Q2, over 40 percent of the division's revenue is storage software, not full system, just stand alone software particularly with the strength of the suite and all the things we're doing you know, to make it easy to use install in 30 minutes and have a mastery in Chinese History be able to protect his data and never lose it. And that's what we want to be able to do. >> Okay so that's a licensed model, and is it a, is it a, is it a ratable model is it a sort of a perpetual model, what is it? >> We've got both depending on the solution. We have cloud engagement models, we can consume it in the cloud. We got some guys are traditionalists, gimme an ELA, you know Enterprise License Agreement. So we, we're the pasta guys. We have the best pasta in the world. Do you want red sauce, white sauce, or pesto? 
Dave I said that because you're part Italian, I'm half Italian on my mother's side. >> I like Italian. >> So we have the best pasta, whatever the right sauce is for you we deliver the best pasta on the planet, in our case the best storage software on the planet. >> You heard, you heard Michael up there on stage today, "Don't worry about it." He was invoking his best Italian, I have an affinity for that. So, so Steve, this is your second stint at IBM, Ed's second stint, I'm very intrigued that Doug Balog is now moved over this is a little inside baseball here, but running sales again. >> Steve: Right. >> So that's unique, actually, to see have a guy who use to run storage, leave, go be the general manager of the Power Systems division, in OpenPOWER, then come back, to drive storage sales. So, you're seeing, it's like a little gravity action. Guys sort of coming back in, what's going on from your perspective? >> Well I I think, I think Eric said it best. We've done an outside in. We've been bringing a lot of people in and I think that the development team and I wanted to to to bounce off of what Eric had said was they always knew how to do it, it's just they they needed to see and understand the motivation behind why they wanted to do it or why they needed to do it. And now they're seeing these people come in and talk about in a very, you know caring way that this is how the world is changing, and they believe it, and they know how to do it and they're getting excited. So now there's a lot more what what people might think, "Oh I'm just going to go develop my code and go home," and whatever they're not; they're excited. They want to build new products. They want to make these things interoperate together. They're, they're passionate about hearing from the customer, they're passionate about tell me what I can do to make it better. And all of those things when one group hears something that's going on to make something better they want to do the same thing, right. So it's it's really, it's breathing good energy into the storage division, I think. >> So question for you guys on that front you talked about Eric, we don't leave it at storage anymore, right? C levels don't care about that. But you've just talked about two very strong quarters in storage revenue perspective. What's driving that what's or what's dragging that? Is it data protection? What are some of the other business level drivers that are bringing that storage sale along? >> So for us it's been a couple things. So when you look at just the pure product perspective the growth has been around our all Flash arrays. We have a broad portfolio; we have very cost effective stuff we have stuff for the mainframe we have super high performance stuff we have stuff for big data analytic workloads. So again there isn't one Flash, you know there's a couple startups that started with one Flash and that's all they had. We think it's the right Flash tool for the right job. It's all about data, applications, workloads and use case. Big data analytics is not the same as your Oracle database to do your ERP system or your logistics system if you're someone like a Walmart. You need a different type of Flash for that. We tune everything to that. So Flash has been a growth engine for us, the other's been software defined storage. The fact that we suited it up, we have the broadest software portfolio in the industry. We have Block, we have File, we have Object, we have Backup, we have Archive. We've got Management Plane. 
We've got that, and by packaging it into a suite, I hate to say we stole it from the old Microsoft Office, but we did. And for the end-user base it's up to a 40 percent discount. I'm old enough to remember the days of the computer store; I think Dave might've gone to a computer store once or twice too. And there it was: Microsoft Office at eye level for $999. Excel, PowerPoint and Word above it at $499 each. Which would you buy? So we've got the Spectrum suite at up to a 40 percent savings, and we let users use all of the software for free in their dev environments at no charge, and it's not a timeout version, it's not a lite version, it is the full version of the software. So you get to try the full thing out for free, and then at the suite level you save up to 40 percent. What's not to like? >> And I just wanted to complement that, and also answer the question. One of the things we've done, so we've talked about development really growing, getting excited, wanting to build things. The other thing that's also happening, at the field level, is that we've stopped talking speeds and feeds directly, right, so it has become this higher-level conversation. And now IBMers who go and sell things like cognitive and IoT and that sort of thing, they're wanting to bring us in, because we're not talking about the feeds and speeds and getting in the way of how they like to sell. We're talking about, Ginni will come out and say data is your most valuable asset in your company. And we say, okay, I've got to store those bits and bytes someplace, right? We provide that mechanism. We provide it in a multitude of different ways. And we want to complement what they're doing. So now when I put presentations together to help the sales field, I talk about storage in a way that is more, how does it help cognitive? How does it help IoT? How does it help test and dev? And by the way, there's a suite: it's storing it, it's using it, it's protecting it. It's all of those things, and now it's complementing their selling motion. >> Well, the passion and the energy coming from both of you is very palpable. So thank you for sticking around, Eric, and Steve for coming back to theCUBE and sharing all the exciting things that are going on at IBM. That energy is definitely electric. So we wish you guys the best of luck in the next day or so of the show, and again, thank you for spending some time with us this afternoon. >> Thanks for having us. >> Thanks for having us. >> Absolutely, and for my co-host Dave Vellante, I'm Lisa Martin, you're watching theCUBE's live continuing coverage of VMworld 2017, day two. Stick around, we'll be right back. (techno music)
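To make the suite-versus-individual pricing comparison above concrete, here is a small arithmetic sketch. It assumes the $499 figure Eric quotes is per application, the way the anecdote is framed; the Spectrum suite's actual list prices are not given in this conversation, so the numbers below are purely illustrative.

```python
# Illustrative arithmetic for the suite-vs-individual pricing anecdote.
# Assumption: $499 is the per-application price; real list prices are not given here.

def suite_savings(per_item_price, item_count, suite_price):
    """Return the percent saved by buying the suite instead of the items separately."""
    a_la_carte = per_item_price * item_count
    return 100.0 * (a_la_carte - suite_price) / a_la_carte

# The Microsoft Office example from the conversation: Excel, PowerPoint, Word.
print(round(suite_savings(499, 3, 999)))  # roughly 33 percent saved versus buying all three
```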
Raja Mukhopadhyay & Stefanie Chiras - Nutanix .NEXTconf 2017 - #NEXTconf - #theCUBE
[Voiceover] - Live from Washington D.C., it's theCUBE, covering the .NEXT Conference. Brought to you by Nutanix. >> Welcome back to the district everybody. This is Nutanix NEXTconf, hashtag NEXTconf. And this is theCUBE, the leader in live tech coverage. Stefanie Chiras is here. She's the Vice President of IBM Power Systems Offering Management, and she's joined by Raja Mukhopadhyay, who is the VP of Product Management at Nutanix. Great to see you guys again. Thanks for coming on. >> Yeah, thank you. Thanks for having us. >> So Stefanie, I'm excited about you guys getting into this whole hyperconverged space, but I'm also excited about the Cognitive Systems group. It's kind of a new play on Power. Give us the update on what's going on with you guys. >> Yeah, so we've been through some interesting changes here. IBM Power Systems, while we still maintain that branding around our architecture, from a division standpoint we're now IBM Cognitive Systems. We've been through a change in leadership. We now have Senior Vice President Bob Picciano leading IBM Cognitive Systems, which is foundationally built upon the technology that comes from Power Systems. So our portfolio remains IBM Power Systems, but really what it means is we've set our sights on how to take our technology into those cognitive workloads. It's a focus on clients going to the cognitive era and driving their business into the cognitive era. It's changed everything we do, from how we deliver to how we pull together our offerings. We have offerings like PowerAI, which is an offering built upon a differentiated accelerated product with Power technology inside. It has NVIDIA GPUs, it has NVLink capability, and we have all the optimized frameworks. So you have Caffe, Torch, TensorFlow, Chainer, Theano. All of those are optimized for the server, downloadable right in a binary. So it's really about how we bring ease of use to cognitive workloads and allow clients to work in machine learning and deep learning. >> So Raja, again, part of the reason I'm so excited is IBM has a $15 billion analytics business. You guys talked to the analysts this morning about how one of the next waves of workloads is this sort of data-oriented, AI, machine learning workload. IBM obviously has a lot of experience in that space. How did this relationship come together, and let's talk about what it brings to customers. >> It was all customer driven, right? All our customers told us, look, Nutanix, we have used your software to bring really unprecedented levels of agility and simplicity to our data center infrastructure. But, you know, they run certain sets of workloads on, sort of, non-IBM platforms. But a lot of mission-critical applications, a lot of the, you know, cognitive applications, they want to leverage IBM for that, and they said, look, can we get the same Nutanix one-click simplicity all across my data center? And that is the promise that we see: can we bring all of the AHV goodness that abstracts the underlying platform, no matter whether you're running on x86, or your cognitive applications, or your mission-critical applications on IBM Power? You know, it's a fantastic thing for a joint customer. >> So Stefanie, come on, couldn't you reach somewhere into the IBM portfolio and pull out a hyperconverged, you know, solution? Why Nutanix? >> Clients love it. Look at what the hyperconverged market is doing. It's growing at incredible rates, and clients love Nutanix, right?
We see incredible repurchases around Nutanix. Clients buy three, next they buy 10. Those repurchases are a real sign that clients like the experience. Now you can take that experience, under the same simplicity and elegance of the Prism platform, and pull in and choose the infrastructure that's best for your workload. So I look at a single Prism experience: if I'm running a database, I can pull that onto a Power-based offering. If I'm running VDI, I can pull that onto an alternative. But now, with the simplicity of acting under Prism, for clients who love that look and feel, you can pick the best infrastructure for the workloads you're running, simply. That's the beauty of it. >> Raja, you know, Nutanix has spread beyond the initial platform that you had. You have Supermicro inside, you've got a few OEMs. This one was a little different. Can you bring us inside a little bit? What kind of engineering work had to happen here? And then I want to understand, from a workload perspective, it used to be, okay, what kind of general purpose? What do you want on Power, and what should you say isn't for Power? >> Yeah, it actually speaks to the power of our engineering teams and the level of abstraction that they were able to imbue into our software. The transition from supporting x86 platforms to making the leap onto Power has not been a significant lift from an engineering standpoint, because the right abstractions were put in from the get-go. You know, literally within a matter of mere months, something like six to eight months, we were able to have our software put onto the IBM Power platform. And that is kind of the promise that our customers saw: look, for the first time, as they are going through a re-platforming of their data center, they see the power in Nutanix as software to abstract all these different platforms. Now, in terms of the applications that, you know, they are hoping to run, I think we're at the cusp of a big transition. If you look at enterprise applications, you could have framed them as systems of record and systems of engagement. If you look forward over the next 10 years, we'll see this big shift to a new class of applications around systems of intelligence. And that is what a lot-- >> David: Say that again, systems of-- >> Systems of intelligence, right? And that is where a lot of what the IBM Power platform and the Power architecture provide, things like better GPU capabilities, is going to drive those applications. So our customers are thinking of running both the classical mission-critical applications that IBM is known for, as well as the more forward-leaning cognitive and data analytics driven applications. >> So Stefanie, on one hand I look at this just as an extension of what IBM's done for years with Linux. But why is it more, what's it going to accelerate for your customers, and what applications do they want to deploy? >> So first, one of the additional reasons Nutanix was key to us is they support the Acropolis platform, which is KVM based. That very much supports our focus on being open, on playing in the Linux space, playing in the KVM space, supporting open. So, as you've seen, since we launched POWER8 back in early 2014 we went Little Endian. We've been very focused on getting a strategic set of ISVs ported to the platform. Right, Hortonworks, MongoDB, EnterpriseDB.
Now it's about being able to take the value propositions that we have, and, you know, we're pretty bullish on our value propositions. We have a 2x price-performance guarantee on MongoDB, which runs better on Power than it runs on the competition. So we're pretty bullish. Now, for clients who have taken the stance that their data center will be a hyperconverged data center because they like the simplicity of it, they can pull in that value in a seamless way. To me it's really all about compatibility. Pick the best architecture, and it's all compatible within your data center. >> So you talked about the six to eight months in which you were able to do the integration. Was it OpenPOWER that allowed you to do that, was it the Little Endian, you know, advancements? >> I think it was a combination of both, right? We have done a lot on our Linux side to be compatible within the broad Linux ecosystem, particularly around KVM. That was critical for this integration into Acropolis. So we've done a lot from the bottom up so that, you know, Linux is Linux is Linux. And just as Raja said, they've done a lot in their platform to abstract away the underlying layer and provide a seamless experience. I think you guys use the term invisible infrastructure, right? The experience for the client is simple: in a simple way, pick the best, right for the workload I run. >> You talked about systems of intelligence. Bob Picciano a lot of times would talk about the insight economy. And so, you're right, we have systems of record, systems of engagement. Systems of intelligence, let's talk about those workloads a little bit. I infer from that that you're essentially affecting outcomes while the transaction is occurring. Maybe it's bringing transactions and analytics together, and doing so in a fashion where maybe humans aren't as involved. Maybe they're not involved at all. What do you mean by systems of intelligence, and how do your joint solutions address those? >> Yeah, so one way to look at it is, so far, if you look at how decisions are made and insights are gathered, we look at data, mostly structured data, and then we try to draw inferences from it. And mostly it's human beings drawing the inferences. The promise of technologies like machine learning and deep learning is precisely that you can throw in unstructured data where no patterns are obvious, and software will find patterns therein. And what we mean by systems of intelligence is, imagine you're going through your business, and literally hundreds of terabytes of your transactional data is flowing through a system. The software will be able to come up with insights that would be very hard for human beings to otherwise infer, right? So that's one dimension, and it speaks to the fact that there needs to be a more real-time aspect to that sort of system. >> Is part of your strategy to drive specific solutions, I mean integrating certain IBM software on Power, or are you sort of stepping back and saying, okay, customers, do whatever you want? Maybe you can talk about that. >> No, we're very keen to take this up to a solution value level, right? We have architected our ISV strategy, we have architected our software strategy for this space, right? It is all around the cognitive workloads that we're focused on.
But it's about not just being a platform, an infrastructure platform; it's about being able to bring that solution level above it and target it. So when a client runs that workload, they know this is the infrastructure they should put it on. >> What's the impact on the go-to-market then for that offering? >> So from a solutions level, or when the-- >> Just, you know, it's more complicated than the traditional, okay, here is your platform for infrastructure. You know, what channel? Maybe it's a question for Raja, but yeah. >> Yeah, sure, so clearly, you know, the product will be sold by the community of Nutanix's channel partners as well as IBM's channel partners, right? And, you know, we'll both make the appropriate investments to make sure that the broader channel community is enabled around how they talk about the value proposition of the solution in front of our joint customers. >> Alright, we have to leave it there. Stefanie, Raja, thanks so much for coming back on theCUBE. It's great to see you guys. >> Raja: Thank you. >> Stefanie: Great to see you both, thank you. >> Alright, keep it right there everybody, we'll be back with our next guest. We're live from D.C., Nutanix .NEXT, be right back. (electronic music)
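Stefanie's point above about POWER8 going Little Endian is a big part of why the porting she describes was manageable: a little-endian ppc64le Linux system looks much more like x86-64 to application code than the older big-endian Power environments did. As a minimal sketch of what that means in practice, the snippet below, which is ordinary Python and assumes nothing Power- or Nutanix-specific, simply reports the machine architecture and byte order a script finds itself running on, the kind of check a portability test might start with.

```python
# Minimal sketch: report the CPU architecture and byte order of the host.
# On a little-endian POWER8 Linux system this typically prints ppc64le / little;
# on a typical Intel or AMD box it prints x86_64 / little.
import platform
import sys

def describe_host():
    """Return (machine, byteorder) for the host running this interpreter."""
    return platform.machine(), sys.byteorder

if __name__ == "__main__":
    machine, byteorder = describe_host()
    print(f"architecture={machine} byte_order={byteorder}")
```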
Linton Ward, IBM & Asad Mahmood, IBM - DataWorks Summit 2017
>> Narrator: Live from San Jose, in the heart of Silicon Valley, it's theCUBE! Covering DataWorks Summit 2017. Brought to you by Hortonworks. >> Welcome back to theCUBE. I'm Lisa Martin with my co-host George Gilbert. We are live on day one of the DataWorks Summit in San Jose, in the heart of Silicon Valley. There's great buzz at the event, I'm sure you can see and hear it behind us. We're very excited to be joined by a couple of fellows from IBM, a very longstanding Hortonworks partner that announced a phenomenal suite of four new levels of that partnership today. Please welcome Asad Mahmood, Analytics Cloud Solutions Specialist at IBM and a medical doctor, and Linton Ward, Distinguished Engineer, Power Systems OpenPOWER Solutions at IBM. Welcome guys, great to have you both on theCUBE for the first time. So, Linton, software has been changing; companies and enterprises all around are really looking for more open solutions and moving away from proprietary ones. Talk to us about the OpenPOWER Foundation before we get into the announcements today, what was the genesis of that? >> Okay, sure. We recognized the need for innovation beyond a single chip, to build out an ecosystem, an innovation collaboration with our system partners. So, ranging from Google, to Mellanox for networking, to Hortonworks for software, we believe that system-level optimization and innovation is what's going to bring the price-performance advantage in the future. Traditional seamless scaling doesn't really get us there by itself, but that partnership does. >> So, among today's announcements is that Hortonworks is adopting IBM's data science platforms, and really the theme of this morning's keynote was data science, right, the next leg in really transforming an enterprise to be data driven and digitalized. We also saw the announcement about Atlas for data governance. What does that mean from your perspective on the engineering side? >> Very exciting. You know, in terms of building out solutions of hardware and software, the ability to really harden the Hortonworks Data Platform with servers, storage and networking is, I think, going to bring simplification to on-premises environments, like people are seeing with the cloud. And I think the ability to create the analyst workbench, or the cognitive workbench, using the Data Science Experience to create a pipeline of data flow and analytic flow is going to be very strong for innovation. Around that, most notable for me is the fact that they're all built on open technologies, leveraging communities that universities can pick up and contribute to. I think we're going to see the pace of innovation really pick up. >> And on that front, on pace of innovation, you talked about universities. One of the things I thought was really a great highlight in the customer panel this morning that Raj Verma hosted, where you had health care, insurance companies, financial services, there was Duke Energy there, was that they all talked about one of the great benefits of open source being that kids in universities have access to the software for free. So from a talent attraction perspective, they're really fostering the next generation who will be able to take this to the next level, which I think is a really important point as we look at data science being kind of the next big driver or transformer. And also, you know, there's not a lot of really skilled data scientists; how can that change over time?
And this is one, the open source community, that Hortonworks has been very dedicated to since the beginning; it's really a great outcome of that. >> Definitely, I think the ability to take the risk out of a new analytical project is one benefit, and the other benefit is there's a tremendous amount of interest, not just from young people but among programmers and developers of all types, in building data engineering and data science skills. >> If we leave aside the skills for a moment and focus on the, sort of, operationalization of the models once they're built, how should we think about a trained model? Or, I should break it into two pieces. How should we think about training the models, where the data comes from and who does it? And then, the orchestration and deployment of them: cloud, edge gateway, edge device, that sort of thing. >> I think it all comes down to exactly what your use case is. You have to identify what use case you're trying to tackle, whether that's applicable to clinical medicine, to finance, to banking, to retail or transportation. First you have to have that use case in mind, then you can go about training that model, developing that model, and for that you need to have a good, potent, robust data set to allow you to carry out that analysis. And whether you want to do exploratory analysis or predictive analysis needs to be very well defined in your training stage. Once you have that model developed, we have certain services, such as Watson Machine Learning within Data Science Experience, that will allow you to take that model you developed just moments ago and deploy it as a RESTful API that you can then embed into an application, into your solution, and that solution you can basically use across industries. >> Are there some use cases where you have almost like a tiering of models, where, you know, some are right at the edge, like a big device such as a car, and then there's sort of the fog level, which is, say, cell towers or other buildings nearby, and then there's something in the cloud that's sort of like a master model or an ensemble of models? I don't assume that's like, Evel Knievel would say, you know, "Don't try that at home," but sort of, is the tooling being built to enable that?
>> One of the things that you talked about is use cases in certain verticals, IBM has been very strong and vertically focused for a very long time, but you kind of almost answered the question that I'd like to maybe explore a little bit more about building these models, training the models, in say, health care or telco and being able to deploy them, where's the horizontal benefits there that IBM would be able to deliver faster to other industries? >> Definitely, I think the main thing is that IBM, first of all, gives you that opportunity, that platform to say that hey, you have a data set, you have a use case, let's give you the tooling, let's give you the methodology to take you from data, to a model, to ultimately that full range application and specifically, I've built some applications specific to federal health care, specifically to address clinical medicine and behavioral medicine and that's allowed me to actually use IBM tools and some open source technologies as well to actually go out and build these applications on the fly as a prototype to show, not only the realm, the art of the possible when it comes to these technologies, but also to solve problems, because ultimately, that's what we're trying to accomplish here. We're trying to find real-world solutions to real-world problems. >> Linton, let me re-direct something towards you about, a lot of people are talking about how Moore's law slowing down or even ending, well at least in terms of speed of processors, but if you look at the, not just the CPU but FPGA or Asic or the tensor processing unit, which, I assume is an Asic, and you have the high speed interconnects, if we don't look at just, you know what can you fit on one chip, but you look at, you know 3D what's the density of transistors in a rack or in a data center, is that still growing as fast or faster, and what does it mean for the types of models that we can build? >> That's a great question. One of the key things that we did with the OpenPOWER Foundation, is to open up the interfaces to the chip, so with NVIDIA we have NVLink, which gives us a substantial increase in bandwidth, we have created something called OpenCAPI, which is a coherent protocol, to get to other types of accelerators, so we believe that hybrid computing in that form, you saw NVIDIDA on-stage this morning, and we believe especially for deploring the acceleration provided for GPUs is going to continue to drive substantial growth, it's a very exciting time. >> Would it be fair to say that we're on the same curve, if we look at it, not from the point of view of, you know what can we fit on a little square, but if we look at what can we fit in a data center or the power available to model things, you know Jeff Dean at Google said, "If Android users "talk into their phones for two to three minutes a day, "we need two to three times the data centers we have." Can we grow that price performance faster and enable sort of things that we did not expect? >> I think the innovation that you're describing will, in fact, put pressure on data centers. The ability to collect data from autonomous vehicles or other N points is really going up. So, we're okay for the near-term but at some point we will have to start looking at other technologies to continue that growth. 
Right now we're in the throes of what I call fast data versus slow data, so keeping the slow data cheap and getting the fast data closer to the compute is a very big deal for us. NAND flash and other non-volatile technologies for the fast data are where the innovation is happening right now, but you're right, over time we will continue to collect more and more data and it will put pressure on the overall technologies. >> Last question as we get ready to wrap here. Asad, your background is fascinating to me, having a medical degree and working in federal healthcare for IBM. You talked about some of the clinical work that you're doing and the models that you're helping to build. What are some of the mission-critical needs that you're seeing in health care today that are really driving not just health care organizations to do big data right, but to do data science right? >> Exactly, so I think one of the biggest questions that we get, and one of the biggest needs that we see from the healthcare arena, is patient-centric solutions. There are a lot of solutions that are hoping to address problems faced by physicians on a day-to-day level, but there are not enough applications addressing the concerns, the pain points, that patients are facing on a daily basis. So the applications that I've started building out at IBM are all patient-centric applications that basically put their data, their symptoms, their diagnosis, in their hands alone and allow them to find out, more or less, what's going wrong with my body at any particular time during the day, and then find the right healthcare professional or the right doctor best suited to treating that condition, treating that diagnosis. So I think that's the big thing that we've seen from the healthcare market right now. It's the big need that we have, and we're currently addressing it with our cloud analytics technology, which is becoming more and more advanced and sophisticated and is trending towards some of the other health and technology trends on the market right now, including blockchain, which is tending towards more of a decentralized focus for these applications. So they're actually putting more of the data in the hands of the consumer, in the hands of the patient, and even in the hands of the doctor. >> Wow, fantastic. Well you guys, thank you so much for joining us on theCUBE. Congratulations on your first time being on the show, Asad Mahmood and Linton Ward from IBM, we appreciate your time. >> Thank you very much. >> Thank you. >> And for my co-host George Gilbert, I'm Lisa Martin. You're watching theCUBE live on day one of the DataWorks Summit from Silicon Valley, but stick around, we've got great guests coming up, so we'll be right back.
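Asad's description of the path from data set, to trained model, to a RESTful scoring API that an application can embed is, at its core, a small amount of code. The sketch below is a generic illustration of that pattern using scikit-learn and Flask; it does not show Watson Machine Learning or Data Science Experience specifics, since their deployment APIs aren't detailed in this conversation, and the /score route, the feature layout, and the toy training data are all made up for the example.

```python
# Generic sketch of "train a model, expose it as a REST scoring endpoint".
# Illustrative only: the route, feature layout, and training data are hypothetical,
# and no Watson Machine Learning / Data Science Experience APIs are shown.
from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)

# Toy training step; in practice the model would be trained on a real data set
# (exploratory or predictive, as discussed above) and persisted, then loaded here.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

@app.route("/score", methods=["POST"])
def score():
    """Accept a JSON payload like {"features": [0.3, 0.4]} and return a prediction."""
    payload = request.get_json(force=True)
    features = payload["features"]
    prediction = model.predict([features])[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Once a service like this is running, any web or mobile front end, for example the patient-centric applications Asad describes, can call the endpoint with a small JSON payload and embed the returned prediction in its workflow.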