

Dinesh Nirmal, IBM | IBM Think 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM. >> Welcome back, I'm Stu Miniman, and this is theCUBE's coverage of IBM Think 2020, the digital experience. Welcome to the program, Dinesh Nirmal, who's the chief product officer for Cloud Paks inside IBM. Dinesh, nice to see you, thanks so much for joining us. >> Thank you Stu, really appreciate you taking the time. >> All right, so, I've been to many IBM shows, and of course, I'm an analyst in the cloud space, so I'm familiar with IBM Cloud Paks, but maybe just refresh our audience's minds here: what they are, how long they have been around, what clouds they live on, and maybe what's new in 2020 that somebody who had looked at this in the past might not know about IBM Cloud Paks? >> Yeah, so thanks Stu. So to start with, let me say that Cloud Paks are cloud agnostic. The whole goal is that you build once and it can run anywhere. That is the basic mantra, or principle, that we want to build Cloud Paks with. So look at them as a set of microservices, containerized in a form that can run on any public cloud or behind a firewall. That's the whole premise of Cloud Paks. So, when you go back to Cloud Paks, it's an integrated set of services that solves a specific set of business problems and also accelerates building a rich set of applications and solutions. That's what Cloud Paks bring. So, especially in this environment Stu, think about it. If I'm an enterprise, my goal is how can I accelerate and how can I automate? Those are the two key things that come to my mind if I am a C-level exec at an enterprise. So, Cloud Paks enable that, meaning you already have a set of stitched-together services that accelerates the application development. It automates a lot of things for you. So today you have a lot of applications running on multiple clouds or behind the firewall. How do you manage those, right? Cloud Paks will help.
So, let me give you one example since you asked specifically about Cloud Paks. Let's take Cloud Pak for Data. The set of services that is available in Cloud Pak for Data will make it easier all the way from ingest to visualization. There's a set of services that you can use, so you don't have to go build a service, or use one product for ingest, then use another product for ETL, use another product for building models, another product to manage those models. Cloud Pak for Data will solve all those problems end to end. It's a rich set of services that will give you all the value that you need, all the way from ingest to visualization. And with any persona, whether you are a data engineer, a data scientist, or a business analyst, you can all collaborate through the Cloud Paks. So that's the two-minute answer to your question of what Cloud Paks are. >> Awesome, thanks Dinesh. Yeah, I guess you pointed out something right at the beginning there. I hear IBM Cloud Pak and I think IBM Cloud. But you said specifically this is really cloud agnostic. So this week is Think; last week I was covering Red Hat Summit, so I heard a lot about multicloud deployments, talked to the RHEL team, talked to the OpenShift team. So, help me understand, where do Cloud Paks fit when we're talking about these multicloud deployments? And is there some connection with the partnership that, of course, IBM has with Red Hat? >> Of course, so all Cloud Paks are optimized for OpenShift, meaning how do we use the set of services that OpenShift gives, the container management that OpenShift provides? So as we build containers or microservices, how do we make sure that we are optimizing for or taking advantage of OpenShift? So, for example, the set of services like logging, monitoring, security, metering, all those services that come from OpenShift are what we are using in Cloud Paks. So Cloud Paks are optimized for OpenShift. From an automation perspective, how do we use Ansible, right?
So, all the value that Red Hat and OpenShift bring is what Cloud Paks are built on. So if you look at it as Lego layers, the base Lego is OpenShift and RHEL. And then on top of it sit the Cloud Paks, and applications and solutions on top of those. So, if I look at it layer by layer, the base Lego layer is OpenShift and Red Hat's RHEL. >> Well, great, that's super important because one of the things we've been looking at for a while is, you talk about hybrid cloud, you talk about multicloud, and often it's not that platform, that infrastructure discussion, but the biggest challenge for companies today is how do I build new applications, how do I modernize what I have? So, it sounds like this is exactly where you're targeting to help people through that transformation that they're going through. >> Yeah, exactly Stu, because if you look at it, in the past products were siloed. You build a product, you use a set of specs to build it. It was siloed. And customers become the software integrators, or system integrators, where they have to take the different products and put them together. So even if I am focused on the data space, or the AI space, before, I had to bring in three or four or five different products and make them all work together to build a model, deploy the model, manage the model, the lifecycle of the model, the lifecycle of the data. But the Cloud Paks bring it all in one box, where out of the box you are ready to go. So your time to value is much higher with Cloud Paks, because you already get a set of stitched-together services that work right out of the box. >> So, I love the idea of out of the box. When I think of cloud native, modern application development, simplicity is not the first thing I think of, Dinesh. So, help me understand. So many customers, it's the tools, the skillsets, they don't necessarily have the experience.
How is what your product set and your teams are doing helping customers deal with the ever-changing landscape and the complexity that they are faced with? >> Yeah, so the honest truth, Stu, is that enterprise applications are not an app that you create and put on an iPhone, right? I mean, it is much more complex, because it's dealing with hundreds of millions of people trying to transact with the system. You need to make sure there is disaster recovery, backup, scalability, elasticity, all those things, security, obviously, a very critical piece, and multitenancy. All those things have to come together in an enterprise application. So, when people talk about simplicity, it comes at a price. So, what Cloud Paks have done is really focus on the user experience and design piece. So, you as an end-user have a great experience using the integrated set of services. The complexity piece will still be there, to some extent, because you're building a very complex multitenant enterprise application, but how do we make it easier for a developer or a data scientist to collaborate or reuse the assets, find the data much more easily, or trust the data much more easily than before? Use AI to predict a lot of things, including bias detection. So, we are making a lot of the development, automation and acceleration easier. The complexity part will still be there, because enterprise applications tend to be complex by nature. But we are making it much easier for you to develop, deploy, manage and govern what you are building. >> Yeah, so, how do Cloud Paks allow you to really work with the customers, focus on things like innovation, showing them the latest in the IBM software portfolio? >> Yeah, so the first piece is that we made it much easier for the different personas to collaborate. So in the past, what was the biggest challenge me as a data scientist had?
Me as a data scientist, the biggest challenge was getting access to the data, trusted data. Now we have put some governance around it, by which you can get trusted data much more easily using Cloud Pak for Data. Governance around the data, meaning if you have a CDO, you want to see who is using the data, how clean is the data, right? A lot of times the data might not be clean, so we want to make sure we can help with that. Now, let me move into the line of business piece, not just the data. If I am an LOB, and I want to automate a lot of the processes I have today in my enterprise, and not go through every process manually, going to your superior or supervisor to get approval, how do we use AI in the business process automation also? Those kinds of things you will get through Cloud Paks. Now, the other piece of Cloud Paks: if I am in the IT space, right? The day-two operations, scalability, security, delivery of the software, backup and restore, how do we automate and help with that at the storage layer? Those are day-two operations. So, we are taking it all the way from day one, meaning the whole experience of setting it up, to day two, which enterprises are really worried about, making it seamless and easy using Cloud Paks. I go back to what I said in the beginning, which is: how do we accelerate and automate a lot of the work that enterprises have to do today, and make it much easier? >> Okay, we talked earlier in the discussion about how this can be used across multiple cloud environments. My understanding, you mentioned one of the IBM Cloud Paks, the one for data. There's a number of different Cloud Paks out there. How does that work from a customer's standpoint? Do I have to choose a Cloud Pak for a specific cloud? Is it a license that goes across all of my environments? Help me understand how this deployment mechanism and its support and maintenance works. >> Right, so we have the base, obviously.
I said look at it as a modular Lego model. The base is obviously OpenShift and RHEL. On top of that sits what we call a bedrock, which is a common set of services and the logic to expand. On top of that sit Cloud Pak for Data, Cloud Pak for Security, Cloud Pak for Applications; there's Cloud Pak for Multicloud Management, there's Cloud Pak for Integration. So there is a total of six Cloud Paks available, but you can pick and choose which Cloud Pak you want. So let's say you are a CDO, or you are an enterprise that wants to focus on data and AI; you can just pick Cloud Pak for Data. Or let's say you are focused on processes, BPM, decision rules; you can go with Cloud Pak for Automation, which gives you the set of tools. But the biggest benefit, Stu, is that all these Cloud Paks are a set of integrated services that can all work together, optimized on top of OpenShift. So, all of a sudden, say you need Cloud Pak for Data, and now you're doing data, but you want to expand it into your line of business, and you want Cloud Pak for Automation; you can bring that in. Now those two Cloud Paks work together well. Now you want to bring in Cloud Pak for Multicloud Management, because you have data or applications running on multiple clouds; so now you can bring in Cloud Pak for MCM, which is multicloud management, and those three work together. So it's all an integrated set of services that is optimized on top of OpenShift, which makes it much easier for customers to bring the rich set of services together and accelerate and automate their lifecycle journey within the enterprise. >> Great, last question for you Dinesh. What's new in 2020, what should customers be looking at today? I would love if you can give a little bit of guidance as to where customers should be looking for things that might be coming a little bit down the line here, and if they want to learn more about IBM Cloud Paks, where should they be looking?
>> Yeah, if they want to learn more, there's www.ibm.com/cloudpaks. That's the place to go; all the details around Cloud Paks are there. You can also get in touch with me, and I can definitely take you into more detail. But what is coming is that, look, we have a set of Cloud Paks, but we want to expand and make them extensible. It's already built on an open platform, but how do we make sure our partners and ISVs can come and build on top of the base Cloud Pak? So that's where the focus is going to be: as each Cloud Pak innovates and adds more value within those Cloud Paks, we also want to expand so that our partners and our ISVs and GSIs can build on top of it. So this year, the focus is to continuously innovate across the Cloud Paks, but also make them much more extensible for third parties to come and build more value on top of the Cloud Pak itself. That's one area we are focusing on. The other area is MCM, right? Multicloud management, because there is tremendous appetite for customers to move data or applications to the cloud, and not only to one cloud: hybrid cloud. So how do you manage that, right? So multicloud management definitely helps from that perspective. So our focus this year is going to be, one, make it extensible, make it more open, but at the same time continuously innovate on every single Cloud Pak to make that journey for customers of automating and accelerating application development easier. >> All right, well Dinesh, thank you so much. Yeah, the things that you talked about are absolutely top of mind for customers that we talk to. Multicloud management, as you said, it was the ACM, the Advanced Cluster Management, that we heard about from the Red Hat team last week at Summit. So thank you so much for the updates.
Definitely exciting to watch Cloud Paks, and how you're helping customers deal with that huge opportunity, but also the challenge, of building their next applications and modernizing what they're doing while still having to think about what they have from (faintly speaking), so thanks so much, great to talk with you. >> Well, thanks Stu, great talking. >> All right, lots more coverage from IBM Think 2020, the digital experience. I'm Stu Miniman, and as always, thank you for watching theCUBE. (upbeat music)

Published Date : May 4 2020





Nirmal Mehta & Bret Fisher, Booz Allen Hamilton | DockerCon 2018


 

>> Live, from San Francisco, it's The Cube! Covering DockerCon '18. Brought to you by Docker and its ecosystem partners. >> Hey, welcome back to The Cube. We are live at DockerCon 2018 on a beautiful day in San Francisco. We're glad you're not playing hooky though if you're in the city because it's important to be here watching John Troyer and myself, Lisa Martin, talk to some awesome, inspiring guests. We're excited to welcome two Docker captains, that's right, to The Cube. We've got Nirmal Mehta, you are the chief technologist of Booz Allen. Welcome back to The Cube. And, we've got Bret Fisher, the author of Docker Mastery. Both of you, Docker captains. Can't wait to dig into that. But you're both speakers here at the fifth annual DockerCon. So Bret, let's talk, you just came off the stage basically. So, thank you for carving out some time for us. Talk to us about your session. What did you talk about? What was some of the interaction with the attendees? >> Well, the focus is on Docker Swarm, and I'm a sysadmin at heart, so I focus on ops more than dev, but I spend my life helping developers get their stuff into production. And so, that talk centers around the challenges of going in and doing real work that's for a business with containers and how do you get what seems like an incredible amount of new stuff into production all at the same time on a container ecosystem. So, kind of helping them build the tools they need, and what we call a stack, a stack of tools, that ultimately creates a full production solution. >> What was some of the commentary you heard from attendees in terms of... Were these mostly community members, were there users of container technology, what was sort of the dynamic like? >> Well, you have all sorts of dynamics, right? I mean you have startups. I took a survey in the room because it was packed, and about 20% of the people in the room were solo DevOps admins.
So they were the only person responsible for their infrastructure, and their needs are way different than a team that has 20 or 30 people all serving that responsibility. So, the talk was a little bit about how do they handle their job and do this stuff, you know, all this latest technology, without being overwhelmed, and, then, how does it grow in complexity to a larger team and how do they sustain that. So, yeah. >> Bret, it's nice that the technology is mature enough now that people are in production, but what are some of the barriers that people hit when they try to go into production the first time? >> Yeah, great question. I think the biggest barrier is trying to do too much new at the same time. And, I don't know why we keep relearning this lesson in IT, right? We've had that problem for decades of projects being over cost, over budget, over time, and I think with so much exciting new stuff in containers it's susceptible to that level of, we need all these new things, but you actually don't, right? You can actually get by with very small amounts of change, incrementally. So, we try to teach that pattern of growing over time, and, yeah. >> You mentioned like the one-person team versus the multi-person team kind of DevOps organization. Does that same problem of boiling the ocean, do you see that in both groups? >> Yeah, I mean you have fundamentally the same needs, the same problem that you have to solve; it's really just different levels of complexity and different levels of budget, obviously, right? So, usually the solo admin doesn't have the million-dollar budget for all the tools and bells and whistles, so they might have to do more on their own, but, then, they also have less time, so it's a tough row to hoe, you know, because you've got those two different fundamental problems of time and money, and people are usually the most expensive thing.
So, no matter what the tool is you're trying to buy, it's usually your time that's the most valuable thing. So how do we get more of our time back? And that's really what containers were all about originally, just getting more of our time back so we can put it back into the business instead of focusing on the tech itself. >> Nirmal, your talk tomorrow is on empathy. >> Yes. >> Very provocative, dig into that for us. >> Sure, so it was actually inspired by a conversation I had with John a couple years ago on the Geek Whisperers podcast. He asked the folks on that show, yourself included, if there was an event in my past that I kind of regret or that taught me a lot. And it was about basically neglecting someone on my team and just kind of shoving them away. And, that moment was a big change in how I felt about the IT industry. What I had done was push away someone who probably needed that help and had built up a lot of courage to talk to me, and I kind of just dismissed him too quickly. And, from there, I was thinking more and more about game theory and behavioral economics and seeing a lot of our clients and organizations struggle to go through a digital transformation, a DevOps transformation, a cultural transformation. So, to me, culture is kind of the core of what's happening in the industry. And so, the idea of my talk is a little bit of behavioral economics, a little bit of game theory, to kind of set the stage for where your IT organization probably is right now, and how to use empathy to get your organization to that DevOps and to a more efficient place and resolve those conflicts that happen inherently. And, somehow tie that all together with Docker. So, that's kind of what my talk is all about. >> Nice, I mean what's interesting to me, Lisa, is that we do Cubes and there are many Cubes actually all across the country during conference season, right?
And we talk to CEOs and VPs of very large companies and even today, at DockerCon, the word 'culture' and the talking about culture and process and people has come up every single interview. So, it's not just from the techies up that this conversation is going... this DevOps and empathy conversation is going on, it seems to be from the top down as well. Everyone seems to recognize that, if you really are going to get this productivity gain, it's not just about the tech, you gotta have culture. >> Absolutely, a successful transformation of an organization is both grassroots and top down. Can't have it without either. And, I think we inherently want to have a... Like, we want to take a pill to solve that problem and there's lots of pills: Docker or cloud or CICD or something. But, those tools are the foundational safety net for a cultural transformation, that's all that it is. So, if you're implementing Docker or Jenkins or some CICD pipeline or automation, that's a safety blanket for providing trust in an organization to allow that change in the culture to happen. But, you still need that cultural change. Just adopting Docker isn't going to make you automatically a more effective organization. Sorry, but it's just one piece and it's an important piece but you have to have that top down understanding of where you are now as an organization and where you want to be in the future. And understanding that this kind of legacy, siloed team mindset is no longer how you can achieve that. >> You talked about trust earlier from a thematic perspective as something that comes up. You know we were at SAP Sapphire last week and trust came up a lot as really paramount. And that was in the context of a vendor/customer relationship. But, to your point, it's imperative that it's actually coming from within organizations. 
We talk a lot about, well stuff today: multi-cloud--multi-cloud, silos-- but, there's also silos with people and without that cultural shift and probably that empathy, how successful, how big of an impact can a technology make? Are you talking with folks that are at the executive level as well as the developer level in terms of how they each have a stake and need to contribute to this empathy? >> Yeah, absolutely. So, the talk I'm doing is basically the ammunition a lower level person would need to go up to management and say, hey, you know this is where the organization is, this is what the IT department kind of looks like, these are the conflicts, and we have to change in order to succeed. And a lot of folks don't. They see the technology changes that they need. You know, adopting the new javascript framework or the new UX pattern. But, they might not have the ammunition to understand the business strategy, the organizational issues. But, they still need that evidence to actually convince a CTO or a CEO or a COO for the need to change. So, I've talked to both groups. From the C-level side, I think it comes from the inherent speed of the industry, the competitive landscape, those are all the pressures that they see and the disruptions that they are tackling. Maybe it's incumbent disruption or new startups that they may have to compete with in the future. The need for constant innovation is kind of the driver. And, IT is kind of where all that is, these days. >> That's great. Building on the concept of trust and this morning at the keynote, Matt Mckesson where they talked about trusting Docker, trusting Docker the company, trusting Docker the technology. Almost the very first words out of Steve Singh's mouth this morning were about community. And, I think community is one of the big reasons people do trust Docker and one of the things that brings them along. You guys are both Docker captains, part of a program of advocacy, community programs. 
I don't know, Bret, can you tell us a little bit about the program and what's involved in it? >> Yeah, sure. So, it's been around over two years now and it actually spawned out of Docker's pre-existing programs were focusing on speakers and bloggers and supporting them as well as community leaders that run meetups. And they kind of figured out that a key set of people were kind of doing two or three of those things all at once. And so, they were sort of deciding how do we make like super-groups of these people and they came up with the term Docker captain It really just means you know something about Docker, you share it constantly, something about a Docker toolset, something about the container tools. And that you're sort of... And you don't work for Docker. You're a community person that is, maybe you're working for someone that is a partner of Docker or maybe you're just a meetup volunteer that also blogs a lot about patterns and practices of Docker or new Docker features. And so, they kind of use the engineering teams at Docker to kind of pick through people on the internet and the people they see in the community that are sort of rising out of all the noise out there. And they ask them to be a part of the program and then, of course, we get nice jackets and lots of training. And, it's really just a great group of people, we're about 70 people now around the world. >> And yeah, this is global as well, right? >> Oh yeah, yep. It's one of my favorite aspects is the international aspect. I work for Booz Allen which is a more US government focused and I don't get to interact with the global community much. But, through the Docker captain program got friendships and connections almost on every continent and a lot of locations. I just saw a post of a Docker meetup in like, I think it was like Tunisia. Very, very out there kind of places. There was a Cuban one, recently, in Havana. The best connections to a global community that I've ever seen. 
I think one of the biggest drivers is the rapid adoption and kind of industry trend of containerization and the Docker brand and what it is basically gave rise to a ton of folks just beginners, just wanting to know what it's all about. And, we've been identified as folks that are approachable and have kind of a mandate to be people that can help answer those initial questions, help align folks that have questions with the right resources, and also just make it like a soft, warm, fuzzy kind of introduction to the community. And engage on all kinds of levels, advanced to beginner levels. >> It was interesting, again, this morning, I think about half the people raised their hands to the question, "is it their first year?" So, it still seems like the Docker, the inbound people interested in Docker is still growing and millions of developers all over the world, right? I don't know, Bret, you have a course, Docker Mastery, you also do meetups, and so I'm curious like what is the common pathway or drivers for new folks coming in, that you see and talk with? >> Yeah, what's the pathways? >> Yeah, the pathway, what's driving them? What are they trying to do? Again, are they these solo folks? >> Yeah, it's sort of a little bit of everything. We're very lucky in the course. We actually just crossed 55,000 students worldwide, 161 countries on a course that is only a year old. So, it kind of speaks to the volume of people around the world that really want to learn containers and all the tools around them. I think that the common theme there is I think we had the early adopters, right, and that was the first three or four years of Docker was people that were Silicon Valley, startups, people who were already on the bleeding edge of technology, whether it was hobbyist or enterprise. It was all people, but it was sort of the Linux people. Now, what we're getting is the true enterprise admins and developers, right. 
And that means, Microsoft, IBM mainframes, .Net, Java, you're getting all of these sort of traditional enterprise technologies but they all have the same passion, they're just coming in a few years later. So, what's funny is, you're meetups don't really change. They're just growing. Like what you see worldwide, the trend is we're still on the up-climb of all the groups, we have over 200 meetups worldwide now that meet once a month about Docker. It's just a crazy time right now. Everything's growing and it's like you wonder if it's ever going to stop, right How big are we gonna get, gonna take over the world with containers? >> Yeah, about 60% or more of all our meetups are completely new to Docker. And, it ranges from, you know, my boss told me about it so I gotta learn it or I found it and I want to convince other people in my organization to use it so I need to learn it more so I can make that case or, it's immediately solving a problem but I don't know how to take it to the next level, don't know where it's going, all that. It's a lot of new people. >> I get students a lot, college students that want to be more aggressive when they get in the marketplace and they hear the word 'DevOps' a lot and they think DevOps is a thing I need to learn in order to get a job. They don't really know what that is. And, of course, we don't even. At this point, it's so watered down, I don't know if anyone really knows what it is. But eventually, they search that and they come up with sort of key terms and I think one of those the come up right away is Docker. And they don't know what that is. But, I get asked the question a lot, If I go to this workshop or if I go the meetup or whatever, can I put that on my resume so I can get my first job out of school? They're always looking for something else beyond their schooling to make them a better first resume. 
So, it's cool to see even the people just stepping into the job market getting their feet wet with Docker even when they don't even know why they need it. >> It sounds like a symbiotic thought leadership community that you guys are part of and it sounds like the momentum we heard this morning in the general session is really carried out through the Docker captains and the communities. So, Nirmal, Bret, thanks so much for stopping by bringing your snazzy sweatshirts and sharing what you guys are doing as Docker captains. We appreciate your time. >> Thank you. >> Thank you. >> We want to thank you for watching The Cube. I'm Lisa Martin with John Troyer. We're live at DockerCon 2018. Stick around, John and I will be right back with our next guest.

Published Date : Jun 13 2018

SUMMARY :

Lisa Martin and John Troyer talk with Docker captains Nirmal Mehta, chief technologist at Booz Allen Hamilton, and Bret Fisher, author of Docker Mastery, at DockerCon 2018 in San Francisco. Bret recaps his session on taking Docker Swarm into production and his advice to adopt new container technology in small increments; Nirmal previews his talk on using behavioral economics, game theory, and empathy to guide DevOps and cultural transformation. They also discuss the Docker captains program and the rapid worldwide growth of the Docker community.


Dinesh Nirmal, IBM | IBM Think 2018


 

>> Voiceover: Live from Las Vegas it's the Cube. Covering IBM Think 2018. Brought to you by IBM. >> Welcome back to IBM Think 2018. This is the Cube, the leader in live tech coverage. My name is Dave Vellante and this is our third day of wall-to-wall coverage of IBM Think. Dinesh Nirmal is here, he's the Vice-President of Analytics Development at IBM. Dinesh, great to see you again. >> I know. >> We just saw each other a couple of weeks ago. >> I know, in New York. >> Yeah and, of course, in Big Data SV >> Right. >> Over at the Strata Conference. So, great to see you again. >> Well, thank you. >> A little different venue here. It was real intimate in New York City and in San Jose. >> I know, I know. >> Massive. What are your thoughts on bringing all the clients together like this? >> I mean, it's great because we have combined all the conferences into one, which obviously helps because the message is very clear to our clients on what we are doing end-to-end, but the feedback has been tremendous. I mean, you know, very positive. >> What has the feedback been like in terms of how you guys are making progress in the analytics group? What are they like? What are they asking you for more of? >> Right. So on the analytics side, the data is growing, you know, by terabytes a day, and the question is how do they create insights into this massive amount of data that they have on premises or on cloud. So we have been working to make sure that we build the tools to enable our customers to create insights whether the data is on private cloud, public, or hybrid. And that's a very unique value proposition that we bring to our customers. Regardless of where your data is, we can help you, whether it's cloud, private, or hybrid. >> Well so, we're living in this multi-petabyte world now. Like overnight it became multi-petabyte.
And one of the challenges of course people have is not only how do you deal with that volume of data, but how do I act on it and get insights quickly. How do I operationalize it? So maybe you can talk about some of the challenges of operationalizing data. >> Right. So, when I look at machine learning, there are three D's I always say and, you know, the first D is the data, the development of the model, and the deployment of the model. When I talk about operationalization, especially the deployment piece, is the one that gets the most challenging for our enterprise customers. Once you clean the data and you build the model, how do you take that model and bring it into your existing infrastructure. I mean, you know, look at your large enterprises. Right? I mean, you know, they've been around for decades. So they have third-party software. They have existing infrastructure. They have legacy systems. >> Dave: A zillion data marts and data warehouses >> Data marts, so into all of that, how do you infuse machine learning, becomes very challenging. I met with the CTO of a major bank a few months ago, and his statement kind of stands out to me. Where he said, "Dinesh, it only took us three weeks to build the model. It's been 11 months, we still haven't deployed it". So that's the challenge our customers face and that's where we bring in the skillset. Not just the tools but we bring the skills to enable and bring that into production. >> So is that the challenge? It's the skillsets or is it the organizational inertia around, well, I don't have the time to do that now because I've got to get this report out or ... >> Dinesh: Right. >> Maybe you can talk about that a little. >> Right. So that is always there. Right? I mean, because once a priority is set obviously the different challenges pull you in different directions, so every organization faces that to a large extent. But I think if you take it from a pure technical perspective, I would say the challenge is two things.
Getting the right tools, getting the right skills. So, with IBM, what we are focusing on is how do we bring the right tools, regardless of the form factor you have, whether Cloud, Private Cloud, Hybrid Cloud, and then how do we bring the right skills into it. So this week we announced the Data Science Elite Team, who can come in and help you with building models. Looking at the use cases. Should we be using vanilla machine learning or should we be using deep learning. All those things, and how do we bring that model into the production environment itself. So I would say tools and skills. >> So skills-wise, in the skills there's at least two paths. It's like the multi-tool athlete. You've got the understanding of the tech. >> Dinesh: Right. >> You know, the tools, most technology people say hey, I'll figure that out. But then there's this data and digital >> Right. >> Skills. It's like this double-deep skills that is challenging. So you're saying you can help. >> Right. Sort of kick-start that, and how does that work? Is that sort of a services engagement? That's part of the ... >> So, once you identify a use case, the Data Science Elite Team can come in, because they have some level of vertical knowledge of your industry. They are highly trained data scientists. So they can come assess the use case. Help you pick the algorithms to build it. And then help you deploy, cleanse the data. I mean, you bring up a very, very good point. I mean, let's just look at the data, right. The personas that are involved in data: there is the data engineer, there's the data scientist, there's the data worker, there's the data steward, there's the CTO. So, that's just the data piece. Right? I mean, there's so many personas that have to come together. And that's why I said the skills are a very critical piece of all of it, but also, working together. The collaboration is important. >> Alright, tell us more about IBM Cloud Private for Data. We've heard about IBM Cloud Private. >> Dinesh: Right.
>> Cloud Private for Data is new. What's that all about? >> Right, so we announced IBM Cloud Private for Data this week and let me tell you, Dave, this has been the most significant announcement from an analytics perspective, probably in a while, that we are getting such a positive response. And I will tell you why. So when you look at the platform, our customers want three things. One, they want to be able to build on top of the platform. They want it to be open and they want it to be extensible. And we have all three available. The platform is built on Kubernetes. So it's completely open, it's scalable, it's elastic. All those features come with it. And then we put that end-to-end so you can ingest the data, you can cleanse it or transform it. You can build models or do deep analytics on it. You can visualize it. So you can do everything on the platform. So I'll take an example, like blockchain, for example. I mean you have, if I were to simplify it, right? You have the ledger, where you are, obviously, putting your transactions in, and then you have a state database where you are putting your latest transactions in. The ledger's unstructured. So, how do you, as that is getting filled, how do you ingest that, transform it on the fly, and be able to write into a persistent place and do analytics on it? Only a platform can deal with that kind of volume of data. And that's where the data platform comes in, which is very unique, especially on the modern applications that you want to do. >> Yes, because if you don't have the platform ... Let's unpack this a little bit. You've got a series of bespoke products and then you've got, just a lot of latency in terms of the elapsed times to get to the insights. >> Dinesh: Right. >> Along the way you've got data consistency issues, data quality >> Dinesh: Right
So all of a sudden you've got to get a product for your governance, your integration catalog. You need to get a product for ingest. You got to get a product for persistence. You got to get a product for analytics. You got to get a product for visualization. And then you add the complexity of the different personas working together between the multitude of products. You have a mess in your hand at that point. The platform solves that problem because it brings you an integrated end-to-end solution that you can use to build, for example, block chain in this case. >> Okay, I've asked you this before, but I've got to again and get it on record with Think. So, a lot of people would hear that and say Okay but it's a bunch of bespoke products that IBM has taken they've put a UI layer on top and called it a platform. So, what defines a platform and how have you not done that? >> Right. >> And actually created the platform? >> Right. So, we are taking the functionality of the existing parts and that's what differentiates us. Right? If you look at our governance portfolio, I can sit here and very confidently say no one can match that, so >> Dave: Sure. We obviously have that strength >> Real Tap >> Right, Real Tap. That we can bring. So we are bringing the functionality. But what we have done is we are taking the existing products and disintegrated in to micro services so we can make it cloud native. So that is a huge step for us, right? And then once you make that containerized and micro services it fits into the open platform that we talked about before. And now you have an end-to-end, well orchestrated pipeline that's available in the platform that can scale and be elastic as needed. So, it's not that we are bringing the products, we are bringing the functionality of it. 
>> But I want to keep on this for a second, so the experience for the user is different if you microserviced what you say because if you just did what I said and put a layer a UI layer on top, you would be going into these stovepipes and then cul-de-sac and then coming back >> Dinesh: Right. And coming back. So, the development effort for that must have been >> Oh, yeah. >> Fairly massive. You could have done the UI layer in, you know, in months. >> Right, right, right, then it is not really cloud native way of doing it, right? I mean, if you're just changing the UI and the experience, that's completely different. What we have done is that we have completely re-architected the underlying product suite to meet the experience and the underlying platform layer. So, what can happen? How long did this take? What kind of resources did you have to throw at this from a development standpoint? >> So this has been in development for 12-18 months. >> Yeah. >> And we put, you know, a tremendous amount of resources to make this happen. I mean, fortunately in our case we have the depth, we have the functionality. So it was about translating that into the cloud native way of doing the app development. >> So did you approach this with sort of multiple small teams? Or was there a larger team? What was your philosophy here? >> It was multiple small teams, right. So if you look at our governance portfolio we got to take our governance catalog, rewrite that code. If we look at our master data management portfolio, we got to take, so it's multiple of small teams with very core focus. >> I mean, I ask you these questions because I think it adds credibility to the claims that you're making about we have a platform not a series of bespoke products. >> Right and we demoed it. Actually tomorrow at 11, I'm going to deep dive into the architecture of the whole platform itself. How we built it. What are the components we used and I'm going to demo it. 
So the code is up and running and we are going to put it out there into Cube for everybody to go use it. >> At Mandalay Bay, where is that demo? >> It's in Mandalay Bay, yeah. >> Okay. >> We have a session at 11:30. >> Talk more about machine learning and how you've infused machine learning into the portfolio. >> Right. So, every part of our product portfolio has machine learning, so I'll take two examples. One is DB2. So today, the DB2 Optimizer is a cost-based optimizer. We have taken the optimizer and infused machine learning into it to say, you know, based on the query that's coming in, predict the right access path and take it. And that has been such a great experience because we are seeing 30-50 percent performance improvement in most of the queries that we run through machine learning. So that's one. The other one is classification, so let's say you have a business term and you want to classify it. So, if you have a zip code, we can use it in our catalog to say there's an 80% chance this particular number is a zip code, and then it can learn over time; if you tell it, no, that's not a zip code, that's a post code in Canada, the next time you put that in there it has learned. So into every product we have infused machine learning, and our goal is to become completely a cognitive platform pretty soon. I mean, you know, so that has also been a tremendous piece of work that we're doing. >> So what can we expect? I mean, you guys are moving fast. >> Yeah. >> We've seen you go from sort of a bespoke product company division to this platform division. Injecting now machine learning into the equation. You're bringing in new technologies like blockchain, which you're able to do because you have a platform. >> Right. >> What should we expect in terms of the pace and the types of innovations that we could see going forward? What could you share with us without divulging secrets? >> Right.
So, from a product perspective, we want to infuse cognitive machine learning into every aspect of the product. So, we don't want our customers calling us, telling us there's a problem. We want to be able to tell our customer a day or two hours ahead that there is a problem. So that is predictability, right? And not just in the product; even on the services side, we want to infuse machine learning throughout. From a platform perspective, we want to make it completely open and extensible, so our partners can come and build on top of it, and every customer can take advantage of vertical and other solutions that they build. >> You get a platform, you get this flywheel effect, inject machine learning everywhere, open APIs so you can bring in new technologies like blockchain as they evolve. Dinesh, thank you very much for coming on the Cube. >> Oh, thank you so much. >> Always great to have you. >> It's a pleasure, thank you. >> Alright, keep it right there everybody. We'll be right back with our next guest. This is the Cube live from IBM Think 2018. We'll be right back. (techno music)
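The catalog classification Dinesh describes above (a value is guessed to be a zip code with 80% confidence, then corrected to a Canadian postal code and remembered) can be sketched as a toy classifier with a feedback loop. The class name, method names, and patterns here are hypothetical illustrations, not the actual IBM catalog API:

```python
import re

class TermClassifier:
    """Toy sketch of catalog term classification: guess a business term
    for a value, report a confidence, and learn from user corrections.
    Class, methods, and patterns are hypothetical, not IBM's API."""

    def __init__(self):
        # user corrections override pattern-based guesses
        self.corrections = {}

    def classify(self, value):
        if value in self.corrections:
            return self.corrections[value], 1.0  # learned from feedback
        if re.fullmatch(r"\d{5}", value):
            return "US zip code", 0.8            # "80% chance" style guess
        if re.fullmatch(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d", value):
            return "Canadian postal code", 0.8
        return "unknown", 0.0

    def feedback(self, value, correct_label):
        # "no, that's a postal code in Canada": remember the correction
        self.corrections[value] = correct_label

clf = TermClassifier()
print(clf.classify("95141"))   # ('US zip code', 0.8)
clf.feedback("95141", "Canadian postal code")
print(clf.classify("95141"))   # ('Canadian postal code', 1.0)
```

The point of the design is the second call: after one correction, the same input resolves with full confidence, which is the "it has learned" behavior from the interview in miniature.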

Published Date : Mar 21 2018



Dinesh Nirmal, IBM | Machine Learning Everywhere 2018


 

>> Announcer: Live from New York, it's theCUBE, covering Machine Learning Everywhere: Build Your Ladder to AI. Brought to you by IBM. >> Welcome back to Midtown, New York. We are at Machine Learning Everywhere: Build Your Ladder to AI, being put on by IBM here in late February in the Big Apple. Along with Dave Vellante, I'm John Walls. We're now joined by Dinesh Nirmal, who is the Vice President of Analytics Development and Site Executive at the IBM Silicon Valley lab. Dinesh, good to see you this morning, sir. >> Thank you, John. >> Fresh from California. You look great. >> Thanks. >> Alright, you've talked about this, and it's really your world: data, the new normal. Explain that. When you say it's the new normal, what exactly... How is it transforming, and what are people having to adjust to in terms of the new normal. >> So, if you look at data, I would say each and every one of us has become a living data set. Our age, our race, our salary, our likes and dislikes; every business is collecting it every second. I mean, every time you use your phone, that data is transmitted somewhere, stored somewhere. And airlines, for example, are looking at, you know, what do you like? Do you like an aisle seat? Do you like to get home early? You know, all those data. >> All of the above, right? >> And petabytes and zettabytes of data are being generated. So now, the businesses' challenge is, how do you take that data and make insights out of it to serve you better as a customer. That's what it comes to, but the biggest challenge is, how do you deal with this tremendous amount of data? That is the challenge. And creating insights out of it. >> That's interesting. I mean, that means the definition of identity is really... For decades it's been the same, and what you just described is a whole new persona, identity of an individual.
>> And now, you take the data, and you add some compliance or provisioning like GDPR on top of it, all of a sudden how do-- >> John: What is GDPR? For those who might not be familiar with it. >> Dinesh: That's the regulatory term that's used by the EU to make sure that, >> In the EU. >> If I as a customer come to an enterprise and say, I don't want any of my data stored, it's up to you to go delete that data completely, right? That's the term that's being used. And that goes into effect in May. How do you make sure that that data gets completely deleted by that time the customer has... How do you get that consent from the customer to go do all those... So there's a whole lot of challenges. As data multiplies, how do you deal with the data, how do you create insights from the data, how do you create consent on the data, how do you stay compliant on that data, how do you create the policies that are needed to govern that data? All those things need to be... Those are the challenges that enterprises face. >> You bring up GDPR, which, for those who are not familiar with it, actually went into effect last year but the fines go into effect this year, and the fines are onerous, like 4% of turnover, I mean it's just hideous, and the question I have for you is, this is really scary for companies because they've been trying to catch up to the big data world, and so they're just throwing big data projects all over the place, which is collecting data, oftentimes private information, and now the EU is coming down and saying, "Hey you have to be able to, if requested, delete that." A lot of times they don't even know where it is, so big challenge. Are you guys, can you help? >> Yeah, I mean, today if you look at it, the data exists all over the place. I mean, whether it's in your relational database, or in your Hadoop unstructured data, or, you know, an object store; it exists everywhere.
And you have to have a way to say where the data is, and the customer has to have given consent for you to look at the data, for you to delete the data, all those things. We have tools that we have built, and we have been in the business for a very long time, for example our governance catalog, where you can see all the data sources, the policies that are associated with them, the compliance, all those things. So you can look through that catalog, and you can see which data is GDPR compliant, which data is not, which data you can delete, which data you cannot. >> We were just talking in the open, Dave made the point that many companies, you need all-stars, not just somebody who has a specialty in one particular area, but maybe somebody who's in a particular regiment and they've got to wear about five different hats. So how do you democratize data to the point that you can make these all-stars? Across all kinds of different business units or different focuses within a company, because all of a sudden people have access to great reams of information. They've never had to worry about this before. But now, you've got to spread that wealth out and make everybody valuable. >> Right, really good question. Like I said, the data exists everywhere, and most enterprises don't want to move the data. Because it's a tremendous effort to move from an existing place to another one and make sure the applications work and all those things. We are building a data virtualization layer, a federation layer, by which, let's say you're a business unit and you want to get access to that data. Now you can use that federation, or data virtualization, layer without moving data, to go and grab that small piece of data. If you're a data scientist, let's say, you want only a very small piece of data that exists in your enterprise. You can go after it, without moving the data, just pick that data, do your work, and build a model, for example, based on that data.
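The federation idea Dinesh describes, one query reaching data where it lives instead of copying it into a new system, can be roughly illustrated with SQLite's ATTACH as a stand-in for a real data virtualization engine. The databases, tables, and values here are invented for the sketch:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
onprem_path = os.path.join(tmp, "onprem.db")
cloud_path = os.path.join(tmp, "cloud.db")

# Two physically separate stores standing in for data that "exists
# everywhere": an on-prem customer table and a cloud claims table.
with sqlite3.connect(onprem_path) as db:
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Ada"), (2, "Grace")])

with sqlite3.connect(cloud_path) as db:
    db.execute("CREATE TABLE claims (customer_id INTEGER, amount REAL)")
    db.executemany("INSERT INTO claims VALUES (?, ?)",
                   [(1, 120.0), (1, 80.0), (2, 50.0)])

# The "virtualization layer": a single SQL statement spanning both
# stores, without moving either dataset into a new system.
virt = sqlite3.connect(onprem_path)
virt.execute(f"ATTACH DATABASE '{cloud_path}' AS cloud")
rows = virt.execute("""
    SELECT c.name, SUM(cl.amount)
    FROM customers c JOIN cloud.claims cl ON cl.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 200.0), ('Grace', 50.0)]
```

A production federation layer does far more (pushdown, security, consent checks across heterogeneous stores), but the shape is the same: the caller writes one SQL statement and never copies the underlying data.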
So that data virtualization layer really helps because it's basically an SQL statement, if I were to simplify it. It can go after the data that exists, whether it's in a relational or non-relational place, and then bring it back, have your work done, and then put that data back to work. >> I don't want to be a pessimist, because I am an optimist, but it's scary times for companies. If they're a 20th century organization, they're really built around human expertise. How to make something, how to transact something, or how to serve somebody, or consult, whatever it is. The 21st century organization, data is foundational. It's at the core, and if my data is all over the place, I wasn't born data-driven, born in the cloud, all those buzzwords, how do traditional organizations catch up? What's the starting point for them? >> Most, if not all, enterprises are moving into a data-driven economy, because it's all going to be driven by data. Now it's not just data, you have to change your applications also. Because your applications are the ones that are accessing the data. One, how do you make an application adaptable to the amount of data that's coming in? How do you make it accurate? I mean, if you're building a model, having an accurate model is key. How do you make it performant, governed, and secure? That's another challenge. How do you make it measurable, and monitor all those things? If you take those three or four core tenets, that's what the 21st century is going to be about, because as we augment our humans, our developers, with AI and machine learning, it becomes more and more critical how you bring those core tenets to the data, so that as the data grows, the applications can also scale. >> Big task. If you look at the industries that have been disrupted, taxis, hotels, books, advertising. >> Dinesh: Retail. >> Retail, thank you.
Maybe less now. And you haven't seen that disruption yet in banks, insurance companies, certainly parts of government, defense; you haven't seen a big disruption yet, but it's coming. If you've got the data all over the place, you said earlier that virtually every company has to be data-driven, but a lot of companies that I talk to say, "Well, our industry is kind of insulated," or "Yeah, we're going to wait and see." That seems to me to be the recipe for disaster, what are your thoughts on that? >> I think the disruption will come from three angles. One, AI. Definitely that will change the way we work. Blockchain, another one. When you say we haven't seen it on the financial side, blockchain is going to change that. Third is quantum computing. The way we do compute is completely going to change with quantum computing. So I think the disruption is coming. Those are the three, if I have to predict into the 21st century, that will change the way we work. I mean, AI is already doing a tremendous amount of work. Now a machine can basically look at an image and say what it is, right? We have Watson for cancer oncology; we have 400,000 to 500,000 patients being treated with Watson. So AI is changing things, not just from an enterprise perspective, but from a socio-economic perspective and from a human perspective, so Watson is a great example for that. But yeah, disruption is happening as we speak. >> And do you agree that foundational to AI is the data? >> Oh yeah. >> And so, with your clients, like you said, you described it, they've got data all over the place, it's all in silos, not all, but much of it is in silos. How does IBM help them be a silo-buster? >> Few things, right? One, data exists everywhere. How do you make sure you get access to the data without moving the data, that's one. But if you look at the whole lifecycle, it's about ingesting the data, bringing the data, cleaning the data, because like you said, data becomes the core. Garbage in, garbage out.
You cannot get good models unless the data is clean. So there's that whole process; I would say if you're a data scientist, probably 70% of your time is spent on cleaning the data, making the data ready for building a model or for a model to consume. And then once you build that model, how do you make sure that the model gets retrained on a regular basis, how do you monitor the model, how do you govern the model, so that whole aspect goes in. And then the last piece is visualization and reporting. How do you make sure, once the model or the application is built, that you can generate the report you want or visualize that data. The data becomes the base layer, but then there's a whole lifecycle on top of it based on that data. >> So the formula for future innovation, then, starts with data. You add in AI, and I would think that cloud economics, however we define that, is also a part of that. My sense is most companies aren't ready, what's your take? >> For the cloud, or? >> I'm talking about innovation. If we agree that innovation comes from the data plus AI, plus you've got to have... By cloud economics I mean it's an API economy, you've got massive scale, those kinds of things, to compete. If you look at the disruptions in taxis and retail, it's got cloud economics underneath it. So most customers don't really have... They haven't yet even mastered cloud economics, let alone the data and the AI component. So there's a big gap. >> It's a huge challenge. How do we take the data and create insights out of the data? And not just existing data, right? The data is multiplying by the second. Every second, petabytes or zettabytes of data are being generated. So you're not thinking about just the data that exists within your enterprise right now; the data is also coming from several different places. Unstructured data, structured data, semi-structured data, how do you make sense of all of that?
That is the challenge the customers face, and, if you have existing data on top of the incoming data, how do you predict what you want to come out of that. >> It's really a pretty tough conundrum that some companies are in, because if you're behind the curve right now, you got a lot of catching up to do. So you think that we have to be in this space, but keeping up with this space, because the change happens so quickly, is really hard, so we have to pedal twice as fast just to get in the game. So talk about the challenge, how do you address it? How do you get somebody there to say, "Yep, here's your roadmap. "I know it's going to be hard, "but once you get there you're going to be okay, "or at least you're going to be on a level playing field." >> I look at the three D's. There's the data, there's the development of the models or the applications, and then the deployment of those models or applications into your existing enterprise infrastructure. Not only is the data changing, but in the development of the models, the tools that you use to develop are also changing. If you look at just the predictive piece, I mean, look from the 80's to now. You look at vanilla machine learning versus deep learning, I mean there's so many tools available. How do you bring it all together to make sense of which one you would use? I think, Dave, you mentioned Hadoop was the term from a decade ago; now it's about object store, and how do you make sure that data is there, or JSON and all those things. Everything is changing, so how do you, as an enterprise, keep afloat on not only the data piece, but all the core infrastructure piece, the applications piece, the development of those models piece, and then the biggest challenge comes when you have to deploy it.
Because you're infusing machine learning into your legacy applications, your third-party software, software that was written in the 60's and 70's, it's not an easy task. I was at a major bank in Europe, and the CTO mentioned to me that, "Dinesh, we built our model in three weeks. "It has been 11 months, we still haven't deployed it." And that's the reality. >> There's a cultural aspect too, I think. I think it was Rob Thomas, I was reading a blog that he wrote, and he said that he was talking to a customer saying, "Thank god I'm not in the technology industry, "things change so fast I could never, "so glad I'm not a software company." And Rob's reaction was, "Uh, hang on. (laughs) "You are in the technology business, "you are a software company." And so there's that cultural mindset. And you saw it with GE, Jeffrey Immelt said, "I went to bed an industrial giant, "woke up a software company." But look at the challenges that industrial giant has had transforming, so... They need partners, they need people that have done this before, they need expertise and obviously technology, but it's people and process that always hold it up. >> I mean technology is one piece, and that's where I think companies like IBM make a huge difference. You understand enterprise. Because you bring that wealth of knowledge of working with them for decades and they understand your infrastructure, and that is a core element, like I said the last piece is the deployment piece, how do you bring that model into your existing infrastructure and make sure that it fits into that architecture. And that involves a tremendous amount of work, skills, and knowledge. >> Job security. (all laugh) >> Dinesh, thanks for being with us this morning, we appreciate that and good luck with the rest of the event, here in New York City. Back with more here on theCUBE, right after this. (calming techno music)
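Dinesh's point in the interview above, that most of a data scientist's time goes to cleaning and shaping the data before any model is built, can be made concrete with a tiny rule-based cleanup: filling a missing field from a related column, and leaving anything the rules cannot resolve for human supervision. The field names, rule table, and function are illustrative, not any IBM tool's API:

```python
# Illustrative inference rules: procedure type implies a gender value
# (an example Dinesh gives elsewhere in these interviews).
INFERENCE_RULES = {"prostate exam": "M", "gynecology": "F"}

def clean_gender(records):
    """Fill missing gender from the procedure column; records the rules
    cannot resolve are set aside for human review."""
    cleaned, unresolved = [], []
    for rec in records:
        rec = dict(rec)  # don't mutate the caller's data
        if not rec.get("gender"):
            inferred = INFERENCE_RULES.get(rec.get("procedure", "").lower())
            if inferred is None:
                unresolved.append(rec)   # leave for human supervision
                continue
            rec["gender"] = inferred
        cleaned.append(rec)
    return cleaned, unresolved

records = [
    {"id": 1, "gender": "", "procedure": "Prostate Exam"},
    {"id": 2, "gender": "F", "procedure": "X-ray"},
    {"id": 3, "gender": "", "procedure": "X-ray"},
]
cleaned, unresolved = clean_gender(records)
print([r["gender"] for r in cleaned])   # ['M', 'F']
print([r["id"] for r in unresolved])    # [3]
```

Real pipelines replace the lookup table with a learned model, but the split into automatically cleaned rows versus rows needing supervision is the same workflow.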

Published Date : Feb 27 2018



Dinesh Nirmal, IBM | CUBEConversation


 

(upbeat music) >> Hi everyone. We have a special program today. We are joined by Dinesh Nirmal, who is VP of Development for Analytics at IBM, and Dinesh has an extremely broad perspective on what's going on in this part of the industry, and IBM has a very broad portfolio. So, between the two of us, I think we can cover a lot of ground today. So, Dinesh, welcome. >> Oh thank you George. Great to be here. >> So just to frame the discussion, I wanted to hit on four key highlights. One is balancing compatibility across cloud, on-prem, and edge versus leveraging specialized services that might be on any one of those platforms. Then, harmonizing and simplifying both the management and the development of services across these platforms; you have that trade-off between: do I do everything compatibly, or can I take advantage of platform-specific stuff? Then, we've heard a huge amount of noise on machine learning, and everyone says they're democratizing it; we want to hear your perspective on how you think that's most effectively done. And then, if we have time, how to manage machine learning data feedback loops to improve the models. So, let's start with that. >> So you talked about the private cloud and the public cloud, and then, how do you manage the data and the models, or the other analytical assets, across the hybrid nature of today. So, if you look at our enterprises, it's a hybrid format that most customers adopt. I mean, you have some data on the public side, but you have your mission critical data, that's very core to your transactions, existing in the private cloud. Now, how do you make sure that the data that you've pushed to the cloud can be used to build models? And then you can take that model and deploy it on-prem or on the public cloud.
>> Is that the emerging sort of mainstream design pattern, where mission critical systems are less likely to move, for latency, or for the fact that they're fused to their own hardware, but you take the data, and the research for the models happens up in the cloud, and then that gets pushed down close to where the transaction decisions are. >> Right, so there's also the economics of data that comes into play. So if you are doing, you know, a large scale neural net, where you have GPUs and you want to do deep learning, obviously, you know, it might make more sense for you to push it into the cloud and do that with one of the deep learning frameworks out there. But then you have your core transactional data, which includes your customer data, you know, or your customer medical data, which I think some customers might be reluctant to push to a public cloud, but you still want to build models and predict and all those things. So I think it's a hybrid nature; depending on the sensitivities of the data, customers might decide to put it on public cloud versus private cloud, which is on their premises, right? So then how do you serve those customer needs, making sure that you can build a model on the cloud and that you can deploy that model on private cloud, or vice versa. I mean, you can build that model only on private cloud, and then deploy it on your public cloud. Now the challenge, one last statement, is that people think, well, once I build a model and I deploy it on public cloud, then it's easy, because it's just an API call at that time, just to call that model to execute the transactions. But that's not the case. You take support vector machines, for example, right; they still have vectors in there, and that means your data is there. So even though you're saying you're deploying the model, you still have sensitive data there, so those are the kind of things customers need to think about before they go deploy those models.
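Dinesh's caution, that a deployed model can still carry raw training data inside it while another model type ships only aggregates, can be illustrated with a toy comparison. A nearest-neighbor model stands in here for the data-retaining kind (an SVM's support vectors behave similarly); the numbers and function names are made up for the sketch:

```python
def fit_linear(xs, ys):
    # ordinary least squares for y = a*x + b; the deployable "model"
    # is just two aggregate numbers, no raw rows
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return {"slope": a, "intercept": my - a * mx}

def fit_nearest_neighbor(xs, ys):
    # here the "model" IS the data: every sensitive row ships with it,
    # analogous to the support vectors inside a deployed SVM
    return {"points": list(zip(xs, ys))}

salaries = [40.0, 50.0, 60.0, 80.0]      # sensitive inputs (invented)
premiums = [400.0, 500.0, 600.0, 800.0]

linear = fit_linear(salaries, premiums)
knn = fit_nearest_neighbor(salaries, premiums)

print(linear)  # {'slope': 10.0, 'intercept': 0.0}: only aggregates
print(40.0 in [p[0] for p in knn["points"]])  # True: raw rows recoverable
```

The takeaway matches the interview: "deploying the model" to a public cloud means very different things for privacy depending on which algorithm family produced the artifact.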
>> So I might, this is a topic for our Friday interview with a member of the Watson IT family, but it's not so black and white when you say we'll leave all your customer data with you, and we'll work on the models, because it's, sort of, like teabags, you know, you can take the customer's teabag and squeeze some of the tea out, in your IBM or public cloud, and give them back the teabag, but you're getting some of the benefit of this data. >> Right, so like, it depends, depends on the algorithms you build. You could take a linear regression, and you don't have the challenges I mentioned with support vector machines, because none of the data is moving, it's just the model. So it depends. I think that's where, you know, what Watson has done will help tremendously, because the data is secure in that sense. But if you're building on your own, it's a different challenge; you've got to make sure you pick the right algorithms to do that. >> Okay, so let's move on to the modern, sort of what we call operational analytic pipeline, where the key steps are ingest, process, analyze, predict, serve, and you can drill down on those more. Today those pipelines are pretty much built out of multi-vendor components. How do you see that evolving under the tension between simplicity, coming from one vendor with the pieces all designed together, and specialization, where you want a, you know, unique tool in one component. >> Right, so you're exactly right. So you can take a two-prong approach. One is, you can go to a cloud provider, get each of the services, and you stitch it together. That's one approach. A challenging approach, but that has its benefits, right; I mean, you bring some core strengths from each vendor into it. The other one is the integrated approach, where you ingest the data, you shape or cleanse the data, you get it prepared for analytics, you build the model, you predict, you visualize. I mean, that all comes in one.
The benefit there is you get the whole stack in one, you have a whole pipeline that you can execute, you have one service provider that's giving you the services, it's managed. So all those benefits come with it, and that's probably the preferred way, with it all integrated together in one stack. I think that's the path most people go towards, because then you have the whole pipeline available to you, and also the services that come with it, and any updates that come with it. And if you take the first route, one challenge you have is, how do you make sure all these services are compatible with each other? How do you make sure they're compliant? So if you're an insurance company, you want it to be HIPAA compliant. Are you going to individually make sure that each of these services is HIPAA compliant? Or, if you get it from one integrated provider, you can make sure they are HIPAA compliant and the tests are done. So all those benefits, to me, outweigh going and putting unmanaged services together, and then creating a data lake to underlie all of it.
Then, if you get the integrated solution, all the way from ingest to visualization, you have one provider, it's tested, it's integrated, you know, it's combined, it works well together, so I would say, going forward, if you look at it purely from an enterprise perspective, I would say integrated solutions is the way to go, because that what will be the last man standing. I'll give you an example. I was with a major bank in Europe, about a month ago, and I took them through our data science experience, our machine learning project and all that, and you know, the CTO's take was that, Dinesh, I got it. Building the model itself, it only took us two days, but incorporating our model into our existing infrastructure, it has been 11 months, we haven't been able to do it. So that's the challenge our enterprises face, and they want an integrated solution to bring that model into their existing infrastructure. So that's, you know, that's my thought. >> Today though, let's talk about the IBM pipeline. Spark is core, Ingest is, off the-- >> Dinesh: Right, so you can do spark streaming, you can use Kafka, or you can use infostream which is our proprietary tool. >> Right, although, you wouldn't really use structured streaming for ingest, 'cause of the back pressure? >> Right, so they are-- >> The point that I'm trying to make is, it's still multi-vendor, and then the serving side, I don't know, where, once the analysis is done and predictions are made, some sort of sequel database has to take over, so it's, today, it's still pretty multi vendor. So how do you see any of those products broadening their footprints so that the number of pieces decreases. 
>> So good question, they are all going to get into end pipeline, because that's where the value is, unless you provide an integrated end to end solution for a customer, especially parts customer it's all about putting it all together, and putting these pieces together is not easy, even if you ingest the data, IOP kind of data, a lot of times, 99% of the time, data is not clean, unless you're in a competition where you get cleansed data, in real world, that never happens. So then, I would say 80% of a data scientists time is spent on cleaning the data, shaping the data, preparing the data to build that pipeline. So for most customers, it's critical that they get that end to end, well oiled, well connected solution integrated solution, than take it from each vendor, every isolated solution. To answer your question, yes, every vendor is going to move into the ingest, data cleansing phase, transformation, and the building the pipeline and then visualization, if you look at those five steps, has to be developed. >> But just building the data cleansing and transformation, having it in your, native to your own pipeline, that doesn't sound like it's going to solve the problem of messy data that needs, you know, human supervision to correct. >> I mean, so there is some level of human supervision to be sure, so I'll give you an example, right, so when data from an insurance company goes, a lot of times, the gender could be missing, how do you know if it's a male or female? Then you got to build another model to say, you know, this patient has gone for a prostate exam, you know, it's a male, gynecology is a female, so you have to do some intuitary work in there, to make sure that the data is clean, and then there's some human supervision to make sure that this is good to build models, because when you're executing that pipeline in real time, >> Yeah. 
>> It's all based on the past data, so you want to make sure that the data is as clean as possible to train the model that you're going to execute on. >> So, let me ask you, turning to a slide we've got about complexity, first for developers, and second for admins. If we take the steps in the pipeline as ingest, process, analyze, predict, serve, and sort of products or product categories as Kafka, Spark Streaming and SQL, a web service for predict, and MPP SQL or NoSQL for serve, even if they all came from IBM, would it be possible to unify the data model, the addressing and namespace, and I'm just kicking off a few that I can think of, the programming model, persistence, transaction model, workflow, testing integration? It's one thing to say it's all IBM, and it's another thing for the developer working with it to see it as one suite. >> So it has to be validated, and that's the benefit that IBM brings already, because we obviously test each segment to make sure it works. But when you talk about complexity, building the model is one part, you know, the development of the model, but the complexity also comes in the deployment of the model. Now we talk about the management of the model: how do you monitor it? When was the model deployed, was it deployed in test, was it deployed in production, who changed that model last, what was changed, and how is it scoring? Is it scoring high or low? You want to get a notification when the model starts scoring low. So complexity is all the way across, from getting the data in and cleaning the data to developing the model; it never ends. And the other benefit that IBM has added is the feedback loop, which reduces the complexity, because today, if the model scores low, you have to take it offline, retrain the model based on the new data, and then redeploy it.
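The rule-based cleanup Dinesh described a moment ago, inferring a missing gender field from procedure codes, is easy to sketch. A minimal, hypothetical version in plain Python follows; the procedure names, field names, and rules are invented for illustration and are not taken from any IBM product:

```python
# Hypothetical imputation rules: some procedures strongly imply a gender.
PROCEDURE_IMPLIES = {
    "prostate_exam": "M",
    "gynecology": "F",
}

def impute_gender(record):
    """Fill in a missing gender from the procedure code, where a rule applies."""
    if record.get("gender"):
        return record                       # already present: leave it alone
    implied = PROCEDURE_IMPLIES.get(record.get("procedure"))
    if implied:
        return dict(record, gender=implied)
    return record                           # no rule fires: leave for review

patients = [
    {"id": 1, "procedure": "prostate_exam", "gender": None},
    {"id": 2, "procedure": "gynecology", "gender": None},
    {"id": 3, "procedure": "x_ray", "gender": None},
]
patients = [impute_gender(p) for p in patients]
```

Records the rules cannot decide, like the x-ray patient above, are exactly where the human supervision Dinesh mentions still comes in.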
Usually for enterprises, there are slots where you can take it offline and put it back online, all these things, so it's a process. What we have done is created a feedback loop where we are training the model in real time, using real-time data, so the model is continuously-- >> Online learning. >> Online learning. >> And challenger/champion, or A/B testing, to see which one is more robust. >> Right, so you can do that. I mean, you could have multiple models where you can do A/B testing, but in this case, you can continuously train the model to say, okay, this model scores the best. And then, another benefit is that, if you look at the whole machine learning process, there's the data, there's development, there's deployment. On the development side, more and more it's getting commoditized, meaning picking the right algorithm; there are a lot of tools, including from IBM, that can suggest the right one to use, so that piece is getting a little, I don't want to say easier, but less complex. But the data cleansing and the deployment, those are the hard parts for enterprises: when you have thousands of models, how do you make sure that you deploy the right model? >> So you might say that the pipeline for managing the model is sort of separate from the original data pipeline; maybe it includes much of the same technology, but once your data pipeline is in production, the model pipeline has to keep cycling through.
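The monitoring loop described here, watch each deployed model's score, flag it when it dips, and prefer the champion, can be sketched in a few lines. This is a hedged illustration; the model names and the alert threshold are invented, not from any IBM tooling:

```python
# Hypothetical model registry: track scores per deployment and flag drift.
ALERT_THRESHOLD = 0.80

class ModelMonitor:
    def __init__(self, name):
        self.name = name
        self.scores = []

    def record(self, score):
        self.scores.append(score)

    def needs_retraining(self):
        # Notify when the most recent score dips below the threshold.
        return bool(self.scores) and self.scores[-1] < ALERT_THRESHOLD

def champion(monitors):
    """Champion/challenger: pick the model with the best latest score."""
    return max(monitors, key=lambda m: m.scores[-1]).name

a, b = ModelMonitor("fraud_v1"), ModelMonitor("fraud_v2")
for s in (0.91, 0.88, 0.76):
    a.record(s)          # the champion drifts low over time
for s in (0.85, 0.87, 0.89):
    b.record(s)          # the challenger holds steady
```

In a real deployment the scores would come from live evaluation against fresh labeled data, and the retraining step Dinesh describes would be triggered automatically.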
>> Exactly, so the data pipeline could be changing. If you take a loan example, right, a lot of the data that goes into the model pipeline is static. I mean, my age, it's not going to change every day, I mean, it does, but you know, the age, my salary, my race, my gender, those are static data that you can take and put in there. But then there's also real-time data coming in: my loan amount, my credit score, all those things. So how do you bring that data pipeline, between real-time and static data, into the model pipeline, so the model can predict accurately? And based on the score dipping, you should be able to retrain the model using real-time data. >> I want to take you, Dinesh, to the issue of a multi-vendor stack again, and the administrative challenges. So here, we look at a slide that shows me just rattling off some of the admin challenges: governance, performance modeling, scheduling, orchestration, availability, recovery, authentication, authorization, resource isolation, elasticity, testing integration. So that's the Y-axis, and then for every different product in the pipeline, as the X-axis, say Kafka, Spark Structured Streaming, MPP SQL, NoSQL, so you've got a mess. >> Right. >> Most open source companies are trying to make life easier for companies by managing their software as a service for the customer, and that's typically how they monetize. But tell us what you see the problem is, or will be, with that approach. >> So, great question. Let me take a very simple example. Probably most of our audience know about GDPR, which is the European law on the right to be forgotten. So if you're an enterprise, and I say, George, I want my data deleted, you have to delete all of my data within a period of time. Now, that's where one of the aspects you talked about, governance, comes in. How do you make sure you have governance across not just the data but your individual assets?
So if you're using a multi-vendor solution, in all of that data governance, how do I make sure that data gets deleted by all these services that are tied together? >> Let me maybe make an analogy. On CSI, when they pick up something at the crime scene, they've got to make sure that it's bagged, and the chain of custody doesn't lose its integrity all the way back to the evidence room. I assume you're talking about something like that. >> Yeah, something similar. Where the data, as it moves between private cloud and public cloud, and the analytical assets using that data, all those things need to work seamlessly for you to execute that particular transaction to delete data from everywhere. >> So it's not just administrative costs, but regulations that are pushing towards more homogeneous platforms. >> Right, right. And even if you take some of the other things on the stack, monitoring, logging, metering, each vendor provides some of those capabilities, but you have to make sure, when you put all these services together, how are they going to integrate? You want one monitoring stack, so if you're pulling, you know, your IoT kind of data into a data center, for your whole-stack evaluation, how do you make sure you're getting the right monitoring data across the board? Those are the kinds of challenges that you will have. >> It's funny you mention that, because we were talking to an old Lotus colleague of mine, who was CTO of Microsoft's IT organization, and we were talking about how the cloud vendors can put a machine learning management application across their properties, or their services, but he said one of the first problems he'll encounter is the telemetry. It's really easy on hardware, CPU utilization, memory utilization, network and I/O, but as you get higher up in the application services, it becomes much more difficult to harmonize, so that a program can figure out what's going wrong.
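The right-to-be-forgotten scenario Dinesh walked through, one request fanning out to every store that holds the data, with a chain-of-custody style audit trail, can be sketched as follows. The store names and record layout are hypothetical:

```python
# Hypothetical fan-out of a GDPR erasure request across registered stores,
# keeping an audit log so the deletion can be proven end to end.
class Store:
    def __init__(self, name):
        self.name = name
        self.records = {}

    def delete_user(self, user_id):
        self.records.pop(user_id, None)
        return True                      # confirm the deletion

stores = [Store("on_prem_db"), Store("cloud_warehouse"), Store("ml_features")]
for s in stores:
    s.records["george"] = {"loan": 25000}

audit_log = []

def forget(user_id):
    """Delete a user from every store; succeed only if all stores confirm."""
    for s in stores:
        ok = s.delete_user(user_id)
        audit_log.append((s.name, user_id, ok))
    return all(ok for _, _, ok in audit_log)

completed = forget("george")
```

The hard part in a real multi-vendor stack is exactly what the conversation identifies: every service has to expose a trustworthy `delete_user` equivalent, and the audit trail has to survive across vendor boundaries.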
>> Right, and I mean, like anomaly detection, right? >> Yes. >> I mean, how do you make sure you're seeing patterns where you can predict something before it happens, right? >> Is that on the roadmap for...? >> Yeah, so we're already working with some big customers to say, if you have a data center, how do you look at outages to predict what can go wrong in the future, root cause analysis; I mean, that is a huge problem to solve. So let's say a customer hit a problem, you took an outage: what caused it? Because today, you have specialists who will come and try to figure out what the problem is, but can we use machine learning or deep learning to figure out, is it a fix that was missing, or did an application get changed that caused a CPU spike, which caused the outage? That whole cause analysis is the hardest part to solve, because you are talking about decades' worth of people's knowledge, and now you are training a machine to do that prediction. >> And from my understanding, root cause analysis is most effective when you have a rich model of how, in this case, your data structures and apps are working, and there might be many little models, but they're held together by some sort of knowledge graph that says here is where all the pieces fit, these are the pieces below these, sort of as peers to these other things. How does that knowledge graph get built, and is this the next generation of the configuration management database? >> Right, so I call it the self-healing, self-managing, self-fixing data center.
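A first cut at the CPU-spike detection being described is a plain statistical baseline: flag readings far from the mean. A sketch with invented numbers follows; real systems use far richer models than a z-score:

```python
import statistics

# Hypothetical CPU-utilization samples; the last reading is the spike
# that preceded the outage.
cpu = [22, 25, 24, 23, 26, 24, 25, 23, 24, 96]

def anomalies(samples, z_cutoff=2.5):
    """Flag samples more than z_cutoff standard deviations from the mean.

    A common cutoff is 3.0, but one huge spike in a short window inflates
    the standard deviation itself, so a slightly lower cutoff is used here.
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [x for x in samples if abs(x - mean) / stdev > z_cutoff]

flagged = anomalies(cpu)
```

Predicting the spike *before* it happens, as Dinesh describes, means replacing this after-the-fact check with a model trained on the patterns that historically led up to outages.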
It's easy for you to turn up the heat or the A/C and the temperature goes down; I mean, those are good, but the real value for a customer is exactly what you mentioned: building up that knowledge graph from different models that all come together. But the hardest part is, predicting an anomaly is one thing, and getting to the root cause is a different thing, because at that point, you're saying, I know exactly what caused this problem, and I can prevent it from happening again. That's not easy. We are working with our customers to figure out how we get to the root cause analysis, but it's all about building the knowledge graph with multiple models coming from different systems; today, I mean, enterprises have different systems from multiple vendors. We have to bring all that monitoring data into one source, and that's where that knowledge comes in, and then different models will feed that data, and then you need to mine that data, using deep learning algorithms, to say, what caused this? >> Okay, so this actually sounds extremely relevant, although we're probably, in the interest of time, going to have to dig down on that another time. But just at a high level, it sounds like the knowledge graph is sort of your web or directory into how local components or local models work, and then, knowing that, if it sees problems coming up here, it can understand how it affects something else tangentially. >> So think of the knowledge graph as a neural net, because it's building a new neural net based on the past data, and it has that built-in knowledge where it says, okay, these symptoms seem to be a problem that I have encountered in the past. Now I can predict the root cause because I know this happened in the past. So it's kind of like putting up that net to build new problem determinations as it goes along. So it's a complex task. It's not easy to get to root cause analysis. But that's something we are aggressively working on developing.
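The knowledge-graph idea in this exchange, walking back from an observed symptom through known causal edges until a root cause is reached, can be sketched with a plain adjacency map. The incidents and edges below are invented for illustration:

```python
# Hypothetical causal graph: each edge points from a symptom back to its cause.
CAUSED_BY = {
    "outage": "cpu_spike",
    "cpu_spike": "app_change",
    "app_change": "missing_fix",
    # "missing_fix" has no known cause: it is a root.
}

def root_cause(symptom):
    """Follow causal edges until a node with no known cause is reached."""
    seen = set()
    node = symptom
    while node in CAUSED_BY and node not in seen:
        seen.add(node)                  # guard against cycles in the graph
        node = CAUSED_BY[node]
    return node

cause = root_cause("outage")
```

The hard part, as Dinesh says, is not the traversal but learning those edges in the first place from decades of operator knowledge and multi-vendor monitoring data.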
>> Okay, so let me ask, let's talk about sort of democratizing machine learning and the different ways of doing that. You've actually talked about the big pain points, maybe not so sexy, but critical, which are operationalizing the models and preparing the data. Let me bounce some of the other approaches off you. One that we have heard from Amazon is that they're saying, well, data munging might be an issue, and operationalizing the models might be an issue, but the biggest issue in terms of making this developer-ready is, we're going to take the machine learning we use to run our business, whether it's merchandising fashion, running recommendation engines, managing fulfillment or logistics, and just like they did with AWS, they're dog-fooding it internally, and then they're going to put it out on AWS as a new layer of the platform. Where do you see that being effective, and where less effective? >> Right, so let me answer the first part of your question, the democratization of machine learning. That happens when, for example, a real estate agent who has no idea about machine learning is able to come and predict the house prices in an area. That, to me, is democratizing. Because at that point, you have made it available to everyone; everyone can use it. But that comes back to our first point, which is having that clean set of data. You can build all the pre-canned pipelines out there, but if you're not feeding a clean set of data in, none of this works. Garbage in, garbage out, that's what you're going to get. So when we talk about democratization, it's not that easy and simple, because you can build all these pre-canned pipelines that you have used in-house for your own purposes, but every customer has many unique cases. So if I take you as a bank, your fraud detection methods are completely different from mine as a bank; my limit for fraud detection could be completely different.
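Dinesh's point that two banks tune the same pre-canned fraud pipeline to different limits can be made concrete with a tiny parameterized sketch; the limits and transactions are, of course, made up:

```python
# Hypothetical pre-canned fraud check that each customer configures:
# the same pipeline, with different per-bank limits.
def make_fraud_detector(limit):
    def flag(transactions):
        return [t for t in transactions if t["amount"] > limit]
    return flag

txns = [{"id": 1, "amount": 900}, {"id": 2, "amount": 4500}]

bank_a = make_fraud_detector(limit=1000)   # conservative retail bank
bank_b = make_fraud_detector(limit=5000)   # corporate bank, higher limit

flagged_a = bank_a(txns)
flagged_b = bank_b(txns)
```

Same code, same data, different results: the customization Dinesh describes lives entirely in the per-customer configuration and retraining, not in the shared pipeline.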
So there is always customization involved, the data that's coming in is different, so while it's a buzzword, I think there's knowledge that people need to feed it, there are models that need to be tuned and trained, and there's deployment that is completely different, so you know, there is work that has to be done. >> So then what I'm taking away from what you're saying is, you don't have to start from ground zero with your data, but you might want to add some of your own data, which is specialized, or slightly different from what the pre-trained model used. You still have to worry about operationalizing it, so it's not a pure developer-ready API, but it uplevels the skills requirement so that it's not quite as demanding as working with TensorFlow or something like that. >> Right, I mean, you can always build pre-canned pipelines and make them available, and we have already done that. For fraud detection, we have pre-canned pipelines; for IT analytics, we have pre-canned pipelines. So it's nothing new; you can always take what you have done in-house and make it available to the public or to customers, but then they have to take it and do customization to meet their demands, and bring their data to retrain the model. All those things have to be done; it's not just about providing the model, because every customer use case is completely different. Whether you are looking at fraud detection from one bank's perspective or another, not all banks are going to do the same thing. Same thing for predicting, for example, the loan: your loan approval process is going to be completely different from my loan approval process as a bank. >> So let me ask you then, and we're getting low on time here, but if you had to characterize Microsoft Azure, Google, and Amazon as each bringing to bear certain advantages and disadvantages, and you're now the ambassador, so you're not a representative of IBM, help us understand the sweet spot for each of those.
Like, you're trying to fix the two sides of the pipeline, I guess, thinking of it like a barbell, you know; where are the others, based on their data assets and their tools, and where do they need to work? >> So, there are two aspects to it. There's the enterprise aspect: as an enterprise, I would say it's not just about the technology, but there's also the services aspect. If my model goes down in the middle of the night, and my banking app is down, who do I call? If I'm using an open source service that is available on the cloud provider, do I have the right amount of coverage to call somebody and fix it? So there are the enterprise capabilities, availability, reliability; that is different from a developer who comes in with a CSV file that he or she wants to use to build a model to predict something. That's different; these are two different aspects. So if you talk about, you know, all these vendors, if I'm wearing an enterprise hat, one of the things I would look at is, can I get an integrated solution, end to end, on the machine learning platform? >> And that means end to end in one location, >> Right. >> So you don't have network issues or latency and stuff like that. >> Right, it's an integrated solution, where I can bring in the data, there are no challenges with latency, those kinds of things, and then can I get the enterprise-level service, the SLAs, all those things, right? So there, the named vendors obviously have an upper hand, because they are preferred by enterprises over a brand new open source vendor that comes along. But then, within enterprises, there are lines of business building models using some of the open source vendors, which is okay, but eventually those have to get deployed, and then how do you make sure you have those enterprise capabilities? So if you ask me, I think each vendor brings some capabilities.
I think the benefit IBM brings is, one, you have the choice and the freedom to bring in cloud, on-prem, or hybrid, and you have all the choices of languages; we support R, Python, Spark, I mean, SPSS. So the choice, the freedom, the reliability, the availability, the enterprise nature, that's where IBM comes in and differentiates, and for our customers, that's a huge plus. >> One last question, and we're really out of time. In terms of thinking about a unified pipeline, when we were at Spark Summit, sitting down with Matei Zaharia and Reynold Xin, the question came up that Databricks has an incomplete pipeline: no persistence, no ingest, not really much in the way of serving, but boy are they good at, you know, data transformation and munging and machine learning. And they said they consider it part of their ultimate responsibility to take control. On the ingest side it's Kafka; on the serving side, it might be Redis or something else, or the Spark databases like SnappyData and Splice Machine. Spark is so central to IBM's efforts. What might a unified Spark pipeline look like? Have you guys thought about that? >> It's not there yet; obviously they could be working on it, but for our purposes, Spark is critical for us, and the reason we invested in Spark so much is because of the execution engine, where you can take a tremendous amount of data and, you know, crunch through it in a very short amount of time. That's the reason we also invested in Spark SQL, because we have a good chunk of customers who still use SQL heavily. We put a lot of work into Spark ML, so we are continuing to invest, and probably they will get to an integrated solution, but it's not there yet; as it comes along, we'll adapt.
If it meets our needs and demands, and an enterprise can use it, then definitely. I mean, you know, we saw that Spark's core engine has the ability to crunch a tremendous amount of data, so we are using it; 45 of our internal products use Spark as their core engine. Our DSX, Data Science Experience, has Spark as its core engine. So, yeah, today it's not there, but I know they're probably working on it, and if there are elements of this whole pipeline that come together, that are convenient for us to use at an enterprise level, we will definitely consider using them. >> Okay, on that note, Dinesh, thanks for joining us, and for taking time out of your busy schedule. My name is George Gilbert, I'm with Dinesh Nirmal from IBM, VP of Analytics Development, and we are at theCUBE studio in Palo Alto, and we will be back in the not too distant future with more interesting interviews with some of the gurus at IBM. (peppy music)

Published Date: Aug 22, 2017


Roland Voelskow & Dinesh Nirmal - IBM Fast Track Your Data 2017


 

>> Narrator: Live from Munich, Germany, it's theCube, covering IBM Fast Track Your Data. Brought to you by IBM. >> Welcome to Fast Track Your Data, everybody, welcome to Munich, Germany. This is theCube, the leader in live tech coverage. I'm Dave Vellante with my co-host Jim Kobielus. Dinesh Nirmal is here, he's the vice president of IBM Analytics Development, of course, at IBM, and he's joined by Roland Voelskow, who is a Portfolio Executive at T-Systems, which is a division of Deutsche Telekom. Gentlemen, welcome to theCube. Dinesh, good to see you again. >> Thank you. Roland, let me start with you. So your role inside T-Systems, talk about that a little bit. >> Yeah, so thank you for having me here. At T-Systems we serve our customers with all kinds of hosting services, from infrastructure up to application services, and we have recently, I'd say about five years ago, started to standardize our offerings as a product portfolio. Coming from the infrastructure and infrastructure-as-a-service offerings, we are now putting a strong effort into container virtualization, to be able to move complete application landscapes from different platforms to T-Systems, or between T-Systems platforms. The goal is to enable customers to talk with us about their application needs and their business process needs, and everything related to the right place to run the application will be managed automatically by our intelligent platform, which will decide, in a multi-platform environment, whether an application, particularly a business application, runs on a high-availability private cloud, while a test/dev environment, for example, could run on a public cloud. So the customer should not need to deal with these kinds of technology questions anymore; we want to cover the application needs and have the rest automated.
>> Yeah, we're seeing a massive trend in our community for organizations like yours to try to eliminate, wherever possible, undifferentiated infrastructure management, the provisioning of hardware, LUN management, and those things that really don't add value to the business, to support their digital transformations and raise it up a level, and that's clearly what you just described, right? >> Roland: Exactly. >> Okay, and one of those areas where companies want to invest, of course, is data. You guys are here in Munich, you chose this venue for a reason, but Dinesh, give us the update on what's going on in your world and what you're doing here at Fast Track Your Data. >> Right, so actually Roland and I were talking about this yesterday. One of the challenges our clients and customers have is hybrid data management. How do you make sure that for your data, whether it's on-premises or on the cloud, you have a seamless way to interact with that data, manage the data, and govern the data? That's the biggest challenge. I mean, a lot of customers want to move to the cloud, but the critical transactional data still sits on-prem. So that's one area we are focusing on here in Munich: especially with GDPR coming in 2018, how do we help our customers manage the data and govern the data through the whole lifecycle of the data? >> Okay, well, how do you do that? I mean, it's a multi-cloud world. Most customers might have some Bluemix, they might have some Amazon, they have a lot of on-prem, they've got mainframe, and they've got all kinds of new things happening, like containers and microservices, some in the cloud, some on-prem. But generally speaking, what I just described is a series of stovepipes; they each have their different lifecycle and data lifecycle and management frameworks. Is it your vision to bring all of those together in a single management framework? Maybe share with us where you are on that journey and where you're going.
>> Exactly, that's exactly our effort right now: to bring every application service which we provide to our customers into a containerized version which we can move across our platforms, or which we can also migrate from external, competing platforms and onboard into T-Systems when we acquire new customers. It is also a reality that customers work with different platforms, so we want to be the integrator, and we would like to expand our product portfolio as an application portfolio and bring new, attractive applications into our containerized application catalog. And here comes the cooperation with IBM: we are already a partner with IBM DB2, and we are now happy to talk about expanding the partnership into hosting the analytics portfolio of IBM. So we bring the strengths of both companies together, the market access, credibility, and security in terms of European data law from T-Systems, and the very attractive analytics portfolio of IBM, so we can bring the best pieces together and have a very attractive offering for the market. >> So Dinesh, how does IBM fulfill that vision? Is it a product, is it a set of services, is it a framework, a series of products? Maybe you could describe it in some more depth. >> Yeah, it all has to start with the platform. So you have the underlying platform, and then you build, as you said, the container services on top of it, to meet the needs of our enterprise customers. And then the biggest challenge is, how do you govern the data through the lifecycle of that data, right? Because that data could be sitting on-prem, or it could be sitting on a cloud, on a private cloud. How do you make sure that you can track that data, who touched the data, where that data went, and not just the data, but the analytical assets, right? So if your model is built, when was it deployed, where was it deployed?
Was it deployed in QA, was it deployed in development? All those things have to be governed, so you have one governance policy, one governance console that you can go to as a CDO, to make sure that you can see where the data is moving and how the data is managed. So that's the biggest challenge, and that's what we are trying to solve for our enterprise customers. >> So IBM has announced at this show a unified governance catalog. Is that an enabler for this-- >> Dinesh: Oh, yeah. >> capability you're describing here? >> Oh yeah, I mean, the key piece of all of this is the unified governance, >> Jim: Right. >> which is, you have one place to go to govern that data as the CDO. >> And you've mentioned, as has Roland, the containerization of applications. Now, I know that DB2 Developer Community Edition, the latest version, announced at this show, has the ability to orchestrate containerized applications through Kubernetes. Can you describe how that particular tool might be useful in this context, and how you might play DB2 Developer Community Edition in an environment where you're using the catalog to manage all the layers of data, or metadata, and so forth, associated with these applications? >> Right, so it goes back to Dave's question: how do you manage the new products that are coming? Our goal is to make every product a container, a containerized way to deliver, so that you have a Docker registry where you can go see what the updates are, and you can update when you're ready, all those things. But once you containerize the product and put it out there, then you can obviously have the governing infrastructure that sits on top of it to make sure all those containerized products are being managed. So that's one step towards that. But to go back to your DB2 Community Edition question, our goal here is, how do we simplify our product for our customers?
So if you're a developer, how can we make it easy enough for you to assemble your application in a matter of minutes? That's our goal: simplify, be seamless, and be able to scale. Those are the three things we focused on in the DB2 Community Edition. >> So in terms of the simplicity aspect of the tool, can you describe a few features or capabilities of the developer edition, the community edition, that are simpler than in the previous version? Because I believe you've had a community edition of DB2 for developers for at least a year or two. Describe the simplifications that are introduced in this latest version. >> So one I will give you is the JSON support. >> Okay. >> So today you want to combine unstructured data with structured data? >> Yeah. >> I mean, it's simple. We have a demo coming up in our main tent where you can easily go get a JSON document, put it in there, combine it with your structured data, and you are ready to go. So that's a great example of where we are making it really easy and simple. The other example is download-and-go, where you can easily download it in less than five clicks, and in less than 10 minutes the product is up and running. So those are a couple of the things that we are doing to make sure that it is much simpler, more seamless, and more scalable for our customers. >> And what is Project Event Store? Share with us whatever you can about that. >> Dinesh: Right. >> You're giving a demo here, I think, >> Dinesh: Yeah, yeah. >> So what is it, and why is it important? >> Yeah, so we are going to do a demo at the main tent on Project Event Store. It's about combining the strength of IBM innovation with the power of open source. It's about how we do fast ingest, inserts into an object store, for example, and are able to do analytics on it. So now you have the strength of not only bringing data in at very high speed and volume, but also doing analytics on it.
So for example, just to give you a very high-level number: we can do more than one million inserts per second. More than one million. And our closest competition is at 30,000 inserts per second. So that's huge for us. >> So use cases at the edge, obviously, could take advantage of something like this. Is that sort of where it's targeted? >> Well, yeah, so I'll give you a couple of examples. Let's say you're a hospital chain. You want the patient data coming in real time, streaming data coming in, and you want to do analytics on it. That's one example. Or let's say you are a department store. You want to see all the traffic that goes into your stores, and you want to do analytics on how well your campaign did based on the traffic that came in. Or let's say you're an airline, right? You have IoT data that's streaming in, millions of inserts per second; how do you do analytics on that? So I would say this is a great innovation that will help all kinds of industries. >> Dinesh, IBM has had streaming products for quite a while, and fairly mature ones like IBM Streams, but also the structured streaming capability of Spark, and you've got a strong Spark portfolio. Is there any connection between Project Event Store and these other established IBM offerings? >> No, so what we have done is, like I said, took the power of open source, so Spark becomes obviously the execution engine. We're going to use the Parquet format, where the data can be stored, and then we obviously have our own proprietary ingest mechanism that brings the data in. So some similarity, but this is brand new work that we have done with IBM Research. It has been in the works for the last 12 to 18 months, and now we are ready to bring it into the market.
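The ingest-and-analyze flow Dinesh describes (high-speed inserts landing as immutable batches in an object store, with Spark running analytics over them) can be sketched as a toy in plain Python. This is a model of the control flow only, not the Project Event Store API; the class name, batch size, and event shape are all invented for the sketch.

```python
from collections import defaultdict

class ToyEventStore:
    """Toy model of the ingest path: events land in a fast in-memory
    buffer, get flushed as immutable batches (standing in for Parquet
    files on object storage), and analytics runs over all batches.
    Names and thresholds are illustrative, not the real product."""

    def __init__(self, batch_size=1000):
        self.batch_size = batch_size
        self.buffer = []    # hot write path
        self.batches = []   # "sealed" batches, analytics-ready

    def insert(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.batches.append(tuple(self.buffer))  # seal as immutable
            self.buffer = []

    def count_by(self, key):
        """Analytics over sealed batches plus the live buffer."""
        counts = defaultdict(int)
        for batch in self.batches:
            for event in batch:
                counts[event[key]] += 1
        for event in self.buffer:
            counts[event[key]] += 1
        return dict(counts)

store = ToyEventStore(batch_size=2)
store.insert({"sensor": "a", "v": 1})
store.insert({"sensor": "b", "v": 2})   # second insert triggers a flush
store.insert({"sensor": "a", "v": 3})   # still in the hot buffer
print(store.count_by("sensor"))          # {'a': 2, 'b': 1}
```

The point of the design is that writes never block on analytics: queries read sealed batches plus whatever is in the buffer, which is roughly how a streaming store can sustain high insert rates while staying queryable.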
>> So we're about out of time, but Roland, I want to end with you and get your perspective on Europe and European customers in particular. Rob Thomas was saying to us that part of the reason why IBM came here is because they noticed that 10 of the top companies that were out-performing the S&P 500 were US companies. And they were data-driven. And IBM kind of wanted to shake up Europe a little bit and say, "Hey guys, time to get on board." What do you see here in Europe? Obviously there are companies like Spotify, which are European-based, that are very data-driven, but from your perspective, what are you seeing in Europe in terms of adoption of these data-driven technologies, to use that buzzword? >> Yes, so I think we are in an early stage of adoption of these data-driven applications and analytics, and the European companies are certainly very careful, cautious, and sensitive about their data security. So whenever there's news about another data leakage, everyone becomes more cautious. And here comes one of the unique positions of T-Systems, which has history and credibility in the market for data protection and uninterrupted service for our customers. We have achieved a number of cooperations, especially with American companies, where we take a joint approach to the European markets. So as I said, we bring the strength of T-Systems to the table, plus a very competitive application portfolio and analytics portfolio, in this case from our partner IBM, and bring the best of both worlds together for our customers. >> All right, we have to leave it there. Thank you, Roland, very much for coming on. Dinesh, great to see you again. >> Dinesh: Thank you. >> All right, you're welcome. Keep it right there, buddy. Jim and I will be back with our next guests on theCube. We're live from Munich, Germany, at Fast Track Your Data. Be right back.

Published Date : Jun 22 2017



Dinesh Nirmal, IBM - IBM Machine Learning Launch - #IBMML - #theCUBE


 

>> [Announcer] Live from New York, it's theCube, covering the IBM Machine Learning Launch Event brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Welcome back to the Waldorf Astoria, everybody. This is theCube, the worldwide leader in live tech coverage. We're covering the IBM Machine Learning announcement. IBM bringing machine learning to its Z mainframe, its private cloud. Dinesh Nirmal is here. He's the Vice President of Analytics at IBM and a Cube alum. Dinesh, good to see you again. >> Good to see you, Dave. >> So let's talk about ML. So we went through the big data, the data lake, the data swamp, all this stuff with Hadoop. And now we're talking about machine learning and deep learning and AI and cognitive. Is it same wine, new bottle? Or is it an evolution of data and analytics? >> Good. So, Dave, let's talk about machine learning. Right. When I look at machine learning, there are three pillars. The first one is the product. I mean, you got to have a product, right. And you got to have a differentiated set of functions and features available for customers to build models. For example, Canvas. I mean, those are table stakes. You got to have a set of algorithms available. So that's the product piece. >> [Dave] Uh huh. >> But then there's the process, the process of taking that model that you built in a notebook and being able to operationalize it. Meaning being able to deploy it. That is, you know, I was talking to one of the customers today, and he was saying, "Machine learning is 20% fun and 80% elbow grease." Because operationalizing that model is not easy. Although they make it sound very simple, it's not. So if you take a banking, enterprise banking example, right? You build a model in the notebook. Some data scientist builds it. Now you have to take that and put it into your infrastructure or production environment, which has been there for decades. So you could have third-party software that you cannot change.
You could have a set of rigid rules that are already there. You could have applications that were written in the '70s and '80s that nobody wants to touch. How do you all of a sudden take the model and infuse it in there? It's not easy. And so that is a tremendous amount of work. >> [Dave] Okay. >> The third pillar is the people, or the expertise, or the experience, the skills that need to come through, right. So the product is one. The process of operationalizing and getting it into your production environment is another piece. And then the people is the third one. So when I look at machine learning, right, those are three key pillars that you need to have a successful, you know, experience of machine learning. >> Okay, let's unpack that a little bit. Let's start with the differentiation. You mentioned Canvas, but talk about IBM specifically. >> [Dinesh] Right. What's so great about IBM? What's the differentiation? >> Right, exactly. Really good point. So we have been on the predictive side for a very long time, right. I mean, it's not like we are coming into ML or AI or cognitive yesterday. We have been in that space for a very long time. We have SPSS predictive analytics available. So even if you look from all three pillars, what we are doing is, from a product perspective, we are bringing in the product where we are giving a choice or a flexibility to use the language you want. So there are customers who only want to use R. They are religious R users. They don't want to hear about anything else. There are customers who want to use Python, you know. They don't want to use anything else. So how do we give that choice of languages to our customers, to say use any language you want? Or execution engines, right? Some folks want to use Spark as the execution engine. Some folks want to use R or Python, so we give that choice. Then you talked about Canvas.
There are folks who want to use the GUI portion of the Canvas or a modeler to build models, or there are, you know, more technical folks who want to use the notebook. So how do you give that choice? So it becomes kind of like a freedom or a flexibility or a choice that we provide, so that's the product piece, right? We do that. Then the other piece is productivity. So one of the customers, the CTO of (mumbles) TV, is going to come on stage with me during the main session to talk about how collaboration helped from an IBM Machine Learning perspective, because their data scientists are sitting in New York City and our data scientists who are working with them are sitting in San Jose, California. And they were collaborating in real time using notebooks in our ML projects, where they can see in real time what changes the other data scientists are making. They can Slack messages to each other. And that collaborative piece is what really helped us. So collaboration is one, right, from a productivity piece. We introduced something called the Feedback Loop, by which your model can get retrained. So today, you deploy a model. Its score could degrade over time. Then you have to take it off-line and re-train it, right? What we have done is introduce the Feedback Loop, so when you deploy your model, we give you two endpoints. The first endpoint is basically a URI for you to plug into your application, so when you run your application it can call the scoring API. The second endpoint is the feedback endpoint, where you can choose to re-train the model. If you want it to be every three hours, or every six hours, you can do that. So we bring that flexibility, we bring that productivity into it. Then, the management of the models, right? How do we make sure that once you develop the model, you deploy the model? There's a life cycle involved there. How do we make sure that we give you the tools to manage the model?
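The two-endpoint pattern described above (a scoring endpoint the application calls, plus a feedback endpoint that accepts labeled data and triggers retraining) can be sketched in miniature. This is a toy in-process model of the control flow, assuming nothing about the real IBM Machine Learning REST API; the threshold model and the retrain trigger are invented for illustration.

```python
class FeedbackLoopModel:
    """Toy stand-in for the deploy/score/feedback cycle: a 'scoring
    endpoint' returns predictions, and a 'feedback endpoint' collects
    ground truth and retrains once enough has arrived. The real
    endpoints are REST URIs; this just illustrates the control flow."""

    def __init__(self):
        self.threshold = 0.5   # initial model: fixed decision threshold
        self.feedback = []

    def score(self, x):
        """Scoring endpoint: classify a single observation."""
        return 1 if x >= self.threshold else 0

    def send_feedback(self, x, label):
        """Feedback endpoint: store labeled data; retrain when enough arrives."""
        self.feedback.append((x, label))
        if len(self.feedback) >= 4:
            self._retrain()

    def _retrain(self):
        # Re-fit the threshold as the midpoint between the class means.
        pos = [x for x, y in self.feedback if y == 1]
        neg = [x for x, y in self.feedback if y == 0]
        if pos and neg:
            self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        self.feedback = []

model = FeedbackLoopModel()
print(model.score(0.6))   # 1: above the initial 0.5 threshold
for x, label in [(2, 0), (3, 0), (8, 1), (9, 1)]:
    model.send_feedback(x, label)     # fourth call triggers a retrain
print(model.threshold)    # 5.5: midpoint of the two class means
print(model.score(0.6))   # now 0 under the retrained model
```

The design point is the one Dinesh makes: the application keeps calling the same scoring URI while retraining happens behind the feedback endpoint, so the model adapts without the application changing.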
So when you talk about differentiation, right, we are bringing differentiation on all three pillars. From a product perspective, with all the things I mentioned. From a deployment perspective: how do we make sure we have different choices of deployment, whether it's streaming, whether it's real time, whether it's batch? You can do deployment, right? The Feedback Loop is another one: once you've deployed, how do we keep re-training it? And the last piece I talked about is the expertise or the people, right? So we are today announcing the IBM Machine Learning Hub, which will become one place where our customers can go, ask questions, get education sessions, get training, right? Work together to build models. I'll give you an example: although we are announcing the IBM Machine Learning Hub today, we have been working with America First Credit Union for the last month or so. They approached us and said, you know, their underwriting takes a long time. All the knowledge is embedded in 15 to 20 human beings. And they want to make sure a machine is able to absorb that knowledge and make that decision in minutes instead of the hours or days it takes today. >> [Dave] So, Stu, before you jump in, let me recap the portfolio. You know, you mentioned SPSS, expertise, choice. The collaboration, which I think you really stressed at the announcement last fall. The management of the models, so you can continuously improve them. >> Right. >> And then this knowledge base, what you're calling the hub. And I could argue, I guess, that if I take any one of those individual pieces, some of your competitors have them. Your argument would be it's all there. >> It all comes together, right? And you have to make sure that all three pillars come together. And customers see great value when you have that. >> Dinesh, customers today are used to kind of the deployment model on the public cloud, which is, "I want to activate a new service," you know. I just activate it, and it's there.
When I think about private cloud environments, private clouds are operationally faster, but it's usually not minutes or hours. It's usually more like months to deploy projects, which is still better than, you know, before big data it was 18 months to see if it works, and let's bring that down to a couple of months. Can you walk us through what a customer sees today? They say, "Great, I love this approach. How long does it take?" You know, what's the project life cycle of this? And how long will it take them to play around and pull some of these levers before they're, you know, getting productivity out of it? >> Right. So, really good question, Stu. So let me back up one step. So, in private cloud, we have a new initiative called Download and Go, where our goal is to have our desktop products be able to install on your personal desktop in less than five clicks, in less than fifteen minutes. That's the goal. So the other day, you know, the team told me it's ready, that the first product is ready where you can go less than five clicks, fifteen minutes. I said the real test is I'm going to bring my son, who's five years old. Can he install it? If he can install it, you know, we are good. And he did it. And I have a video to prove it, you know. So after the show, I will show you. And when you talk about, you know, the private cloud side, or the on-premise side, it has been a long project cycle. What we want is that you should be able to take our product, install it, and get the experience in minutes. That's the goal. And when you talk about private cloud and public cloud, another differentiating factor is that now you get the strength of IBM public cloud combined with the private cloud, so you could, you know, train your model in public cloud and score on private cloud. You have the same experience. Not many folks, not many competitors can offer that, right?
So that's another... >> [Stu] So if I get that right: if I as a customer have played around with the machine learning in Bluemix, I'm going to have a similar look, feel, API. >> Exactly the same. So what you have in Bluemix, right? I mean, you have Watson in Bluemix, which, you know, has deep learning, machine learning--all those capabilities. What we have done is, like, we have extracted the core capabilities of Watson on private cloud, and it's IBM Machine Learning. But the experience is the same. >> I want to talk about this notion of operationalizing analytics. And it ties, to me anyway, it ties into transformation. You mentioned going from the notebook to actually being able to embed analytics in the workflow of the business. Can you double click on that a little bit, and maybe give some examples of how that has helped companies transform? >> Right. So when I talk about operationalizing, when you look at machine learning, right, you have all the way from data, which is the most critical piece, to building or deploying the model. A lot of times, the data itself is not clean. I'll give you an example, right. >> Yeah. >> When we were working with an insurance company, for example, in the data that comes in, if you just take gender, a lot of times the values are null. So we have to build another model to figure out if it's male or female, right? So in this case, for example, if somebody has done a prostate exam, obviously, he's a male. You know, we figured that. Or has had a gynecology exam: it's a female. So, you know, there's a lot of work just to get that data cleansed. So that's where I mentioned, you know, machine learning is 20% fun, 80% elbow grease, because there's a lot of grease there that you need to make sure that you cleanse the data. Get that right. That's the shaping piece of it. Then comes building the model, right.
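The gender-imputation step Dinesh just described is a classic rule-based cleansing pass: fill a null only when the claims history implies the answer, and otherwise leave it unknown. A minimal sketch, with hypothetical procedure names, might look like this:

```python
def impute_gender(record):
    """Rule-based cleansing like the insurance example above: when
    gender is missing, infer it from sex-specific procedures in the
    claims history. Procedure names here are illustrative."""
    if record.get("gender"):
        return record["gender"]
    male_only = {"prostate_exam", "psa_test"}
    female_only = {"gynecology_exam", "mammogram"}
    procs = set(record.get("procedures", []))
    if procs & male_only:
        return "M"
    if procs & female_only:
        return "F"
    return None  # leave unknown rather than guess

patients = [
    {"gender": None, "procedures": ["prostate_exam", "blood_panel"]},
    {"gender": None, "procedures": ["gynecology_exam"]},
    {"gender": "F",  "procedures": ["blood_panel"]},
    {"gender": None, "procedures": ["blood_panel"]},
]
print([impute_gender(p) for p in patients])  # ['M', 'F', 'F', None]
```

In practice (as Dinesh notes) this kind of shaping can itself be a learned model rather than hand-written rules, but the structure is the same: derive the missing value from correlated fields, and never overwrite a value that is already present.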
And then, once you build the model on that data, comes the operationalization of that model, which in itself is huge, because how do you make sure that you infuse that model into your current infrastructure? That is where a lot of skill set, a lot of experience, and a lot of knowledge comes in, because, unless you are a start-up, right, you already have applications and programs and third-party vendor applications that have been running for years, or decades, for that matter. So, yeah, operationalization is a huge piece. Cleansing of the data is a huge piece. Getting the model right is another piece. >> And simplifying the whole process. I think about: I've got to ingest the data. I've now got to, you know, play with it, explore. I've got to process it. And I've got to serve it to some, you know, some business need or application. And typically, those are separate processes, separate tools, maybe different personas that are doing that. Am I correct that your announcement in the fall addressed that workflow? How is it being, you know, deployed and adopted in the field? How is it, again back to transformation, are you seeing that people are actually transforming their analytics processes and ultimately creating the outcomes that they expect? >> Huge. So good point. We announced the Data Science Experience in the fall. And the customers who are going to speak with us today on stage are the customers who have been using that. So, for example, if you take AFCU, America First Credit Union, they worked with us. In two weeks, you know, talk about transformation, we were able to absorb the knowledge of their underwriters. You know, what (mumbles) is in. Build that, get the features. And we were able to build a model in two weeks. And the model is predicting with 90% accuracy. That's what early tests are showing. >> [Dave] And you say that was in a couple of weeks. You developed that model. >> Yeah, yeah, right.
So when we talk about transformation, right, we couldn't have done that a few years ago. We have transformed so that the different personas can collaborate with each other, and that's the collaboration piece I talked about. Real time. Be able to build a model and put it to the test to see what kind of benefits they're getting. >> And you've obviously got edge cases where people get really sophisticated, but, you know, we were sort of talking off camera, and you know, like the 80/20 rule, or maybe it's the 90/10. You say most use cases can be, you know, solved with regression and classification. Can you talk about that a little more? >> So when we talk about machine learning, right, to me, I would say 90% of it is regression or classification. I mean, there are edge cases of clustering and all those things, but linear regression or classification can solve most of our customers' problems, right? So whether it's fraud detection, or whether it's underwriting the loan, or whether you're trying to determine sentiment, I mean, you can kind of classify or do regression on it. So I would say that 90% of the cases can be covered. But like I said, most of the work is not just picking the right algorithm; it's also about cleansing the data. Picking the algorithm, then comes building the model, then comes deployment or operationalizing the model. So there's a step process that's involved, and each step involves some amount of work. So if I could make one more point on the technology and the transformation we have done: even with picking the right algorithm, we automated, so you as a data scientist don't need to, you know, come in and figure out, if I have 50 classifiers and each classifier has four parameters, that's 200 different combinations. Even if you take one hour on each combination, that's 200 hours, or nine days, to pick the right combination.
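The 50-classifiers-times-four-parameters search Dinesh describes is an exhaustive sweep over estimator/parameter combinations, scored on held-out data. A minimal sketch of that sweep, with toy "classifiers" standing in for real ones, might look like this:

```python
from itertools import product

def grid_search(estimators, param_grid, score_fn):
    """Tiny illustration of the combinatorial search described above:
    try every estimator with every parameter value and keep the best
    scorer. With 50 classifiers x 4 parameter values that is 200
    candidate configurations, which is why automating this step
    (as the cognitive assistance feature aims to) matters."""
    best = (None, None, float("-inf"))
    for est, param in product(estimators, param_grid):
        s = score_fn(est, param)
        if s > best[2]:
            best = (est, param, s)
    return best

# Toy candidates: "models" are just labeled threshold behaviors.
data = [(1, 1), (2, 1), (3, 0), (4, 0)]   # (x, label)

def accuracy(est, threshold):
    # est == "low_is_1": predict 1 when x < threshold, else 0
    # est == "high_is_1": predict 1 when x >= threshold, else 0
    correct = 0
    for x, y in data:
        pred = int(x < threshold) if est == "low_is_1" else int(x >= threshold)
        correct += (pred == y)
    return correct / len(data)

best = grid_search(["low_is_1", "high_is_1"], [1.5, 2.5, 3.5], accuracy)
print(best)  # ('low_is_1', 2.5, 1.0)
```

Real libraries wrap exactly this loop (scikit-learn's `GridSearchCV`, for instance, adds cross-validation and parallelism), and the "cognitive assistance" Dinesh mentions next is about pruning the sweep so you don't pay for all 200 evaluations.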
What we have done is, in IBM Machine Learning we have something called cognitive assistance for data science, which will help you pick the right combination in minutes instead of days. >> So I can see how regression scales, and in the example you gave of classification, I can see how that scales. If you've got a, you know, fixed classification, or maybe 200 parameters, or whatever it is, that scales. But what happens, how are people dealing with, sort of, automating that classification as things change, as some kind of new disease or pattern pops up? How do they address that at scale? >> Good point. So as the data changes, the model needs to change, right? Because everything that model knows is based on the training data. Now, if the data has changed, if the symptoms of cancer or any disease have changed, obviously, you have to retrain that model. And that's where the feedback loop comes in, where we will automatically retrain the model based on the new data that's coming in. So you, as an end user, for example, don't need to worry about it, because we will take care of that piece also. We will automate that, also. >> Okay, good. And you've got a session this afternoon with, you said, two clients, right? AFCU and Kaden dot TV, and you're on, let's see, at 2:55. >> Right. >> So you folks watching the live stream, check that out. I'll give you the last word, you know, what should we expect to hear there? Show a little leg on your discussion this afternoon. >> Right. So, obviously, I'm going to talk about the differentiating factors of what we are delivering in IBM Machine Learning, right? And I covered some of it. There's going to be much more. We are going to focus on how we are making freedom or flexibility available. How we are going to deliver productivity, right? Gains for our data scientists and developers. We are going to talk about trust, you know, the trust of data that we are bringing in.
Then I'm going to bring the customers in and talk about their experience, right? We are delivering a product, but we already have customers using it, so I want them to come on stage and share their experiences, because, you know, it's one thing to hear about it from us, but it's another thing when customers come and talk about it. And last but not least, we are going to announce our first release of IBM Machine Learning on Z, because if you look at 90% of the transactional data today, it runs through Z. So they don't have to offload the data to do analytics on it. We will make machine learning available so you can do training and scoring right there on Z for your real-time analytics. >> Right. Extending that theme that we talked about earlier, Stu: bringing analytics and transactions together, which was a big theme of the z13 announcement two years ago. Now you're seeing, you know, machine learning coming on Z. The live stream starts at 2 o'clock. SiliconANGLE.com had an article up on the site this morning from Maria Deutscher on the IBM announcement, so check that out. Dinesh, thanks very much for coming back on theCube. Really appreciate it, and good luck today. >> Thank you. >> All right. Keep it right there, buddy. We'll be back with our next guest. This is theCube. We're live from the Waldorf Astoria for the IBM Machine Learning Event announcement. Right back.

Published Date : Feb 15 2017



DockerCon 2020 Kickoff


 

>>From around the globe, it's theCUBE, with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >>Hello everyone, welcome to DockerCon 2020. I'm John Furrier with theCUBE. I'm in our Palo Alto studios with our quarantine crew. We have a great lineup here for the DockerCon 2020 virtual event. Normally it was in person, face to face. I'll be with you throughout the day with an amazing lineup of content: over 50 different sessions, cube tracks, keynotes, and we've got two great co-hosts here with Docker, Jenny Marcio and Brett Fisher. We'll be with you all day, all day today, taking you through the program, helping you navigate the sessions. I'm so excited, Jenny. This is a virtual event. We talk about this. Can you believe it? You know, may the internet gods be with us today, and I hope everyone's having an easy time getting in. Jenny, Brett, thank you for being here. >>Yeah. Hi everyone. Uh, so great to see everyone chatting and telling us where they're from. Welcome to the Docker community. We have a great day planned for you guys. >>Great job getting this all together. I know how hard it is. These virtual events are hard to pull off. I'm blown away by the community at Docker. The amount of sessions that are coming in, the sponsor support, has been amazing. Just the overall excitement around the brand and the opportunities, given these tough times we're in. Um, it's super exciting. Again, may the internet gods be with us throughout the day, but there's plenty of content. Uh, Brett's got an amazing all-day marathon group of people coming in and chatting. Jenny, this has been an amazing journey and it's a great opportunity. Tell us about the virtual event. Why DockerCon virtual? Obviously everyone's cancelling their events, but this is special to you guys. Talk about DockerCon virtual this year. >>Yeah.
You know, the Docker community shows up at DockerCon every year, and even though we didn't have the opportunity to do an in-person event this year, we didn't want to lose the time that we all come together at DockerCon. The conversations, the amazing content and learning opportunities. So we decided back in December to make DockerCon a virtual event. And of course, when we did that, there was no quarantine. Um, we didn't expect, you know, I certainly didn't expect to be delivering it from my living room, but we were just, I mean, we were completely blown away. There are nearly 70,000 people across the globe that have registered for DockerCon today. And when you look at DockerCons of the past, right, live events really are just the tip of the iceberg. And so we're thrilled to be able to deliver a more inclusive global event today. And we have so much planned. Uh, I think, Brett, you want to tell us some of the things that you have planned? >>Well, I'm sure I'm going to forget something 'cause there's a lot going on. But, uh, we've obviously got interviews all day today on this channel with John and the crew. Um, Jenny has put together an amazing set of all these speakers all day long in the sessions. And then you have Captains on Deck, which is essentially the YouTube live hangout where we just basically talk shop. It's all engineers, all day long, captains and special guests. And we're going to be in chat talking to you, answering your questions. Maybe we'll dig into some stuff based on the problems you're having or the questions you have. Maybe there'll be some random demos, but it's basically, uh, not scripted. It's an all-day-long unscripted event, so I'm sure it's going to be a lot of fun hanging out in there.
>> Well guys, I just want to say it's been amazing how you structured this so everyone has a chance to ask questions, whether it's informal and laid back in the captain's channel, or in the sessions where the speakers will be there with their presentations. But Jenny, I want to get your thoughts, because we have a site out there that's structured a certain way for the folks watching. If you're on your desktop, there's a main stage hero, then there are tracks, and Brett's running the captain's track. You can click on that link and jump into his session all day long. He's got an amazing lineup, laid back, having a good time. And then in each of the tracks, you can jump into those sessions. It's on a clock. It'll be available on demand. All that content is available if you're on your desktop; if you're on your mobile, it's the same thing.
And then we have some great interviews all day on theCUBE. So set up your profile, join the conversation, and be kind, right? This is a community event. The code of conduct is linked at the top of every page, and just have a great day.
And, you know, if you've been on my YouTube live show, if you've watched that, you've seen a lot of the guests we have on there. I'm lucky to just hang out with all these really awesome people around the world, so it's going to be fun. >> Awesome. And the content, again, has been preserved. You guys had a great call for papers for the sessions. Jenny, this is good stuff. What are the things people can do to make it interesting? Obviously we're looking for suggestions. Feel free to chirp on Twitter about ideas that could be new. But you guys have got some surprises. There are some selfies. What else? What's going on? Any secret surprises throughout the day? >> There are secret surprises throughout the day. You'll need to pay attention to the keynotes. Brett will have giveaways. I know our wonderful sponsors have giveaways planned as well in their sessions. Hopefully you feel conflicted about what you're going to attend. So do know that everything is recorded and will be available on demand afterwards, so you can catch anything that you miss. Most of them will be available right after they stream the initial time. >> All right, great stuff. So they've got the Docker selfie. The hashtag is just #DockerCon. If you feel like you want to add to the hashtag, no problem. Check out the sessions. You can pop in and out of the captain's channel, which is kind of where the cool kids are going to be hanging out with Brett, with all the knowledge and learning. Don't miss the keynote. The keynote should be solid. We've got James Governor from RedMonk delivering a keynote. I'll be interviewing him live after his keynote. So stay with us, and again, check out the interactive calendar. All you've got to do is look at the calendar and click on the session you want. You'll jump right in. Hop around, give us feedback. We're doing our best.
Brett, any final thoughts on what you want to share with the community around what you've got going on at the virtual event? Just random thoughts. >> Yeah. So, sorry we can't all be together in the same physical place, but the coolest thing about this being online is that we actually get to involve everyone. So as long as you have a computer and internet, you can actually attend DockerCon, even if you've never been to one before. So we're trying to recreate that experience online. Like Jenny said, the code of conduct is important. You know, we're all in this together with the chat, so try to be nice in there. These are all real humans that have feelings, just like me. So let's try to keep it cool, and over in the captain's channel we'll be taking your questions and maybe playing some music, playing some games, giving away some free stuff, while you're, you know, in between sessions learning. >> Oh yeah.
And there are a couple of great customer sessions there. And you know, I tweeted this out last night, and let me get you guys' reaction to this, because there's been a lot of talk around the COVID crisis that we're in, but there's also a positive upshot to this: a Cambrian explosion of developers that are going to be building new apps. And I said, you know, apps aren't going to just change the world, they're going to save the world. So a lot of the theme here is the impact that developers are having right now, in the current situation. You know, with the goodness of Compose and all the things going on in Docker and the relationships, there's real impact happening with the developer community. And it's pretty evident in the program, in some of the talks and some of the examples, how containers and microservices are certainly changing the world and helping save the world. Your thoughts? >> Yeah. So, like you said, I think we have a number of sessions and interviews in the program today that really dive into that, even particularly around COVID. Clemente is sharing his company's experience, being able to continue operations in Italy when they were completely shut down at the beginning of March. We also have, in theCUBE channel, several interviews from the National Institutes of Health and precision cancer medicine. And you can really see how containerization and developers are moving industry, and really humanity, forward because of what they're able to build and create with advances in technology. >> Yeah. And first responders these days are developers. Brett, Compose is getting a lot of traction on Twitter. I can see some buzz already building up. There's huge traction with Compose, just the ease of use, and almost a call to arms for integrating it into all the system language libraries. I mean, what's going on with Compose? What do the captains say about this?
I mean, it seems to be really tracking in terms of demand and interest. >> Yeah, I think we're over 700,000 Compose files on GitHub. So it's definitely beyond just the standard docker run commands. It's definitely the next tool that people use to run containers. And that's not even counting; I mean, that's just counting the files that are named docker-compose.yml. So I'm sure a lot of you out there have created a YAML file to manage your local containers, or even on a server, with Docker Compose. And the nice thing is, Docker is doubling down on that. So we've gotten some news recently from them about what they want to do with opening the spec up, getting more companies involved, because Compose has already gathered so much interest from the community. You know, AWS has importers, there are Kubernetes importers for it. So there's more stuff coming, and we might just see something here in a few minutes. >> Well, let's get into the keynote. Guys, jump into the keynote. If you missed anything, come back to the stream, check out the sessions, check out the calendar. Let's go. Let's have a great time. Have some fun. Enjoy the rest of the day. We'll see you soon.
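For anyone who hasn't written one, the kind of file being counted here is tiny. Below is a minimal sketch that writes a hypothetical two-service docker-compose.yml; the service names, images, and host port are illustrative assumptions, not anything from the event.

```shell
# Write a minimal, hypothetical Compose file to the current directory.
# Service names, images, and the published port are illustrative assumptions.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine        # any web-serving image works here
    ports:
      - "8080:80"              # publish container port 80 on host port 8080
    depends_on:
      - cache                  # start the cache before the web service
  cache:
    image: redis:alpine
EOF
echo "wrote docker-compose.yml"
```

From the directory containing this file, `docker-compose up -d` starts both containers and `docker-compose down` removes them.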

Published Date : May 28 2020


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jenny | PERSON | 0.99+
Clemente | PERSON | 0.99+
Brett | PERSON | 0.99+
Italy | LOCATION | 0.99+
John | PERSON | 0.99+
Brett Fisher | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
December | DATE | 0.99+
Microsoft | ORGANIZATION | 0.99+
Jenny Marcio | PERSON | 0.99+
two | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
DockerCon | EVENT | 0.99+
Laura | PERSON | 0.99+
each | QUANTITY | 0.99+
Docker | ORGANIZATION | 0.99+
67,000 | QUANTITY | 0.99+
YouTube | ORGANIZATION | 0.99+
each page | QUANTITY | 0.99+
DockerCon con 2020 | EVENT | 0.99+
Docker con | EVENT | 0.98+
today | DATE | 0.98+
Nirmal Mehta | PERSON | 0.98+
Catherine | PERSON | 0.98+
Docker con 2020 | EVENT | 0.97+
first | QUANTITY | 0.97+
Brett compose | PERSON | 0.97+
over 50 different sessions | QUANTITY | 0.96+
this year | DATE | 0.96+
last night | DATE | 0.96+
Docker | TITLE | 0.96+
over 700,000 composed files | QUANTITY | 0.96+
Amazon | ORGANIZATION | 0.95+
Twitter | ORGANIZATION | 0.95+
nearly 70,000 people | QUANTITY | 0.95+
GitHub | ORGANIZATION | 0.94+
DockerCon live 2020 | EVENT | 0.94+
Institute of health and precision cancer medicine | ORGANIZATION | 0.91+
DockerCon 2020 Kickoff | EVENT | 0.89+
John furrier | PERSON | 0.89+
Cambridge | LOCATION | 0.88+
Kubernetes | TITLE | 0.87+
two great co-hosts | QUANTITY | 0.84+
first responders | QUANTITY | 0.79+
this year | DATE | 0.78+
one | QUANTITY | 0.75+
them | QUANTITY | 0.7+
national | ORGANIZATION | 0.7+
beginning of March | DATE | 0.68+
every year | QUANTITY | 0.5+
Docker con. | EVENT | 0.49+
red monk | PERSON | 0.43+
Yammel | PERSON | 0.34+

Joel Horwitz, IBM | IBM CDO Summit Spring 2018


 

(techno music) >> Announcer: Live, from downtown San Francisco, it's theCUBE. Covering IBM Chief Data Officer Strategy Summit 2018. Brought to you by IBM. >> Welcome back to San Francisco everybody, this is theCUBE, the leader in live tech coverage. We're here at the Parc 55 in San Francisco covering the IBM CDO Strategy Summit. I'm here with Joel Horwitz who's the Vice President of Digital Partnerships & Offerings at IBM. Good to see you again Joel. >> Thanks, great to be here, thanks for having me. >> So I was just, you're very welcome- It was just, let's see, was it last month, at Think? >> Yeah, it's hard to keep track, right. >> And we were talking about your new role- >> It's been a busy year. >> the importance of partnerships. One of the things I want to, well let's talk about your role, but I really want to get into, it's innovation. And we talked about this at Think, because it's so critical, in my opinion anyway, that you can attract partnerships, innovation partnerships, startups, established companies, et cetera. >> Joel: Yeah. >> To really help drive that innovation, it takes a team of people, IBM can't do it on its own. >> Yeah, I mean look, IBM is the leader in innovation, as we all know. We're the market leader for patents, that we put out each year, and how you get that technology in the hands of the real innovators, the developers, the longtail ISVs, our partners out there, that's the challenging part at times, and so what we've been up to is really looking at how we make it easier for partners to partner with IBM. How we make it easier for developers to work with IBM. So we have a number of areas that we've been adding, so for example, we've added a whole IBM Code portal, so if you go to developer.ibm.com/code you can actually see hundreds of code patterns that we've created to help really any client, any partner, get started using IBM's technology, and to innovate. 
>> Yeah, and that's critical, I mean you're right, because to me innovation is a combination of invention, which is what you guys do really, and then it's adoption, which is what your customers are all about. You come from the data science world. We're here at the Chief Data Officer Summit, what's the intersection between data science and CDOs? What are you seeing there? >> Yeah, so when I was here last, it was about two years ago in 2015, actually, maybe three years ago, man, time flies when you're having fun. >> Dave: Yeah, the Spark Summit- >> Yeah Spark Technology Center and the Spark Summit, and we were here, I was here at the Chief Data Officer Summit. And it was great, and at that time, I think a lot of the conversation was really not that different than what I'm seeing today. Which is, how do you manage all of your data assets? I think a big part of doing good data science, which is my kind of background, is really having a good understanding of what your data governance is, what your data catalog is, so, you know we introduced the Watson Studio at Think, and actually, what's nice about that, is it brings a lot of this together. So if you look in the market, in the data market, today, you know we used to segment it by a few things, like data gravity, data movement, data science, and data governance. And those are kind of the four themes that I continue to see. And so outside of IBM, I would contend that those are relatively separate kind of tools that are disconnected, in fact Dinesh Nirmal, who's our engineer on the analytic side, Head of Development there, he wrote a great blog just recently, about how you can have some great machine learning, you have some great data, but if you can't operationalize that, then really you can't put it to use. And so it's funny to me because we've been focused on this challenge, and IBM is making the right steps, in my, I'm obviously biased, but we're making some great strides toward unifying the, this tool chain. 
Which is data management, to data science, to operationalizing, you know, machine learning. So that's what we're starting to see with Watson Studio. >> Well, I always push Dinesh on this, like okay, you've got a collection of tools, but are you bringing those together? And he flat-out says no, we developed a lot of this from scratch. Yes, we bring in the best of the knowledge that we have there, but we're not trying to just cobble together a bunch of disparate tools with a UI layer. >> Right, right. >> It's really a fundamental foundation that you're trying to build. >> Well, what's really interesting about that piece is that, yeah, I think a lot of folks have cobbled together a UI layer. So we formed a partnership, coming back to the partnership view, with a company called Lightbend, who's based here in San Francisco, as well as in Europe. And the reason we did that wasn't just Reactive development; if you're not familiar with Reactive, it's essentially Scala, Akka, Play, this whole framework that basically allows developers to write once, and it kind of scales up with demand. In fact, Verizon actually used our platform with Lightbend to launch the iPhone X, and they showed dramatic improvements. Now what's exciting about Lightbend is the fact that application developers are developing with Reactive, but if you turn around, you'll also now be able to operationalize models with Reactive as well, because it's basically a single platform to move between these two worlds. So what we've continued to see is data science kind of separate from the application world; really, AI and cloud as different universes. The reality is that for any enterprise, or any company, to really innovate, you have to find a way to bring those two worlds together, to get the most use out of it. >> Furrier always says "Data is the new development kit." He said this I think five or six years ago, and it's barely becoming true.
You guys have tried to make an attempt, and have done a pretty good job, of trying to bring those worlds together in a single platform, what do you call it? The Watson Data Platform? >> Yeah, Watson Data Platform, now Watson Studio, and I think the other, so one side of it is, us trying to, not really trying, but us actually bringing together these disparate systems. I mean we are kind of a systems company, we're IT. But not only that, but bringing our trained algorithms, and our trained models to the developers. So for example, we also did a partnership with Unity, at the end of last year, that's now just reaching some pretty good growth, in terms of bringing the Watson SDK to game developers on the Unity platform. So again, it's this idea of bringing the game developer, the application developer, in closer contact with these trained models, and these trained algorithms. And that's where you're seeing incredible things happen. So for example, Star Trek Bridge Crew, which I don't know how many Trekkies we have here at the CDO Summit. >> A few over here probably. >> Yeah, a couple? They're using our SDK in Unity, to basically allow a gamer to use voice commands through the headset, through a VR headset, to talk to other players in the virtual game. So we're going to see more, I can't really disclose too much what we're doing there, but there's some cool stuff coming out of that partnership. >> Real immersive experience driving a lot of data. Now you're part of the Digital Business Group. I like the term digital business, because we talk about it all the time. Digital business, what's the difference between a digital business and a business? What's the, how they use data. >> Joel: Yeah. >> You're a data person, what does that mean? That you're part of the Digital Business Group? Is that an internal facing thing? An external facing thing? Both? >> It's really both. 
So our Chief Digital Officer, Bob Lord, he has a presentation that he'll give, where he starts out, and he goes, when I tell people I'm the Chief Digital Officer they usually think I just manage the website. You know, if I tell people I'm a Chief Data Officer, it means I manage our data, in governance over here. The reality is that I think these Chief Digital Officer, Chief Data Officer, they're really responsible for business transformation. And so, if you actually look at what we're doing, I think on both sides is we're using data, we're using marketing technology, martech, like Optimizely, like Segment, like some of these great partners of ours, to really look at how we can quickly A/B test, get user feedback, to look at how we actually test different offerings and market. And so really what we're doing is we're setting up a testing platform, to bring not only our traditional offers to market, like DB2, Mainframe, et cetera, but also bring new offers to market, like blockchain, and quantum, and others, and actually figure out how we get better product-market fit. What actually, one thing, one story that comes to mind, is if you've seen the movie Hidden Figures- >> Oh yeah. >> There's this scene where Kevin Costner, I know this is going to look not great for IBM, but I'm going to say it anyways, which is Kevin Costner has like a sledgehammer, and he's like trying to break down the wall to get the mainframe in the room. That's what it feels like sometimes, 'cause we create the best technology, but we forget sometimes about the last mile. You know like, we got to break down the wall. >> Where am I going to put it? >> You know, to get it in the room! So, honestly I think that's a lot of what we're doing. We're bridging that last mile, between these different audiences. So between developers, between ISVs, between commercial buyers. 
Like how do we actually make this technology, not just accessible to large enterprise, which are our main clients, but also to the other ecosystems, and other audiences out there. >> Well so that's interesting Joel, because as a potential partner of IBM, they want, obviously your go-to-market, your massive company, and great distribution channel. But at the same time, you want more than that. You know you want to have a closer, IBM always focuses on partnerships that have intrinsic value. So you talked about offerings, you talked about quantum, blockchain, off-camera talking about cloud containers. >> Joel: Yeah. >> I'd say cloud and containers may be a little closer than those others, but those others are going to take a lot of market development. So what are the offerings that you guys are bringing? How do they get into the hands of your partners? >> I mean, the commonality with all of these, all the emerging offerings, if you ask me, is the distributed nature of the offering. So if you look at blockchain, it's a distributed ledger. It's a distributed transaction chain that's secure. If you look at data, really and we can hark back to say, Hadoop, right before object storage, it's distributed storage, so it's not just storing on your hard drive locally, it's storing on a distributed network of servers that are all over the world and data centers. If you look at cloud, and containers, what you're really doing is not running your application on an individual server that can go down. You're using containers because you want to distribute that application over a large network of servers, so that if one server goes down, you're not going to be hosed. And so I think the fundamental shift that you're seeing is this distributed nature, which in essence is cloud. So I think cloud is just kind of a synonym, in my opinion, for distributed nature of our business. 
>> That's interesting, and that brings up, you're right, cloud and Big Data/Hadoop, we don't talk about Hadoop much anymore, but it kind of got it all started, with that notion of leave the data where it is. And it's the same thing with cloud. You can't just stuff your business into the public cloud. You've got to bring the cloud to your data. >> Joel: That's right. >> But that brings up a whole new set of challenges, which obviously, you're in a position to help solve. Performance, latency, physics come into play. >> Physics is a rough one. It's kind of hard to avoid that one. >> I hear your best people are working on it though. Some other partnerships that you want to, sort of, elucidate? >> Yeah, no, I mean we have some really great, so I think the key kind of partnership area that I would allude to is, and you kind of referenced this, a lot of our partners, big or small, want to work with our top clients. So they want to work with our top banking clients. Because if you look at, for example, Maersk and what we're doing with them around blockchain, and frankly, talk about innovation, they're innovating containers for real, not virtual containers- >> And that's a joint venture, right? >> Yeah, it is, and so it's exciting because, what we're bringing to market is, I also lead our startup programs, called the Global Entrepreneurship Program, and so what I'm focused on doing, and you'll probably see more to come this quarter, is how do we actually bridge that end-to-end? How do you, if you're a startup or a small business, ultimately reach that kind of global business partner level? And so kind of bridging that end-to-end. So we're starting to bring out a number of different incentives for partners, like co-marketing, so I'll help startups when they're early, figure out product-market fit.
We'll give you free credits to use our innovative technology, and we'll also bring you into a number of clients, to basically help you not burn all of your cash on creating your own marketing channel. God knows I did that when I was at a start-up. So I think we're doing a lot to kind of bridge that end-to-end, and help any partner kind of come in, and then grow with IBM. I think that's where we're headed. >> I think that's a critical part of your job. Because I mean, obviously IBM is known for its Global 2000, big enterprise presence, but startups, again, fuel that innovation fire. So being able to attract them, which you're proving you can, providing whatever it is, access, early access to cloud services, or like you say, these other offerings that you're producing, in addition to that go-to-market, 'cause it's funny, we always talk about how efficient, capital efficient, software is, but then you have these companies raising hundreds of millions of dollars, why? Because they got to do promotion, marketing, sales, you know, go-to-market. >> Yeah, it's really expensive. I mean, you look at most startups, like their biggest ticket item is usually marketing and sales. And building channels, and so yeah, if you're, you know we're talking to a number of partners who want to work with us because of the fact that, it's not just like, the direct kind of channel, it's also, as you kind of mentioned, there's other challenges that you have to overcome when you're working with a larger company. for example, security is a big one, GDPR compliance now, is a big one, and just making sure that things don't fall over, is a big one. And so a lot of partners work with us because ultimately, a number of the decision makers in these larger enterprises are going, well, I trust IBM, and if IBM says you're good, then I believe you. And so that's where we're kind of starting to pull partners in, and pull an ecosystem towards us. 
Because of the fact that we can take them through that level of certification. So we have a number of free online courses. So if you go to partners, excuse me, ibm.com/partners/learn there's a number of blockchain courses that you can learn today, and we'll actually give you a digital certificate that's actually certified on our own blockchain, which we're a first of a kind to do, which I think is pretty slick, and it's accredited at some of the universities. So I think that's where people are looking to IBM, and other leaders in this industry, to help them become experts in this technology, and especially in this emerging technology. >> I love that blockchain actually, because it's such a growing, and interesting, and innovative field. But it needs players like IBM, that can bring credibility, enterprise-grade, whether it's security, or just, as I say, credibility. 'Cause there's so much negative connotation associated with blockchain and crypto, but companies like IBM coming to the table, enterprise companies, and building that ecosystem out is, in my view, crucial. >> Yeah, no, it takes a village. I mean, there's a lot of folks, I mean that's a big reason why I came to IBM three, four years ago, was because when I was in start-up land, I used to work for H2O, I worked for Alpine Data Labs, Datameer, back in the Hadoop days, and what I realized was that it's an opportunity cost. So you can't really drive true global innovation, transformation, in some of these bigger companies, because there's only so much that you can really kind of bite off. And so you know, at IBM it's been a really rewarding experience, because we have done things like, for example, we partnered with Girls Who Code, Treehouse, Udacity. So there's a number of early educators that we've partnered with, to bring code to, to bring technology to, that frankly, would never have access to some of this stuff.
Some of this technology, if we didn't form these alliances, and if we didn't join these partnerships. So I'm very excited about the future of IBM, and I'm very excited about the future of what our partners are doing with IBM, because, geez, you know the cloud, and everything that we're doing to make this accessible, is bar none, I mean, it's great. >> I can tell you're excited. You know, spring in your step. Always a lot of energy Joel, really appreciate you coming onto theCUBE. >> Joel: My pleasure. >> Great to see you again. >> Yeah, thanks Dave. >> You're welcome. Alright keep it right there, everybody. We'll be back. We're at the IBM CDO Strategy Summit in San Francisco. You're watching theCUBE. (techno music) (touch-tone phone beeps)

Published Date : May 2 2018


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Joel | PERSON | 0.99+
Joel Horwitz | PERSON | 0.99+
Europe | LOCATION | 0.99+
IBM | ORGANIZATION | 0.99+
Kevin Costner | PERSON | 0.99+
Dave | PERSON | 0.99+
Dinesh Nirmal | PERSON | 0.99+
Alpine Data Labs | ORGANIZATION | 0.99+
Lightbend | ORGANIZATION | 0.99+
Verizon | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
Hidden Figures | TITLE | 0.99+
Bob Lord | PERSON | 0.99+
Both | QUANTITY | 0.99+
MaRisk | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
iPhone 10 | COMMERCIAL_ITEM | 0.99+
2015 | DATE | 0.99+
Datameer | ORGANIZATION | 0.99+
both sides | QUANTITY | 0.99+
one story | QUANTITY | 0.99+
Think | ORGANIZATION | 0.99+
five | DATE | 0.99+
hundreds | QUANTITY | 0.99+
Treehouse | ORGANIZATION | 0.99+
three years ago | DATE | 0.99+
developer.ibm.com/code | OTHER | 0.99+
Unity | ORGANIZATION | 0.98+
two worlds | QUANTITY | 0.98+
Reactive | ORGANIZATION | 0.98+
GDPR | TITLE | 0.98+
one side | QUANTITY | 0.98+
Digital Business Group | ORGANIZATION | 0.98+
today | DATE | 0.98+
Udacity | ORGANIZATION | 0.98+
ibm.com/partners/learn | OTHER | 0.98+
last month | DATE | 0.98+
Watson Studio | ORGANIZATION | 0.98+
each year | QUANTITY | 0.97+
three | DATE | 0.97+
single platform | QUANTITY | 0.97+
Girls Who Code | ORGANIZATION | 0.97+
Parc 55 | LOCATION | 0.97+
one thing | QUANTITY | 0.97+
four themes | QUANTITY | 0.97+
Spark Technology Center | ORGANIZATION | 0.97+
six years ago | DATE | 0.97+
H20 | ORGANIZATION | 0.97+
four years ago | DATE | 0.97+
martech | ORGANIZATION | 0.97+
Unity | TITLE | 0.96+
hundreds of millions of dollars | QUANTITY | 0.94+
Watson Studio | TITLE | 0.94+
Dinesh | PERSON | 0.93+
one server | QUANTITY | 0.93+