Jamie Thomas, IBM | IBM Think 2021


 

>> Narrator: From around the globe, it's the CUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to IBM Think 2021, the virtual edition. This is the CUBE's continuous, deep-dive coverage of the people, processes, and technologies that are really changing our world. Right now, we're going to talk about modernization and what's beyond it with Jamie Thomas, general manager, strategy and development, IBM Enterprise Security. Jamie, always a pleasure. Great to see you again. Thanks for coming on. >> It's great to see you, Dave. And thanks for having me on the CUBE; it's always a pleasure. >> Yeah, it is our pleasure. And listen, we've been hearing a lot about how IBM is focused on hybrid cloud; Arvind Krishna says we must win the architectural battle for hybrid cloud. I love that. We've been hearing a lot about AI. And I wonder if you could talk about IBM Systems and how it plays into that strategy? >> Sure, well, it's a great time to have this discussion, Dave. As you all know, IBM systems technology is used widely around the world, by many, many thousands of clients, in the context of our IBM System Z, our Power systems, and storage. And what we have seen is really an uptake of modernization around those workloads, if you will, driven by the hybrid cloud agenda, as well as an uptake of Red Hat OpenShift as a vehicle for this modernization. So it's pretty exciting stuff. We see many clients taking advantage of OpenShift on Linux to really modernize these environments, and then stay close, if you will, to that system-of-record database and the transactions associated with it. So they're seeing a definite performance advantage in taking advantage of OpenShift. And it's really fascinating to see the things that they're doing. So if you look at financial services, for instance, there's a lot of focus on risk analytics.
So things like fraud, anti-money laundering, and mortgage risk are the types of applications being done in this context. When you look at our retail industry clients, you also see a lot of customer-centricity solutions, if you will, being deployed on OpenShift. And once again, having Linux close to those traditional LPARs of AIX, iSeries, or z/OS. So those are some of the things we see happening. And it's quite real. >> Now, you didn't mention Power, but I want to come back and ask you about Power. Because a few weeks ago, we were prompted to dig in a little bit when Arvind was on with Pat Gelsinger at Intel, talking about the relationship you guys have. And so we dug in a little bit. We thought originally, oh, it's about quantum. But we dug in, and we realized that POWER10 is actually the best out there, the highest performance, in terms of disaggregating memory. And we see that as a future architecture for systems, and we're actually really quite excited about the potential that brings, not only to go beyond system-on-a-chip and system-on-a-package, but to start doing interesting things at the Edge. What's going on with Power? >> Well, of course, when I talked about OpenShift, we're doing OpenShift on Power Linux as well as Z Linux, but you're exactly right in the context of the POWER10 processor. We couldn't be more excited about this processor. First of all, it's our first delivery with our partner Samsung with a seven-nanometer form factor. The processor itself has only 18 billion transistors. So it's got a few transistors there. But one of the cool inventions, if you will, that we have created is this expansive memory region as part of this design point, which we call memory inception; it gives us the ability to reach memory across servers, up to two petabytes of memory.
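To put that two-petabyte memory-inception figure in perspective, a little back-of-the-envelope arithmetic helps (binary-prefix convention assumed; nothing below is specific to IBM hardware):

```python
PETABYTE = 2 ** 50  # bytes, using the binary (pebibyte) convention

reachable = 2 * PETABYTE                   # "up to two petabytes" of cluster-wide memory
address_bits = reachable.bit_length() - 1  # bits needed to address every byte
print(address_bits)  # 51, comfortably inside a 64-bit address space
```

In other words, byte-addressing two petabytes needs 51 bits, so a cluster-wide memory region that size still fits easily within ordinary 64-bit addressing.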
Aside from that, this processor has generational improvements in core and thread performance, and improved energy efficiency. And all of this, Dave, is going to give us a lot of opportunity with new workloads, particularly around artificial intelligence and inferencing. I mean, that's another critical innovation that we see here in this POWER10 processor. >> Yeah, processor performance is just exploding. We're blowing away the historical norms. I think many people don't realize that. Let's talk about some of the key announcements that you've made in quantum. Last time we spoke, last year, I think we did a deeper dive on quantum. You've made some announcements around hardware and software roadmaps. Give us the update on quantum, please. >> Well, there is so much that has happened since we last spoke on the quantum landscape. And the key thing that we focused on in the last six months is really an articulation of our roadmaps: the roadmap around hardware, the roadmap around software, and we've also done quite a bit of ecosystem development. So in terms of the roadmap around hardware, we put ourselves out there; we've said we're going to get to a machine of over 1000 qubits in 2023, so that's our milestone. And we've got a number of steps we've outlined along the way; of course, we have to make progress, frankly, every six months in terms of innovating around the processor, the electronics, and the fridge associated with these machines. So lots of exciting innovation across the board. We've also published a software roadmap, where we're articulating how we improve circuit execution speeds. So we plan to show, shortly, a 100 times improvement in circuit execution speeds. And as we go forward in the future, we're modifying our Qiskit programming model to not only allow easy use by all types of developers, but to improve the fidelity of the entire machine, if you will.
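The Qiskit programming model mentioned here expresses quantum programs as circuits of gates. As a rough, purely illustrative sketch (plain Python, not Qiskit itself; the gate semantics shown are the standard textbook ones), here is what a two-qubit "Bell" circuit, a Hadamard followed by a CNOT, does to a statevector:

```python
import math

# 2-qubit statevector over basis |q1 q0>; index = q1*2 + q0, so index 0 is |00>
state = [1.0, 0.0, 0.0, 0.0]

def apply_h_on_qubit0(s):
    # Hadamard on qubit 0 (the least-significant bit): mixes each amplitude pair
    h = 1 / math.sqrt(2)
    out = s[:]
    for i in range(0, 4, 2):
        a, b = s[i], s[i + 1]
        out[i], out[i + 1] = h * (a + b), h * (a - b)
    return out

def apply_cnot(s):
    # CNOT, qubit 0 controlling qubit 1: swaps the |01> and |11> amplitudes
    out = s[:]
    out[1], out[3] = s[3], s[1]
    return out

state = apply_cnot(apply_h_on_qubit0(state))
probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)  # only |00> and |11> remain, each with probability 0.5
```

Qiskit builds this same kind of circuit declaratively and hands it off to a simulator or to real hardware, which is what makes a roadmap item like "100 times improvement in circuit execution speeds" meaningful to developers.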
So all of our innovations go hand in hand; our hardware roadmap and our software roadmap are both very critical in driving the technical outcomes that we think are so important for quantum to become a reality. We've deployed, I would say, in our quantum cloud over 20 machines over time. We never quite identify the precise number because, frankly, as we put up a new-generation machine, we often retire an older one. So we're constantly updating them out there, and every machine that comes online in that cloud, in fact, represents a sea change in hardware and a sea change in software. So they're all the latest and greatest that our clients can have access to. >> That's key, the developer angle. You got OpenShift running on quantum yet? >> Okay, I mean, that's a really good question. You know, part of that software roadmap, in terms of the evolution and the speed of that circuit execution, is really this interesting marriage between classical processing and quantum processing, bringing those closer together. And in the context of our classical operations that are interfacing with that quantum processor, we're taking advantage of OpenShift running on that classical machine to achieve that. And once again, as you can imagine, that'll give us a lot of flexibility in terms of where that classical machine resides, and how we continue the evolution of the great marriage that does exist, and will exist, between classical computing and quantum computing. >> I'm glad I asked; it was kind of tongue in cheek. But that's a key thread to the ecosystem, which is obviously critical to, you know, such a new technology. How are you thinking about the ecosystem evolution? >> Well, the ecosystem here for quantum is infinitely important. We started day one on this journey with free access to our systems, for that reason: because we wanted to create easy entry for anyone that really wanted to participate in this quantum journey.
And I can tell you, it really fascinates everyone, from high school students, to college students, to those that are PhDs. During this journey, we have reached over 300,000 unique users, and we now have over 500,000 unique downloads of our Qiskit programming model. But backing all of that is this ongoing educational thrust that we have. So we've created an open-source textbook around Qiskit that allows organizations around the world to take advantage of it from a curriculum perspective. We have over 200 organizations that are using our open-source textbook. Last year, when we realized we couldn't do our in-person programming camps, which were so exciting around the world (you can imagine doing an in-person programming camp in South Africa, in Asia, all those things we did in 2019), well, just like you all, we had to go completely virtual, right? And we thought that we would have a few hundred people sign up for our summer school; we had over 4,000 people sign up. And so one of the things we had to do is really pedal fast to be able to support that many students in a summer school that kind of grew out of proportion. The neat thing was, once again, seeing all the kids and students around the world taking advantage of this and learning about quantum computing. And then, at the end of last year, Dave, to really top this off, we did something really fundamentally important. We set up a quantum center for historically black colleges and universities, with Howard University being the anchor of this quantum center. And we're serving 23 HBCUs now, to be able to reach a new set of students, if you will, with STEM technologies and, most importantly, with quantum. And I find, you know, the neat thing about quantum is it's very interdisciplinary.
So we have quantum physicists, we have electrical engineers, we have engineers on the team, we have computer scientists, we have people with biology, chemistry, and financial services backgrounds. So I'm pretty excited about the reach that we have with quantum into HBCUs, and even beyond. I think we can have some phenomenal results and help a lot of people on this journey to quantum, and, you know, obviously help ourselves, but help these students as well. >> What do you see people doing with quantum, and maybe some of the use cases? I mean, you mentioned there's sort of a connection to traditional workloads, but obviously some new territory. What's exciting out there? >> Well, there have been a number of use cases that I think are top of mind right now. So one of the most interesting to me has been one that we talked about in the press a few months ago, which is with ExxonMobil. And they really started looking at logistics, in the context of maritime shipping, using quantum. And if you think of logistics, logistics are really, really complicated. Logistics in the face of a pandemic are even more complicated, and logistics when things like the Suez Canal shut down are even more complicated. So think about it: when the Suez Canal shut down, it was kind of like the equivalent of several major airports around the world shutting down, and then you have to reroute all the traffic. And that traffic, in maritime shipping, has to be very precise; it has to be planned. The stops are planned, the routes are planned. And the interest that ExxonMobil has in this journey is not just more effective logistics, but how they get natural gas shipped around the world more effectively, because their goal is to bring energy to organizations and to countries while reducing CO2 emissions. So they have a very grand vision that they're trying to accomplish.
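The routing problem behind that maritime use case is combinatorial at its core: the number of possible port orderings grows factorially, which is why exhaustive classical search stops scaling and optimization-oriented approaches become interesting. A tiny sketch, with entirely made-up ports and distances:

```python
import itertools

# Hypothetical symmetric distances between five ports (illustrative numbers only)
ports = ["A", "B", "C", "D", "E"]
dist = {
    ("A", "B"): 4, ("A", "C"): 7, ("A", "D"): 3, ("A", "E"): 8,
    ("B", "C"): 2, ("B", "D"): 6, ("B", "E"): 5,
    ("C", "D"): 4, ("C", "E"): 3,
    ("D", "E"): 9,
}

def leg(a, b):
    # distances are symmetric (and all positive), so look up either ordering
    return dist.get((a, b)) or dist[(b, a)]

def route_length(route):
    # total distance of a round trip, returning to the starting port
    return sum(leg(route[i], route[(i + 1) % len(route)]) for i in range(len(route)))

# Exhaustive search: (n-1)! candidate tours once the start port is fixed
best = min((("A",) + p for p in itertools.permutations(ports[1:])), key=route_length)
print(best, route_length(best))  # → ('A', 'B', 'E', 'C', 'D') 19
```

With five ports there are only 24 tours to check; at 20 ports the same loop would face roughly 1.2 x 10^17 of them, which is the scaling wall that motivates looking at new optimization hardware.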
And this logistics operation is just one of many. We can think of logistics, though, as being applicable to anyone that has a supply chain, so to other shipping organizations, not just maritime shipping. And a lot of the optimization logic that we're learning from that set of work also applies to financial services. So if we look at optimization around portfolio pricing and everything, a lot of the similar characteristics will also be applicable to the financial services industry. So that's one big example. And I guess our latest partnership, which we announced with some fanfare about two weeks ago, was with the Cleveland Clinic. We're doing a special discovery acceleration activity with the Cleveland Clinic, which starts prominently with artificial intelligence: looking at chemistry and genomics, and improving speed around machine learning for all of the critical healthcare operations that the Cleveland Clinic has embarked on. But as part of that journey, they, like many clients, are evolving from artificial intelligence and then learning how they can apply quantum as an accelerator in the future. And so they also indicated that they will buy the first commercial on-premises quantum computer for their operations and place that in Ohio in the years to come. So it's a pretty exciting relationship. These relationships show the power of the combination, once again, of classical computing, using that intelligently to solve very difficult problems, and then taking advantage of quantum for what it can uniquely do in a lot of these use cases. >> That's a great description, because it is a strong connection to the things that we do today. It's just going to do them better, but then it's going to open up a whole new set of opportunities. Everybody wants to know when; you know, it's all over the place, because some people say, oh, not for decades, and other people say, I think it's going to be sooner than you think. What are you guys saying about timeframe?
>> We're certainly determined to make it sooner rather than later. Our roadmaps, if you noted, go through 2023, and we think 2023 will be a pivotal year for us in terms of delivery around those roadmaps. But it's these kinds of use cases, and this intense working with these clients, 'cause when they work with us, they're giving us feedback on everything that we've done: how does this programming model really help me solve these problems? What do we need to do differently? In the case of ExxonMobil, they've given us a lot of really great feedback on how we can better fine-tune all elements of the system to improve it. It's really allowed us to chart a course for how we think about the programming model, in particular in the context of users. Just last week, in fact, we announced some new machine learning applications, which are really there to allow artificial intelligence users and programmers to take advantage of quantum without being a quantum physicist or expert, right? So it's really an encapsulation of composable elements, so that they can start using an interface that allows them to access the quantum computer through PyTorch and take advantage of some of the things we're doing around neural networks and things like that, once again, without having to be experts in quantum. So I think those are the kinds of things we're learning how to do better, fundamentally, through this co-creation and development with our quantum network. And our quantum network now is over 140 unique organizations: commercial, academic, national laboratories, and startups that we're working with. >> The picture is starting to become more clear. We're seeing emerging AI applications; a lot of work today in AI is in modeling. Over time, it's going to shift toward inference, real time, and practical applications. Everybody talks about Moore's law being dead.
Well, in fact, yes, I guess, technically speaking, but the premise, or the outcome, of Moore's law is actually accelerating. We're seeing processor performance quadrupling every two years now, when you include the GPU along with the CPU, the DSPs, and the accelerators. And so that's going to take us through this decade, and then quantum is going to power us, you know, well beyond what anyone can even predict. It's a very, very exciting time. Jamie, I always love talking to you. Thank you so much for coming back on the CUBE. >> Well, I appreciate the time. And I think you're exactly right, Dave. You know, we talked about POWER10 just for a few minutes there, but one of the things we've done in POWER10 is embed AI into every core of that processor, so you reduce that latency. We've got a 10 to 20 times improvement over the last generation in terms of artificial intelligence. You think about the evolution of a classical machine like that, state of the art, and then combine that with quantum and what we can do in the future; I think it's a really exciting time to be in computing. And I really appreciate your time today to have this dialogue with you. >> Yeah, it's always fun, and it's of national importance as well. Jamie Thomas, thanks so much. This is Dave Vellante with the CUBE. Keep it right there; our continuous coverage of IBM Think 2021 will be right back. (gentle music) (bright music)

Published Date: May 12, 2021



BOS19 Jamie Thomas VTT


 

(bright music) >> Narrator: From around the globe, it's the CUBE with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to IBM Think 2021, the virtual edition. This is the CUBEs, continuous, deep dive coverage of the people, processes and technologies that are really changing our world. Right now, we're going to talk about modernization and what's beyond with Jamie Thomas, general manager, strategy and development, IBM Enterprise Security. Jamie, always a pleasure. Great to see you again. Thanks for coming on. >> It's great to see you, Dave. And thanks for having me on the CUBE is always a pleasure. >> Yeah, it is our pleasure. And listen, we've been hearing a lot about IBM is focused on hybrid cloud, Arvind Krishna says we must win the architectural battle for hybrid cloud. I love that. We've been hearing a lot about AI. And I wonder if you could talk about IBM Systems and how it plays into that strategy? >> Sure, well, it's a great time to have this discussion Dave. As you all know, IBM Systems Technology is used widely around the world, by many, many 1000s of clients in the context of our IBM System Z, our power systems and storage. And what we have seen is really an uptake of monetization around those workloads, if you will, driven by hybrid cloud, the hybrid cloud agenda, as well as an uptake of Red Hat OpenShift, as a vehicle for this modernization. So it's pretty exciting stuff, what we see as many clients taking advantage of OpenShift on Linux, to really modernize these environments, and then stay close, if you will, to that systems of record database and the transactions associated with it. So they're seeing a definite performance advantage to taking advantage of OpenShift. And it's really fascinating to see the things that they're doing. So if you look at financial services, for instance, there's a lot of focus on risk analytics. 
So things like fraud, anti money laundering, mortgage risk, types of applications being done in this context, when you look at our retail industry clients, you see also a lot of customer centricity solutions, if you will, being deployed on OpenShift. And once again, having Linux close to those traditional LPARs of AIX, I-Series, or in the context of z/OS. So those are some of the things we see happening. And it's quite real. >> Now, you didn't mention power, but I want to come back and ask you about power. Because a few weeks ago, we were prompted to dig in a little bit with the when Arvind was on with Pat Kessinger at Intel and talking about the relationship you guys have. And so we dug in a little bit, we thought originally, we said, oh, it's about quantum. But we dug in. And we realized that the POWER10 is actually the best out there and the highest performance in terms of disaggregating memory. And we see that as a future architecture for systems and actually really quite excited about it about the potential that brings not only to build beyond system on a chip and system on a package, but to start doing interesting things at the Edge. You know, what do you what's going on with power? >> Well, of course, when I talked about OpenShift, we're doing OpenShift on power Linux, as well as Z Linux, but you're exactly right in the context for a POWER10 processor. We couldn't be more we're so excited about this processor. First of all, it's our first delivery with our partner Samsung with a seven nanometer form factor. The processor itself has only 18 billion transistors. So it's got a few transistors there. But one of the cool inventions, if you will, that we have created is this expansive memory region as part of this design point, which we call memory inception, it gives us the ability to reach memory across servers, up to two petabytes of memory. 
Aside from that, this processor has generational improvements and core and thread performance, improved energy efficiency. And all of this, Dave is going to give us a lot of opportunity with new workloads, particularly around artificial intelligence and inferencing around artificial intelligence. I mean, that's going to be that's another critical innovation that we see here in this POWER10 processor. >> Yeah, processor performance is just exploding. We're blowing away the historical norms. I think many people don't realize that. Let's talk about some of the key announcements that you've made in quantum last time we spoke on the qubit for last year, I think we did a deeper dive on quantum. You've made some announcements around hardware and software roadmaps. Give us the update on quantum please. >> Well, there is so much that has happened since we last spoke on the quantum landscape. And the key thing that we focused on in the last six months is really an articulation of our roadmaps, so the roadmap around hardware, the roadmap around software, and we've also done quite a bit of ecosystem development. So in terms of the roadmap around hardware, we put ourselves out there we've said we were going to get to over 1000 qubit machine and in 2023, so that's our milestone. And we've got a number of steps we've outlined along that way, of course, we have to make progress, frankly, every six months in terms of innovating around the processor, the electronics and the fridge associated with these machines. So lots of exciting innovation across the board. We've also published a software roadmap, where we're articulating how we improve a circuit execution speeds. So we hope, our plan to show shortly a 100 times improvement in circuit execution speeds. And as we go forward in the future, we're modifying our Qiskit programming model to not only allow a easily easy use by all types of developers, but to improve the fidelity of the entire machine, if you will. 
So all of our innovations go hand in hand, our hardware roadmap, our software roadmap, are all very critical in driving the technical outcomes that we think are so important for quantum to become a reality. We've deployed, I would say, in our quantum cloud over, you know, over 20 machines over time, we never quite identify the precise number because frankly, as we put up a new generation machine, we often retire when it's older. So we're constantly updating them out there, and every machine that comes on online, and that cloud, in fact, represents a sea change and hardware and a sea change in software. So they're all the latest and greatest that our clients can have access to. >> That's key, the developer angle you got redshift running on quantum yet? >> Okay, I mean, that's a really good question, you know, as part of that software roadmap in terms of the evolution and the speed of that circuit execution is really this interesting marriage between classical processing and quantum processing and bring those closer together. And in the context of our classical operations that are interfacing with that quantum processor, we're taking advantage of OpenShift, running on that classical machine to achieve that. And once again, if, as you can imagine, that'll give us a lot of flexibility in terms of where that classical machine resides and how we continue the evolution the great marriage, I think that's going to that will exist that does exist and will exist between classical computing and quantum computing. >> I'm glad I asked it was kind of tongue in cheek. But that's a key thread to the ecosystem, which is critical to obviously, you know, such a new technology. How are you thinking about the ecosystem evolution? >> Well, the ecosystem here for quantum is infinitely important. We started day one, on this journey with free access to our systems for that reason, because we wanted to create easy entry for anyone that really wanted to participate in this quantum journey. 
And I can tell you, it really fascinates everyone, from high school students, to college students, to those that are PhDs. But during this journey, we have reached over 300,000 unique users, we have now over 500,000 unique downloads of our Qiskit programming model. But to really achieve that is his back plane by this ongoing educational thrust that we have. So we've created an open source textbook, around Qiskit that allows organizations around the world to take advantage of it from a curriculum perspective. We have over 200 organizations that are using our open source textbook. Last year, when we realized we couldn't do our in person programming camps, which were so exciting around the world, you can imagine doing an in person programming camp and South Africa and Asia and all those things we did in 2019. Well, we had just like you all, we had to go completely virtual, right. And we thought that we would have a few 100 people sign up for our summer school, we had over 4000 people sign up for our summer school. And so one of the things we had to do is really pedal fast to be able to support that many students in this summer school that kind of grew out of our proportions. The neat thing was once again, seeing all the kids and students around the world taking advantage of this and learning about quantum computing. And then I guess that the end of last year, Dave, to really top this off, we did something really fundamentally important. And we set up a quantum center for historically black colleges and universities, with Howard University being the anchor of this quantum center. And we're serving 23 HBCUs now, to be able to reach a new set of students, if you will, with STEM technologies, and most importantly, with quantum. And I find, you know, the neat thing about quantum is is very interdisciplinary. 
So we have quantum physicist, we have electrical engineers, we have engineers on the team, we have computer scientists, we have people with biology and chemistry and financial services backgrounds. So I'm pretty excited about the reach that we have with quantum into HBCUs and even beyond right I think we can do some we can have some phenomenal results and help a lot of people on this journey to quantum and you know, obviously help ourselves but help these students as well. >> What do you see in people do with quantum and maybe some of the use cases. I mean you mentioned there's sort of a connection to traditional workloads, but obviously some new territory what's exciting out there? >> Well, there's been a really a number of use cases that I think are top of mind right now. So one of the most interesting to me has been one that showed us a few months ago that we talked about in the press actually a few months ago, which is with Exxon Mobil. And they really started looking at logistics in the context of Maritime shipping, using quantum. And if you think of logistics, logistics are really, really complicated. Logistics in the face of a pandemic are even more complicated and logistics when things like the Suez Canal shuts down, are even more complicated. So think about, you know, when the Suez Canal shut down, it's kind of like the equivalent of several major airports around the world shutting down and then you have to reroute all the traffic, and that traffic and maritime shipping is has to be very precise, has to be planned the stops are plan, the routes are plan. And the interest that ExxonMobil has had in this journey is not just more effective logistics, but how do they get natural gas shipped around the world more effectively, because their goal is to bring energy to organizations into countries while reducing CO2 emissions. So they have a very grand vision that they're trying to accomplish. 
And this logistics operation is just one of many, then we can think of logistics, though being a being applicable to anyone that has a supply chain. So to other shipping organizations, not just Maritime shipping. And a lot of the optimization logic that we're learning from that set of work also applies to financial services. So if we look at optimization, around portfolio pricing, and everything, a lot of the similar characteristics will also go be applicable to the financial services industry. So that's one big example. And I guess our latest partnership that we announced with some fanfare, about two weeks ago, was with the Cleveland Clinic, and we're doing a special discovery acceleration activity with the Cleveland Clinic, which starts prominently with artificial intelligence, looking at chemistry and genomics, and improve speed around machine learning for all of the the critical healthcare operations that the Cleveland Clinic has embarked on but as part of that journey, they like many clients are evolving from artificial intelligence, and then learning how they can apply quantum as an accelerator in the future. And so they also indicated that they will buy the first commercial on premise quantum computer for their operations and place that in Ohio, in the the the years to come. So it's a pretty exciting relationship. These relationships show the power of the combination, once again, of classical computing, using that intelligently to solve very difficult problems. And then taking advantage of quantum for what it can uniquely do in a lot of these use cases. >> That's great description, because it is a strong connection to things that we do today. It's just going to do them better, but then it's going to open up a whole new set of opportunities. Everybody wants to know, when, you know, it's all over the place. Because some people say, oh, not for decades, other people say I think it's going to be sooner than you think. What are you guys saying about timeframe? 
>> We're certainly determined to make it sooner than later. Our roadmaps, if you note, go through 2023, and we think 2023 will be a pivotal year for us in terms of delivery around those roadmaps. But it's these kinds of use cases and this intense working with these clients, 'cause when they work with us, they're giving us feedback on everything that we've done: how does this programming model really help me solve these problems? What do we need to do differently? In the case of ExxonMobil, they've given us a lot of really great feedback on how we can better fine-tune all elements of the system to improve that system. It's really allowed us to chart a course for how we think about the programming model, in particular in the context of users. Just last week, in fact, we announced some new machine learning applications, which are really to allow artificial intelligence users and programmers to take advantage of quantum without being a quantum physicist or expert, right. So it's really an encapsulation of composable elements so that they can start to use an interface that allows them to access the quantum computer through PyTorch, take advantage of some of the things we're doing around neural networks and things like that, once again, without having to be experts in quantum. So I think those are the kinds of things we're learning how to do better, fundamentally through this co-creation and development with our quantum network. And our quantum network now is over 140 unique organizations, and those are commercial, academic, national laboratories and startups that we're working with. >> The picture is starting to become more clear. We're seeing emerging AI applications; a lot of work today in AI is in modeling. Over time, it's going to shift toward inference and real time and practical applications. Everybody talks about Moore's law being dead. 
Well, in fact, yes, I guess, technically speaking, but the premise, or the outcome, of Moore's law is actually accelerating. We're seeing processor performance quadrupling every two years now, when you include the GPU along with the CPU, the DSPs, the accelerators. And so that's going to take us through this decade, and then quantum is going to power us, you know, well beyond, who can even predict that. It's a very, very exciting time. Jamie, I always love talking to you. Thank you so much for coming back on the CUBE. >> Well, I appreciate the time. And I think you're exactly right, Dave. You know, we talked about POWER10 just for a few minutes there, but one of the things we've done in POWER10 as well is we've embedded AI into every core of that processor, so you reduce that latency. We've got a 10 to 20 times improvement over the last generation in terms of artificial intelligence. You think about the evolution of a classical machine like that, state of the art, and then combine that with quantum, and what we can do in the future, I think, is a really exciting time to be in computing. And I really appreciate your time today to have this dialogue with you. >> Yeah, it's always fun, and it's of national importance as well. Jamie Thomas, thanks so much. This is Dave Vellante with the CUBE. Keep it right there, our continuous coverage of IBM Think 2021 will be right back. (gentle music) (bright music)

Published Date : Apr 16 2021


Skyla Loomis, IBM | AnsibleFest 2020


 

>> (upbeat music) [Narrator] From around the globe, it's theCUBE with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello, welcome back to theCUBE virtual coverage of AnsibleFest 2020 Virtual. We're not face to face this year. I'm John Furrier, your host. We're bringing it together remotely. We're in the Palo Alto Studios with theCUBE, and we're going remote for our guests this year. And I hope you can come together online and enjoy the content. Of course, go check out the event site on demand, there's certainly a lot of great content. I've got a great guest, Skyla Loomis, Vice President for the Z Application Platform at IBM, also known as IBM Z, talking mainframe. Skyla, thanks for coming on theCUBE. Appreciate it. >> Thank you for having me. >> So, you know, I've had many conversations about the mainframe being relevant and valuable in the context of cloud and cloud native, because if it's got a workload, you've got containers and all this good stuff. You can still run anything on anything these days by integrating it in with all this great glue layer, for lack of a better word, or oversimplifying it, you know, things going on. So it's really kind of cool. Plus Walter Bentley in my previous interview was talking about the success of Ansible and IBM working together on a really killer implementation. So I want to get into that, but before that, let's get into IBM Z. How did you start working with IBM Z? What's your role there? >> Yeah, so I actually just got started with Z about four years ago. I spent most of my career actually on the distributed platform, largely with data and analytics, the analytics area, databases, both on-premise and public cloud. But I always considered myself a friend to Z. So in many of the areas that I'd worked on, I had offerings where we'd enabled it to work with COS or Linux on Z. 
And then I had this opportunity come up where I was able to take on the role of leading some of our really core runtimes and databases on the Z platform, IMS and z/TPF. And then recently I just expanded my scope to take on CICS and a number of our other offerings related to those, kind of in this whole application platform space. And I was really excited, just because of how important these runtimes and this platform are to the world, really. You know, we power two thirds of the Fortune 100 clients across banking and insurance. And it's, you know, some of the most powerful transaction platforms in the world, doing hundreds of billions of transactions a day, and, you know, just something that's really exciting to be a part of, and everything that it does for us. >> It's funny how distributed systems and distributed computing really enable more longevity of everything. And now with cloud, you've got new capabilities, so it's super exciting. We're seeing that as a big theme at AnsibleFest, this idea of connecting, making things easier, you know, talk about distributed computing. The cloud is one big distributed computer. So everything's kind of playing together. You have a panel discussion at AnsibleFest Virtual. Could you talk about what your topic is and share some of the content in there? Content being, content as in your presentation? Not content. (laughs) >> Absolutely. Yeah, so I had the opportunity to co-host a panel with a couple of our clients. So we had Phil Allison from Black Knight and Pat Lane from Allstate, and they were really joining us and talking about their experience now starting to use Ansible to manage z/OS. So we actually just launched some content collections back in March of this year to help enable and accelerate clients' use of Ansible to manage z/OS. And we've just seen tremendous client uptake in this. 
And these are a couple of clients who've been working with us and, you know, getting started on the journey of now using Ansible with Z. They both, you know, already have Ansible in the enterprise, working with it on other platforms. And, you know, we got to talk with them about how they're bringing it into Z, what use cases they're looking at, the type of culture change that it drives for their teams as they embark on this journey, and, you know, where they see it going for them in the future. >> You know, this is one of the hot items this year. I know the event is virtual, so there's a lot of content flowing around in sessions, but collections is the top story. A lot of people talking collections, collections, collections, you know, integration and partnering. It hits so many things, but specifically, I like this use case because you're talking about real business value. And I want to ask you specifically about that use case with Ansible and Z. People are excited, it seems like it's working well. Can you talk about what problems it solves? I mean, what were some of the drivers behind it? What were some of the results? Could you give some insight into, you know, was it a pain point? Was it an enabler? Can you just share why people are getting excited about this? >> Yeah, well, certainly automation on Z is not new. You know, there's decades' worth of automation on the platform, but it's often proprietary, you know, or bundled up, like individual teams or individual people on teams have specific assets, right, that they've built, and it's not shared. And it's certainly not consistent with the rest of the enterprise. And, you know, more and more, you're kind of talking about hybrid cloud. You know, we're seeing that an application is not isolated to a single platform anymore, right. It really expands. 
And so being able to leverage this common open platform to be able to manage Z in the same way that you manage the entire rest of your enterprise, whether that's Linux or Windows or network or storage or anything, right. You know, you can now actually bring this all together into a common automation plane and control plane to be able to manage all of this. It's also really great from a skills perspective. So it enables us to really be able to leverage, you know, Python on the platform and that whole ecosystem of Ansible skills that are out there, and be able to now use that to work with Z. >> So it's essentially a modern abstraction layer of agility and people to work on it. (laughs) >> Yeah >> You know, there's the joke, hey, where's that COBOL programmer. I mean, this is a serious skill gap issue though. This is what we're talking about here. You don't have to kill the old to bring in the new. This is an example of integration where it's a classic abstraction layer and evolution. Is that, am I getting that right? >> Absolutely. I mean, I think that Ansible's power as an orchestrator is part of why, you know, it's been so successful here, because it's not trying to rip and replace and tell you that you have to rewrite anything that you already have. You know, it is that glue, sort of like you used that term earlier, right? It's that glue that can span, you know, whether you've got REXX, whether you've got CLISTs, whether you're using z/OSMF, you know, or any other kind of custom automation on the platform. You know, it works with everything, and it can start to provide that transparency into it as well, and move to that, like, infrastructure as code type of culture. So you can bring it into source control. You can have visibility to it as part of the Ansible Automation Platform and Tower and those capabilities. And so it really becomes a part of the whole enterprise and enables you to codify a lot of that knowledge. 
That knowledge, you know, exists again in pockets or in individuals, and you can make it much more accessible to anybody new who's coming to the platform. >> That's a great point, great insight. It's worth calling out. I'm going to make a note of that and make a highlight from that insight. That was awesome. I got to ask about this notion of client uptake. You know, when you have z/OS and Ansible kind of coming together, where do the clients start? When do they get excited? When do they know what they've got to do? And what are some of the client reactions? Are they like, wake up one day and say, "Hey, yeah, I actually put Ansible and z/OS together"? You know, peanut butter and chocolate is (mumbles) >> Honestly >> You know, it was just one of those things where it's not obvious, right? Or is it? >> Actually, I have been surprised myself at how, like, resoundingly positive and immediate the reactions have been. You know, one of our general managers runs a general managers advisory council with some of our top clients on the platform, and, you know, we meet with them regularly to talk about, you know, the future direction that we're going. And we first brought this idea of Ansible managing Z there, and literally, unanimously, everybody was like, yes, give it to us now. (laughs) It was pretty incredible, you know? And so, you know, we've really just seen amazing uptake. We've had over 5,000 downloads of our core collection on Galaxy. And again, that's just since mid to late March when we first launched. So we're really seeing tremendous excitement with it. >> You know, I want to talk about some of the new announcements, but you brought that up, so I wanted to kind of tie into it. It is addictive; when you think modernization, people's success is addictive. This is another theme coming out of AnsibleFest this year, the sharing, the new content, you know, coders' content is the theme. 
I got to ask you, because you mentioned earlier about the business value and how the clients are kind of gravitating towards it. They want it. It is addictive, contagious. In the ivory towers, in the big, you know, front office, the business, it's like, we've got to make everything as a service, right. You know, you hear that, right, and say, okay, okay, boss, you know, Skyla, just go do it. Okay, okay, it's so easy, you can just do it tomorrow. But to make everything as a service, you got to have the automation, right. So, you know, to bridge that gap, as everything is a service, whether it's mainframe, I mean, okay, mainframe is no problem. If you want to talk about observability and microservices and DevOps, eventually everything's going to be a service. You got to have the automation. Could you share your commentary on how you view that? Because again, it's a business objective, everything is a service, then you got to make it technical, then you got to make it work, and so on. So what's your thoughts on that? >> Absolutely. I mean, agility is a huge theme that we've been focusing on. We've been delivering a lot of capabilities around a cloud native development experience for folks working on COBOL, right. Because absolutely, you know, there's a lot of languages coming to the platform. Java is incredibly powerful, and it actually runs better on Z than it runs on any other platform out there. And so, you know, we're seeing a lot of clients starting to modernize and continue to evolve their applications, because the platform itself is incredibly modern, right? I mean, we come out with new releases, we're leading the industry in a number of areas around resiliency, in our security, you know, pervasive encryption and a number of things that we come out with. But, you know, the applications themselves are what has not always kept pace with the rate of change in the industry. 
And so, you know, we're really trying to help enable our clients to make that leap and continue to evolve their applications in an important way, and the automation and the tools that go around it become very important. So, you know, one of the things that we're enabling is the self-service provisioning experience, right. So clients can, you know, from OpenShift, be able to, you know, say, "Hey, give me an IMS and z/OS Connect stack, or a CICS and Db2 stack." And all of that, under the covers, is going to be powered by Ansible automation. So that really, you know, you can get your system programmers and your talent out of having to do these manual tasks, right, enable the development community so they can use things like VS Code and Jenkins and GitLab, and you'll have this automated CI/CD pipeline. And again, Ansible under the covers can be there helping to provision those test environments, you know, move the data along with the application changes through the pipeline, and really just help to support that so that our clients can do what they need to do. 
And I think Ansible on the automation that we support can become you know, an integral piece of supporting that more intelligent and proactive operational direction that, you know, we're all going. >> Awesome Skyla. Great to talk to you. And so insightful, appreciate it. One final question. I want to ask you a personal question because I've been doing a lot of interviews around skill gaps and cybersecurity, and there's a lot of jobs, more job openings and there are a lot of people. And people are with COVID working at home. People are looking to get new skilled up positions, new opportunities. Again cybersecurity and spaces and event we did and want to, and for us its huge, huge openings. But for people watching who are, you know, resetting getting through this COVID want to come out on the other side there's a lot of online learning tools out there. What skill sets do you think? Cause you brought up this point about modernization and bringing new people and people as a big part of this event and the role of the people in community. What areas do you think people could really double down on? If I wanted to learn a skill. Or an area of coding and business policy or integration services, solution architects, there's a lot of different personas, but what skills can I learn? What's your advice to people out there? >> Yeah sure. I mean on the Z platform overall and skills related to Z, COBOL, right. There's, you know, like two billion lines of COBOL out there in the world. And it's certainly not going away and there's a huge need for skills. And you know, if you've got experience from other platforms, I think bringing that in, right. And really being able to kind of then bridge the two things together right. For the folks that you're working for and the enterprise we're working with you know, we actually have a bunch of education out there. 
We've got the Master the Mainframe program, and even a competition that goes on, that's happening now, for folks who are interested in getting started at any stage, whether you're a student or later in your career. But, you know, learning a lot of those platforms, you're going to be able to then have a career for life. >> Yeah, and the scale and the data, there's so much going on. It's super exciting. Thanks for sharing that. Appreciate it. Want to get that plug in there. And of course, IBM, if you learn COBOL, you'll have a job forever. I mean, the mainframe's not going away. >> Absolutely. >> Skyla, thank you so much for coming on theCUBE, Vice President for the Z Application Platform at IBM. Thanks for coming, appreciate it. >> Thanks for having me. >> I'm John Furrier, your host of theCUBE here for AnsibleFest 2020 Virtual. Thanks for watching. (upbeat music)
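The z/OS automation Skyla describes is delivered through the IBM z/OS core content collection mentioned in the interview. A rough sketch of what a playbook using it looks like follows; the host group, data set names, and job path are invented for illustration, and the module parameters are drawn from the collection's documented pattern but should be checked against the current collection reference before use.

```yaml
# Hypothetical playbook sketch using the ibm.ibm_zos_core collection.
# All names (hosts, data sets, JCL path) are illustrative placeholders.
- name: Automate routine z/OS tasks with Ansible
  hosts: zos_lpars
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Create a partitioned data set for application artifacts
      zos_data_set:
        name: MYHLQ.APP.LOADLIB
        type: pds

    - name: Submit a batch job from a USS file and wait for completion
      zos_job_submit:
        src: /u/myuser/jcl/deploy.jcl
        location: uss

    - name: Issue an operator command to check active jobs
      zos_operator:
        cmd: "D A,L"
```

This is the kind of previously proprietary, per-team automation (REXX scripts, operator procedures) that a shared playbook in source control replaces, which is the "infrastructure as code" culture shift discussed above.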

Published Date : Oct 2 2020


Robyn Bergeron and Matt Jones, Red Hat | AnsibleFest 2020


 

>> Announcer: From around the globe, it's theCUBE! With digital coverage of AnsibleFest 2020. Brought to you by Red Hat. >> Hello, everyone. Welcome back to theCUBE's coverage of AnsibleFest 2020. I'm your host with theCUBE, John Furrier. And we've got two great guests: a CUBE alumni, Robyn Bergeron, senior manager, Ansible community team, welcome back, she's with Ansible and Red Hat, good to see you; and Matt Jones, chief architect for the Ansible Automation Platform. Again, both with Red Hat. Ansible was acquired by Red Hat. Robyn used to work for Red Hat, then went to Ansible. Ansible got bought by Red Hat. Robyn, great to see you, Matt, great to see you. >> Yep, thanks for having me back again. It's good to see you. >> We're not in person, it's a virtual event. Thanks for coming on remotely to our CUBE virtual, really appreciate it. I want to talk about, and I brought up that Red Hat kind of journey, Robyn. We talked about it last year, but it really is an important point: the roots of Ansible, kind of where it's come from, what it's turned into and where it is today, is an interesting journey, because the mission is still the same. I would like to get your perspectives, because, you know, Red Hat was acquired by IBM, Ansible's under Red Hat, all part of one big happy family. A lot's going on around the platform. Matt, you're the chief architect, Robyn, you're on the community team. Collections, collections, collections, is the message, content, content, content, community, a lot going on. So take a minute, both of you, explain the Ansible roots, where it is today, and the mission. >> Right, so the beginning of Ansible was really, there was a small team of folks, and they'd actually been through an iteration before that didn't use SSH, called Func, but, you know, it was, let's make a piece of software that is open source that allows people to automate other things. 
And we knew at the time, you know, based on a piece of research that we had seen out of Harvard, that having a piece of software be architected in a modular fashion wasn't just great for the software, but it was also great for developing pathways and connections for the community to actually contribute stuff. If you have a car, this is always my analogy: if you have a car, you don't have to know how the engine works in order to swap out the windshield wipers or put in new windshield wipers, things like that. The nice thing about modular architectures is that it doesn't just mean that things can plug in. It means you can actually separate them into different spots to enable them to be plugged in. And that's sort of where we are today with collections, right? We've always had this sense of modules, but except for a couple of points in time, all of the modules, the ways that you connect Ansible to the vast array of technologies that you can use it with, all of those have always been in the full Ansible repository. Now we've separated out nearly everything that is not absolutely essential to having a, you know, very minimal Ansible installation, and broken it out into separate repositories that are usually grouped by function, right? So there's probably like a VMware something, and a cloud something, and an IBM z/OS something, things like that, right? Each in their own individual groups. So now, not only can contributors find what they want to contribute to in much smaller spots that are not a sea of 5,000-plus folks doing work, but you can also choose to use your Ansible collections, update them, run them independently of just the singular release of Ansible, where you got everything, all the batteries included, in one spot. 
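The separation Robyn describes, collections living in their own repositories on their own release cadence, shows up concretely in how content gets pulled into an installation. A minimal sketch follows; the specific collection names and version pins are illustrative, not a recommendation.

```yaml
# requirements.yml -- illustrative example.
# Each collection versions and ships on its own schedule, so these can be
# pinned or upgraded without touching the core Ansible install itself.
collections:
  - name: community.vmware       # the "VMware something"
  - name: amazon.aws             # a "cloud something"
  - name: ibm.ibm_zos_core       # the "IBM z/OS something"
    version: ">=1.0.0"
```

Installing is then a single `ansible-galaxy collection install -r requirements.yml`, run whenever new content is wanted, with no new release of Ansible itself required.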
This has been kind of the Ansible formula from the beginning in its startup days, ease of use, easy, fast automation. Talk about the, you know, back in 2013 it was a startup. Now it's part of Red Hat. The game is still the same. Can you just share kind of what's the current guiding principles around Ansible this year? Because lots going on, like I said, faster, bigger, a lot going on, share your perspective. You've been there. >> Yeah, you know, what we're working on now is we're taking this great tool that has changed the way that automation works for a lot of people and we want to make it faster and bigger and better. We want it to scale better. We want it to automate more and be easier to automate, automate all the things that people want to do. And so we're really focusing on that scalability and flexibility. Robyn talked about content and collections, right? And what we want to enable is people to bring the content collections, the collections, the roles, the models, and use them in the way that they feel works best for them, leaving aside some of the things that they maybe aren't quite as interested in and put it together in a way that scales for them and scales for a global automation, automation everywhere. >> Yeah, I want to dig into the collections later, Robyn, for sure. And Matt, so let's, we'll put that on pause for a minute. I want to get into the event, the virtual event. Obviously we're not face to face, this year's virtual. You guys are both keynoting. Matt, we'll start with you. If you can each give 60 seconds, kind of a rundown of your keynote talk, give us the quick summary this year on the keynotes, Matt, we'll start with you. >> Yeah. That's, 60 seconds is- >> If you need a minute and a half, we'll give you 90 seconds, Robyn, that's going to be tough. Matt, we'll start with you. >> I'll try. 
So this year, and I mentioned the focus on scalability and flexibility, we on the product and on the platform, on the Ansible Automation Platform, the goal here is to bring content and flexibility of that content into the platform for you. We focused a lot on how you execute, how you run automation, how you manage your automation, and so bringing that content management automation into the system for you. It's really important to us. But what we're also noticing is that we, people are managing automation at a much larger scale. So we are updating the Ansible Tower, Ansible AWX, the automation platform, we're updating it to be more flexible in how it runs content, and where it can run content. We're making it so that execution of automation doesn't just have to happen in your data center, in one data center, we recognize that automation occurs globally, and we want to expand that automation execution capability to be able to run globally and all report back into your central business. We're also expanding over the next six months, a year, how well Ansible integrates with OpenShift and Kubernetes. This is a huge focus for us. We want that experience for automation to feel the same, whether you're automating at the edge, in devices and virtual machines and data centers, as well as clusters and Kubernetes clusters anywhere in the world. >> That's awesome. That's why I brought that up earlier. I wanted to get that out there because it's worth calling out that the Ansible mission from the beginning was similar scope, easy to do and simplify, but now it's larger scale. Again, it's everywhere, harder to do, hence complexity being extracted away. So thank you for sharing. We'll dig into that in a second. Okay, Robyn, 60 seconds or more, if you need it, your keynote this year at AnsibleFest, give us the quick rundown. >> All right. 
Well, I think we probably know at this point, one of the main themes this year is called automate to connect and, you know, the purpose of the community keynote is really to highlight the achievements of the community. So, you know, we are talking about, well, we are talking about collections, you know, going through some of the very broad highlights of that, and also how that has contributed, or, not contributed, how that is included as part of the recent release of Ansible 2.10, which was really the first release where we've got it very easy for people to actually start using collections and getting familiar with what that brings to them. A good portion of the keynote is also just about innovation, right? Like how we do things in open source and why we do things in certain ways in open source to accelerate us. And how that compares with the Red Hat, traditional product model, which is, we kind of, we do a lot of innovation upstream. We move quickly so that if something is maybe not the right idea, we can move on. And then in our products, that's sort of the thing that we give to our customers that is tried, tested and true. All of that kind of jazz. We also talk about, or I guess I also talk about the, all of our initiatives that we're doing around diversity and inclusiveness, including some of the code changes that we've made for better, more inclusive language in our projects and our downstream products, our diversity and inclusion working group that we have in the community land, which is, you know, just looking to embrace more and more people. It's a lot about connectivity, right? To one of Matt's points about all the things that we're trying to achieve and how it's similar to the original principles, the third one was, it's always, we need to have it to be easy to contribute to. It doesn't necessarily just mean in our community, right? 
Like we see in all of these workplaces, which is one of the reasons why we brought in Automation Hub, that folks inside large organizations, companies, government, whatever it is, are using Ansible, and there are more and more of them. You know, there's one person, they tell their friend, they tell another friend, and next thing you know, it's the whole department. And then you find people in other departments and then you've got a ton of people doing stuff. And we all know that you can do a bunch of stuff by yourself, but you can accomplish a lot more together. And so, making it easy to contribute inside your organization is not much different than being able to contribute inside the community. So this is just a further recognition, I think, of what we see as just a natural extension of open source. >> I think the community angle is super important 'cause you have the community in terms of people contributing, but you also have multiple vendors now, multiple clouds, multiple integrations. The stakeholders of collaboration have increased. It used to be just, "Oh, here's the upstream, et cetera, we're done, have meetings, do all that stuff." And Matt, that brings me to my next question. Can you talk about some of the recent releases that have changed the content experience for the Ansible users in the upstream and within the automation platform? >> Well, so last year we released collections, and we've really been moving towards that over the 2.9, 2.10 timeframe. And now I think you're starting to see sort of the realization of that, right? This year we've released Automation Hub on cloud.redhat.com so that we can concentrate that vendor and partner content that Red Hat supports and certifies. In AnsibleFest you'll hear us talk about Private Automation Hub.
This is bringing that content experience to the customer, to the user of this content, sort of helping you curate and manage that content yourself. Like Robyn said, we want to build communities around the content that you've developed. That's the whole reason that we've done this with collections: we don't want to bind it to Ansible core releases. We don't want to block content releases, all of this great functionality that the community is building. This is what collections mean. You should be free to use the collections that you want when you want them, regardless of when Ansible core itself has released. >> Can you just take a minute real quick and just explain what collections are, for folks out there who aren't familiar? 'Cause that's the big theme here, collections, collections, collections. That's what I'm hearing resonate throughout the virtual hallways, if you will. Twitter and beyond. >> That's a good question. Like what is a collection itself? So we've talked a lot in the past about reusable content for Ansible. We talk a lot about roles and modules, and we sort of put those off to the side a little bit and say, "These are your reusable components." You can put 'em anywhere you want. You can put 'em in source control, distribute them through email, it doesn't matter. And then your playbooks, that's what you write. And that's your sort of blessed content. Collections are really about taking the modules and roles and plugins, the things that make automation possible, and bundling those up together in groups of content, groups of modules and roles, or standing by themselves, so that you can decide how that's distributed and how you consume that, right? Like you might have the Azure, VMware or Red Hat Satellite collection that you're using. And you're happy with that. But you want a new version of Ansible. You're not bound to using one and the same. You can stick with the content that matters to you, the roles, the modules, the plugins that work for you.
And you decide when to update those, and you know what the actual modules and plugins you're using are. >> So I got to ask the content question, you know, I'm a content producer. We do videos as content, blog posts as content. When you talk about content, it's code. Clarify that for us, because you're enabling developers with content and helping them find experts. This is a concept. Robyn, talk about this. And Matt, you can weigh in, too. Define what does content mean? It means different things. (indistinct) again, content could be. >> It is one of those words, it's right up there with developers, you know, so many different things that it can mean, especially- >> Explain content and the importance of the semantics of that. Explain it, it's important that people understand the semantics of the word "content" with respect to what's going on with Ansible. >> Yeah, and Matt and I actually had a conversation about the murkiness of this word, I believe that was yesterday. So when I think about our content, you know, my first job was a sysadmin, so I try to put myself in the mind of someone who might be using this content that I'm about to attempt to explain. Like Matt just explained, we've always had these modules, which were included in Ansible, pieces of code that do very basic things, right? If I get one of the AWS modules, I am able to do things like "I would like to create a new user." So you might make a role that actually describes the steps in Ansible that you would take to create a new user that is able to access AWS services at your company. There may be a number of administrators who want to use that piece of code over and over and over again, because hopefully most companies are getting bigger and not smaller, right? They want to have more people accessing all sorts of pieces of technology.
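As a rough sketch of the reusable content Robyn describes (this example is ours, not from the interview: the `amazon.aws` collection is real, though module placement has moved between collections across releases, and the version pin, file layout, and user name are hypothetical), a team might pin the collection in a `requirements.yml` and share one small playbook that every administrator reuses:

```yaml
# requirements.yml -- pin the collections this team depends on,
# independent of the ansible-core release (version range is illustrative)
collections:
  - name: amazon.aws
    version: ">=5.0.0"
---
# create_user.yml -- a small reusable playbook any administrator can run
- name: Create a new AWS IAM user for a teammate
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the IAM user exists
      amazon.aws.iam_user:   # fully qualified collection name (FQCN)
        name: new_teammate   # hypothetical user name
        state: present
```

Installing with `ansible-galaxy collection install -r requirements.yml` keeps module versions in the operators' hands, which is the "you decide when to update" point being made here.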
So making some of these chunks accessible to lots of folks is really important, right? Because what good is automation if, sure, we've taken care of half of it, but you still have to come up with your own bits of code from scratch every time you want to invoke it? You're still not really leveraging the full power of collaboration. So when we talk about content, to me, it really is things that are constantly reusable, that are accessible, that you tie together with modules that you're getting from collections. And I think it's that bundle. You can keep those bits of reusable content in the collections or keep them separate. But, you know, it's stuff that is baked for you, or that maybe somebody inside your organization bakes, but they only have to bake it once. They don't have to bake it in 25 silos over and over and over again. >> Matt, the reason why we're talking about this is interesting, 'cause you know what this points out, in my opinion, it's my opinion: the fact that we're talking about content as a word means that you guys were on the cutting edge of new paradigms. Content, it's essentially code, but it's addressable, it's shared by a community. Someone wrote the code and it's a whole 'nother level of thinking. This is kind of platform automation. I get it. So give us your thoughts, because this is a critical component, the origination of the content, the code. I mean, I love it. I've always said our content should be code. It's all data, but this is interesting. This is a cutting edge concept. Could you explain what it means from your perspective? >> This is about building communities around that content, right?
Like it's that sharing that didn't exist before, like Robyn mentioned. Like, you know, you shouldn't have to build the same thing a dozen times or 100 times. You should be able to leverage the capabilities of experts and people who understand that section of automation the best. Like I might be an expert in one field or Robyn's an expert in another field, and we're automating in the same space. We should be able to bring our own expertise and resources together. And so this is what that content is. Like, I'm an expert in one, you're an expert in another, let's bring them together as part of our automation community and share them so that we can use them, iterate on them and build on them and just constantly make them better. >> And the concepts are consumption, there's consumption of the content. There's the collaboration of the content. There's the sharing, all this, and there's reputation, there's expertise. I mean, it's a multi-sided marketplace here, isn't it? >> Yeah. I read an article, I don't know, a year or two ago, that said we've always evolved in the technology industry around access: first it was the mainframes, then it was, whatever, personal computers, the cloud, now it's containers, all of this. But once everybody buys that mainframe, or once everybody levels up their skills to whatever the next thing is that you can just buy, there's not much left that actually can help you to differentiate from your competitors, other than your ability to actually leverage all of those tools. And if you can actually have better collaboration than other folks, I think that is one of those points that actually will get you ahead in your digital transformation curve. >> I've been harping on this for a while. I think that cloud native has finally gone, when I say "mainstream" I mean like on everyone's mind, you look at the container uptake. We had IDC on; five to 10% of the enterprises are containerizing.
That's a huge growth opportunity. Take the IPO of, say, Snowflake on Amazon. I mean, how does this happen? That's a company that went public with the most valuable IPO in the history of IPOs on Wall Street. And it's built on Amazon; it has its own cloud. So it's like, I mean, this points to the new value that's being created on top of these new cloud native architectures. So I really think you guys are onto something big here. And I think you're starting to see this, new notions of how things are being rethought and reimagined. So let's keep it, while I've got you guys here real quick, Ansible 2.1 community release. Tell us more about the updates there. >> Oh, 2.10, because, yeah. Oh, that's fine. I know I too have had, I'm like, "Why do we do that?" But it's semantic versioning. So I am more accustomed to this now, it's a slightly different world from when I worked on Fedora. You know, I think the big highlight there is really collections. I mean, it's collections, collections, collections. That is all the work that we did, it's under the hood, over the hood, and really, how we went from being all in one repo to breaking things out. It's a big line for us: we're advancing both the tool and the community's ability to actually collaborate together. And, you know, as folks start to actually use it, it's a big change for them potentially in how they can actually work together in their organizations using Ansible. One of the big things we did focus on was ensuring that the ease of use, that their experience, did not change. So if they have existing Ansible stuff that they're running, playbooks, modules, roles, et cetera, they should be able to use 2.10 and not see any discernible change. That's all under the hood. That was a lot of surgery, wasn't it, Matt? Serious amounts of work. >> So Matt, 2.10, does that impact the release piece of it for the developers and the customers out there? What does it change? >> It's a good point.
Like at least for the longer term, this means that we can focus on the Ansible core experience. And this is the part that we didn't touch on much before now with the collections pieces: that now, when we're fixing bugs, when we're iterating and making Ansible as an engine of automation better, we can do that without negatively impacting the automation that people actually use. We can focus on the core experience of actually automating itself. >> Execution environments, let's talk about that. What are they? Are they being used in the community today? How do you guys react to that? >> We're actually sort of in the middle of building this right now. Like one of the things that we've struggled with is that when you need to automate, you need this content that we've talked about before. But beyond that, you have the system that sits underneath: the version of Linux, the kernel that you're using. Going even further, you need Python dependencies, you need library dependencies. These are hard and complicated things. Like in the Ansible Tower space, we have virtual environments, which let you install those things right alongside the Ansible Tower control plane. This can cause a lot of problems. So execution environments take those dependencies, the unit that is the environment that you need to run your automation in, and we're going to containerize it. You were just talking about this from the containerization perspective, right? We're going to build more easily isolated, easy to use, distinct units of environments that will let you run your automation. This is great. This lets you, the person who's building the content for your organization, develop it and test it and send it through the CI process all the way up through production; it's the exact same environment.
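The execution environment idea Matt describes here was still being built at the time; it later shipped as a declarative container definition built with ansible-builder. A minimal sketch along those lines, with the caveat that the file names and tag are placeholders and the format shown is the later ansible-builder version 1 schema rather than anything announced in this interview:

```yaml
# execution-environment.yml -- one declarative unit for everything the
# automation needs: collections, Python libraries, OS packages
version: 1
dependencies:
  galaxy: requirements.yml   # Ansible collections
  python: requirements.txt   # Python library dependencies
  system: bindep.txt         # system-level (OS) packages
```

Running `ansible-builder build --tag my-ee:latest` then produces a container image, so development, CI and production all run the same isolated unit, which is the "exact same environment" guarantee being described.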
You can feel confident that the automation that you're running, against the libraries and the modules, the version of Ansible that you're using, is the same when you're developing the content as when you're running it in production for your business, for your users, for your customers. >> And that's the Nirvana. This is really where you talk about pushing it to new limits. Real quick, just to kind of end it out here for AnsibleFest 2020. Obviously we're now virtual, people aren't there in person, which is really an intimate event. Last year was awesome. Had theCUBE set right there, great event, people were intimate. What do you guys have for people? Obviously we've got the videos and the media content. What's the main theme, Robyn and Matt, and what resources might be available for folks who want to learn more about what's going on in the community? Can you just take a minute each to talk about some of the exciting things that are going on at the event that they should pay attention to? And obviously, it's asynchronous, so they can go anywhere, anytime they want, it's the internet. Where can they go to hang out? Is there a hang space? Just give the quick two second commercial. Robyn, we'll start with you. >> All right. Well of course you can catch the keynotes early in the morning. I look forward to everybody's super exciting, highly polite comments. 'Cause I hear there's a couple people coming to this event, at least a few. I know within the event platform itself, there are chat rooms for each track. I myself will probably be hanging out in some of the diversity and inclusion spaces, honestly, and I, this is part of my keynote. You know, one of the great things about AnsibleFest, for me, and I was at the original AnsibleFest that had like 20 people in Boston in 2013.
And it happened directly across the street from Red Hat Summit, which is why I was able to just ditch my job and go across the street to my future job, so to speak. We were... Well, I just lost my whole train of thought and ruined everything. Jeez. >> We got that you're going to be in the chat rooms for the diversity and community piece. Off platform, is there a Slack? Is there a site? Anything else? 'Cause you know, when the event's over, they're going to come back and consume on demand, but also the community, is there a Discord? I mean, all kinds of stuff's going on, popping up with these virtual spaces. >> One thing I should highlight is we do have the Ansible Contributor Summit that goes on the day before AnsibleFest and the day after AnsibleFest. Now, normally this is a pretty intimate event. With the large outreach that we've gotten with this Fest, which is much bigger than the original one, much, much, much bigger, and with signing up for the Contributor Summit being part of the registration process for AnsibleFest, we've actually geared our first day of that event towards new or aspiring contributors, rather than the traditional format that we've had, which is where we have a lot of engineers, as you may remember, sit down physically or in a virtual room and really talk about all of the things going on under the hood, which, you know, can be intimidating for new people. Like "I just wanted to learn about how to contribute, not how to do surgery." So the first day is really geared towards making everything accessible to new people, because it turns out there's a lot of new people who are very excited about Ansible and we want to make sure that we're giving them the content that they need. >> Think about architects. I mean, SREs are jumping in. Matt, you talked about large scale. You're the chief architect, new blood's coming in.
But give us an update on your perspective: what people should pay attention to at the event, after the event, communities they could be involved in. Certainly people want to tap into you as an expert and find out what's going on. What's your comment? >> Yeah, you know, we have a whole new session track this year on architects, specifically for SREs and automation architects. We really want to highlight that. We want to give that sort of empowerment to the personas of people who, you know, maybe you're not a developer, maybe you're not in operations or a VP of your company. You're looking at the architecture of automation, how you can make our automation better for you and your organization. Everybody's suffered a lot and struggled with COVID-19. We're no different, right? We want to show how automation can empower you, empower your organization and your company, just like we've struggled also. And we're excited about the things that we want to deliver in the next six months to a year. We want you to hear about those. We want you to hear about content and collections. We want you to hear about scalability, execution environments. We're really excited about what we're doing. You know, use the tools that we've provided in the AnsibleFest event experience to communicate with us, to talk to us. You can always find us on IRC, via email, on GitHub. We want people to continue to engage with us, our community, our open source community, to engage with us in the same ways that they have. And now we just want to share the things that we're working on, so that we can all collaborate on it and automate better. >> I'm really glad you said that. I mean, again, people are impacted by COVID-19. It sounds like all channels are open. I got to say, of all the communities that are having to work from home and are impacted by digital, developers probably are less impacted.
They've gained more time: they don't have to travel, they can hang out, they're used to some of these tools. So I guess the strategy is turn on all the channels and engage in new ways. And that seems to be the message, right? >> Yeah, exactly. >> Alright, Robyn Bergeron, great to see you again. Matt Jones, great to chat with you, chief architect for the Ansible Automation Platform, and of course, Robyn, senior manager for the community team. Thanks so much for joining me today. I appreciate it. >> Thank you so much. >> Okay. This is theCUBE's coverage. I'm John Furrier, your host. We're here in the studio in Palo Alto. We're virtual. This is theCUBE virtual with AnsibleFest virtual. We're not face to face. Thank you for watching. (calm music)

Published Date : Oct 1 2020


Nataraj Nagaratnam, IBM Hybrid Cloud & Rohit Badlaney, IBM Systems | IBM Think 2019


 

>> Live, from San Francisco, it's theCUBE covering IBM Think 2019. Brought to you by IBM. >> Hello everyone, welcome back to theCUBE's live coverage here in San Francisco for IBM Think 2019. I'm John Furrier, Stu Miniman with theCUBE. Stu, it's been a great day. We're on our fourth day of four days of wall to wall coverage. A theme of AI, large scale compute with cloud and data. Great topics. Got two great guests here. Rohit Badlaney, who's the director of IBM Z As a Service, IBM Systems. Real great to see you. And Nataraj Nagaratnam, Distinguished Engineer and CTO and Director of Cloud Security at IBM and Hybrid Cloud, thanks for joining us. >> Glad to be here. >> So, the subtext to all the big messaging around AI and multi-cloud is that you need power to run this. Horsepower: you need big iron, you need the servers, you need the storage, but software is at the heart of all this. So you guys had some big announcements around capabilities. The Hyper Protect was a big one on the security side, but now you've got Z As a Service. We've seen Linux come on Z. So it's just another network now; network computing is now tied in with cloud. Explain the offering. What's the big news? >> Sure, so two major announcements for us this week. One's around our private cloud capabilities on the platform. So we announced our IBM Cloud Private set of products fully supported on our LinuxONE systems, and what we've also announced is the extension of those around hyper-secure workloads through a capability called the Secure Services Container, as well as giving our traditional z/OS clients cloud consumption through a capability called the z/OS Cloud Broker. So it's really looking at how do we cloudify the platform for our existing base, as well as clients looking to do digital transformation projects on-premise. How do we help them? >> This has been a key part of this.
I want to just drill down on this cloudification, because we've been talking about how you guys are positioned for growth. All the reorgs are done. >> Sure, yeah. >> The table's all set. Products have been modernized, upgraded. Now the path is pretty clear. Kind of like what Microsoft's playbook was. Build the core cloudification. Get your core set of products cloudified. Target your base of customers. Grow that and expand into the modern era. This is a key part of the strategy, right? >> Absolutely right. A key part of our private cloud strategy is targeted to our existing base and moving them forward on their cloud journey, whether they're looking to modernize parts of their application. Can we start first with where they are on-premise? That's really what we're after. >> Alright, also you have the Hyper Protect. >> Correct. >> What is that announcement? Can you explain Hyper Protect? >> Absolutely. Like Rohit talked about, we're taking our LinuxONE capabilities, that level of assurance, the level of security that enterprises trust and depend on, on-premise and now in private cloud, and taking that further into the public cloud offering as Hyper Protect services. So these are a set of services that leverage the underlying security hardening, a level of control that nobody else can give you, offered as a service, so you don't need to know Z or LinuxONE from a consumption perspective. So I'll take two examples. Hyper Protect Crypto Service is about exposing that level of control, so that you can manage the keys. What we call "keep your own keys," because encryption is out there, but it's all about key management, so we provide that with the highest level of security that LinuxONE servers from us offer. Another example is database as a service, which runs in this hyper-secure environment. Not only encryption and keys, but leveraging down the line pervasive encryption capabilities, so nobody can even get into the box, so to speak.
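To make the "keep your own keys" idea concrete, here is a rough envelope-encryption sketch: a customer-held root key wraps the data keys a service uses, so the provider stores only wrapped keys it cannot read. This is our illustration, not IBM's implementation; it uses the third-party `cryptography` package's Fernet recipe, and all names are hypothetical:

```python
# Envelope encryption sketch: the provider only ever stores the
# wrapped (encrypted) data key, so control stays with the key owner.
from cryptography.fernet import Fernet

# Root key: stays with the customer (in practice, inside an HSM).
root_key = Fernet.generate_key()
root = Fernet(root_key)

# Data key: generated per service/record to do the actual encryption.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"sensitive record")

# The service persists only the wrapped data key and the ciphertext.
wrapped_data_key = root.encrypt(data_key)

# Decryption requires the customer's root key to unwrap the data key.
recovered_key = root.decrypt(wrapped_data_key)
assert Fernet(recovered_key).decrypt(ciphertext) == b"sensitive record"
```

Revoking the root key makes every wrapped data key, and therefore every ciphertext, unreadable, which is why key management rather than encryption itself is the control point Raj emphasizes.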
>> Okay, so I get the encryption piece. That's solid, great. Encryption is always good. Containers: there have been discussions at the CNCF about containers not being part of the security boundary and putting a VM around them. Different schools of thought there. How do you guys look at containerization? Does that fit into Hyper Protect? Talk about that dynamic, because encryption I get, but where are you going with containers? >> Great question, because it's about the workload, right? When people are modernizing their apps or building cloud-native apps, it's built on Kubernetes and containers. What we have done, the fantastic work across both IBM Cloud Private on Z as well as Hyper Protect, underlying it all is containers, right? So as we deliver these services, and for customers also to build data services as containers or VMs, they can deploy on this environment or consume these as compute. So fundamentally it's Kubernetes everywhere. That's a foundational focus for us. It can go public, private and multicloud, and we are taking that journey into the most austere environment with the performance and scale of Z and LinuxONE. >> Alright, so Rohit, help bring us up to date. We've been talking about this hybrid and multi-cloud stuff for a number of years, and the idea we've heard for many years is, "I want to have the same stack on both ends. I want encryption all the way down to the chip set." I've heard companies like Oracle, like IBM say, "We have resources in both. We want to do this." We understand Kubernetes is not a magic layer; it takes care of a certain piece, you know, and we've been digging into that quite a bit. Super important, but there's more than that, and there still are differences between what I'm doing in the private cloud and public cloud just naturally. Public cloud, I'm really limited to how many data centers; private cloud, everything's different. Help us understand what's the same, what's different.
How do we sort that out in 2019? >> Sure, from a brand perspective we're looking at private cloud in our IBM Cloud Private set of products and standardizing on that from a Kubernetes perspective, but also in the public cloud, we're standardizing on Kubernetes. The key secret sauce is our Secure Services Container under there. It's the same technology that we use under our Blockchain Platform. Right, it brings the Z differentiation for hyper-security, lockdown, where you can run the most secure workloads, and we're standardizing that on both public and private cloud. Now, of course, there are key differences, right? We're standardizing on a different set of workloads on-premise. We're focusing on containerizing on-premise. That journey to move to the public cloud, we still need to get there. >> And the container piece is super important. Can you explain the piece around, if I've got multi-cloud going on, Z becomes a critical node on the network, because if you have an on-premise base, Z's been very popular, LinuxONE has been really popular, but it's been for the big banks, and it seems like the big, you know, it's big iron, it's IBM, right? But it's not just the mainframe. It's not proprietary software anymore, it's essentially large-scale capability. >> Right. >> So now, when that gets factored into the pool of resources and cloud, how should customers look at Z? How should they look at the equation? Because this seems to me like an interesting vector into adding more headroom for you guys, at least on the product side, but for a customer, it's not just a use case for the big banks, or doing big backups; it seems to have more legs now. Can you explain where this fits into the big picture? Because why wouldn't someone want to have a high-performance option? >> Why don't I use a customer example? I had a great session this morning with Brad Chun from Shuttle Fund, who joined us on stage. They know the financial industry.
They are building a Fintech capability called Digital Asset Custody Services. It's about how you digitize your assets, how you tokenize them, how you secure them. So when they look at it from that perspective, they've been partnering with us. It's a classic hybrid workload where they've deployed some of the apps on the private cloud and on-premise with Z/LinuxONE, reaching out to the cloud using the Hyper Protect services. So when they bring this together, built on blockchain under the covers, they're bringing the capability of being agile to the market, the ability for them to innovate and deliver with speed, but with that level of capability. So from that perspective, it's a Fintech, but they are not one of the largest banks that you may know of, and that's the kind of innovation it enables, even if you don't have, quote unquote, a mainframe or a Z. >> This gives you guys more power and, literally, more reach in the market, because of what containers and now Kubernetes enable. For example, Ginni Rometty said "kubernetes" twice in her keynote. I'm like, "Oh my God. The CEO of IBM said 'kubernetes' twice." We used to joke about it. Only geeks know about Kubernetes. Here she is talking about Kubernetes. Containers, Kubernetes, and now service meshes around the corner give you guys reach into the public cloud to extend the Z capability without foreclosing the benefits of Z. So that seems to be a trend. Who's the target for that? Give me an example of who's the customer or use case. What's the situation that would allow me to take advantage of cloud and extend the capability to Z? >> If you just step back, what we're really trying to do is create a high assurance zone in our cloud called Hyper Protect.
It's targeted to our existing Z base, who want to move on this enterprise-out journey, but it's also targeted to clients like Shuttle Fund and DAX that Raj talked about, that are building these hyper-secure apps in the cloud and want the capabilities of the platform, but in a more cloud-native style. It's the breadth of moving our existing base to the cloud, but also these new security developers who want to do enterprise development in the cloud. >> Security is key. That's the big drive. >> And that's the beauty of Z. That's what it brings to the table. And what it brings to the cloud is the hyper lockdown, the scale, the performance, all those characteristics. >> We know that security is always an on-going journey, but one of the things that has a lot of people concerned is when we start adding IoT into the mix. It increases the surface area by orders of magnitude. How do those types of applications fit into these offerings? >> Great question. As a matter of fact, I didn't give you the question by the way, but this morning, KONE joined me on stage. >> We actually talked about it on Twitter. (laughs) >> KONE joined us on stage. It's about, in the residential workflow, how they're enabling their integration of access and identity into that. As an example, they're building on our IoT platform and then they integrate with security services. That's the beauty of this. Rohit talked about developers, right? So when developers build it, our mission is to make it simple for a developer to build secure applications. With the security skills shortage, you can't expect every developer to be a security geek, right? So we're making it simple, so that you can kind of connect your IoT to your business process and your back-end application seamlessly in a multi-cloud and hybrid-cloud fashion. That's where the cloud-native perspective comes in, and building some of these sensitive applications on Hyper Protect or Z/LinuxONE and private cloud enables that end to end.
>> I want to get you guys' take while you're here, because one of the things I've observed here at Think is that the theme is clearly Cloud, AI, and developers all kind of coming together. I mean, AI, Amazon's event, AI, AI, AI, at cloud scale, you guys don't have that. But the developer angle is really interesting. And you guys have a product called IBM Cloud Private, which seems to be a very big centerpiece of the strategy. What is this product? Why is it important? It seems to be part of all the key innovative parts that we see evolving out of the thing. Can you explain what IBM Cloud Private is and how it fits into the puzzle? >> Let me take a pass at it, Raj. We really see IBM Cloud Private as that key linchpin on-premise. It's a Platform as a Service product on-premise, built on kubernetes and Docker containers, but what it really brings is that standardized cloud consumption for containerized apps on-premise. We've expanded that, of course, to our Z footprint, and let me give you a use case of clients and how they use it. We're working with a very big, regulated bank that's looking to modernize a massive monolithic piece of WebSphere Application Server on-premise and break it down into micro-services. They're doing that on IBM Cloud Private. They've containerized big parts of the application on WebSphere on-premise. Now they've not made that journey to the cloud, to the public cloud, but they are using it to answer: how do you modernize your existing footprint into a more containerized, micro-services one? >> So this is the trend we're seeing: the decomposition of monolithic apps on-premise is step one. Let's get that down, get the culture, and attract the new, younger people who come in, not the older guys like me, from the mini-computer days. Really make it ready, composable, then they're ready to go to the cloud. This seems to be the steps. Talk about that dynamic, Raj, from a technical perspective. How hard is it to do that? Is it a heavy lift?
Is it pretty straight-forward? >> Great question. IBM, we're all about open, right? So when it comes to our cloud strategy, open is the centerpiece; that's why we have banked on kubernetes and containers as that standardization layer. This way you can move a workload from private to public, and even ICP can run on other cloud vendors as well, not just IBM Cloud. So it's a private cloud that customers can manage, or it runs in the public cloud on IBM kubernetes that we manage for them. Then it's about the app, the containerized app that can be moved around, and that's where our announcements about Multicloud Manager, that we made late last year, come into play, which helps you seamlessly move and integrate applications that are deployed on kubernetes across private, public or multicloud. So that abstraction veneer enables that to happen, and that's why the open... >> So it's an operational construct? Not an IBM product, per se, if you think about it that way. So the question I have for you, I know Stu wants to jump in, he's got some questions. I want to get to this new mindset. The world's flipped upside down. The applications and workloads are dictating architecture and programmability to the DevOps, or infrastructure, in this case, Z or cloud. This is changing the game on how cloud selection happens. So we've been having a debate on theCUBE here, publicly, that in some cases it's a best-cloud-for-the-job decision, not a procurement decision, "I need multi-vendor cloud," versus "I have a workload that runs best with this cloud." And it might be, say, if you're running 365, or G Suite with Google, Amazon's got something, so it seems to be the trend. Do you agree with that? And certainly, there'll be many clouds. We think that's true, it's already happened. Your thoughts on this workload driving the requirements for the cloud? Whether it's a sole purpose cloud, meaning for the app. >> That's right. I'll start and Rohit will add in as well.
That's where this chapter two comes into play, as we call it Chapter Two of Cloud, because it is about how you take enterprise applications, the mission-critical complex workloads, and then look for the enablers. How do you make that modernization seamless? How do you make the cloud native seamless? That particular journey is where IBM Cloud and our Multicloud and Hybrid Cloud strategy come into play, to make that transition happen and provide the set of capabilities that enterprises are looking for to move their critical workloads across private and public with much more assurance and performance and scale, and that's where the work that we are doing with Z and LinuxONE serves as an underpinning to embark on the journey to move those critical workloads to their cloud. So you're absolutely right. When they look at which cloud to go to, it's about the capabilities, the tools, the management and orchestration layers that a cloud provider or a cloud vendor provides, and it's not only just about IBM Public Cloud; it's about enabling the enterprises, providing them the choice and the offerings.
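The workload portability described above can be made concrete with a small sketch (illustrative only, not IBM or Red Hat code; the function name, app name, and cluster names are all hypothetical): a containerized app is captured in one declarative Kubernetes spec, and the identical spec can be handed to a private cluster or a public one.

```python
# Hypothetical sketch: one declarative Deployment spec, many target clusters.
# In real use this dict would be YAML applied with `kubectl apply -f`;
# Multicloud Manager's role is deciding *where* such specs get placed.

def make_deployment(app_name, image, replicas=3):
    """Build a Kubernetes apps/v1 Deployment spec as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app_name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": app_name}},
            "template": {
                "metadata": {"labels": {"app": app_name}},
                "spec": {"containers": [{"name": app_name, "image": image}]},
            },
        },
    }

# The same spec is submitted unchanged to either environment; only the
# cluster endpoint (the kubectl context) differs.
spec = make_deployment("claims-service", "registry.example.com/claims:1.4")
placements = {cluster: spec for cluster in ("icp-on-premise", "public-iks")}

assert placements["icp-on-premise"] == placements["public-iks"]
```

The point is that the abstraction layer, not the application, absorbs the difference between private and public; moving the workload becomes a scheduling decision rather than a rewrite.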
>> From my perspective, I think really the next big wave of cloud is going to be looking at those enterprise workloads. It's funny, I was just having a conversation with a very big bank in the Netherlands, and they were, of course, a very big Z client, asking us about the breadth of our cloud strategy and how they can move forward. Really looking at a private cloud strategy to help them modernize, and then looking at which targeted workloads they could move to public cloud, is going to be the next frontier. And those 80 percent of workloads that haven't moved. >> And integration is key, and for you guys, competitive strategy-wise, you've got a lot of business applications running on IBM's huge customer base. Focus on those. >> Yes. >> And then give them the path to the cloud. The integration piece is where the linchpin is, and obviously secure. >> Enterprise out, guys. >> Love encryption, love to follow up more on the secure container thing, I think that's a great topic. We'll follow up after this show, Raj. Thanks for coming on. theCUBE coverage here. I'm John Furrier, with Stu Miniman. Live coverage, day four, here live in San Francisco for IBM Think 2019. Stay with us for more. Our next guests will be here right after a short break. (upbeat music)

Published Date : Feb 14 2019


Jamie Thomas, IBM | IBM Think 2018


 

>> Narrator: Live from Las Vegas, it's TheCUBE! Covering IBM Think 2018. Brought to you by IBM. >> Hello everyone, I'm John Furrier, we're here inside TheCUBE Studios at Think 2018, extracting the signal from the noise with our live event coverage of IBM Think, the big tent event taking six shows down to one. Everyone's here: the customers, developers, all the action. My next guest is Jamie Thomas, General Manager of IBM's Systems Strategy and Development. Good to see you, Cube alumni, thanks for coming by. >> Good to see you, it's always one of the highlights of these meetings for me, getting a chance to talk with you all about what we're doing. >> We've had, I can't even remember how many, it's like eight years now, but you've been on pretty much every year, giving the update. I was just riffing in the opening about blockchain and the innovation sandwich at IBM. I'm calling it the innovation sandwich; that's not what you guys are calling it. It really is about the data, and then blockchain and AI, that's the main thing, with Cloud as the foundational element. You're in strategy. Systems. So you have all the underlying enabling technology with IBM and are looking at that direction. Part of the innovation sandwich is systems. >> Absolutely, I think fundamentally what we're seeing is that all of the work and innovation we've invested in over the last few years is finally culminating in a really nice conclusion for us, if you will. Because if you look at the trajectory of those forces you spoke about, right? Which is, how do we harness the power of data? Of course, to harness that data we have to apply techniques like artificial intelligence, machine learning, deep learning to really get the value out of the data. And then we have to underpin that with a multi-cloud architecture.
So we really do feel that all the innovations that we've been working on for the last few years are now coming to bear to help our clients solve these problems in really unique ways. >> We've had many conversations, we've gone down in the weeds, we've been under the hood, we've talked about business value. But I think what I'm seeing, and what TheCube has been reporting over the past year and more recently, is that there's now a clear line of sight for the customers. The interesting thing is the model's flipped around, as we've been seeing, but it's clear: DevOps enabled cloud to be successful where we have a programmable infrastructure. You guys have been doing software defined systems for a long time. But now with blockchain, cryptocurrency and decentralized application developers, you have inefficiencies being disrupted by making things more efficient. We're seeing the business logic be the intellectual property. So users, business users, business decision makers are looking at the business model of token economics. It's kind of at the top of the business stack that has to manage technology now. So the risk is flipped around. It used to be that technology was the risk. Technology purchase, payback period of over ten plus years, more longevity to the cycle. Now you've got Agile going real-time, and this requires everything to be programmable. The data's got to be programmable, the systems have to be programmable. What's the IBM solution there? How do you guys fit that formula? Do you agree with it? Your thoughts. >> Well absolutely, I think that fundamentally you have to have infrastructure that can really meet the needs and characteristics of the next generation of killer applications, right? Whether that's blockchain, or whether we're talking about artificial intelligence across numerous industries, and every industry is looking at applying those techniques.
You have to ensure that you have an architectural approach with your infrastructure that allows you to actually get the result from a client perspective. When we look at the things that we've invested in, we're really investing in infrastructure that we feel allows clients to achieve those goals. If you look at what we've done with things like Power9, the ability to create a high speed interconnect with things like GPU acceleration using our partner NVIDIA's technology is an example. Those are really important characteristics of the infrastructure, to be able to enable clients to then achieve the goals of something like artificial intelligence. >> What's different for the people that are now getting this, coming in? How do you summarize the past few years of strategy and development around the systems piece? Because systems programming is all about making things smaller, faster, cheaper, Moore's Law. But it's also about having a network effect in supply chains or value chains, blockchain or whatever that is; that's the business side. What's new? How do you talk about that to someone who's now, for the first time, going, okay, I get it. It's clear. What's the system equation? How do you explain that to someone? >> Well I think it's a combination of focusing on the economics, but also having a keen eye on where the puck is going. In the world of hardware development, you have to have that understanding at least a year and a half, two years back, to actually culminate in a product offering that can serve the needs at the right time. So I think we've looked at both of those in combination. It's not just about economics. It is also about being specialized, being able to serve the needs of the next generation of killer applications and therefore the programmers that support those applications. >> What's the big bet that you guys have made?
If you could look back at the past three, four years, at the trials and tribulations of storage, compute, cloud, it's been a lot of zigging and zagging. Not pivoting, because you guys have been innovating. What's the one thing, or a few things, you can point to and say, that was a good bet, and that's now fruit coming off the tree in this new equation? >> Well, I think there's a few things, and all of these things were done with the context that we believe artificial intelligence and cloud architectures are here to stay. If you look at the bets we made around the architecture of Power9, it was really, how do we make this the best architecture in the world for artificial intelligence execution? All of those design points, all of the thought about the ecosystem around the partners, OpenPOWER, the connectivity between the GPU and the CPU that I mentioned. All of that, and the software stack, the investments we've made in things like PowerAI to allow developers to easily use the platform, have been fundamentally important. Then if you look at what we did in the Z platform, it's really about this notion of pervasive encryption. Allowing developers to use encryption without forethought. Ensuring that performance would always be on. They would not have to change their applications. That's really fundamentally important for applications like blockchain. To be able to have encryption in the cloud, the kind of services we announced yesterday. So these bets came from understanding that it's not just about the short term, it's about the long term and this next generation of applications. As we all know, as you and I know, you can't serve those kinds of applications without having an understanding of the data map. How are you going to manage the huge amounts of data that these organizations are dealing with? So our investments, for years now, in software defined storage, our Spectrum Storage family, and our Flash have served us well.
Because now we have the mechanisms, if you will, at our fingertips to manage storage and data in these multi-cloud architectures, as well as improve data latency and access to data through the things we've done. >> So the performance is critical there? >> Yeah, absolutely. The things we've done with Flash, and the things we've done with our high end storage with the mainframe, the zHyperLink capability we've built in there between the CEC and the storage device, those are really, really important in this new world order of these kinds of next generation applications. >> Yeah, skating where the puck is is great, and then sometimes you're just near there and the puck comes to you, however, whatever way you want to look at it. Take a minute to explain your role now. What specifically does systems mean? Where does it begin and where does it stop? You mentioned software stack, software defined storage, we get that piece. What does the system portfolio look like? >> We're focused on the modern infrastructure of the future. And of course that infrastructure involves hardware. It involves systems and storage. But it also fundamentally involves the infrastructure-related hardware and software stacks. So we own and manage critical software stacks. The creation of things like PowerAI, and the work with the IBM Cloud team to ensure that IBM Cloud Private can support our platforms, Power and Z, out of the box. Those are all fundamentally important initiatives. We of course still own all of the operating systems everybody loves, whether it's Linux, AIX, or z/OS, as well as the work around all the transactional systems. But first and foremost, there's a really tight tie, as we all know, between hardware and the software that needs to be brought to bear to execute against that hardware; the two have to be together, right? >> What about R&D? What's the priority on R&D?
It's the continuation of some of the things you just mentioned, but is there anything on the radar that you can share in R&D that's worth noting? >> Well I think, clearly, we're working on the next evolution of these systems already. The next series of Power9s; we have new machines rolling out this month from a Power9 perspective. We're always working on the next generation of the mainframe, of course. But I'd say the project of ours that's gotten a lot of notice at the conferences is our Quantum project. So IBM Systems is partnering with IBM Research to create the Quantum computer. That would be the most leading edge effort that we have going on right now, so that's pretty exciting. >> Yeah, and that's always good stuff coming out. Smaller, how big is this Quantum, can you put it on your finger? Was that the big news? A lot of great action there. >> Well, the Quantum computer is a very different form factor. It's truly an evolutionary, revolutionary event, if you will, from a hardware perspective, right? Because the qubit itself has to run at near absolute zero. So it has to run in a very cold environment. And then we speak to it through wave-based communications, if you will, coming in from an electronic stack. It's fundamentally a huge change in hardware architecture. >> What's that going to enable for the folks watching? Is it more throughput? More data? New things, what kind of enablement do you guys envision? >> Well, first of all, the Quantum computer will never replace classical computers, because they're very different in terms of what they can process. There are many problems today in the world that are really not solvable. Problems around chemistry, material science, molecular modeling. There are certainly certain financial equations that really are processable, but not processable in the right amount of time. So when you look at what we can do with Quantum, I think there will be problems that we'll be able to solve that we can't even solve today.
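As a rough illustration of why the qubit Jamie describes is such a different beast from a bit, here is a toy state-vector sketch (illustrative only, not how you would program IBM Q; real work would go through a framework such as Qiskit): a qubit's state is a pair of complex amplitudes, gates are 2x2 matrices, and measuring gives 0 or 1 with probability equal to the squared magnitude of each amplitude.

```python
import math

def apply(gate, state):
    """Apply a 2x2 gate (matrix) to a single-qubit state (amplitude pair)."""
    (a, b), (c, d) = gate
    s0, s1 = state
    return (a * s0 + b * s1, c * s0 + d * s1)

# Hadamard gate: puts a basis state into an equal superposition.
H = ((1 / math.sqrt(2), 1 / math.sqrt(2)),
     (1 / math.sqrt(2), -1 / math.sqrt(2)))

zero = (1 + 0j, 0 + 0j)          # the classical-looking |0> state
superposed = apply(H, zero)

# After one Hadamard, a measurement yields 0 or 1 with equal probability:
# the qubit genuinely occupies both outcomes until measured.
probs = [abs(amp) ** 2 for amp in superposed]
assert all(abs(p - 0.5) < 1e-9 for p in probs)

# Applying H twice returns exactly |0>: amplitudes can interfere and cancel,
# which classical probabilities cannot do.
back = apply(H, superposed)
assert abs(back[0] - 1) < 1e-9 and abs(back[1]) < 1e-9
```

That interference between amplitudes is the resource quantum algorithms exploit, and it is why these machines target a different class of problems than classical systems.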
As well as it will be an accelerator for a lot of the existing traditional systems, if you will, allowing us to accelerate certain operations. If we think about the creation of more intelligent training models, for instance, to apply against artificial intelligence problems, we could anticipate that the Quantum computer could help speed up the evolution and development of these models. There is a lot of interest in working on this evolution of hardware because it's somewhat like the 1940s era of the mainframe. We're at the very beginning stages, and we all know that when we evolved the mainframe it was through significant partnerships. Helping man get to the moon. Working with airlines on their reservation systems. It was these partnerships that really enabled us to understand what the power of the machine could be. I think it will be the same way with Quantum as we work with our partners on that endeavor. >> Talk about that, because performance is critical, and you know blockchain has been criticized as having performance problems, writing to the chain, if you will. So clearly there's a problem opportunity basis you can work on there. What are the problems in blockchain, is that your area? Do you work on that? Are you vectoring into blockchain? >> Well, of course we're very involved in the blockchain efforts, because IBM secure blockchain is running on our z14 processor. One of the things we want to take advantage of there is not only the performance of the system, but also, once again, the security characteristics. The ability to just encrypt on the fly. The exploitation of the fast encryption, the crypto module, all of that, is really fundamental in our journey on blockchain. I also think that we have a unique perspective in IBM on blockchain because we're a consumer of blockchain. We're already using it in our CFO office.
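The "encrypt on the fly, without changing applications" idea behind pervasive encryption can be illustrated with a toy sketch (this is in no way z14's actual mechanism, which uses on-chip AES hardware; the class name and the hashlib-based keystream cipher here are stand-ins): the application works purely in plaintext while the platform layer beneath it encrypts everything it stores.

```python
import hashlib
from itertools import count

def _keystream(key: bytes, length: int) -> bytes:
    """Deterministic pseudorandom keystream (toy stand-in for hardware AES)."""
    out = b""
    for counter in count():
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def _xor(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the original."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

class TransparentStore:
    """Applications call put/get with plaintext; storage only sees ciphertext."""
    def __init__(self, key: bytes):
        self._key = key
        self._disk = {}          # simulated storage device

    def put(self, name: str, plaintext: bytes):
        self._disk[name] = _xor(plaintext, self._key)

    def get(self, name: str) -> bytes:
        return _xor(self._disk[name], self._key)

store = TransparentStore(key=b"platform-managed-key")
store.put("ledger.rec", b"ACME wire transfer $1,000,000")
assert store.get("ledger.rec") == b"ACME wire transfer $1,000,000"  # app sees plaintext
assert store._disk["ledger.rec"] != b"ACME wire transfer $1,000,000"  # disk never does
```

The developer's code path (put and get with plaintext) never changes; the encryption lives entirely in the layer below, which is the "without forethought" property Jamie describes.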
I've spoken to you guys before about supply chains. I own the supply chain manufacturing for IBM, and we're also running a shadow process for blockchain where we're working on customs declarations, just like Maersk was talking about yesterday. Because customs declaration is a very difficult process. Very manual, labor intensive, a lot of paper. So we're doing that as well, and we'll be a test case for IBM's blockchain work. >> And I heard last night that you have 100 customers already. You've heard my opening, I was ranting on the opportunity that blockchain has, which is to take away inefficiencies. And supply chain, you guys are no strangers to supply chain; you've been bringing technology to solve supply chain problems for generations at IBM. Blockchain brings a new opportunity. >> It does, and my team fundamentally realizes this of course, as a supply chain organization. We ship over five million pieces of stuff every year. We're shipping into 170 countries. We have tight but dispersed manufacturing operations, so we see this every day. We have to ship products into every country in the world. We have to work with a very dispersed network of logistics partners. So we see the opportunity in blockchain for things like customs declarations as a first priority, but obviously, with the logistics network, there's just huge opportunities here where far too much of this is really done manually. >> You guys could really run the table on this area. I mean blockchain, supply chain; chain, I mean, similar concept, it's just decentralized and distributed. >> Well I think supply chain is such an area ripe for this kind of application. Something that's really going to break through what has been so labor intensive from a manual perspective. Even if you look at how ports are managed, and Maersk talked about that yesterday. >> So you're long on blockchain? >> Well, I'm excited about it because I'm a customer of blockchain.
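A minimal sketch of the ledger property that makes blockchain attractive for paperwork like customs declarations (illustrative only; IBM's actual offering is built on Hyperledger Fabric, and the record fields below are made up): every block embeds a hash of its predecessor, so any participant can detect after-the-fact tampering with the shared records.

```python
import hashlib
import json

def add_block(chain, declaration):
    """Append a declaration, linking it to the previous block by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"declaration": declaration, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any edited block breaks the chain from there on."""
    prev = "0" * 64
    for block in chain:
        body = {"declaration": block["declaration"], "prev": block["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

ledger = []
add_block(ledger, {"shipment": "IBM-7731", "port": "Rotterdam", "duty": 1200})
add_block(ledger, {"shipment": "IBM-7731", "port": "New York", "duty": 0})
assert verify(ledger)

ledger[0]["declaration"]["duty"] = 0   # someone quietly edits the paperwork
assert not verify(ledger)              # the hash chain exposes it immediately
```

A real network adds consensus, signatures, and access control on top, but this tamper-evidence is the core reason a shared ledger beats manual, paper-heavy reconciliation between shippers, carriers, and customs agencies.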
I see the issues that occur in supply chains every day, and I fundamentally think it will be a game changer. >> Yeah, I'm biased, I mean we're trying to move our media business to the blockchain because everything's decentralized. I'm excited about the application developer movement that's starting now. You're starting to see, with cryptocurrency, token economics come into play around the business model innovations. Do you guys talk about that internally when you do R&D? Do you have to cross-connect the business model logic of token economics with the technology? >> Well, of course, you know that's a fundamental part of the blockchain focus, right? It's just like any new project that we embark on. You've got to get the underlying technology right, but you always have to do that in the context of the business execution, the business deployment. So we're learning from all the engagements we're doing. And then that shapes the direction that we take the underlying technology in. >> Jamie, talk about IBM Think 2018, it's a big event. I mean you can't multiply yourself times six. You go to all the events. This is a big event. You must be super busy. What's the focus? What's your reaction, what have you been talking about? >> Well, it's kind of nice to talk to you towards the end of the event. Sometimes I talk to you guys at the very beginning of the event, so this is kind of a retrospective of the things that have happened. I think it was a great event in terms of showcasing our innovation, but also in having a number of key CEOs from various firms talk to us about how they're really using this technology. Great examples from RBC, from Maersk, from Verizon, from the NVIDIA CEO yesterday. And also some really pointed discussions around looking into the future. So we had a research talk where Arvind Krishna spoke about the next five big plays; artificial intelligence, blockchain, and Quantum were on that list certainly.
As well as now we'll be having a Quantum keynote later today, so we'll dive into Quantum a little bit more in terms of how the future will be shaped by that technology. But I think it was a nice mix of hearing about the realization of deploying some of the things that we've done in IBM, combined with where things are going, and stimulating thought with the client, which is always important in these kinds of meetings. It is having that strategic discussion about how we can really partner with them. >> Real conversations. >> Yeah, real conversations about how we can partner with them to be successful as they leave this conference and go back to their home offices. >> Well, congratulations on a great strategy, you've been running strategy. I know we've talked in the past. You've kind of had to bring it all together into one package, into one message, but still have the ability and flexibility to manage the tech. So my final question for you is, where's the puck going next? Where are you skating now, strategy-wise, to catch that next puck? >> Well, I think that what we'll see is a continued progression, if you will, and speed around some of the things that we've already talked about here. There's been a lot of discussion, for instance, around multi-cloud architectures. But I really think we're still at the tip of the spear in fundamentally getting the value out of those architectures. The real deployment of some of those architectures, as clients modernize their applications and really take advantage of Cloud, I think will drive a different utilization of storage, and it will require different characteristics out of our systems as we go forward. So I think that we're at the tip of a journey here that will inform us. >> The modernization and business model innovation, technology enablement, all coming together. >> Right, we were talking about that, right? So think about it: the primary use case of IBM Cloud Private right now is modernization of those applications.
So as those clients modernize those applications and then start to deploy these new techniques in combination with that, around artificial intelligence and blockchain, there's just a huge opportunity for us to continue this infrastructure innovation journey. >> International Business Machines. The name of the company, obviously, and you know my opinion on this, we're reporting that the real critical intellectual property for customers is going to be the business innovation; the business model opportunities in blockchain and AI really accelerate that piece. >> And as Ginni said yesterday, we're here to serve our clients, to make sure that they're successful in moving from where they have been, and in the continuation of this journey. And so that will be where we keep our focus as we go forward. >> Well, looking forward to talking about token economics. I think that's going to be a continued conversation as you guys create more speed, more performance, and the business model innovations around token economics. And then decentralized application developers will probably impact IoT, will probably impact a lot of these fringe, emerging use cases that need compute, that need power. They need network effect, they need data. >> Absolutely. So I mean there's been a lot of discussion this week about making sure that we move the processing to the data, not the data to the processing, because obviously you can't move all that data around. That's why I think these fungible and Agile architectures will give clients the ability to do that more effectively. And as you said, we always have to worry about those developers. We have to make sure that they have the modern tools and techniques that allow them to move with speed and still take advantage of a lot of those. >> And educate the business users. >> Exactly, exactly. >> Are you having fun? >> I'm having great fun, this has been a great conference. It's always great to talk with you guys.
>> We really appreciate your friendship and always coming on TheCube and sharing your insights. Always great to get the data out there. Again, we're data driven, this data driven interview with Jamie Thomas, General Manager of System Strategy and Development here at IBM Think inside TheCube studios we're on the floor here in Las Vegas. I'm John Furrier. We'll be back with more after this short break.

Published Date : Mar 21 2018


Barry Baker, IBM - IBM Machine Learning Launch - #IBMML - #theCUBE


 

>> [Narrator] Live from New York, it's theCUBE! Covering the IBM Machine Learning Launch Event, brought to you by IBM. Now, here are your hosts: Dave Vellante and Stu Miniman. >> Hi everybody, we're back, this is theCUBE. We're live at the IBM Machine Learning Launch Event. Barry Baker is here, he's the Vice President of Offering Management for z Systems. Welcome to theCUBE, thanks for coming on! >> Well, it's my first time, thanks for having me! >> A CUBE newbie, alright! Let's get right into it! >> [Barry Baker] Go easy! >> So, two years ago, January of 2015, we covered the z13 launch. The big theme there was bringing analytics and transactions together, z13 being the platform for that. Today, we're hearing about machine learning on mainframe. Why machine learning on mainframe, Barry? >> Well, for one, it is all about the data on the platform, and the applications that our clients have on the platform. And it becomes a very natural fit for predictive analytics and what you can get from machine learning. So whether you're trying to do churn analysis or fraud detection at the moment of the transaction, it becomes a very natural place for us to inject what is pretty advanced capability from a machine learning perspective into the mainframe environment. We're not trying to solve all analytics problems on the mainframe, we're not trying to become a data lake, but for the applications and the data that reside on the platform, we believe it's a prime use case that our clients are waiting to adopt. >> Okay, so help me think through the use case of I have all this transaction data on the mainframe. Not trying to be a data lake, but I've got this data lake elsewhere, that might be useful for some of the activity I want to do. How do I do that? I'm presuming I'm not extracting my sensitive transaction data and shipping it into the data lake. So, how am I getting access to some of that social data or other data? 
>> Yeah, and we just saw an example in the demo pad before, whereby the bulk of the data you want to perform scoring on, and also the machine learning on to build your models, is resident on the mainframe, but there does exist data out there. In the example we just saw, it was social data. So the demo that was done was how you can take and use IBM Bluemix and get at key pieces of social data. Not a whole mass of the volume of unstructured data that lives out there. It's not about bringing that to the platform and doing machine learning on it. It's about actually taking a subset of that data, a filtered subset that makes sense to be married with the bigger data set that sits on the platform. And so that's how we envision it. We provide a number of ways to do that through the IBM Machine Learning offering, where you can marry data sources from different places. But really, the bulk of the data needs to be on z and on the platform for it to make sense to have this workload running there. >> Okay. One of the big themes, of course, that IBM puts forth is platform modernization, application modernization. I think it kind of started with Linux on z? Maybe there were other examples, but that was a big one. I don't know what the percentage is, but a meaningful percentage of workloads running on z are Linux-based, correct? >> Yeah, so, the way I would view it is it's still today that the majority of workload on the platform is z/OS based, but Linux is one of our fastest growing workloads on the platform. And it is about how do you marry and bring other capabilities and other applications closer to the systems of record that is sitting there on z/OS. >> So, last week, at AnacondaCON, you announced Anaconda on z, certainly Spark, a lot of talk on Spark. Give us the update on the sort of tooling. >> We recognized a few years back that Spark was going to be key to our platform longer-term. So, contrary to what people have seen from z in the past, we jumped on it fast. 
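The fraud use case Baker sketches, learning patterns from labelled transaction history and then scoring new transactions at the moment they arrive, can be pictured with a toy, dependency-free stand-in. A real deployment would use Spark ML on the platform rather than hand-rolled gradient descent, and the feature names and data below are hypothetical:

```python
import math

def train_logistic(rows, labels, lr=0.5, epochs=300):
    """Fit a tiny logistic model with gradient descent (a toy stand-in
    for what a Spark ML logistic regression stage would do)."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(model, x):
    """Return a fraud probability in [0, 1] for one transaction."""
    w, b = model
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Hypothetical features: [amount (scaled), is_foreign, txns_last_hour (scaled)]
history = [[0.1, 0, 0.1], [0.2, 0, 0.2], [0.9, 1, 0.8], [0.8, 1, 0.9]]
labels = [0, 0, 1, 1]
model = train_logistic(history, labels)
print(score(model, [0.85, 1, 0.7]))  # a transaction resembling past fraud
```

The point of "staying close to the systems of record" is that this scoring step runs beside the transaction data rather than after an extract.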
We view it as an enabling technology, an enabling piece of infrastructure that allows for analytics solutions to be built and brought to market really rapidly. And the machine learning announcement today is proof of that. In a matter of months, we've been able to take the cloud-based IBM Watson Machine Learning offering and have a big chunk of it run on the mainframe, because of the investment we made in Spark a year and a half ago, two years ago. We continue to invest in Spark, we're at the 2.0.2 level. The announcement last week around Anaconda is, again, how do we continue to bring the right infrastructure, from an analytics perspective, onto the platform. And you'll see later, maybe in the session, where the roadmap for ML isn't just based on Spark. The roadmap for ML also requires us to go after and provide new runtimes and new languages on the platform, like Python and Anaconda in particular. So, it's a coordinated strategy where we're laying the foundation on the infrastructure side to enable the solutions from the analytics unit. >> Barry, when I hear about streaming, it reminds me of the general discussion we've been having with customers about digital transformation. How does mainframe fit into that digital mandate that you hear from customers? >> That's a great, great question. From our perspective, we've come out of the woods of many of our discussions with clients being about, "I need to move off the platform," and rather, "I need to actually leverage this platform, because by the time I move off this platform, digital's going to wash over me and I'm going to be gone." So the very first step that our clients take, and some of our leading clients take, on the platform for digital transformation, is moving toward standard RESTful APIs, taking z/OS Connect Enterprise Edition, putting that in front of their core, mission-critical applications and data stores, and enabling those assets to be exposed externally.
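The RESTful first step described here means a client application can reach a mainframe asset with an ordinary HTTP call instead of a terminal protocol. A minimal sketch of what such a call looks like from the consumer side; the host, path, and auth scheme are hypothetical, not z/OS Connect's actual URL layout:

```python
import urllib.request

def build_account_request(host, account_id, token):
    """Prepare (but do not send) a GET against a hypothetical
    z/OS Connect-style API fronting a CICS or IMS application."""
    url = f"https://{host}/api/accounts/{account_id}"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_account_request("mainframe.example.com", "12345", "t0ken")
# urllib.request.urlopen(req) would actually send it.
print(req.get_method(), req.full_url)
```

From the mobile or web app's point of view, the mainframe is just another JSON API behind that URL.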
And what's happening is those clients then build out new engaging mobile web apps that are then coming directly back to the mainframe, to those high value assets. But in addition, what that is driving is a whole other set of interaction patterns that we're actually able to see on the mainframe in how they're being used. So, opening up the API channel is the first step our clients are taking. Next is how do they take the 200 billion lines of COBOL code that is out there in the wild, running on these systems, and how do they over time modernize it? And we have some leading clients that are doing very tight integration whereby they have a COBOL application, and as they want to make changes to it, we give them the ability to make changes in it, but do it in Java, or do it in another language, a more modern language, tightly integrated with the COBOL runtime. So, we call that progressive modernization. It's not about coming in and replacing the whole app and rewriting that thing. That's one next step on the journey, and then as the clients start to do that, they start to really need to lay down a continuous integration, continuous delivery tool chain, building a whole dev ops end-to-end flow. That's kind of the path that our clients are on for really getting much faster and getting more productivity out of their development side of things. And in turn, the platform is now becoming a platform that they can deliver results on, just like they could on any other platform. >> That's big because a lot of customers used to complain, well, I can't get COBOL skills, and so IBM's answer was often, well, we got 'em, you can outsource it to us, and that's not always the preferred approach, so glad to hear you're addressing that. On the dev ops discussion, you know, a lot of times dev ops is about breaking stuff. But the mainframe workload's all about not breaking stuff, so waterfall, more traditional methodologies are still appropriate.
Can you help us understand how customers are dealing with that, sort of, schism? >> Yeah, I think dev ops, some people would come at it and say, that's just about moving fast and breaking some eggs and cleaning up the mess and then moving forward, but from our perspective, that's not it, right? That can't be it for our customers, because the criticality of these systems will not allow it. So our dev ops model is not so much about move fast and break some eggs, it's about move fast in smaller increments, establishing clear chains and a clear pipeline with automated test suites getting executed and run at each phase of the pipeline before you move to production. So, we're not going to... Our approach is not to compromise on quality as you move towards dev ops, and we have, internally, our major subsystems, right? So, CICS, IMS, DB2. They're all on their own journey to deliver and move towards continuous integration and dev ops internally. So, we're eating our own... We're dogfooding this here, right? We're building our own teams around this and we're not seeing a decline in quality. In fact, as we start to really fix and move testing to the left, as they call it, shift-left testing, right? Earlier in the cycle you regression test. We are seeing better quality come because of that effort. >> You put forth this vision, as I said, at the top of this segment. This vision of bringing data and analytics and transactions together. That was the z13 announcement. But the reality is, a lot of customers would have their mainframe and then they'd have, you know, some other data warehouse, and some InfiniBand pipe to that data warehouse was their approximation of real time. So, the vision that you put forth was to consolidate that. And has that happened? Are you starting to do that? What are they doing with the data warehouse? >> So, we're starting to see it. I mean, frankly, we have clients that struggle with that model, right?
And that's precisely why we have a very strong point of view that says, if this is data that you're going to get value from, from an analytics perspective, and you can use it on the platform, moving it off the platform is going to create a number of challenges for you. And we've seen it first hand. We've seen companies that ETL the data off the platform. They end up with 9, 10, 12 copies of the data. As soon as you do that, the data is old, it's stale, and so any insights you derive are then going to be potentially old and stale as well. The other side of it is, our customers are in the industries that are heavy users of the mainframe, finance, banking, healthcare. These are heavily regulated industries that are getting more regulated. And they're under more pressure to ensure governance and to meet the various regulatory requirements. As soon as you start to move that data off the platform, your problem just got that much harder. So, we are seeing a shift in approaches and it's going to take some time for clients to get past this, right? Because enterprise data warehouse is a pretty big market and there's a lot of them out there, but we're confident that for specific use cases, it makes a great deal of sense to leave the data where it is, bring the analytics as close to that data as possible, and leverage the insight right there at the point of impact as opposed to pushing it off. >> How about the economics? So, I have certainly talked to customers that understand it for a lot of the work that they're doing. Doing it on the Z platform is more cost effective than maybe trying to manage a bunch of, you know, bespoke x86 boxes, no question. But at the end of the day, there's still that CAPEX. What is IBM doing to help customers, sort of, absorb, you know, the costs and bring together, more aggressively, analytic and transaction data?
>> Yeah, so, in agreement, 100%. I think we can create the best technology in the world, but if we don't close on the financials, it's not going to go anywhere, it's not going to move. So, from an analytics perspective, just starting at the ground level with Spark, even underneath the Spark layer, there are things we've done in the hardware to accelerate performance, and so that's one layer. Then you move into Spark. Well, Spark is running on our Java, our JDK, and it takes advantage of being moved off to the zIIP offload processors. So, those processors alone are lower cost than general purpose processors. We then have additionally thought this through, in terms of working with clients and seeing that, you know, in a typical use case for running Spark on the platform, they require three or four zIIPs and then a hundred, two hundred gig of additional memory. We've come at that with a bundled offer that comes in and says, for that workload, we're going to come in with a different price point for you. So, the other side of it is, we've been delivering, over the last couple of years, ways to isolate workload from a software license cost perspective, right. 'Cause the other knock that people will say is, as I add new workload it impacts all the rest of my software. Well, no. There are multiple paths forward for you to isolate that workload, add new workload to the platform, and not have it impact your existing MLC charges. So we continue to actually evolve that and make that easier to do, but that's something we're very focused on. >> But that's more than just, sort of an LPAR or... >> Yeah, so there's other ways we could do that with... (mumbles) We're IBM so there's acronyms, right.
So there's zCAP and there's all the other pricing mechanisms that we can take advantage of to help you. You know, the way I simply say it is, for new workload, we need to enable the pricing to be supportive of growth, right, not protective. And so we are very focused on, how do we do this in the right way, so that clients can adopt it, take advantage of the capabilities, and also do it in a cost effective way. >> And what about security? That's another big theme that you guys have put forth. What's new there? >> Yeah, so we have a lot underway from the security perspective. I'm going to say stay tuned, more to come there, but there's a heavy investment, again, going back to what our clients are struggling with and what we hear day in and day out, which is, how do I do encryption pervasively across the platform for all of the data being managed by the system, how do I do that with ease, and how do I do that without having to drive changes at the application layer or having to drive operational changes? How do I enable these systems to get that much more secure, with ease and at low cost? >> Right, because if you... In an ideal world you'd encrypt everything, but there's a cost of doing that. There are some downstream nuances with things like compression. >> Yup. >> And so forth so... Okay, so more to come there. We'll stay tuned. >> More to come. >> Alright, we'll give you the final word. Big day for you guys, so congratulations on the announcement. You've got a bunch of customers who're comin' in very shortly. >> Yeah, we're extremely excited to be here. We think that the combination of IBM Systems, working with the IBM analytics team to put forward an offering that pulls key aspects of Watson and delivers it on the mainframe, is something that will get noticed and actually solve some real challenges, so we're excited. >> Great. Barry, thanks very much for coming to theCUBE, appreciate it. >> Thanks for having me.
Thanks for going easy on me. >> You're welcome. Keep it right there. We'll be back with our next guest, right after this short break. (techno music)

Published Date : Feb 15 2017


Jean Francois Puget, IBM | IBM Machine Learning Launch 2017


 

>> Announcer: Live from New York, it's theCUBE, covering the IBM Machine Learning Launch Event. Brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Alright, we're back. Jean Francois Puget is here, he's the distinguished engineer for machine learning and optimization at IBM Analytics, CUBE alum. Good to see you again. >> Yes. >> Thanks very much for coming on, big day for you guys. >> Jean Francois: Indeed. >> It's like giving birth every time you guys launch one of these products. We saw you a little bit in the analyst meeting, pretty well attended. Give us the highlights from your standpoint. What are the key things that we should be focused on in this announcement? >> For most people, machine learning equals machine learning algorithms. When you look at newspapers or blogs, social media, it's all about algorithms. Our view is that, sure, you need algorithms for machine learning, but you need steps before you run algorithms, and after. Before, you need to get data, to transform it, to make it usable for machine learning. Then, you run algorithms. These produce models, and then you need to move your models into a production environment. For instance, you use an algorithm to learn from past credit card transaction fraud. You can learn models, patterns, that correspond to fraud. Then, you want to use those models, those patterns, in your payment system. And moving from where you run the algorithm to the operational system is a nightmare today, so our value is to automate what you do before you run algorithms, and then what you do after. That's our differentiator. >> I've had some folks in theCUBE in the past who said, years ago actually, "You know what, algorithms are plentiful." I remember my friend Avi Mehta made the statement, "Algorithms are free. It's what you do with them that matters." >> Exactly. I believe open source won for machine learning algorithms.
Now the future is with open source, clearly. But it solves only a part of the problem you're facing if you want to put machine learning into action. So, exactly what you said. What you do with the results of the algorithm is key. And open source people don't care much about it, for good reasons. They are focusing on producing the best algorithm. We are focusing on creating value for our customers. It's different. >> In terms of, you mentioned open source a couple times, in terms of customer choice, what's your philosophy with regard to the various tooling and platforms for open source, how do you go about selecting which to support? >> Machine learning is fascinating. It's overhyped, maybe, but it's also moving very quickly. Every year there is new cool stuff. Five years ago, nobody spoke about deep learning. Now it's everywhere. Who knows what will happen next year? Our take is to support open source, to support the top open source packages. We don't know which one will win in the future. We don't even know if one will be enough for all needs. We believe one size does not fit all, so our take is to support a curated list of major open source packages. We start with Spark ML for many reasons, but we won't stop at Spark ML. >> Okay, I wonder if we can talk use cases. Two of my favorite, well, let's just start with fraud. Fraud detection has become much, much better over the past certainly 10 years, but it's still not perfect. I don't know if perfection is achievable, but there are a lot of false positives. How will machine learning affect that? Can we expect as consumers even better fraud detection in more real time? >> If we think of the full life cycle going from data to value, we will provide a better answer. We still use machine learning algorithms to create models, but a model does not tell you what to do. It will tell you, okay, for this credit card transaction coming in, it has a high probability to be fraud. Or this one has a lower probability.
But then it's up to the designer of the overall application to make decisions, so what we recommend is to use machine learning predictions, but not only that, and then use, maybe, (murmuring). For instance, if your machine learning model tells you this is a fraud with a high probability, say 90%, and this is a customer you know very well, a 10-year customer, then you can be confident that it's a fraud. Then if the next prediction tells you this is 70% probability, but it's a customer of only one week. In a week, we don't know the customer, so the confidence we can get in machine learning should be low, and there you will not reject the transaction immediately. Maybe you don't approve it automatically, maybe you send a one-time passcode, or you enter a separate verification system, but you don't reject it outright. Really, the idea is to use machine learning predictions as yet another input for making decisions. You're making decisions informed by what you could learn from your past. But it's not replacing human decision-making. That's our approach at IBM; you don't see IBM speak much about artificial intelligence in general, because we don't believe we're here to replace humans. We're here to assist humans, so we say augmented intelligence, or assistance. That's the role we see for machine learning. It will give you additional data so that you make better decisions. >> It's not the concept that you object to, it's the term artificial intelligence. It's really machine intelligence, it's not fake. >> I started my career as a PhD in artificial intelligence, I won't say when, but long enough ago. At that time, there were already promises that we would have Terminator in the next decade, and this and that. And the same happened in the '60s, or just after the '60s. And then there was an AI winter, and we have a risk here of another AI winter, because some people are just raising red flags that are not substantiated, I believe.
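The payment flow Puget walks through above, trusting a 90% score on a ten-year customer while stepping up rather than rejecting on a 70% score for a one-week customer, amounts to simple rules layered over the model's output. A hedged sketch; the thresholds and action names are illustrative, not from any IBM system:

```python
def decide(fraud_probability, tenure_days):
    """Pick an action from a model score plus what we know of the
    customer: 'approve', 'step_up' (e.g. one-time passcode), 'reject'."""
    if fraud_probability >= 0.9 and tenure_days >= 365 * 10:
        # Very high score on a long-known customer: trust the model.
        return "reject"
    if fraud_probability >= 0.7:
        # High score but little history: confidence is low, so ask
        # for extra verification instead of rejecting outright.
        return "step_up"
    return "approve"

print(decide(0.90, 3650))  # long-standing customer, very high score
print(decide(0.70, 7))     # one-week customer, high score
print(decide(0.10, 7))     # low score
```

The prediction is one input among several, which is the "augmented intelligence" framing in practice.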
I don't think the technology's here that we can replace human decision-making altogether any time soon, but we can help. We can certainly make some professions more efficient, more productive with machine learning. >> Having said that, there are a lot of cognitive functions that are getting replaced, maybe not by so-called artificial intelligence, but certainly by machines and automation. >> Yes, so we're automating a number of things, and maybe we won't need to have people do quality checks and just have an automated vision system detect defects. Sure, so we're automating more and more, but this is not new, it has been going on for centuries. >> Well, the list evolved. So, what can humans do that machines can't, and how would you expect that to change? >> We're moving away from IBM machine learning, but it is interesting. You know, each time there is a capacity that a machine can automate, we basically redefine intelligence to exclude it, so you know. That's what I foresee. >> Yeah, well, robots a while ago, Stu, couldn't climb stairs, and now, look at that. >> Do we feel threatened because a robot can climb a stair faster than us? Not necessarily. >> No, it doesn't bother us, right. Okay, question? >> Yeah, so I guess, bringing it back down to the solution that we're talking about today, if I'm doing the analytics, the machine learning on the mainframe, how do we make sure that we don't overrun and blow out all our MIPS? >> We recommend not using the mainframe's base compute system; we recommend using zIIPs, so additional cores, to not overload it. It's a very important point. We claim, okay, if you do everything on the mainframe, you can learn from operational data. You don't want to disturb, and "you don't want to disturb" takes a lot of different meanings. One that you just said: you don't want to slow down your operational processing, because you're going to hurt your business. But you also want to be careful.
Say we have a payment system where there is a machine learning model predicting fraud probability as part of the system. You don't want a young, bright data scientist to decide that he has a great idea, a great model, and push his model into production without asking anyone. You want to control that. That's why we insist we are providing governance, which includes a lot of things like keeping track of how models were created and from which data sets, so lineage. We also want to have access control and not allow just anyone to deploy a new model because we make it easy to deploy, so we want role-based access, and only someone with the right authority, well, it depends on the customer, but not everybody can update the production system, and we want to support that. And that's something that differentiates us from open source. Open source developers, they don't care about governance. It's not their problem, but it is our customers' problem, so this solution will come with all the governance and integrity constraints you can expect from us. >> Can you speak to, first solution's going to be on z/OS, what does the roadmap look like and what are some of the challenges of rolling this out to other private cloud solutions? >> We are going to ship IBM Machine Learning for Z this quarter. It starts with Spark ML as a base open source. This is interesting, but it's not all there is for machine learning. So that's how we start. We're going to add more in the future. Last week we announced we will ship Anaconda, which is a major distribution for the Python ecosystem, and it includes a number of machine learning open source packages. We announced it for next quarter. >> I believe in the press release it said down the road things like TensorFlow are coming, H2O. >> But Anaconda was announced for next quarter, so we will leverage it when it's out.
Then indeed, we have a roadmap to include major open source, so the major open source packages are the ones from Anaconda (murmuring), mostly. Key deep learning, so TensorFlow and probably one or two additional, we're still discussing. One that I'm very keen on is called XGBoost, in one word. People don't speak about it in newspapers, but this is what wins all Kaggle competitions. Kaggle is a machine learning competition site. When I say all, I mean all that are not image recognition competitions. >> Dave: And that was ex-- >> XGBoost, X-G-B-O-O-S-T. >> Dave: XGBoost, okay. >> XGBoost, and it's-- >> Dave: X-ray gamma, right? >> It's really a package. When I say we don't know which package will win, XGBoost was introduced a year ago also, or maybe a bit more, but not so long ago, and now, if you have structured data, it is the best choice today. It's really fast-moving, but so, we will support major deep learning packages and major classical learning packages, like the ones from Anaconda or XGBoost. The other thing is we start with Z. We announced in the analyst session that we will have a Power version and a private cloud, meaning x86, version as well. I can't tell you when because it's not firm, but it will come.
We share code, and then we ship on different platforms. >> I mean, you haven't, just now, used the word hybrid. Every now and then IBM does, but do you see that so-called hybrid use case as viable, or do you see it more that some workloads should run on prem, some should run in the cloud, and maybe they'll never come together? >> Machine learning basically has two phases: one is training and the other is scoring. I see people moving training to the cloud quite easily, unless there is some regulation about data privacy. Training is a good fit for the cloud because usually you need a large computing system but only for a limited time, so elasticity is great. But then deployment: if you want to score a transaction inside a CICS transaction, it has to run beside CICS, not in the cloud. If you want to score data on an IoT gateway, you want to score on the gateway, not in a data center. That may not be what people think of first, but what will really drive the split between public cloud, private cloud, and on prem is where you want to apply your machine learning models, where you want to score. For instance, smart watches are essentially fitness measurement systems. You want to score your health data on the watch, not somewhere on the internet. >> Right, and in that CICS example that you gave, you'd essentially be bringing the model to the CICS data, is that right? >> Yes, that's what we do. That's the value of machine learning for Z: if you want to score transactions happening on Z, you need to be running on Z. So it's clear, mainframe people don't want to hear about public cloud, so they will be the last ones moving. They have their reasons, but they like the mainframe because it stays really, really secure and private. >> Dave: Public cloud's a dirty word. >> Yes, yes, for Z users. At least that's what I was told, and I could check with many people. But we know that in general the move is toward public cloud, so we want to help people wherever they are on their journey to the cloud.
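The "train in the cloud, score beside the transaction" split works because a trained model can be exported as plain coefficients and scored by a tiny, dependency-free function that runs wherever the transaction runs. The feature names and weights below are invented for illustration:

```python
import math

# Sketch of the scoring side of the split: training (elsewhere) exports
# plain coefficients, and scoring is a small, framework-free function.
# Feature names and weights here are made up for illustration.
COEFFS = {"amount_zscore": 1.8, "new_merchant": 2.1, "night_hours": 0.9}
INTERCEPT = -4.0

def fraud_probability(features):
    # Logistic score: sigmoid of a weighted sum of transaction features.
    z = INTERCEPT + sum(COEFFS[k] * features.get(k, 0.0) for k in COEFFS)
    return 1.0 / (1.0 + math.exp(-z))

low = fraud_probability({"amount_zscore": 0.1})
high = fraud_probability({"amount_zscore": 3.0, "new_merchant": 1.0,
                          "night_hours": 1.0})
```

A function like this has no framework dependency, so it can be embedded next to CICS, on a gateway, or on a watch.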
>> You've got one of those, too. Jean Francois, thanks very much for coming on theCUBE, it was really a pleasure having you back. >> Thank you. >> You're welcome. Alright, keep it right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from the Waldorf Astoria. IBM's machine learning announcement, be right back. (electronic keyboard music)

Published Date : Feb 15 2017

Bryan Smith, Rocket Software - IBM Machine Learning Launch - #IBMML - #theCUBE


 

>> Announcer: Live from New York, it's theCUBE, covering the IBM Machine Learning Launch Event, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Welcome back to New York City, everybody. We're here at the Waldorf Astoria covering the IBM Machine Learning Launch Event, bringing machine learning to the IBM Z. Bryan Smith is here, he's the vice president of R&D and the CTO of Rocket Software, powering the path to digital transformation. Bryan, welcome to theCUBE, thanks for coming on. >> Thanks for having me. >> So, Rocket Software, Waltham, Mass. based, close to where we are, but a lot of people don't know about Rocket, so pretty large company, give us the background. >> It's been around for, this'll be our 27th year. Private company, we've been a partner of IBM's for the last 23 years. Almost all of that is in the mainframe space, or we've focused on the mainframe space, I'll say. We have 1,300 employees, and we call ourselves Rocketeers. It's spread around the world. We're really an R&D-focused company. More than half the company is engineering, and it's spread across the world on every continent and in most major countries. >> You're essentially OEM-ing your tools, as it were. Is that right, no direct sales force? >> About half. There are different lenses to look at this, but about half of our go-to-market is through IBM with IBM-labeled, IBM-branded products. On the product side, we've always been the R&D behind the products. The partnership, though, has really grown. It's more than just an R&D partnership now; now we're doing co-marketing, we're even doing some joint selling to serve IBM mainframe customers. The partnership has really grown over these last 23 years from just being the guys who write the code to doing much more. >> Okay, so how do you fit into this announcement? Machine learning on Z, where does Rocket fit? >> Part of the announcement today is a very important piece of technology that we developed.
We call it data virtualization. Data virtualization really enables customers to open their mainframe and allow the data to be used in ways it was never designed to be used. You might have data structures that were designed 10, 20, even 30 years ago for a very specific application, but today customers want to use them in a very different way, and so the traditional path is to take that data and copy it, to ETL it someplace else, so they can get some new use out of it or build some new application. What data virtualization allows you to do is leave that data in place but access it using the APIs that developers want to use today. They want to use JSON access, for example, or they want to use SQL access. But they want to be able to do things like join across IMS, DB2, and VSAM, all with a single query, using an SQL statement. We can do that across relational databases and non-relational databases. It gets us out of this mode of having to copy data into some other data store through this ETL process; we access the data in place. We call it moving the applications or the analytics to the data, versus moving the data to the analytics or to the applications. >> Okay, so in this specific case, and I have said several times today, as Stu has heard me, two years ago IBM had a big theme around the z13, bringing analytics and transactions together, and this sort of extends that. Great, I've got this transaction data that lives behind a firewall somewhere. Why the mainframe, why now? >> Well, I would pull back to what I said about seeing more companies and organizations wanting to move applications and analytics closer to the data. The data in many of these large companies, that core business-critical data, is on the mainframe, and so being able to do more real-time analytics without having to look at old data is really important. There's this term data gravity.
I love the visual that presents in my mind: you have these different masses, these different planets if you will, and the biggest, most massive planet in that solar system really is the data, and so it's pulling the smaller satellites, if you will, into this planet or this star by way of gravity, because data is the new currency, data is what these companies are running on. We're helping in this announcement with being able to unlock and open up all mainframe data sources, even some non-mainframe data sources, and using things like Spark that's running on the platform, running on z/OS, to access that data directly without having to write any special programming or any special code to get to all their data. >> And the preferred place to run all that data is on the mainframe, obviously, if you're a mainframe customer. One of the questions I guess people have is, okay, I get that, it's the transaction data that I'm getting access to, but if I'm bringing transaction and analytic data together, a lot of times that analytic data might be in social media, it might be somewhere else not on the mainframe. How do you envision customers dealing with that? Do you have tooling to do that? >> We do, so this data virtualization solution that I'm talking about is one that is mainframe-resident, but it can also access other data sources. It can access DB2 on Linux and Windows, it can access Informix, it can access Cloudant, it can access Hadoop through IBM's BigInsights. Other feeds like Twitter, like other social media, it can pull in. The case where you'd want to do that is where you're trying to take that data and integrate it with a massive amount of mainframe data. It's going to be much more highly performant by pulling this other small amount of data in, next to that core business data. >> I get the performance and I get the security of the mainframe, I like those two things, but what about the economics?
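As a stand-in illustration of the single cross-source query described above (Rocket's data virtualization federates IMS, DB2, and VSAM behind one SQL interface), here two SQLite tables play the roles of two different back-end stores:

```python
import sqlite3

# Illustration only: two sqlite tables stand in for two different
# back-end data sources (say, a VSAM file and a DB2 table) to show
# the shape of a single cross-source SQL query with no ETL copy step.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts(acct_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payments(acct_id INTEGER, amount REAL);
    INSERT INTO accounts VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO payments VALUES (1, 250.0), (1, 100.0), (2, 75.0);
""")
rows = conn.execute("""
    SELECT a.name, SUM(p.amount)
    FROM accounts a JOIN payments p ON a.acct_id = p.acct_id
    GROUP BY a.name ORDER BY a.name
""").fetchall()
```

The point is the shape of the access: one declarative query across sources, with the data left in place.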
One, IBM when they ported Spark to z/OS, they did it the right way. They leveraged the architecture, it wasn't just a simple port of recompiling a bunch of open source code from Apache, it was rewriting it to be highly performant on the Z architecture, taking advantage of specialty engines. We've done the same with the data virtualization component that goes along with that Spark on z/OS offering that also leverages the architecture. We actually have different binaries that we load depending on which architecture of the machine that we're running on, whether it be a z9, an EC12, or the big granddaddy of a z13. >> Bryan, can you speak the developers? I think about, you're talking about all this mobile and Spark and everything like that. There's got to be certain developers that are like, "Oh my gosh, there's mainframe stuff. "I don't know anything about that." How do you help bridge that gap between where it lives in the tools that they're using? >> The best example is talking about embracing this API economy. And so, developers really don't care where the stuff is at, they just want it to be easy to get to. They don't have to code up some specific interface or language to get to different types of data, right? IBM's done a great job with the z/OS Connect in opening up the mainframe to the API economy with ReSTful interfaces, and so with z/OS Connect combined with Rocket data virtualization, you can come through that z/OS Connect same path using all those same ReSTful interfaces pushing those APIs out to tools like Swagger, which the developers want to use, and not only can you get to the applications through z/OS Connect, but we're a service provider to z/OS Connect allowing them to also get to every piece of data using those same ReSTful APIs. >> If I heard you correctly, the developer doesn't need to even worry about that it's on mainframe or speak mainframe or anything like that, right? >> The goal is that they never do. 
That they simply see in their tool-set, again like Swagger, that they have data as well as different services they can invoke using these very straightforward, simple ReSTful APIs. >> Can you speak to the customers you've talked to? You know, there are certain people out in the industry, I've had this conversation for a few years at IBM shows, some part of the market that's like, oh, well, the mainframe is this dusty old box sitting in a corner with nothing new. And my experience has been that with containers and streaming and everything like that, oh well, you know, the mainframe did virtualization and Linux and all these things really early, decades ago, and is keeping up with a lot of these trends and these new types of technologies. What do you find in the customers: how much are they driving forward on new technologies, looking for that new technology and being able to leverage the assets that they have?
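From the developer's side, a call through z/OS Connect is just an ordinary ReSTful request. The host, path, and payload below are hypothetical, and the request is only constructed here, not actually sent:

```python
import json
import urllib.request

# Hypothetical shape of a ReSTful call a developer might make through
# z/OS Connect; the endpoint and payload are invented for illustration.
payload = json.dumps({"acctId": 1}).encode("utf-8")
req = urllib.request.Request(
    "https://zosconnect.example.com/api/accounts/balance",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; we stop at construction.
```

Nothing here is mainframe-specific, which is exactly the point being made: the developer never has to know where the service lives.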
We're constantly moving to keep up with developers because that's where the action's happening. Again, they don't care where the data is housed as long as you can open that up. We've been playing with this concept that came from some research firm called two-speed IT, where you have maybe your core business that has been running for years, and it's designed to really be slow-moving, very high quality, it keeps everything running today, but they want to embrace some of these new technologies, they want to be able to roll out a brand-new app, and they want to be able to update that multiple times a week. And so, this two-speed IT says you're kind of breaking them off into two separate teams. You don't have to take your existing infrastructure team and say, "You must embrace every Agile and every DevOps type of methodology." What we're seeing customers be successful with is this two-speed IT where you can fracture these two, and now you need to create some nice integration between those two teams, so things like data virtualization really help with that. It opens up and allows the development teams to very quickly access those assets on the mainframe, in this case, while allowing those developers to very quickly crank out an application where quality is not that important, where being very quick to respond and doing lots of A/B testing with customers is really critical. >> Waterfall still has its place. As a company that predominantly, or maybe even exclusively, is involved in the mainframe, I'm struck by, it must've been 2008, 2009, Paul Maritz comes in and says VMware's vision is to build the software mainframe. And of course the world said, "Ah, mainframe's dead," we've been hearing that forever. In many respects, I credit VMware; they built sort of a form of software mainframe, but now you hear a lot of talk, Stu, about going back to bare metal. You don't hear that talk on the mainframe.
Everything's virtualized, right? So it's kind of interesting to see, and IBM uses the language of private cloud. The mainframe is, we're joking, the original private cloud. My question is, your strategy as a company has always been focused on the mainframe, and going forward I presume it's going to continue to be. What's your outlook for that platform? >> We're not exclusively about the mainframe, by the way. We're not, we have a good mix. >> Okay, I'm overstating that, then. It's half and half or whatever. You don't talk about it, 'cause you're a private company. >> Maybe a little more than half is mainframe-focused. >> Dave: Significant. >> It is significant. >> You've got a large proportion of the company on mainframe, z/OS. >> So we're bullish on the mainframe. We continue to invest more every year. We increase our investment every year, and in a software company, your investment is primarily people. We increase that by double digits every year. We have license revenue increases in the double digits every year. I don't know many other mainframe-based software companies that have that. But I think that comes back to the partnership that we have with IBM, because we are more than just a technology partner. We work on strategic projects with IBM. IBM will oftentimes stand up and say Rocket is a strategic partner that works with us on solving hard customer problems every day. We're bullish, we're investing more all the time. We're not backing away, we're not decreasing our interest or our bets on the mainframe. If anything, we're increasing them at a faster rate than we have in the past 10 years. >> And this trend of bringing analytics and transactions together is a huge mega-trend, I mean, why not do it on the mainframe? If the economics are there, which you're arguing that in many use cases they are, because of the value component as well, then the future looks pretty reasonable, wouldn't you say?
At the Anaconda Conference last week, I was coming up with an analogy for these folks. It's just a bunch of data scientists, right, and during most of the breaks and the receptions, they were just asking questions, "Well, what is a mainframe? "I didn't know that we still had 'em, "and what do they do?" So it was fun to educate them on that. But I was trying to show them an analogy with data warehousing where, say that in the mid-'90s it was perfectly acceptable to have a separate data warehouse separate from your transaction system. You would copy all this data over into the data warehouse. That was the model, right, and then slowly it became more important that the analytics or the BI against that data warehouse was looking at more real time data. So then it became more efficiencies and how do we replicate this faster, and how do we get closer to, not looking at week-old data but day-old data? And so, I explained that to them and said the days of being able to do analytics against old data that's copied are going away. ETL, we're also bullish to say that ETL is dead. ETL's future is very bleak. There's no place for it. It had its time, but now it's done because with data virtualization you can access that data in place. I was telling these folks as they're talking about, these data scientists, as they're talking about how they look at their models, their first step is always ETL. And so I told them this story, I said ETL is dead, and they just look at me kind of strange. >> Dave: Now the first step is load. >> Yes, there you go, right, load it in there. But having access from these platforms directly to that data, you don't have to worry about any type of a delay. >> What you described, though, is still common architecture where you've got, let's say, a Z mainframe, it's got an InfiniBand pipe to some exit data warehouse or something like that, and so, IBM's vision was, okay, we can collapse that, we can simplify that, consolidate it. 
SAP with HANA has a similar vision, we can do that. I'm sure Oracle's got their vision. What gives you confidence in IBM's approach and its legs going forward? >> Probably the advances that we see in z/OS itself in handling mixed workloads, which it's been doing for much of the 50 years it's been around: being able to prioritize different workloads, not only at CPU dispatching, but also at memory usage, also at the I/O, all the way down through the channel to the actual device. You don't see other operating systems that have that level of granularity for managing mixed workloads.
>> Bryan, we'll give you the last word, bumper sticker on the event, Rocket Software, your partnership, whatever you choose. >> We're excited to be here, it's an exciting day to talk about machine learning on z/OS. I say we're bullish on the mainframe, we are, we're especially bullish on z/OS, and that's what this even today is all about. That's where the data is, that's where we need the analytics running, that's where we need the machine learning running, that's where we need to get the developers to access the data live. >> Excellent, Bryan, thanks very much for coming to theCUBE. >> Bryan: Thank you. >> And keep right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from New York City. Be right back. (electronic keyboard music)
