Walter Bentley and Jason Smith, Red Hat | AnsibleFest 2020
(upbeat music)
>> Narrator: From around the globe, it's theCUBE, with digital coverage of AnsibleFest 2020, brought to you by Red Hat.
>> Welcome back to theCUBE's coverage, theCUBE Virtual's coverage of AnsibleFest 2020 virtual. We're not face to face this year. I'm your host John Furrier with theCUBE. We're virtual, this is theCUBE Virtual, and we're doing our part, getting the remote interviews with all the best thought leaders and experts, and of course the Red Hat experts. We've got Walter Bentley, Senior Manager of the Automation practice with Red Hat, and Jason Smith, Vice President of North American Services, back on theCUBE. We were in Atlanta last year in person. Guys, thanks for coming on virtually. Good morning to you. Thanks for coming on.
>> Good morning John. Good morning, good morning.
>> So since AnsibleFest last year, a lot's happened. The world we're living in seems to be an unbelievable 2020. Depending on who you talk to, it's been the craziest year of all time. Fires in California, a crazy presidential election, COVID, the whole nine yards, but the scale of Cloud has just moved unbelievably faster. I was commenting with some of your colleagues around the Snowflake IPO, it's built on Amazon, right? So value has changed, people are shifting, you're starting to get clear visibility on what these modern apps look like: it's Cloud native, it's legacy integrations, it's beyond lift and shift, as we've been seeing in the business. So I'd love to get, Jason, we'll start with you, the key points you would like people to know about AnsibleFest 2020, because there's a lot to build on this year, there's a tailwind for Cloud native, and customers have to move fast. What's your thoughts?
>> Yeah so, a lot has happened since last year, and customers are looking to be a lot more selective around their automation technologies. So they're not just looking for another tool. They're really looking for an automation platform, a platform that they can leverage as more of an enterprise strategy, and really be able to make sure that they have something that's secure, scalable, and that they can use across the enterprise to be able to bring teams together and really drive value and productivity out of their automation platform.
>> What are the key points for the customers and our audience around the conversations, around the learning, the new stuff happening in using Ansible this year? What are the key top things, Jason? Can you comment on what you're seeing, the big takeaway for our audience watching?
>> Yeah, so a lot's changed, like you said, since last year. We worked with a lot of customers around the world to implement Ansible and automation at scale. So we're using our automation journeys, as we talked about last year, and really helping customers lay out a more prescriptive approach on how they're going to deliver automation across their enterprise. So customers are really working with us because we're working with the largest customers in the world to implement their strategies. And when we work with new customers, we can bring those learnings and that experience to them. So they're not having to learn that for the first time and figure it out on their own; they're really able to learn from and leverage the experience we have through hundreds of customers at enterprise scale, and can take the value that we bring in and get through those types of projects much more quickly than they could on their own.
>> It's interesting.
We were looking at the research numbers and at the adoption of what Ansible's doing, and with you guys at Red Hat it's pretty strong. Could you share on the services side, because there's a lot of services going on here? Not just network services and software services, but traditional services too. What are the one or two reasons why customers engage with Red Hat services? What would that be?
>> Yeah so, like I said, I mean, we bring that experience. So customers that typically might have to spend weeks troubleshooting and making decisions on how they're going to deliver their implementations can work with us, and we can bring those best practices in and allow them to make those decisions and implement those best practices within hours instead of weeks, and really be able to accelerate their projects. Another thing is we're a services company as part of a product company. So we're not there just to deliver services. We're really focused on the success of the customer leveraging our technologies. So we're there to really train and mentor them through the process so that they're really getting up to speed quickly. They're taking advantage of all of the expertise that we have to be able to build their own experience and expertise. So they can really take over once we're gone and be able to support and advance that technology on their own. So they're really looking to us to not only implement those technologies for them, but really with them, and be able to train and mentor them, like I said, and take advantage of those learnings. We also help them beyond the technologies: we really look at the people and process side of things. So we're bringing in a lot of principles from DevOps and Agile and open practices, and helping customers really transform and be able to do things in a new way, to be much more efficient, a lot more agile, and be able to drive a lot more value out of our technology.
>> Walter, I got to ask you, last year we were chatting about this, but I want to get the update. And I'd like you to just give us a quick refresher definition of the automation adoption journey, because this is a real big deal. I mean, we're looking at the trends. Everyone realizes automation is super important at scale. As you think about it, whether it's software or data, everything's about automation; it's super important, but it's hard. I mean, we were looking at the marketplace numbers. I was talking to IDC for you guys ahead of AnsibleFest, and they said about five to 10% of enterprises are containerized, which means there's a huge wave of containerization coming. This matters for the automation adoption journey, because you start containerizing, (laughs) right? You start looking at the workflows, the pipelining, and how the code's being released and everything. This is important stuff. Give us the update on the automation adoption journey and where it is in the portfolio.
>> Well, yeah, just as you called it out, last year on the main stage at AnsibleFest, almost every customer expressed the need and desire to have a strategy as to how they drive their adoption of automation inside their enterprise. And as we've gone through the past few months of putting this in place with many customers, what we've learned is that many customers have matured into a place where they are now looking at the end-to-end workflow.
Instead of just looking at the tactical thing that they want to automate, they are actually looking at the full ribbon, the full workflow, and determining whether there are changes that need to be made and adjusted to be more efficient when it comes to dealing with automation. And then the other piece, as we alluded to already, is the contagious nature of that adoption. We're finding that there are organizations that are picking up the automation adoption journey, and because of the momentum it creates inside of that organization, we're finding other municipalities that are associated with them are now also looking to take on the journey, because of that contagious nature. So we can see how it's spreading in a positive way. And we're really looking forward to being able to do more of it as the next quarter and the next year come up.
>> Yeah, and that whole sharing thing is a big part of the content theme and the community thing. So great reference on that; word of mouth and community and collaboration is a good call out there. A quick question for you: you guys recently had a big win with NTT DoCoMo and their engagement with you on the automation adoption journey. Walter, what were some of the key takeaways? Jason, you can chime in too; I'd like to get some specifics around where it's been successful.
>> To me, that customer experience was one that was really exciting, primarily because we learned very early on that they were completely embodying that open source culture, and they were very excited to jump right in and even went about creating their own community of practice. We call them communities of practice; you may know them as centers of excellence. They wanted to create that at a very early increment, way before we were even ready to introduce it. And that's primarily because they saw how having that community of practice in place created an environment of inclusion across the organization. They had legacy tools in place already; actually, there was a homegrown legacy tool in place. And they very quickly realized that they didn't need to remove that tool, they just needed to figure out how to optimize and streamline how they leverage it, and also be able to integrate it into the Ansible Automation Platform. Another thing I wanted to very quickly note is that they jumped onto the idea of taking those large workflows that they had and breaking them up into smaller chunks. And as you already know from last year when we spoke about it, that's a pivotal part of what the automation adoption journey brings to an organization. So to sum it all up, they were all in; an automation-first mindset is what was driving them. And all of those personas, all of those personal and cultural behaviors, are what really helped drive that engagement to be very successful.
>> Jason, we'll get your thoughts on this, because again, Walter brought up last year's reference to breaking things up into modules. We look at this year's key news, it's all about collections. You're seeing content is a big focus, content being not like a blog post or a media asset; this is content, but code is content. It's sharing. If it's being consumed by other people, there's now community. You're seeing this theme of enabling. I mean, you're looking at successes, like you guys are having with NTT DoCoMo and others.
Once people realize there's a better way, and success is contagious, as Walter was saying, you are now enabling new ways to do things faster, at scale, and all that good stuff. Go check out the keynotes; you guys talk about it all day long with the execs. But I want to learn, right? So when you enable success, people want to be a part of it. And I could imagine there's a thirst and demand for training and the playbooks and all the business model innovations that are going on. What are you seeing for people that want to learn? Is there training? Are there certifications? Because once you get the magic formula, as Walter pointed out, and we all know, once people see what success looks like, they're going to want to duplicate it. So as this wave comes, it's like having the new surfboard; I want to surf that wave. So what's the update on Ansible's training, the tools, how do I learn, is there certification and all of that? Just take a minute to explain what's going on.
>> Yeah, so it's been a crazy world, as we've talked about, over the last six, seven months here, and we've really had to adapt ourselves and our training and consulting offerings to be able to support our remote delivery models. So very, very quickly, back in the March timeframe, we were able to move our consultants to a remote workforce and really implement the tools and technologies to be able to still provide the same value to customers remotely as we have in person historically. And so it's actually been really great. We've been able to make a really seamless transition, and actually our CSAT and Net Promoter scores have gone up over the last six months or so. So I think we've done a great job being able to still offer the same consulting capabilities remotely as we have onsite. And that's obviously with a real personal touch, working hand in hand with our customers to deliver these solutions. But from a training perspective, we've had to do the same thing, because customers aren't onsite and they can't do in-person training. We've been able to move our training offerings to completely virtual. So we're continuing to train our customers on Ansible and our other technologies through a virtual modality. And we've also been able to take all of our certifications and now offer those remotely. So whereas customers historically would have had to go into a center and get those certifications in person, they can now do those certifications remotely. So all of our training offerings and consulting offerings are now available remotely, as well as they were in person in the past and will be hopefully soon enough, but it's really not--
>> You had to adapt to virtual.
>> Excuse me?
>> You had to adapt to the virtual model quickly for trainings.
>> Exactly.
>> What about the community role? What's the role of the community? You guys have a very strong community. Walter pointed out the sharing aspect; well, I pointed it out, he talked about the contagiousness, people are talking. You guys have a very robust community. What's the role of community in all of this?
>> Yeah, so as Walter said, we have our communities of practice that we use internally, and we work with customers to build communities of practice, which are very much like centers of excellence, where people can really come together and share ideas and share best practices and be able to then leverage them more broadly.
So whereas in the past knowledge was really kept in silos, we're really helping customers to build those communities and leverage them to share ideas and take advantage of the best practices that are being adopted more broadly.
>> That's awesome. Yeah, break down those silos, of course. Open up the data, good things will happen, a thousand flowers bloom, as we always say. Walter, I want to get your thoughts on these collections, what that enables, back to learning and integrations. So if collections are going to be more pervasive and more commonplace, the ability to integrate matters; we were covering VMworld, and there's a VMware collection, I should say. What are customers doing when you integrate across technology partners? Because now, obviously, customers are going to have a lot of choice and options. If I'm an integration partner, it's all about Cloud native and the kinds of things we're talking about; you're going to have a lot of integration touch points. What's the most effective way for customers integrating other technology partners into Ansible?
>> And this is one of the major benefits that came out of the announcement last year with the Ansible Automation Platform. The Ansible Automation Platform really enables our customers to not just be able to do automation, but also be able to connect the dots, to connect other tools, such as ITSM tools, or connect into other parts of their workflows. And breaking it down really quickly, what we're finding is two things. Collections, obviously, is a huge aspect. And not just necessarily the collections, but the automation services catalog is really where the value is, because that's where we're placing all of these certified collections and certified content, certified by Red Hat now, that we create alongside these vendors, and they're now available to customers who are consuming the automation platform. And then the other component is the fact that we've now moved into a place where we have something called Automation Hub, which is very similar to Galaxy, the online version of it. But Automation Hub now is a focus area that's dedicated to a customer, where they can store their content and store those collections, not just the ones that they pull down that are certified by Red Hat, but the ones that they create themselves. And the availability of this tool, not just as a SaaS product, but now being able to have a local copy of it, which is a brand new, right off the press, off the truck feature, is huge. That's something that customers have been asking for a very long time, and I'm very happy that we're finally able to supply it.
>> Okay, so back up for a second, rewind, fell off the truck. What does that mean? It's downloadable. You're saying that the Automation Hub is available locally. Is that what--
>> Yes, sir.
>> So what does that mean for the customer? What's the impact for them?
>> So what that means is that previously, customers would have to connect into the internet. The Automation Hub was a SaaS product, meaning it was available via the internet. You can go there, you can sync up and pull down content. And some customers prefer to have it in house. They prefer to have it inside of their firewall, within their control, not accessible through the internet. And that's just their preference; sometimes it's for compliance or business risk reasons. And now, because of that, we were able to meet that ask and make a local version of it.
Whereas you can actually have Automation Hub locally in your environment, you can still sync up data that's out on the SaaS version of Automation Hub, but bring it down locally and have it available inside of your firewall, as well as add the content and collections that you create internally to it. So it creates a centralized place for you to store all of your automation goodness.
>> Jason, I know you got a hard stop, and I want to get to you on the IBM question. Have you guys started any joint service engagements with IBM?
>> Yeah, so we've been delivering a lot of engagements jointly through IBM. We have a lot of joint customers, and they're really looking for us to bring the best of both Red Hat services, Red Hat products, and IBM all together to deliver joint solutions. We've actually also worked with IBM Global Technology Services to integrate Ansible into their service offerings. So they're now really leveraging the power of Ansible to drive lower cost and more innovation with our customers and our joint customers.
>> I think that's going to be a nice lift for you guys. We'll get into the IBM machinery. I mean, you guys have a great offering, you've always had great reviews, a great community. I mean, IBM is just going to be moving this pretty quickly through the system, I can imagine. What's some of the feedback so far?
>> Yeah, it's been great. I mean, we have so many large joint customers, and they're helping us get to a lot of customers that we were never able to reach before, with their scale around the world. So it's been great to be able to leverage the IBM scale with the great products and services that Red Hat offers, to really be able to take that more broadly and continue to drive that across customers at an accelerated pace.
>> Well, Jason, I know you've got to go. We're going to stay with Walter while you drop off, but I want to ask you one final question. For the folks watching, or asynchronously coming in and out of AnsibleFest 2020 this year, what is the big takeaway that you'd like to share? What is the most important thing people should pay attention to? Well, a couple things; it doesn't have to be one thing, do the top three things. What should people be paying attention to this year? And what are the most important stories that you would highlight?
>> Yeah, I think there's a lot going on; this technology is moving very quickly. So I think there's a lot of great stories. I'd definitely take advantage of the customer use cases and hearing how other customers are leveraging Ansible for automation. And again, really look to use it not just as a tool, but really as an enterprise strategy that can change their business and really drive costs down and increase revenues by leveraging the innovation that Ansible and automation provide.
>> Jason, thank you for taking the time. Great insight. Really appreciate the commentary, and hopefully we'll see you next year in person. (all talking simultaneously) Walter, let's get back to you. I want to get into this use case and some of the customer feedback; love the stories. And look, we'd love to get the new data, we'd love to hear about the new products, but again, success is contagious, you mentioned that, and I want to hear the use cases. So a lot of people have their ear to the ground, they're looking at the virtual environments, they're learning through new ways, they're looking for signals of success.
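(A brief aside to ground the Automation Hub discussion above: a minimal sketch of what pointing at a private, on-premises hub looks like in practice. This is not Red Hat's documented procedure; the hostname, repository path, and token are placeholders, and the exact API path can differ by hub version.)

```ini
# ansible.cfg -- sketch: try a private automation hub first, then fall
# back to public Galaxy. hub.example.com and the token are placeholders.
[galaxy]
server_list = private_hub, release_galaxy

[galaxy_server.private_hub]
url = https://hub.example.com/api/galaxy/content/published/
token = <your-hub-token>

[galaxy_server.release_galaxy]
url = https://galaxy.ansible.com/
```

With that in place, pulling a certified collection down from the hub is a single command, e.g. `ansible-galaxy collection install ansible.posix`, and internally built collections can be published to the same hub, giving you the one centralized store Walter describes.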
So I got to ask you: what are the things that you're hearing over and over again as you guys are spinning up engagements? What are some of the patterns that are emerging, that are becoming a trend, in terms of what customers are consistently doing to overcome some of their challenges around automation?
>> Okay, absolutely. So what we're finding is that over time, customers are raising the bar on us. And what I mean by that is that their expectations of the tools they take on have completely changed, specifically when we're talking around automation. Our customers are now leading with questions like: how do we reduce our operational costs with this automation tool? Are we able to increase revenue? Are we able to really, truly drive productivity and efficiency within our organization by leveraging it? And then they dovetail into, "Well, are we able to mitigate business risk, even the risk associated with leveraging this automation tool?" So as I mentioned, customers are up-leveling their expectations of the automation tools. And what I feel very confident about is that with the launch of the Ansible Automation Platform, we're really able to deliver and show our customers how they can get a return on their investment, how, by looking at and reworking their workflows, we're able to bring productivity and drive that efficiency. And by leveraging it to mitigate risks, they do get the benefits that they're looking for. And so that's something I'm very happy about: we were able to rise to the occasion, and so far so good.
>> Last year I was very motivated and very inspired by the Ansible vision and the content and product progress. Just the overall vibe was good; the community around the product has always been solid. But one of the things that's happening, and I want to get your commentary and reaction to this, because we've been riffing on this on theCUBE and inside the community: automation is certainly a no-brainer, machine learning, automation, I mean, you can't go wrong. Who doesn't want automation? That's like saying, "I want to watch more football and have good food and good wifi." I mean, these are good things, right? Automation is a good thing. So I get that. But there are the business model issues. You brought up ROI. From the top of the ivory tower in these companies, certainly with COVID, it's: we need to make money and have modern apps. And people try to make that sound simple, right? X as a service, everything as a service. It's easy to say, "Hey, Walter, make everything as a service." "Got it, boss." Well, what the hell do you do? I mean, how do you make that happen? You got Amazon, you got multicloud, you got legacy apps. You're talking about going in and re-architecting the application development process. So you need automation for the business model of everything as a service. What's your reaction to that? Because it's very complicated. It's doable. People are getting there, but the Nirvana is everything as a service. This is a huge conversation. I mean, it's really big, but what's your reaction when I bring that up?
>> Right. And you're right, it is a huge undertaking. And you would think that with COVID being delivered into our world, many organizations would probably shy away from making changes. Actually, they're doing the opposite.
Like you mentioned, they're running towards automation and trying to figure out how to optimize and scale, based on this new demand that they're seeing, specifically new virtual demand. I'm happy you mentioned that, because we actually added something to the automation adoption journey to solve for that change, and to take on that large ask of everything as a service, so to speak. At increment zero, at the very beginning of the automation adoption journey, we added something called Navigate. And what Navigate is, is a framework where we come in and not just evaluate what they want to automate and bring that into a new workflow, but also evaluate what they already have in place, what automation they have in place, as well as the manual tasks. And we go through and try to figure out: how do you take that very complex, large thing and streamline it down into something that can, first off, be defined as a service and made available for your organization to consume, as well as mitigate business risk and drive your business objectives forward? And so that exercise that we're now stepping our customers through makes a huge difference and puts it all out in front of you, so that you can make decisions and decide which way you want to go, taking one step at a time.
>> And you know, it's interesting, great insight, great comment. I think this is really where the dots are going to connect over the next few years. Everything as a service: you've got to lay the foundation. But if you really want to get this done, I got to ask you the question around Ansible's ability to integrate and implement with other products. So could you give examples of how Ansible has integrated and been implemented with other Red Hat products, or other technology vendors' products?
>> Right. So one example that always pops to the top of my head, and I have to give a lot of credit to one of my managing architects who was leading this effort: think about the mainframe, right? So now IBM is our new family member. When you think about mainframes, you think about IBM, and it just so happens that there's a huge ask and demand and push around being able to automate the z/OS mainframe. And IBM had already embarked on the path of determining, well, can this be done with Ansible? And as I mentioned before, my managing architect partnered up with the folks on IBM's side, so we're bringing in Red Hat consulting, and now we have IBM, and we're working together to move that idea forward of saying, "Hey, you can automate things with the mainframe." So think about it. We're in 2020 now, in the midst of a new normal, and we're thinking about and talking about automating mainframes. So that just shows how things have evolved in such a great way. And I think that that story is a very interesting one.
>> It's so funny, the evolution. I'm old enough to remember; I came out of college in the 80s, and we would look at the old mainframe guys like, "You guys are going to be dinosaurs." They're still around. I mean, some of the banking apps, some of them are not multi-threaded and all the good stuff, but they are powering, they are managing a workload. But this is the beautiful thing about Cloud, and some of the Cloud activities: you can essentially integrate; you don't have to replace the old to bring in the new. This has been a common pattern.
This is where containers, microservices, and Cloud have been a dream state, because you can essentially re-layer and glue it together. This is a big deal. What's your reaction to that?
>> No, it's a huge deal. And the reality is that we need all of it. We need the legacy behaviors around infrastructure. So we still need the mainframe, because it has a distinct purpose. And like you mentioned, for a lot of our FSI customers, that is the core where a lot of their data and performance comes from. And so it's definitely not a pull-out-and-replace. It's more about how they integrate, and how you can streamline them working together to create your end-to-end workflow. And as you mentioned, making it available to your organizations to consume as a service. So I'm definitely a fan of being able to integrate and add to; everything has a purpose, is what we're coming to learn.
>> Agility, the modern application, horizontal scalability, Cloud is the new data center. Walter, great insights, always great to chat with you. You've always got some good commentary. I want to ask you one final question; I asked Jason before he dropped off. Jason Smith, who was our guest here, had a hard stop. What is the most important story that people should pay attention to this year at AnsibleFest? Remember, it's virtual, so there's going to be a lot of content out there, people are busy, it's asynchronous consumption. What should they pay attention to from a content standpoint, maybe some community sites or a Discord group? I mean, what should people look at this year? What should they walk away with as a key message? Take a minute to share your thoughts.
>> Absolutely, absolutely. The key message is, kind of similar to the message we have around the other circumstances going on in the world right now, that we're all in this together. As an Ansible community, we need to work together, come together to share what we're doing, and break down those silos. So that's the overall theme, and I believe we're doing that with the new release. So definitely pay attention to the new features that are coming out with the Ansible Automation Platform. I alluded to the on-prem Automation Hub; that's huge. Definitely pay attention to the new content that is being released in the services catalog. There's tons of new content that focuses on ITSM tools, so you can integrate and leverage those tools in an easier manner. There's a bunch of network automation advances that have been made, so definitely pay attention to that. And the last teaser, and I won't go into too much of it, 'cause I don't want to steal the thunder: there are some distinct integrations that are going to go on with OpenShift, around containers and the Ansible Automation Platform, that you definitely are going to want to pay attention to. If anyone is running OCP in their environment, they're definitely going to want to pay attention to this, 'cause it's going to be huge.
>> Private cloud is back, OpenStack is back, OCP; you've got OpenShift, which has done really well. I mean, again, Cloud has been just a great enabler, bringing all this together for developers, and certainly creating more glue, more abstractions, more automation; infrastructure as code is here. We're excited for it. Walter, great insight, great conversation. Thank you for sharing.
>> No, it's my pleasure. And thank you for having me.
>> I'm John Furrier with theCUBE, your host for theCUBE Virtual, part of AnsibleFest virtual 2020 coverage. Thanks for watching.
(gentle upbeat music)
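(To ground the collections theme from the conversation above, here is a minimal sketch of a playbook consuming a module by its fully qualified collection name, the addressing convention collections introduce. The `webservers` inventory group is a placeholder, and the play assumes the `ansible.posix` collection is installed as sketched earlier.)

```yaml
# site.yml -- sketch: modules are addressed as namespace.collection.module
- name: Tune SELinux via a collection module
  hosts: webservers        # placeholder inventory group
  become: true
  tasks:
    - name: Allow httpd to make outbound network connections
      ansible.posix.seboolean:
        name: httpd_can_network_connect
        state: true
        persistent: true
```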
Greg Lotko, Broadcom Inc. | IBM Think 2020
Narrator: From theCUBE studios in Palo Alto and Boston, (upbeat intro music) it's theCUBE! Covering IBM Think. Brought to you by IBM.
>> Hi, everybody, we're back. This is Dave Vellante, and you're watching theCUBE's coverage of the IBM Think 2020 digital event experience, wall-to-wall coverage, of course from the remote CUBE studios in Palo Alto and Boston. Greg Lotko is here. He's with Broadcom; he's a senior vice president and general manager of the Broadcom mainframe division. Greg, great to see you. Thanks for coming on.
>> Hey, good seeing you too, happy to be here.
>> Hey, let's talk Z. You know, I got to say, when Broadcom made a nearly 19 billion dollar acquisition of CA, many people, myself included, said, "Huh? I don't really get it." But as you start to see what's happening, the massive CA install base and the cross-selling opportunities that have come to Broadcom, you start to connect the dots and say, "Ah, maybe this does make some sense." But you know, how's it going? How's the acquisition been? It's been, you know, what now, two years since that move?
>> Yeah, we're coming up on two years. I think it kind of shocked the world, right? I mean, there is a lot of value there, and the customers that have been using the mainframe and running their core businesses on it for many, many years, they knew this, right? So Broadcom came in and said, "Hey, you know, I don't think this is the cash cow that others maybe have been treating it as." You know, we absolutely believed that with some investment you could actually drive greater value to customers, and, you know, what a novel concept, right? Expand expense, invest, drive greater value, and that would be the way you'd expand revenue and profit.
>> Yeah, I mean, I think generally the mainframe market is misunderstood. It obviously goes in cycles. I did a report, you know, a couple of months ago, really focusing on the z15; it was last summer. And historically, IBM's performance overall as a company is still really driven by mainframe cycles, because the mainframe still drags so much software and services along with it. So we're in the midst of a z15 tailwind, and of course COVID changes everything. But nonetheless, it's a good business, and IBM's the dominant player in that business. Customers continue to buy mainframes because it just works; it's too risky to rip 'em out. People say, "Oh, why don't you get rid of the mainframe?" No way customers are going to do that; it's running their business. So it's a fabulous business if you have a play there, and clearly... (poor internet connection interrupts Dave speaking)
>> Yeah, and if you think about those cycles, that's largely driven by the hardware, right? As each generation comes out, if you look at traditional pricing metrics that really look at using that capacity, or even using full capacity, that's what caused this cyclicality with the software as well. But, you know, there are a lot of changes even in that space. I mean, with us, with mainframe consumption licensing from Broadcom, with IBM doing Tailored Fit Pricing, you know, the idea that you can have that headroom on the hardware and then pay as you go, pay as you grow, I think that actually will smooth out and remove some of that cyclicality from the software space. And as you said, correctly, you look at the COVID stuff going on; I mean, there's an awful lot of transactions going on online. People are obviously checking their financials with the economics going on.
The shipping companies are booming with what they have to do, so that's actually driving transactions up as well, to use that capacity that's in the boxes.
>> Yeah, and financial services is actually in really good shape. I know that the stocks have been hit, but the liquidity in the banks is very, very strong because of the 2009 crisis. So the fiscal policy sort of, you know, dictated that, or, you know, the public policy dictated that. And the banks are obviously huge consumers of mainframe.
>> Sure.
>> One of the things that IBM did years ago was sort of embrace Linux; it was one of its first moves to open up the mainframe. But it's much more than just Linux. I wonder if you could talk about sort of your point of view on open meets mainframe.
>> Yeah, so open is way more than just Linux, right? I mean, Linux is good, running on the mainframe. I mean, that's absolutely an open paradigm from the operating system, but open is also about opening up the APIs, opening up the connectivity, so that it's easier to interact with the platform. And, you know, sometimes people think open is just about dealing with open source. Certainly we've made a lot of investments there. We contributed the command line interface, and actually a little more than 50% of the original contribution, to the Zowe project, under the OMP, the Open Mainframe Project. So that was about allowing open source technologies that interact with distributed and cloud technologies to now interact with that mainframe. So it's not just the open source technologies, but opening up the APIs, so you can then connect across technologies that are on the platform or off platform.
>> So what about the developer community? I mean, there's obviously a lot of talk in the industry about DevOps. How does DevOps fit into the mainframe world? What about innovations like Agile, and sort of beyond DevOps, if you will? Can you comment on that?
>> Yeah, absolutely, I mean, you can bring all those paradigms, all those capabilities, to the mainframe now by opening up those APIs. So we had a large European retail bank that actually used the Git Bridge that we provide, you know, through Zowe, to connect into Endevor, so they could leverage all the investments they had made in that existing technology over the years, but actually use the same kind of CI/CD pipeline, the same interaction that they have across distributed platforms and mainframe together, and open up that experience across their development community. What that really means is that developers are using the same concepts, the same tools that they maybe became comfortable with in university or on different platforms, to interact with the mainframe. And it's not that you're doing anything that takes away from the core capabilities of the mainframe; you're still leveraging the stability, the resiliency, the throughput, the serviceability. But you're pressing down on it and interacting with it just like you do with other platforms. So it's really cool. And that goes beyond Linux, right? Because you're interacting with capabilities and technologies that are on the mainframe in the z/OS environment.
>> Yeah, and the hardened security as well,
>> Absolutely.
>> is another key aspect of the mainframe. Let's talk about cloud. A lot of people talk about cloud, cloud first, multicloud. Where does the mainframe fit in the cloud world?
>> So, there are a lot of definitions of cloud out there, right?
I mean, people will talk about private cloud, public cloud, hybrid cloud across multiple private clouds. They'll talk about, you know, this multicloud. We actually talk about it a little differently. We think about the customer's cloud environment. You know, for an institution that we're dealing with, say it's a financial institution, to their end customers, their cloud is however they interact with it. And you think about it: if you're checking an account balance, if you're depositing a check, if you're doing any of these interactions, you're probably picking up a mobile device or a PC. You're dealing with an edge server, you're going back into distributed servers, and you're eventually interacting with the mainframe, and then that's got to come all the way back out to you. That is our customer's cloud. So we talk about their cloud environment, and you have to think about this paradigm of allowing the mainframe to connect through and to all of that, while preserving the security. So we think of cloud as being much more expansive, and the mainframe is an integral part of that, absolutely.
>> Yeah, and I've seen some of your discussions where you've talked about and sort of laid out, look, you know, the mainframe sits behind all this other infrastructure where, you know, ultimately the consumer on his or her mobile phone goes through a gateway, goes through, you know, some kind of site to buy something, but ends up ultimately doing a transaction. And that transaction you want to be, you know, secure; you want it to be accurate. And then how does that happen? The majority of the world's transactions run on some kind of, you know, IBM mainframe somewhere; it in some way touches that transaction. You know, as the world gets more complex, that mainframe is... I called it sort of the hardened, you know, sort of back end. And that has to evolve to be able to adapt to the changes at the front end. And that's really kind of what's happening, whether it's cloud, whether it's mobile, whether it's, you know, Linux and other open source technology.
>> Right, it's fabulous that the mainframe has, you know, I/O rates and throughput that no other platform can match, but if you can't connect that to the transactions that the customer is driving to it, then you're not leveraging the value, right? So you really have to think about it from the perspective of: how do you open up everything you possibly can on the mainframe while preserving that security?
>> I want to end with just talking about the Broadcom portfolio. When you hit the Broadcom mainframe site, it's actually quite mind-boggling, the dozens and dozens of services and software capabilities that you provide. How would you describe that portfolio, and what do you see as the vision for that portfolio going forward?
>> Yeah, so when people normally say portfolio, they're thinking software products, and we have hundreds of software products. But we're looking at our portfolio as more than just the software. Sometimes people talk about, hey, let me just talk to you about my latest and greatest product. One of the things we were really afforded the opportunity to do with Broadcom acquiring us was to reinvest, to double down on core products that customers have had for many years, and that we know they want to be able to count on for many years to come. But the other really important thing we believe about driving value to our customers was those offerings and capabilities that you put around that, you know?
Think about it: if you want to migrate off of a competitive product, or if you want to adopt an additional product, you have to have the ability to tie these together. Often in our customers' shops, they don't have all the skills that they need, or they just don't have the capacity to do it. So we've been investing in partnership. You know, we kept our services business, at least the resources, the people, from CA. We rolled them directly into the division, and we're investing them in true partnership, working side by side with our customers to help them deploy these capabilities, get up and running, and be successful. And we believe that that's the value of a true partnership. You invest side by side to have them be successful with the software and the capabilities in their operation.
>> Well, like I said, that acquisition caught a lot of people, myself included, by surprise. It was a big number, but you could see it, you know, in Broadcom's performance post the July 2018 acquisition; they've done quite well. Obviously COVID has affected, you know, much of the market, but it seems to be paying off great. Thanks so much for coming on theCUBE and sharing your insights, and best of luck going forward. Stay safe.
>> Pleasure being here. Everybody here, yourself, and everybody out there: be safe, be well. Take care.
>> And thank you, everybody, for watching. This is theCUBE's coverage of the IBM Think 2020 digital event experience. We'll be right back, right after this short break. You're watching theCUBE. (upbeat outro music)
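(A footnote on the Zowe discussion above: the Open Mainframe Project's Zowe CLI is one concrete face of "opening up the APIs." The following is a hedged sketch; the data set names are placeholders, a z/OSMF connection profile is assumed to already be configured, and exact flags can vary across CLI versions.)

```shell
# Sketch: driving z/OS from a laptop terminal or a CI pipeline via Zowe CLI.
# MY.HLQ.* names are placeholders.

# List data sets under a high-level qualifier
zowe zos-files list data-set "MY.HLQ.*"

# Pull a source member down for editing in a local IDE
zowe zos-files download data-set "MY.HLQ.COBOL(PAYROLL)" --file payroll.cbl

# Submit JCL that lives in a data set, then check job status
zowe zos-jobs submit data-set "MY.HLQ.JCL(BUILD)"
zowe zos-jobs list jobs --prefix "BUILD*"
```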
John Thomas, IBM | IBM Data Science For All
(upbeat music)
>> Narrator: Live from New York City, it's theCUBE, covering IBM Data Science for All. Brought to you by IBM.
>> Welcome back to Data Science for All. It's a whole new game here at IBM's event, a two-day event going on; at 6:00 tonight, the big keynote presentation on IBM.com, so be sure to join the festivities there. You can watch the live stream; all that's happening. Right now, we're live here on theCUBE, along with Dave Vellante; I'm John Walls, and we are joined by John Thomas, who is a distinguished engineer and director at IBM. John, thank you for your time, good to see you.
>> Same here, John.
>> Yeah, pleasure, thanks for being with us here. I know, in fact, you just wrote this morning about machine learning, so that's obviously very near and dear to you. Let's talk first off about machine learning at IBM.
>> Sure.
>> Not a new concept by any means, but what is new with regard to machine learning in your work?
>> Yeah, well, that's a good question, John. Actually, I get that question a lot. Machine learning itself is not new; companies have been doing it for decades. So exactly what is new, right? I actually wrote about this in a blog today, this morning. It's really three different things. I call them democratizing machine learning, operationalizing machine learning, and hybrid machine learning, right? And we can talk through each of these if you like. But I would say hybrid machine learning is probably closest to my heart. So let me explain what that is, because it sounds fancy, right? (laughter)
>> Right. Just what we need, another hybrid something, right?
>> In reality, what it is, is: let data gravity decide where your data stays, and let your performance requirements, your SLAs, dictate where your machine learning models go, right? So what do I mean by that? You might have sensitive data, customer data, which you want to keep on a certain platform, right? Instead of moving data off that platform to do machine learning, bring machine learning to that platform, whether that be the mainframe or specialized appliances or Hadoop clusters, you name it, right? Bring machine learning to where the data is. Do the training, the building of the model, where the data is, but then have complete flexibility in terms of where you deploy that model. As an example, you might choose to build and train your model on premises, behind the firewall, using very sensitive data, but the model that has been built, you may choose to deploy into a Cloud environment, because you have other applications that need to consume it. That flexibility is what I mean by hybrid. Another example is, especially when you get into more complex machine learning and deep learning domains, you need acceleration, and there is hardware that provides that acceleration, right? For example, GPUs provide acceleration. Well, you need to have the flexibility to train and build the models on hardware that provides that kind of acceleration, but then the model that has been built might go inside a CICS mainframe transaction for sub-second scoring of a credit card transaction as to whether it's fraudulent or not, right? So there's flexibility: off-prem, on-prem, different platforms. This is what I mean by hybrid.
>> What is the technical enabler that allows that to happen? Is it just a modern software architecture, microservices, containers, blah, blah, blah? Explain that in more detail.
>> Yeah, that's a good question, and, you know, it's a couple different things.
One is bringing native machine learning to these platforms themselves. So you need native machine learning on the mainframe, in the Cloud, in a Hadoop cluster environment, in an appliance, right? So you need the runtimes, the libraries, the frameworks running native on those platforms. And that is not easy to do, you know? You've got machine learning running native on z/OS, not even Linux on Z; it's native to z/OS on the mainframe.
>> At the very primitive level, you're talking about.
>> Yeah.
>> So you get the performance you need.
>> You have the runtime environments there, and then what you need is a seamless experience across all of these platforms. You need a way to export models, repositories into which you can save models, the same APIs to save models into a different repository and then consume them from there. So it's a bit of engineering that IBM is doing to enable this, right? Native capabilities on the platforms, the same APIs to talk to repositories and consume from the repositories.
>> So the other piece of that architecture we're talking about is a lot of tooling that's integrated and native.
>> Yes.
>> And the tooling, as you know, changes, I feel like, daily. There's a new tool out there and everybody gloms onto it, so the architecture has to be able to absorb those. What is the enabler there?
>> Yeah, so you actually bring up a very good point. There is a new language, a new framework, every day, right? I mean, we all know that: in the world of machine learning, Python and R and Scala; frameworks like Spark and TensorFlow. They're table stakes now, you know? You have to support all of these, scikit-learn, you name it, right? Obviously, you need a way to support all these frameworks on the platforms you want to enable, right? And then you need an environment which lets you work with the tools of your choice. So you need an environment like a workbench which allows you to work in the language and the framework that you are the most comfortable with. And that's what we are doing with Data Science Experience. I don't know if you have thought of this, but Data Science Experience is an enterprise ML platform, right? It runs in the Cloud, on prem, on x86 machines; you can have it on a (mumbles) box. The idea here is support for a variety of open languages and frameworks, enabled through a collaborative workbench kind of interface.
>> And the decision to move, whether it's on-prem or in the Cloud, it's a function of many things, but let's talk about those. I mean, data volume is one. You can't just move your business into the Cloud. It's not going to work that well.
>> It's a journey, yeah.
>> It's too expensive. But then there's others: there's governance edicts and security edicts; not that the security in the Cloud is any worse, it might just be different than what your organization requires, and the Cloud supplier might not support that. It's different Clouds, it's location, etc. When you talked about the data being on-prem, maybe training a model, and then that model moving to the Cloud, so obviously, it's lighter weight... It's not as much--
>> Yeah, yeah, yeah, you're not moving the entire data set. Right.
>> But I have a concern. I wonder if clients ask you about this. Okay, well, it's my data, my data I'm going to keep behind my firewall. But that data trained that model, and I'm really worried that that model is now my IP that's going to seep out into the industry. What do you tell a client?
>> Yeah, that's a fair point.
Obviously, you still need your security mechanisms, your access control mechanisms, your governance control mechanisms. So you need governance whether you are on the Cloud or on prem. And your encryption mechanisms, your version control mechanisms, your governance mechanisms all need to be in place, regardless of where you deploy, right? And to your question of how you decide where the model should go: as I said earlier to John, you know, let data gravity, SLAs, performance, and security requirements dictate where the model should go.
>> We're talking so much about concepts, right, and theories that you have. Let's roll up our sleeves, get to the nitty-gritty a little bit here, and talk about what people are really doing out there.
>> Oh yeah, use cases.
>> Yeah, just give us an idea for some of the... kind of the latest and greatest that you're seeing.
>> Lots of very interesting use cases out there. So actually, I'm part of what IBM calls the Data Science Elite team. We go out and engage with customers on very interesting use cases, right? And we see a lot of these hybrid discussions happen as well. On one end of the spectrum is understanding customers better. I call this reading the customer's mind. Can you understand what is in the customer's mind and have an interaction with the client without asking a bunch of questions, right? Can you look at his historical data, his browsing behavior, his purchasing behavior, and have an offer that he will really love? Can you really understand him and give him a celebrity experience? That's one class of use cases, right? Another class of use cases is around improving operations, improving your own internal processes. One example is fraud detection, right? I mean, that is a hot topic these days. As the credit card is swiped, right, it's just a few milliseconds before that travels through a network, hits the back-end mainframe, and a scoring is done as to whether this should be approved or not. Well, you need to have a prediction of how likely this is to be fraudulent or not within the span of the transaction. Here's another one. I don't know if you call help desks now; I sometimes call them "helpless desks." (laughter)
>> Try not to.
>> Dave: Hell desks.
>> Try not to, helpless desks, but, you know, for pretty much every enterprise that I am talking to, there is a goal to optimize their help desk, their call centers. And call center optimization is big. So as the customer calls in, can you understand the intent of the customer? See, he may start off talking about something, but as the call progresses, the intent might change. Can you understand that? In fact, not just understand it, but predict it, and intercept with something that the client will love before the conversation takes a bad turn? (laughter)
>> You must be listening in on my calls.
>> Your calls, must be your calls!
>> I meander, I go every which way.
>> I game the system and just go really mad and go, "Let me get you an operator." (laughter) "Agent," okay.
>> You two guys, your data is a special case.
>> Dave: Yeah right, this guy's pissed.
>> We are red-flagged right off the top.
>> We're not even analyzing you.
>> Day job, forget about it, you know. What about things, you know, because they're moving so far out to the edge, and now with mobile and that explosion there, and sensor data being what it is, and all this tremendous growth? Tough to manage.
>> Dave: It is, it really is.
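(An aside to make the fraud-scoring pattern above concrete. This is a minimal sketch, not IBM's scoring service: the model file and the three features are hypothetical. The point is simply that the expensive training happens elsewhere, while the in-transaction path only loads the model once and scores in milliseconds.)

```python
# Sketch: millisecond-budget scoring inside the authorization path.
# Training happened elsewhere; this service only loads and scores.
import joblib
import numpy as np

# Loaded once at service startup, never per transaction.
model = joblib.load("fraud_model.joblib")  # hypothetical pre-trained model

def score_transaction(amount: float, merchant_risk: float,
                      km_from_home: float) -> float:
    """Return a fraud probability for one swipe; this must fit inside
    the few-millisecond window before the approve/decline decision."""
    features = np.array([[amount, merchant_risk, km_from_home]])
    return float(model.predict_proba(features)[0, 1])

# A caller might flag anything over a threshold, for example:
# if score_transaction(9800.0, 0.7, 5200.0) > 0.9: flag_for_review()
```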
>> I guess, maybe tougher to make sense of it, so how are you helping people make sense of this so they can really filter through and find the data that matters? >> Yeah, there are a lot of things rolled up into that question, right? One is just managing those devices, those endpoints, in multiple thousands, tens of thousands, millions of these devices. How would you manage them? Then, are you doing the processing of the data and applying ML and DL right at the edge, or are you bringing the data back behind the firewall or into the Cloud and then processing it there? If you are doing image recognition in a self-driving car, can you afford the latency of shipping an image of a pedestrian jumping in front of the car across the Cloud for a deep-learning network to process it and give you an answer - oh, that's a pedestrian? You know, you may not have that latency. So you may want to do some processing on the edge, so that is another interesting discussion, right? And you need acceleration there as well. Another aspect now is, as you said, separating the signal from the noise, you know. It really comes down to the different industries that we go into: what are the signals that we understand now? Can we build on them and can we re-use them? That is an interesting discussion as well. But, yeah, you're right. With the world of exploding data that we are in, with all these devices, it's very important to have a systematic approach to managing your data, cataloging it, understanding where to apply ML, where to apply exploration, governance. All of these things become important. >> I want to ask you about, come back to the use cases for a moment. You talk about celebrity experiences, I put that in sort of a marketing category. Fraud detection's always been one of the favorite big data use cases, help desks, recommendation engines and so forth. Let's start with the fraud detection. About a year ago, first of all, fraud detection in the last six, seven years has been getting immensely better, no question. And it's great. However, the number of false positives, about a year ago, it was too many. We're a small company but we buy a lot of equipment and lights and cameras and stuff. The number of false positives that I personally got was overwhelming. >> Yeah. >> They've gone down dramatically. >> Yeah. >> In the last 12 months. Is that just a coincidence, happenstance, or is it getting better? >> No, it's not that the bad guys have gone down in number. It's not that at all, no. (laughter) >> Well, that, I know. >> No, I think there is a lot of sophistication in terms of the algorithms that are available now. In terms of ... If you have tens of thousands of features that you're looking at, how do you collapse that space and how do you do that efficiently, right? There are techniques that are evolving in terms of handling that kind of information. In terms of the actual algorithms, there are different types of innovations happening in that space. But I think, perhaps, the most important one is that things that used to take weeks or days to train and test can now be done in days or minutes, right? The acceleration that comes from GPUs, for example, allows you to test out different algorithms, different models, and say, okay, well, this performs well enough for me to roll it out and try this out, right? It gives you a very quick cycle of innovation. >> The time to value is really compressed. Okay, now let's take one that's not so good.
Ad recommendations, the Google ads that pop up. One in a hundred are maybe relevant, if that, right? And they pop up on the screen and they're annoying. I worry that Siri's listening somehow. I talk to my wife about Israel and then next thing I know, I'm getting ads for going to Israel. Is that a coincidence or are they listening? What's happening there? >> I don't know about what Google's doing. I can't comment on that. (laughter) I don't want to comment on that. >> Maybe just from a technology perspective. >> From a technology perspective, this notion of understanding what is in the customer's mind and really getting to a customer segment of one, this is of top interest to many, many organizations. Regardless of which industry you are in, insurance or banking or retail, doesn't matter, right? And it all comes down to fundamental principles about how efficiently you can do this. Can you identify the features that have the most predictive power? There is a level of sophistication in terms of the feature engineering, in terms of collapsing that space of features that I talked about, and then, how do I actually get at the science of this? How do I do the exploratory analysis? How do I actually build and test my machine learning models quickly? Do the tools allow me to be very productive about this? Or do I spend weeks and weeks coding in lower-level formats? Or do I get help, do I get guided interfaces, which guide me through the process, right? And then, the topic of acceleration we talked about, right? These things come together, and then couple that with cognitive APIs. For example, speech to text, the word error rates have gone down dramatically now. So as you talk on the phone, with a very high accuracy, we can understand what is being talked about. Image recognition, the accuracy has gone up dramatically. You can create custom classifiers for industry-specific topics that you want to identify in pictures. Natural language processing, natural language understanding, all of these have evolved in the last few years. And all these come together. So machine learning's not an island. All these things coming together is what makes these dramatic advancements possible. >> Well, John, if you've figured out anything over the past 20 minutes or so, it's that Dave and I want ads delivered that matter and we want our help desk questions answered right away. (laughter) So if you can help us with that, you're welcome back on the Cube anytime, okay? >> We will try, John. >> That's all we want, that's all we ask. >> You guys, your calls are still being screened. (laughter) >> John Thomas, thank you for joining us, we appreciate that. >> Thank you. >> Our panel discussion coming up at 4:00 Eastern time. Live here on the Cube, we're in New York City. Be back in a bit. (upbeat music)
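The fraud example above, scoring a swipe in the few milliseconds the transaction allows, can be made concrete with a small sketch. This is an editor's illustration, not IBM code: the feature names and weights are invented stand-ins for a model that would really be trained on historical transactions.

```python
import math
import random
import time

# Hypothetical, hand-set weights standing in for a trained fraud model.
WEIGHTS = {"amount": 0.8, "foreign": 1.5, "night": 0.7}
BIAS = -4.0

def fraud_score(txn):
    """Logistic score: estimated probability that a swipe is fraudulent."""
    z = BIAS + sum(WEIGHTS[name] * txn[name] for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Simulate a burst of swipes and measure per-transaction scoring latency.
txns = [{"amount": random.random() * 5,
         "foreign": random.randint(0, 1),
         "night": random.randint(0, 1)} for _ in range(100_000)]

start = time.perf_counter()
flagged = sum(1 for t in txns if fraud_score(t) > 0.5)
elapsed = time.perf_counter() - start

print(f"{flagged} swipes flagged; "
      f"{elapsed / len(txns) * 1e6:.1f} microseconds per score")
```

Even in plain Python the scoring arithmetic lands in the microsecond range, which is why the hard part of the use case is the surrounding data movement and model management rather than the math itself.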
Calline Sanchez, IBM Enterprise System | VMworld 2017
>> Narrator: Live from Las Vegas, it's The Cube, covering VMworld 2017. Brought to you by VMware and its ecosystem partners. >> Hey, welcome back to The Cube. Continuing coverage of VMworld 2017. Day two of the event, lots of exciting conversations that we've had so far. I'm Lisa Martin with my cohost Dave Vellante-- >> Hey. >> Hey! We're excited to be cohosting together, right Dave? >> That's right. >> Of course! And we have Cube alumni Calline Sanchez, Vice President of IBM Enterprise Storage Systems. Welcome back to The Cube. >> Thank you for inviting me. It's always great to have discussions with you. >> Yeah. So, talk to us, we're at VMworld day two, what's new with IBM and VMware? >> So what was great about working, or walking through the expo floor, is hearing conversations about data backup, like as they say with the IBM Backup Bar-- >> It's hot! >> They have, and also this idea that we work to optimize data within the entire stack. So yeah, you have your base infrastructure, but you layer on top of that things that support the digital experience. >> Why is backup so hot? Why now? >> Well, so my favorite reason is because of tape. Tape allows you to cheaply store data, so it's like about a cent per gig. That's a big deal. And I don't know, I suspect you like really good deals on shoes, bags, et cetera, I know I do. So that's what's great about tape: it's cost effective as well as a high-performance, high-capacity element that we intend to deliver. >> OK, so I buy that. I've always been a fan of the economic argument for tape. Let me ask you another question, and see if you see this, Calline. It seems like when virtualization came into vogue, people had to re-architect their backup for a variety of reasons, less physical resources, et cetera. Is cloud affecting the way in which people think about backup and if so, how? >> So we support cloud service providers. As they say with tape, if you're cost effective and you can meet certain performance and capacity requirements, well, you usually are part of the stack associated with the delivery into the cloud service provider data centers worldwide. So all I'm saying is that it's relevant; it's important that we continue to innovate in association with what's required with regards to tape. >> Well, while we're on the subject of tape, let's carry that through. The conventional wisdom from the spinning disk and now the flash guys: oh, tape, tape is dead. I've been hearing tape is dead since I've been in this business, which is now quite a long time. What's kept tape alive? It's obviously the economics, but it's got to be more than that. It's got to be easier to use, it's got to be functional. What kind of innovations have occurred around tape to make it continue to be viable? >> So I would say our focus on enhancing Spectrum Archive. It used to be called the Linear Tape File System, and really it's this idea of a USB for file access or data access. So we keep working on focusing and delivering data access patterns that are actually efficient for our clients, simple to use, and we enable automation, which has been something that's great based on Ed Walsh's focus or strategy for our storage portfolio, and I know you've just heard that we had two awesome growth quarters within IBM Storage, and our goal is to continue that through modernizing our entire portfolio. >> Three would make a trend, I told Ed. And he's like, "Come on, gimme a break." No, but it is awesome to see IBM's storage business growing again and hopefully that can continue.
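As an editor's aside, the "about a cent per gig" figure above is easy to sanity-check. Only the tape number comes from the interview; the disk and flash per-gigabyte prices in this sketch are illustrative assumptions for comparison.

```python
# Back-of-the-envelope check on "about a cent per gig" for tape.
TAPE_PER_GB = 0.01   # figure quoted in the interview
DISK_PER_GB = 0.03   # assumption, for comparison only
FLASH_PER_GB = 0.20  # assumption, for comparison only

PETABYTE_GB = 1_000_000  # 1 PB in decimal gigabytes

for name, price in (("tape", TAPE_PER_GB),
                    ("disk", DISK_PER_GB),
                    ("flash", FLASH_PER_GB)):
    print(f"1 PB on {name}: ${price * PETABYTE_GB:,.0f}")

# Tape comes out around $10,000 per petabyte, which is the economic
# argument Sanchez is making for cloud-scale backup and archive.
```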
>> So speaking of innovation, and you talked about tape, and people think tape's been dead for a long time, but you're talking about it as a core component of cloud strategies for businesses. How has IBM evolved your messaging, your positioning, as technologies have evolved and customers are now going, "We have to keep a ton of data"? Michael Dowd talked about the importance of data today being at the CEO agenda level. Talk to us about some of the innovations IBM is doing to help customers understand the relevance of different types of storage according to data growth, but also going from data centers to centers of data. >> Great question. So, one thing that's really interesting, being that I'm from the lab: we have delivered, or our intent is to accelerate, the entire roadmap as it relates to tape, so that we stay ahead of the delivery path and meet the requirements of clients worldwide, whether they're scientific clients, based on some of the advanced data that is required, as well as cloud service providers. They say, "Hey, we're expecting you to innovate and deliver as quickly as possible." And sometimes the requests are quite interesting and fascinating, based on just even the digital or the analytics of measuring, like, temperatures in data centers, and what we're doing with RoCE interfaces based on Ethernet. The clients are pushing us with regards to improving overall and delivering to meet the cloud economics that they require, as well as the attributes of it. >> What's changed at IBM, if anything? I'm inferring something's changed because I've always said, one of the criticisms I've had of IBM storage is the pace with which it was able to get products out of engineering and to the marketplace, and that pace has accelerated quite dramatically. I don't know if it's new leadership, you mentioned Ed Walsh before, or there's been a change in the philosophy, am I dreaming or have I noticed-- >> No, you're completely accurate. So when we're talking about development or delivery, we're so much more agile: we really work to reduce the complexity of delivery, we're delivering major functions or complex things in simpler increments, and getting client input sooner, and partner input sooner than later. Whereas previously, it was like we worked for over a year sometimes on technologies or advancements, and it would take a while for those clients to then adopt. Now, we have to deliver something a heck of a lot faster than we had done before. >> And are customers part of that innovation process? It sounds like-- >> They are. >> That's been a big change-- >> So we're big on that. Historically, we always talked about betas. Now we're talking about alphas, and some of these original demos, in order to grow our understanding of the use case in the very early phases. And usually we did not have these types of discussions prior, at least in my experience, but now it's like it's a requirement. So, new leadership is a component, as we discussed, but also this idea of really focused agility. Delivering to the marketplace faster, listening to our clients, so that means improvement based on how we go to market as well. Because it's important that we deliver value to our clients or we're not relevant. >> We were talking earlier to another guest, a competitive company, and we were talking about the anatomy of a transaction, and we were going through it, and at one point he said that it hits a mainframe and an associated database, and he said, "And that's OK."
So we know the mainframe, alive and well, we've done a bunch of Cube activities, we were there at the Z13 launch at Jazz at Lincoln Center, which was a great event. >> That's awesome. >> And so, give us the update on what's happening there. You guys have made some new announcements there, new DS8000 class systems, new Z systems, what's going on in that transaction world? >> So I would say two, or actually three, major things are part of that announcement to collaborate with Z. One is improvement based on modernizing our service support structure, which is like remote code load, things like that, so that we can have experts remotely, via a control center, help clients load the latest levels of code as well as new feature function. The second element that I would say is: lead with flash. So we've optimized flash storage that complements specifically some of the ZOS, the System Z workload, which is significant for us to deliver to the marketplace as well. And then third is this idea of Z hyperlink. Z hyperlink is this idea of, like, synchronous I/O. It's a different structure that, yes, it'll take a while for adoption; we have a number of our alphas that are working in partnership with us on the solution. We're going to be doing replication, and also some of the I/O streams, differently than we had in the past. >> Question for you on the alphas. >> Yeah. >> From a business perspective, since so much has changed, lots of announcements just in the last 36 hours, as technology changes rapidly and start-up tech companies are, like you said, poised to deliver agility faster, when you're talking with alphas, as you said, kind of in the nascent stages of a use case being developed, what are some of the key business metrics that your alpha clients are articulating to you, that, when we get to x stage of this alpha, we need to be able to demonstrate x, y, z back to the business, thinking of cost reductions, resource allocations, faster time to market? What are some of those business KPIs that you're hearing from your clients? >> Yeah. So I would say it's price performance as well as capacity, based on the amount of data growth. So those three things are fundamental components that come up quite often. Now, it usually is made very clear to us that things like security, like quality, that's job one. That's table stakes. Like, if we want to have fine dining, we'll just assume there's going to be this nice handkerchief as well as tablecloth. Well, security and quality are just fundamental. So they want to think about those things less. Because they're just naturally being delivered via whatever technology we're putting out or delivering from the lab. >> Alright, let's bring it back to VMworld. We're here. VMware, VMworld, what do you guys got going here, what's the relevance of all the activity that you have going on at this event? >> So what's great about the event is we have the data backup bar that's associated with what we're doing with Spectrum Protect Plus. What I personally like and love about Spectrum Protect Plus is its simplicity. It's delivering this idea of usability, which is important because we received feedback from our clients in very early stages on how we deliver. So we have a data backup bar to discuss some of that technology and actually run through specific downloads, which I think is great, cause you get feedback out on the floor immediately to ensure that we're improving.
The other aspect of our booth is discussing some of the fundamental infrastructure, just like we talked about previously with tape, as well as DS8000, cause DS8000 is not only a mainframe attach, but it's attachment agnostic. So we support aspects of distributed storage as well. For instance, we have some of the VMware enhancements that will allow us to more efficiently capture or reclaim data in thin-provisioned volumes, and VMware has been fundamental in partnering with us to deliver. >> So, continued go-to-market approaches with VMware on the backup side, also on the cloud foundation side for IBM? >> Yes. >> Excellent. Thank you so much for stopping by The Cube again and sharing your thoughts on what's going on with the industry and how IBM is moving forward with respect to innovation and working with clients together. >> Right. Wonderful. Thank you. >> For my cohost Dave Vellante, I'm Lisa Martin. >> Stick around, you're watching day two of The Cube's coverage of VMworld 2017, we'll be right back. [Upbeat Synth Music]
Jamie Thomas, IBM - IBM Interconnect 2017 - #ibminterconnect - #theCUBE
>> Announcer: Live, from Las Vegas, it's the Cube. Covering InterConnect 2017. Brought to you by IBM. >> Okay welcome back everyone, we're here live in Las Vegas for IBM InterConnect 2017, this is the Cube's coverage here in Las Vegas of IBM's cloud and data show. I'm John Furrier, with my cohost Dave Vellante. Our next guest is Jamie Thomas, general manager of systems development and strategy at IBM, Cube alum. Great to see you, welcome back. >> Thank you, great to see you guys as usual. >> So, huge crowds here. This is, I think, the biggest show I've been to for IBM. It's got lines around the corner, just a ton of traffic online, great event. It's the cloud show, but it's a little bit different. What's the twist here today at InterConnect? >> Well, if you saw the Keynote, I think we've definitely demonstrated that while we're focused on differentiating experience on the cloud through cloud native services, we're also interested in bridging existing clients' IT investments into that environment. So, supporting hybrid cloud scenarios, understanding how we can provide connective fabric solutions, if you will, to enable clients to run mobile applications on the cloud and take advantage of the investments they've made in their existing transactional infrastructure over a period of time. And so the Keynote really featured that combination of capabilities and what we're doing to bring those solution areas to clients and allow them to be productive. >> And the hybrid cloud is front and center, obviously. IoT on the data side, you've seen a lot of traction there. AI and machine learning, kind of powering and lifting this up, it's a systems world now, I mean this is the area that you're in. Cause you have the component pieces, the composability of that. How are you guys facilitating the hybrid cloud journey for customers? Because now, it's not just all here it is, I might have a little bit of this and a little bit of that, so you have this componentization or composability that app developers are accustomed to, yet the enterprises want that workload flexibility. What do you guys do to facilitate that? >> Well we absolutely believe that infrastructure innovation is critical on this hybrid cloud journey. And we're really focused on three main areas when we think about that innovation. So: integration, security, and support of cognitive workloads. When we look at things like integration, we're focused on developers as key stakeholders. We have to support the open communities and frameworks that they're leveraging, we have to support APIs and allow them to tap into our infrastructure and those investments once again, and we also have to ensure that data and workload can be flexibly moved around in the future, because this will allow better characteristics for developers in terms of how they're designing their applications as they move forward with this journey. >> And the insider threat, though, is a big thing too. >> Yes. >> I mean security is not only table stakes, it's a highly sensitive area. >> It's a given. And as you said, it's not just about protecting from the outside threats, it's about protecting from internal threats, even from those who may have privileged access to the systems, so that's why, with our systems infrastructure, we have protected from the chip, all the way through the levels of hardware, into the software layer.
You heard us talk about some of that today with the shipment of Secure Service Containers that allow us to support the system both at install time and run time, and support the applications and the data appropriately. These systems that run Blockchain, our high security Blockchain services, LinuxONE, we have the highest certification in the industry, EAL5+, and we're supporting FIPS 140-2 Level 4 cryptography. So it's about protecting at all layers of the system, because our perspective is, there's not a traditional barrier; data is the new perimeter of security. So you've got to protect the data, at rest, in motion, and across the life cycle of the data. >> Let's go back to integration for a second. Give us an example of some of the integrations that you're doing that are high profile. >> Well one of the key integrations is that a lot of clients are creating new mobile applications. They're tapping back into the transactions that reside in the mainframe environment, so we've invested in ZOS Connect and this API set of capabilities to allow clients to do that. It's very prevalent in many different industries, whether it's retail banking, the retail sector, we have a lot of examples of that. It's allowing them to create new services as well. So it's not just about extending the system, but being able to create entirely new solutions. The area of credit card services is a good example. Some of the organizations are doing that. And it allows for developer productivity. >> And then, on the security side, where does encryption fit? You mentioned you're doing some stuff at the chip level, end to end encryption. >> Yeah it really, it's at all levels, right? From the chip level, through the firmware levels. Also, we've added encryption capability to ensure that data is encrypted at rest, as well as in motion, and we've done that in a way that encrypts these data sets that are heavily used in the mainframe environment as an example, without impinging on developer productivity. So that's another key aspect of how we look at this. How can we provide this data protection? But once again, not slow down the velocity of the developers. Cause if we slow down the velocity of the developers, that will be an inhibitor to achieving the end goal. >> How important is the ecosystem on that point? Because you have security, again, end to end, you guys have that fully, you're protecting the data as it moves around, so it's not just in storage, it's everywhere, moving around, in flight, as they say. But now you've got ecosystem partners, cause you've got the API economy, you're dealing with no perimeter, but now also you have relationships as technology partners. >> Yes, well the ecosystem is really important. So if we think about it from a developer perspective, obviously supporting these open frameworks is critical. So supporting Linux and Docker and Spark and all of those things. But also, to be able to innovate at the rate and pace we need, particularly for things like cognitive workloads, that's why we created the Open Power Foundation. So we have more than 300 partners that we're able to innovate with, that allow us to create the solutions that we think we'll need for these cognitive workloads. >> What is a cognitive workload?
>> So a cognitive workload is what I would call an extremely data hungry workload. The example that we can all think of is that when we experience the world around us, we're expecting services to be brought to us, right; the digital economy understands our desires and wants and reacts immediately. So that expectation is driving this growth in artificial intelligence, machine learning, and deep learning type algorithms. Depending on what industry you're in, they take on a different persona, but there's so many different problems that can be solved by this, whether it's I need to have more insight into the retail offers I provide to an end consumer, or I need to be able to do fraud analytics because I'm in the financial services industry. There's so many examples of these cognitive applications. The key factors are just a tremendous amount of data, and a constrained amount of time to get business insight back to someone. >> When you do these integrations and you talk about the security investments that you're making, how do you balance the resource allocation between, say, IBM platforms, mainframe, Power, and the OSes that run on those, and Linux, for example, which is such a mainstay of what you guys are doing? Are you doing those integrations on the open side as well in Linux and going deep into the core, or is it mostly focused on, sort of, IBM-owned technology? >> So it really depends on what problem we're trying to solve. So, for instance, if we're trying to solve a problem where we're marrying data insight with a transaction, we're going to implement a lot of that capability on ZOS, cause we want to make sure that we're reducing data latency in how we execute the processing, if you will. If we're looking at things like new workloads and evolution of new workloads, and new things are being created, that's more naturally fit for purpose from a Linux perspective. So we have to use judgment. A lot of the new programming, the new applications, are naturally going to be done on a Linux platform, cause once again that's a platform of choice for the developer community. So, we have to think about whether we're trying to leverage existing transactions with speed, or whether we're allowing developers to create new assets, and that's a key factor in what we look at. >> Jamie, your role is somewhat unique inside of IBM, the title of GM of systems development and strategy. So what's your scope, specifically? >> So, I'm responsible for the systems development involved in our processors, mainframes, power systems, and storage. And of course, as a strategy person for a unit like that, I have responsibility for thinking about these hybrid scenarios and what do we need to do to make our clients successful on this journey? How do we take advantage of the tremendous investments they've made with us over the years? We have strong responsibility for those investments and making sure the clients get value. And also understanding where they need to go in the future and evolving our architecture and our strategic decisions along those lines. >> So you influence development? >> Jamie: Yes. >> In a big way, obviously. It's a lot of roadmap work. >> Jamie: Yes. >> A lot of working with clients to figure out requirements? >> Well I have client support too, so I have to make sure things run. >> What about quantum computing? This has been a big topic, what's the road map look like? What's the evolution of that look like? Talk about that initiative.
>> Well if I gave you the full road map they'd take me out of here with a hook out of this chair. >> You're too good for that, damn, almost got it from you. >> But we did announce the industry's first commercial universal quantum computing project a few weeks ago. It's called IBM Q, so we had some clever branding help, because Q makes me think of the personality in the James Bond movie who was always involved in the latest R&D research activity. And it really is the culmination of decades of research between IBM researchers and researchers around the world, to create this system that hopefully can solve problems that are unsolvable today with classical computers. So, problems in areas like material science and chemistry. Last year we had announced the Quantum Experience, which is online access to quantum capabilities in our Yorktown research laboratory. And over the last year, we've had more than 40,000 users access this capability. And they've actually executed a tremendous number of experiments. So we've learned from that, and now we're on this next leg of the journey. And we see a world where IBM Q could work together with our classical computers to solve really, really tough problems. >> And that computing is driving a lot of the IoT, whether that's health care, to industrial, and everything in between. >> Well we're in the early stages of quantum, to be fair, but there's a lot of unique problems that we believe that it will solve. We do not believe that everything, of course, will move from classical to quantum. It will be a combination, an evolution, of the capabilities working together. But it's a very different system and it will have unique properties that allow us to do things differently. >> So, what are the basics? Why quantum computing? I presume it's performance, scale, cost, but it's not traditional, binary, computing, is that right? >> Yes. It's very, very different. In fact, if. >> Oh we just got the two minute sign. >> It's a very different computing model. It's a very different physical computing model, right? It's built on this unit called a qubit, and the interesting thing about a qubit is it could be both a zero and a one at the same time. So it kind of twists our minds a little bit. But because of that, and those properties, it can solve very unique problems. But we're at the early part of the journey. So this year, our goal is to work with some organizations, learn from the commercialization of some of the first systems, which will be run in a cloud hosted model. And then we'll go from there. But, it's very promising. >> And the timeframe for commercial systems, have you guys released that? >> Well, this year, we'll start the commercial journey, but within the next few years we do plan to have a quantum computer that would then, basically, outstrip the power of the largest supercomputers that we have today in the industry. But that's, you know, over the next few years we'll be evolving to that level. Because eventually, that's the goal, right? To solve the problems that we can't solve with today's classical computers. >> Talk about real quickly, in the last couple minutes, Blockchain, and where that's going, because you have a lot of banks and financial institutions looking at this as part of the messaging and the announcements here. >> Well, Blockchain is one of those workloads of course that we're optimizing with a lot of that security work that I talked about earlier, so.
The target of our high security Blockchain services is LinuxONE, and it's driving a lot of our encryption strategy. This week, in fact, we've seen a number of examples of Blockchain. One was talked about this morning, which was around diamond provenance, from the Everledger organization. Very clever implementation of Blockchain. We've had a number of financial institutions that are using Blockchain. And I also showed an interesting example today. Plastic Bank, which is an organization that's using Blockchain to allow ecosystem improvement, or improving our planet, if you will, by allowing communities to exchange recyclable plastic for currency. So it's really about enabling plastic to be turned into currency through the use of Blockchain. So a very novel example of a foundational research organization improving the environment and allowing communities to take advantage of that. >> Jamie thanks for stopping by the Cube, really appreciate you giving the update and insight into the quantum, the Q project, and all the greatness around all the hard work going into the hybrid cloud. The security-osity is super important, thanks for sharing. >> It's good to see you. >> Okay we're live here, in Mandalay Bay, for IBM InterConnect 2017, stay with us for more live coverage, after this short break.
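The qubit description in this interview, a unit that can be both a zero and a one at the same time, maps to a simple two-amplitude state. The sketch below is an editor's illustration of that idea in plain Python; it is not IBM Q code.

```python
import math
import random

# A single-qubit state |psi> = alpha|0> + beta|1>, held as two complex
# amplitudes. Equal amplitudes give the "both zero and one" superposition.
alpha = complex(1, 0)
beta = complex(1, 0)
norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
alpha, beta = alpha / norm, beta / norm  # normalize: probabilities sum to 1

p_zero = abs(alpha) ** 2  # probability of reading out 0
p_one = abs(beta) ** 2    # probability of reading out 1

def measure():
    # Measurement collapses the superposition to a classical 0 or 1.
    return 0 if random.random() < p_zero else 1

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure()] += 1
print(counts)  # roughly 5000/5000 for the equal superposition
```

The power Thomas alludes to comes from entangling many such qubits, where the state space grows exponentially, which is exactly what a classical simulation like this one cannot keep up with.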
James Kobielus, IBM - IBM Machine Learning Launch - #IBMML - #theCUBE
>> [Announcer] Live from New York, it's the Cube. Covering the IBM Machine Learning Launch Event. Brought to you by IBM. Now here are your hosts Dave Vellante and Stu Miniman. >> Welcome back to New York City everybody, this is the CUBE. We're here live at the IBM Machine Learning Launch Event. Bringing analytics and transactions together on Z, extending an announcement that IBM made a couple years ago, sort of laid out that vision, and now bringing machine learning to the mainframe platform. We're here with Jim Kobielus. Jim is the Director of IBM's Community Engagement for Data Science and a long time CUBE alum and friend. Great to see you again James. >> Great to always be back here with you. Wonderful folks from the CUBE. You ask really great questions and >> Well thank you. >> I'm prepared to answer. >> So we saw you last week at Spark Summit, so back to back, you know, continuous streaming, machine learning, give us the lay of the land from your perspective of machine learning. >> Yeah well machine learning very much is at the heart of what modern application developers build, and that's really the core secret sauce in many of the most disruptive applications. So machine learning has become the core of, of course, what data scientists do day in and day out, or what they're asked to do, which is to build, essentially, artificial neural networks that can process big data and find patterns that couldn't normally be found using other approaches. And then as Dinesh and Rob indicated, a lot of it's for regression analysis and classification and the other core things that data scientists have been doing for a long time, but machine learning has come into its own because of the potential for great automation of this function of finding patterns and correlations within data sets. So today at the IBM Machine Learning Launch Event, and we've already announced it, IBM Machine Learning for ZOS takes that automation promise to the next step. And so we're real excited and there'll be more details today in the main event. >> One of the most fun interviews I had last year was with you, when we interviewed, I think it was 10 data scientists, rock star data scientists, and Dinesh had a quote, he said, "Machine learning is 20% fun, 80% elbow grease." And data scientists sort of echoed that last year. We spent 80% of our time wrangling data. >> [Jim] Yeah. >> It gets kind of tedious. You guys have made announcements to address that, is the needle moving? >> To some degree the needle's moving. Greater automation of data sourcing and preparation and cleansing is ongoing. Machine learning is being used for that function as well. But nonetheless there is still a lot of need in the data science, sort of, pipeline for a lot of manual effort. So if you look at the core of what machine learning is all about, much of it is supervised learning, which involves humans, meaning data scientists, training their algorithms with data, and so that involves finding the right data and then of course doing the feature engineering, which is a very human and creative process. And then training on the data and iterating through models to improve the fit of the machine learning algorithms to the data. In many ways there are still a lot of manual functions that need the expertise of data scientists to do it right. There are a lot of ways to do machine learning wrong, you know; there are a lot of, as it were, tricks of the trade you have to learn just through trial and error.
A lot of things, like the new generation of generative adversarial models, ride on machine learning, or deep learning in this case, which is multilayered, and they're not easy to get going and get working effectively the first time around, I mean with the first run of your training data set. So that's just an example of the fact that there are a lot of functions that can't be fully automated yet in the whole machine learning process, but a great many can in fact, especially data preparation and transformation. It's being automated to a great degree, so that data scientists can focus on the more creative work that involves subject matter expertise, and really also application development, and working with larger teams of coders and subject matter experts and others, to be able to take the machine learning algorithms that have been proved out, have been trained, and to drive them into all manner of applications to deliver some disruptive business value. >> James, can you expand for us a little bit on this democratization? Before it was not just data but now the machine learning, the analytics, you know, when we put these massive capabilities in the broader hands of the business analysts, the business people themselves, what are you seeing with your customers, what can they do now that they couldn't do before? Why is this such an exciting period of time for the leveraging of data analytics? >> I don't know that it's really an issue of now versus before. Machine learning has been around for a number of years. It's artificial neural networks at the very heart, and that got going actually in many ways in the late 50s, and it steadily improved in terms of sophistication and so forth. But what's going on now is that machine learning tools have become commercialized and refined to a greater degree, and now they're in a form in the cloud, like with IBM Machine Learning for the private cloud on ZOS, or Watson Machine Learning for the Bluemix public cloud. They're at a level of consumability that they've never been at before. With a software-as-a-service offering, you just pay for it, it's available to you. If you're a data scientist you can begin doing work right away to build applications, derive quick value. So in other words, the time to value on a machine learning project continues to shorten and shorten, due to the consumability, the packaging of these capabilities into cloud offerings and into other tools that are prebuilt to deliver success. That's what's fundamentally different now and it's just an ongoing process. You sort of see the recent parallels with the business intelligence market. 10 years ago BI, reporting and OLAP and so forth, was only for what we now call data scientists or the technical experts in that area. But in the last 10 years we've seen the business intelligence community and the industry, including IBM's tools, move toward more self service, interactive visualization, visual design, BI and predictive analytics, you know, through our Cognos and SPSS portfolios. A similar dynamic is coming to the progress of machine learning: the democratization, to use your term, the more self service model wherein everybody potentially will be able to do machine learning, to build machine learning and deep learning models, without a whole lot of university training. That day is coming and it's coming fairly rapidly. It's just a matter of the maturation of this technology in the marketplace.
>> So I want to ask you, you're right, in the 1950s artificial neural networks, or AI, the concept, sort of was invented I guess, and then in the late 70s and early 80s it was heavily hyped. It kind of died in the late 80s and the 90s, you never heard about it even in the early 2000s. Why now, why is it here now? Is it because IBM's putting so much muscle behind it? Is it because we have Siri? What is it that has enabled that? >> Well I wish that IBM putting muscle behind a technology could launch anything to success. And we've done a lot of things in that regard. But the thing is, if you look back at the historical progress of AI, I mean, it's older than me and you in terms of when it got going in the middle 50s as a passion or a focus of computer scientists. What we had for most of the last half century is AI as expert systems that were built on essentially programming: declarative rules defining how AI systems could process data under various scenarios. That didn't prove scalable. It didn't prove agile enough to learn on the fly from the statistical patterns within the data that you're trying to process. For face recognition and voice recognition, pattern recognition, you need statistical analysis, you need something along the lines of an artificial neural network that doesn't have to be pre-programmed. That's what's new since the turn of this century: AI has become predominantly focused not so much on the declarative rules of the expert systems of old, but on statistical analysis, artificial neural networks that learn from the data. See, in the long historical sweep of computing, we have three eras of computing. The first era, before the second world war, was all electromechanical computing devices; IBM's start, of course, like everybody's, was in that era. The business logic was burned into the hardware, as it were. The second era, from the second world war really to the present day, is all about software, programming, it's COBOL, Fortran, C, Java, where the business logic has to be developed, coded, by a cadre of programmers. Since the turn of this millennium, and really since the turn of this decade, it's all moved towards the third era, which is the cognitive era, where you're learning the business rules automatically from the data itself, and that involves machine learning at its very heart. So most of what has been commercialized and most of what is being deployed in the real world as working, successful AI is all built on artificial neural networks and cognitive computing in the way that I laid out. You still need human beings in the equation; it can't be completely automated. There are things like unsupervised learning that take the automation of machine learning to a greater extent, but the bulk of machine learning is still supervised learning, where you have training data sets and you need experts, data scientists, to manage that whole process. Over time, the question for supervised learning becomes who's going to label the training data sets, especially when you have so much data flooding in from the internet of things and social media and so forth. A lot of that is being outsourced to crowdsourcing environments in terms of the ongoing labeling of data for machine learning projects of all sorts. That trend will continue apace. So less and less of the actual labeling of the data for machine learning will need to be manually done by data scientists or data engineers. >> So the more data the better.
See, I would argue... you're going to disagree with that, which is good. Let's have a discussion. [Jim laughs] In the enablement pie, I would say the profundity of Hadoop was two things. One is I can leave data where it is and bring code to data. >> [Jim] Yeah. >> Five megabytes of code to a petabyte of data, but the second was the dramatic reduction in the cost to store more data, hence my statement of the more data the better, but you're saying, meh, maybe not. Certainly for compliance and other things you might not want to have data lying around. >> Well it's an open issue. How much data do you actually need to find the patterns of interest to you, the correlations of interest to you? A sampling of your data set, a 10% sample or whatever, in most cases that might be sufficient to find the correlations you're looking for. But if you're looking for some deeply hidden, rare nuances, in terms of anomalies or outliers or whatever within your data set, you may only find those if you have a petabyte of data on the population of interest. But if you're just looking for broad historical trends and to do predictions against broad trends, you may not need anywhere near that amount. I mean, even if it's a large data set, you may only need a five to 10% sample. >> So I love this conversation because people have been on the CUBE, Abi Metter for example said, "Dave, sampling is dead." Now a statistician said that's BS, no way. Of course it's not dead. >> Storage isn't free, first of all, so you can't necessarily save and process all the data. Compute power isn't free yet, memory isn't free yet, so forth, so there's lots... >> You're working on that though. >> Yeah sure, it's asymptotically all moving towards zero. But the bottom line is, the underlying resources, including the expertise of your data scientists, are not free; these are human beings who need to make a living. So you've got to do a lot of things. A, automate functions on the data science side so that these experts can radically improve their productivity. Which is why the announcement today of IBM Machine Learning is so important: it enables greater automation in the creation and the training and deployment of machine learning models. As Rob Thomas indicated, it's very much a multiplier of the productivity of your data science teams, the capability we offer. So that's the core value. Because our customers live and die increasingly by machine learning models. And the data science teams themselves are highly inelastic in the sense that you can't find highly skilled people that easily at an affordable price if you're a business. And you've got to make the most of the team that you have and help them to develop their machine learning muscle. >> Okay, I want to ask you to weigh in on one of Stu's favorite topics, which is man versus machine. >> Humans versus mechanisms. Actually humans versus bots, let's, okay go ahead. >> Okay so, you know, there have been a lot of discussions about how machines have always replaced humans for jobs, but for the first time they're really beginning to replace cognitive functions. >> [Jim] Yeah. >> What does that mean for jobs, for skill sets? The greatest, I love the comment, the greatest chess player in the world is not a machine. It's humans and machines, but what do you see in terms of the skill set shift when you talk to your data science colleagues in these communities that you're building?
Is that the right way to think about it, that it's the creativity of humans and machines that will drive innovation going forward? >> I think it's symbiotic. If you take Watson, of course, that's a star case of a cognitive, AI-driven machine in the cloud. We use Watson all the time, of course, in IBM. I use it all the time in my job, for example. Just to give an example of one knowledge worker and how he happens to use AI and machine learning: Watson is an awesome search engine across multi-structured data types, in real time, enabling you to ask a sequence of very detailed questions, and Watson is a relevance ranking engine, all that stuff. What I've found is it's helped me as a knowledge worker to be far more efficient in doing my upfront research for anything that I might be working on. You see, I write blogs and I speak and I put together slide decks that I present and so forth. So if you look at knowledge workers in general, AI, driving far more powerful search capabilities in the cloud, helps us to eliminate a lot of the grunt work that normally attended doing deep research into a preexisting knowledge corpus. And that way we can then ask more questions, and more intelligent questions, and really work through our quest for answers far more rapidly, and entertain and rule out more options when we're trying to develop a strategy. Because we have all the data at our fingertips, and we've got this expert resource, increasingly in a conversational back and forth, that's working on our behalf predictively to find what we need. So if you look at that, everybody who's a knowledge worker, which is really the bulk now of the economy, can be far more productive, cause you have this high performance virtual assistant in the cloud. I don't know that AI or deep learning or machine learning is really going to eliminate a lot of those jobs. It'll just make us far smarter and more efficient doing what we do. That's, I don't want to belittle, I don't want to minimize the potential for some structural dislocation in some fields. >> Well it's interesting because as an example, you're already productive, now you become this hyper-productive individual, but you're also very creative and can pick and choose different toolings, and so I think for people like you there are huge opportunities. If you're a person who used to put up billboards, maybe it's time for retraining. >> Yeah, well, maybe a lot of the people like the research assistants and so forth who would support someone like me in most knowledge worker organizations, maybe those people might be displaced cause we would have less need for them. In the same way that one of my very first jobs out of college, before I got into my career, I was a file clerk in a court in Detroit, it's like you know, a totally manual job, and there was no automation or anything. You know, most of those functions, I haven't revisited that court in recent years, I'm sure are automated, because you have this thing called computers, especially PCs and LANs and so forth that came along since then. So a fair amount of those kinds of feather-bedding jobs have gone away in any number of bureaucracies due to automation, and machine learning is all about automation. So who knows where we'll all end up. >> Alright well we got to go but I wanted to ask you about... >> [Jim] I love unions by the way. >> And you got to meet a lot of lawyers I'm sure.
>> So I've got to ask you about the community of data scientists that you're building. You've been early on in that; it's a persona you've really tried to cultivate and collaborate with. So give us an update there. What's the latest? What's your effort like these days? >> Yeah, well, I'm on a team now that's managing and bringing together all of our community engagement programs across the portfolio, not just for data scientists. That involves meetups and hackathons and developer days and user groups and so forth. These are really important professional forums for our customers, our developers, and our partners to get together, share their expertise, and provide guidance to each other. They're very important for helping these people get better at what they do and stay up to speed on the latest technologies, like deep learning and machine learning. So we take it very seriously at IBM that communities are really where customers can realize value and grow their human capital on an ongoing basis. We're making significant investments in growing those efforts, bringing them together in a unified way, and making it easier for developers and IT administrators to find the right forums, the right events, and the right content within IBM channels to help them do their jobs effectively. Machine learning is at the heart not just of data science but of other professions within the IT and business analytics universe, which now rely more heavily on machine learning, and understanding the tools of the trade is essential to being effective in those jobs. So we're educating our communities on machine learning and why it's so critically important to the future of IT. >> Well, your content machine turns out great content, so congratulations on not only kicking that off but continuing it. Thanks, Jim, for coming on theCUBE. It's good to see you. >> Thanks for having me. >> You're welcome. Alright, keep it right there, everybody, we'll be back with our next guest. TheCUBE, we're live from the Waldorf-Astoria in New York City at the IBM Machine Learning Launch Event. We'll be right back. (techno music)