Richard Henshall, Red Hat | AnsibleFest
>> Welcome. My name is Richard Henshall. I'm a senior manager of product management for Ansible Automation Platform. Think to yourself: how did you adapt to the changes this year? How was your team forced to adapt? Were you prepared, and had you been automating already? Speaking for the Ansible team, we're ready to move forward, and we suspect that sentiment is shared by many of us here. We just had a good lesson in why being able to adapt quickly is so important. The previous ways of working may not always be available to us, and we have to change the way we focus and look at things. And this is why I have such a strong belief in the power that automation can gift us. Let's remind ourselves of the goal of automation, which, to put it very simply, is to perform work with minimal human interaction. On one hand, this sounds great: no work. But it can also seem very impersonal. And the reality is that automation cannot be achieved without knowledge and experience, because what we automate is what we have learned. So much of what we do is specific to our circumstances, to our business, or to our own personal backgrounds. So how we approach automation is also important. And that's why this year's message, "Automate to connect," is relevant to the times we find ourselves in. As a rhetorical question, and of course all of these are rhetorical questions, I'm sat in a room at my house, staring at a camera, I would next ask you: why do we need to connect? And what do we connect for? Do we connect to share knowledge, to learn from others, to work on common goals and objectives? The reality is it should be all of these. When we connect, from a work perspective, the intent needs to be about collaboration. Collaboration is essential to how we approach and deal with change. When we talk about change, we often see it explained as people, process, and technology. But when we're forced to change by unexpected circumstances, you can't always be prepared.
You're not always given the time to plan and prepare the way you'd like. So having a way to connect, to build relationships, and to collaborate is more important than ever. Back in the days when I was learning my trade, middleware engineering, before the endless video calls, presentations, and spreadsheets, the most difficult relationship to improve was between us in engineering and the network team. And it wasn't because of the skills, and it wasn't because we didn't like each other, at least I'd like to think so. And it wasn't for lack of trying. It's because the network team were on a different floor: big security door, magnetic locks, special key cards that you needed for access. They were aggressively protected so they couldn't be interfered with. There wasn't the opportunity to build relationships in the same way we could when we went and collaborated with the Linux, Windows, or storage teams. You couldn't wander off to discuss a problem or just have a chat; they were locked away. Now, maybe they liked that, and sometimes it's good to be locked away, but it forms a barrier. And it's a barrier to collaboration. And so with this group, collaboration required meetings, it required planning, and this made it harder. And when something's hard, it's easier not to do it. Additionally, we didn't have a platform to help us. So ask yourself: does that sound familiar to your circumstances? What we needed to connect those relationships, and we've seen this time and time again, is a consistent technology foundation for automation: a foundation that encourages simplicity for collaboration, a foundation to connect the people, process, and technology, and a foundation to help us build trust in those relationships. If we'd had that foundation, that platform, we could have been successful much faster. 'Cause it's important we understand that success depends on trust between groups.
To be successful in adapting to change, we need to know we can trust each other when the situation may not be perfect. It might be different offices, could be different countries, probably different languages, maybe even different objectives between these different groups. It might be a global pandemic, which is a phrase I never thought I would say in a keynote. But connecting with your colleagues, collaborating, and therefore participating in the work that's done, working as a wider team, enables you to see a broader perspective. Because how else do we trust, unless we understand each other? How do we trust what we can create, who has created it, and whether it's up to standard? How do we trust what's running where, and who's been running it, so that we can scale with the correct control? And how do we trust that we can engage, removing friction and complexity? We can do all these things by being given the opportunity to participate, to be included in the overall process. Ultimately, how do we participate to achieve our goals? And what goals do we choose? Your goals are your business challenges: automate what makes both your business and IT successful, because participation is key to that process. And the more people you can bring together to connect, the more benefit you can achieve. If we've connected and collaborated, we trust what's being produced. Because automation can be a selfish act: I, the individual, do something to make my job easier. But you should think of automation as a gift of knowledge and experience. How can you automate your job to make your colleagues' lives easier? So, knowing that participation enables collaboration, how do we help you collaborate? Well, with Ansible, the language of collaboration. And to collaborate, we need to connect. And for that, we have the Ansible Automation Platform. Everything I've described so far is drawn from our collective experience with customers.
When Ansible the tool was released, it started as a way to perform automation in a simpler way. As your needs changed, we added more domains, and then your needs changed again. Complexity and scale surfaced a different set of challenges for us to look after. Not only did you do the automation, you needed to do more automation as you achieved some successes, and afterwards you had to manage all that automation. To be successful, we have observed, it's not just what you do, it's how and where you do it. It's not just about the tool. It's about the structure, the framework, a focal point, and a user experience for maintaining your automation assets. And this is why we focused all of our product offerings into Ansible Automation Platform, a single offering for enterprise-grade automation. We've supported your changes in the past, and we've been working to support your changes for the future, to help you adapt and connect. Now, if Ansible is the language of collaboration, then collections, Ansible Content Collections, are the building blocks of how you simplify the connection of your trusted technologies. Last year, we launched collections as a way to improve the management of content distributed within the Ansible project and the Ansible products. The teams involved have been busy making this happen over the last 12 months, working with our community and partners to migrate over 4,000 modules. This work culminated this summer with the Ansible Collections 1.0 release. Last AnsibleFest we unveiled certified platforms with the Ansible certified partner program: end-to-end support for Ansible content between Red Hat and our trusted partners. We now have over 50 certified platforms focused on curated enterprise technology domains, the platforms that you use and rely upon, because connecting these domains is connecting your teams. And talking about connecting teams, I'm sure that your planning has already started on cloud native adoption.
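To make the collections idea concrete, here is a minimal, hypothetical playbook sketch. The collection and module names (a certified Cisco IOS collection) are illustrative assumptions, not content from the keynote:

```yaml
# Hypothetical sketch: modules distributed in collections are addressed by
# fully qualified collection name (FQCN), <namespace>.<collection>.<module>.
# The cisco.ios collection and ios_config module here are illustrative.
- name: Configure a network device with certified collection content
  hosts: network_devices
  gather_facts: false
  tasks:
    - name: Push a configuration line via a collection module
      cisco.ios.ios_config:
        lines:
          - hostname core-sw-01
```

The collection itself would be installed beforehand with something like `ansible-galaxy collection install cisco.ios`.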
Key to that cloud native journey and story are containers, and that brings its own set of changes to the way that we work. And we want to support you as you adapt to these changes. I assume most of you are aware that OpenShift is Red Hat's industry-leading container orchestration platform based on Kubernetes. And I'd like to announce the release of certified Ansible Content Collections for Red Hat OpenShift. Whether it be for augmenting provisioning, customizing cluster nodes, or day-two operations, collections give us the perfect opportunity to deliver these use cases and more. Because we know Red Hat customers have chosen, and trust, the Ansible Automation and OpenShift platforms to drive transformation programs. But the connection between these two platforms, and the teams that deliver them, has always required heavy implementation effort. We know that we need to move away from that implementation effort and move to product integration. The reality of evolving tech is that it's never all or nothing. If you're fortunate, you can deploy your cloud native application entirely on OpenShift. But what happens when we need to manage across clusters, or access existing infrastructure like networks or databases? We're excited to bridge traditional, container, and edge environments through Ansible automation, perhaps the only automation and container platform solution that is truly agnostic; Ansible just doesn't care whose platform you're running on. The new Ansible Resource Operator, which we deliver as part of Red Hat Advanced Cluster Management (ACM), is our answer. We're making the Ansible Automation Platform a first-class provider inside ACM, to enable call-outs to automation assets deployed on the automation platform, to make it easily accessible to container management workflows, and to connect two industry-leading technology platforms.
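As a sketch of what driving OpenShift from Ansible can look like, the task below uses the community kubernetes.core collection's `k8s` module to reconcile a resource against a cluster. The certified OpenShift collection exposes comparable modules, but the exact names here are assumptions:

```yaml
# Sketch: declaratively ensure a namespace exists on an OpenShift/Kubernetes
# cluster. kubernetes.core.k8s is the community module; the certified
# collection offers similar content. Cluster credentials are assumed to be
# available via the environment (e.g. KUBECONFIG).
- name: Manage a cluster resource from Ansible
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the demo namespace exists
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: automation-demo
```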
This integration enables our customers to identify and enforce policies, apply governance models consistently across multiple clusters, and deploy and scale complex applications across hybrid, multi-cluster environments. In the future, the Resource Operator will be available for any OpenShift-deployed service to integrate with the Ansible Automation Platform. And to find out more about this, be sure to check out Matt Jones' "Future of Ansible Automation" talk, as well as the ACM breakout sessions. Now, as collections are about connecting technology, and product integrations are about connecting process, we still need to think about connecting people. How do we ensure that users can find trusted content? While many users are happy to get content from Ansible Galaxy, we know that many enterprises are far less comfortable with that situation, and certainly not comfortable uploading privately developed content themselves. We also know that Galaxy isn't the only source of content for you to use. There are other source control repositories, other locations, perhaps even file shares, where you allow your teams to collaborate and connect. With all these different sources it can be hard for your users, your internal communities, to connect and trust that they're using approved content. So we want to connect teams, help them collaborate, have shared goals, and ensure trust in how they automate. We need to fill that gap. And that's why last year we launched Automation Hub on cloud.redhat.com, a trusted source to download certified Ansible content, supported as part of a Red Hat Ansible Automation Platform subscription. And this is where you access the collections for those 50 certified platforms I mentioned earlier. But that was only part one of the plan, because while we could provide a location for trusted content, that didn't bring together content from other sources. Earlier, I mentioned that collections were introduced to help with the management of automation content.
By adopting collections, you provide a path for automation developers to bring content together in a common location, allowing multiple teams to improve their time to value in their automation adoption journeys. But to connect internal communities of practice, we need to provide a focal point for all things related to automation content. And that's why we're pleased to announce that the private version of Automation Hub will be released as the content and knowledge management component of the Ansible Automation Platform: your privately hosted location for all your Ansible content, allowing you to curate which content is available from which sources, whether it's from Red Hat, the Ansible community, or developed internally. You now have control over which content you trust. Finally, this year we launched our third hosted service, at no additional cost to platform customers: the automation services catalog. The purpose of this service is to allow you to connect your business users, with rules-based governance and a simplified user experience, to the automation created and deployed via the platform. We're announcing a tech preview launch of the connecting technology to securely connect to your on-prem platform environments. It's based on a technology that's part of our future plans, and again, if you attend Matt Jones' "Future of Ansible Automation" talk, you'll hear more about what we're planning in this area. Because this year has been somewhat challenging, automation and Ansible have become more important to many individuals and organizations. So if I could leave you with one set of thoughts on adapting to the changes we face: keep things simple; participate in making automation happen and understand the problems to be solved, but always try to keep it simple. Evolve and scale as you connect your teams; as you grow and expand your automation, grow and expand the scale you're working at as you move forward.
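In practice, curation through a private hub flows through the standard collection tooling. A hypothetical `requirements.yml` pinning collections to a privately hosted source might look like this; the hub URL and the internal collection name are placeholders, not real endpoints:

```yaml
# requirements.yml (sketch): pull collections only from curated sources.
# The server URL is a placeholder for a private automation hub instance,
# and my_company.internal_tools is a hypothetical internal collection.
collections:
  - name: redhat.openshift
    source: https://hub.example.internal/api/galaxy/content/approved/
  - name: my_company.internal_tools
    version: ">=1.2.0"
```

Developers would then install with something like `ansible-galaxy collection install -r requirements.yml`, picking up only the content an administrator has approved.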
And collaborate to break down the silos between domains, and build the automation that makes change possible. Whether you're an Ansible expert or someone looking for a way to start, we have sessions we hope will inspire you to make your own changes, and sessions that will give you the knowledge of how to adapt for the future. Thank you, and happy automating.
Vijay Luthra, Northern Trust | Nutanix .NEXT 2018
(upbeat music) >> Announcer: Live from New Orleans, Louisiana, it's theCUBE, covering the .NEXT Conference 2018, brought to you by Nutanix. >> Welcome back to theCUBE, I'm Stu Miniman with my cohost Keith Townsend, and this is the Nutanix .NEXT conference in New Orleans. Happy to welcome to the program first-time guest Vijay Luthra, who's the Senior Vice President and global head of technology infrastructure services at Northern Trust. Vijay, great job on the keynote this morning. A lot of cool technologies that you're digging into, thanks for joining us. >> Well, thanks for having me. >> All right, so luckily it's easy, Northern Trust, we understand finance, but tell us a little bit about your organization, and at a high level, what are some of the biggest challenges that you're facing? >> Yep, yep, absolutely. So again, Northern Trust, a global financial services firm, primarily in the asset servicing, asset management, and wealth management business. Again, a 128-year history, built on some very sound principles around service, integrity, and expertise. One of the challenges we're facing is around growth. The firm is growing. Revenues are growing at a very, very healthy pace, especially when you compare that to our peers and competitors. And the challenge, number one, is how do we scale the business while managing the overall operating expenses, right? So we want to create some leverage in the business and we want the growth to be healthy growth. So that's challenge number one, and our CEO recently launched a value-for-spend program: as we grow, let's make sure that where we're spending, we're getting the value for the spend. >> Yeah, Vijay, can you just sketch out for us, you talk about growth and scale: how many locations, how many users are there that you have to wrap into this stuff? >> Yeah absolutely, so Northern Trust: about 18,200 employees, and I would say 50 to 60-plus locations.
I would say well distributed across the different regions, between the US, Asia-Pacific, and EMEA. Yeah, at a high level, that's how we're geographically dispersed. >> Okay great, and cloud. What does that mean to your organization? How does that fit in your role and across the company? >> So cloud, we've been on the cloud journey for several years now. We had a very mature virtualization strategy, which transitioned well into our private cloud strategy. We have enough scale internally where we said we should build an efficient private cloud, which, by the way, if you heard the keynote, was built 100% on converged and hyperconverged technologies. We were actually at the forefront when we adopted these technologies back in 2013, 2014. The goal back then was: how do we get efficiencies and scale from the private cloud by leveraging automation? You know, converged stacks are highly reliable because, for example, the patching is much better tested by the vendors. So on our journey to the cloud, that was essentially the first phase, when we transitioned from virtualization to private cloud, and since then we've actually built more value-added layers on top of that. Because the goal is not just to cater to traditional applications that leverage infrastructure as a service, but also to cater to some of the more contemporary cloud native applications that we're building, or even some of the containerized workloads. So we've built on top of that with PaaS and KaaS and software-defined components like SDN, and we are closely measuring the outcomes and the benefits. >> So Vijay, let's talk a little bit about that initial investment and decision. Northern Trust is well known for being conservative, amongst the most conservative of investment firms. I'm from Chicago so I have an affinity for Northern Trust in general. Back in 2013 it was not so simple; it's simple to look back now and say, oh yeah, right choice. Easy decision.
But in 2013, OpenStack was, you know, all the rage; building clouds based on these open source and freely available technologies was kind of the way to go. What made you guys take a look at hyperconverged and say, you know what, we're going to buck the trend and go with hyperconverged? Not many of your peers made that choice. >> Yes, that's a great question. So at Northern Trust we are heavily focused on outcome-based investment decisions. Within technology infrastructure, our mission is pretty straightforward: as we scale infrastructure, how do we continue to reduce total cost of ownership, improve time to market, and improve client experience, which is a very essential part of our decision-making process, and while we're doing that, not lose sight of reliability, stability, and security? So with those as our guiding principles, as we make major investments we take them through these lenses, and Nutanix hit all of them. So it was fairly straightforward. And again, some of the benefits, as you probably heard on stage, were phenomenal. We were able to increase capacity without increasing staff. We were able to reduce our build teams significantly. Reliability and stability improved drastically. For example, on our virtual desktop infrastructure, the number of internal client-generated incidents went down by 80 to 90%. So again, to answer your question: be outcome driven, have key metrics and measures for what you expect the outcomes to be, and then partner with the right firms to make sure you get the outcomes. >> So as we look at the next phase: infrastructure as a service, you guys seem to have that down, a mature virtualization practice, and you layer infrastructure as a service on top of that. Now we look into the next phase, KaaS and PaaS-type solutions. What are some of the major decision points, and what's guiding your decisions? >> So clearly, what's on the mind today of every head of infrastructure, or any organization, is how do we build a hybrid cloud strategy that is safe, secure, and does not lock you into a specific vendor: the right application, the right type of workload, and that's where we're focused now. We've got a few applications, but nothing that's production client-centric is in a public cloud from an IaaS perspective. But we think we've invested in the right components to allow us to now orchestrate safely and securely across multiple clouds. So that's what we're focused on now. >> Can you talk a little bit more about containerization? What's the experience been like working with Nutanix for those types of solutions? >> Yeah, so we were early adopters of containers, partnering with Docker, built on the Nutanix platform. We've been working with them over the last couple of years, and more recently, since they announced the Kubernetes integration, we are factoring that into the Docker environment. The goal with containerization is, again, back to those guiding principles, right? Lower cost of ownership, improved time to market, reliability, stability. We see an opportunity to consolidate Linux- and Windows-based workloads because of the efficiencies that containers bring, as well as to extend DevOps-like functionality to app teams that might not be looking to refactor into a cloud native, PaaS-like format; they could take advantage of containers to get a DevOps-like experience. And we're enhancing security as we move to containers; there are several things we're doing there. So point being, we're looking at it through the same lenses. >> So you threw out a couple of things. I heard DevOps in there. In your keynote, one of the things you talked about was a team going from 45 people (mumbles) down to 12. Maybe explain a little bit about ops in your company. What happened to all those other 33 people? >> Okay, quick question, two parts. On the infrastructure-as-a-service side, a few years ago we had a build team, mostly contractor driven, that we would use to build servers and deploy applications, extremely manually. With converged technologies and all the automation that we had deployed, that team is down to 14, 15 people, because a lot of the work has gone away, and our goal is to continue to fine-tune that. So that's infrastructure as a service. DevOps-wise, what we did was carve out a team of four or five of what Gartner calls versatilists: multiskilled, multidisciplinary resources, senior engineers focused on building out our DevOps practice on top of our platform as a service, and that has gone extremely well. You know, the team has very successfully onboarded hundreds of microservices as we rearchitect some of our applications. >> So Vijay, talk to us about the decision, and the capability, of being able to take monolithic applications that are not going to be refactored and move them to containers; there's a lot of debate on whether or not that's worth the trouble. But beyond that debate, talk to us about the importance of, and the reliance on, the capability that Nutanix will be bringing in with ACS 2.0. Are you guys looking to deploy that, or are you looking to manage Kubernetes and that capability of managing these traditional applications inside of Kubernetes? >> Yeah, so we're a few steps ahead of Nutanix. In one of my conversations with Sunil I was saying, man, I wish this feature had been available six months ago. So we are watching that development very closely; we are very interested in it. But because we have an existing Docker footprint, that's what we're leveraging for Kubernetes for now. >> Great, can you speak about Nutanix, the relationship you have with them, and their ecosystem? Things like secondary storage: how do you look at Nutanix as a partner, and what do you see as the maturity of their ecosystem? Any solutions you'd want to highlight there?
>> Yeah, I mean, clearly, just as Nutanix disrupted the primary storage market, there's an emerging market on the secondary storage side, and we're working with a couple of different companies to help us in that space. But on the Nutanix ecosystem itself: just because we are a few steps ahead, and we've got some Nutanix components and some third-party components as part of Nutanix, the fact that Nutanix is becoming a one-stop shop, as you can see with some of the announcements today, becomes very interesting from a total cost of ownership perspective. For someone like myself, the question is: if I took out a few vendors and focused on the Nutanix stack, what does that mean? At what point is it usable? When can we start migrating? Those are the types of things we're going to focus on now. >> So have you gotten into a situation, especially considering that you're at least six months or so ahead of Nutanix when it comes to container orchestration, where the infrastructure has gotten in your way at all? Because Nutanix can be opinionated in how they manage infrastructure. Has that opinionation, that opinion rather, gotten in your way as you look to go down your KaaS route? >> No, so far I would say no. I think a lot of their tooling is very intuitive, very easy to use. In fact, some of our engineers retrained themselves on Nutanix over a very short duration, with very minimal training, and that had to do with how simple the Nutanix stack was to use. >> One of the lines I love from your presentation: you said you run IT as a business. What advice do you give to your peers out there, learnings you've had, staying a little bit ahead of some of the general marketplace? >> Yeah, I think the key is, back to my initial comment, how do you build scale with an infrastructure? Meaning, how do you take on more workloads and new technologies while managing your operating expenses tightly? We have essentially done that extremely well over the last few years, where we have added capacity, absorbed a lot of growth, and introduced a lot of new technologies while keeping a very close eye on operating expenses. So I would say, if anything, when you run IT as a business, you take into account not just all the net adds; you have a program to consider optimizations. For example, we use a technology that helps us reduce our physical, VMware-based footprint. It helps us optimize and says, here's where you have some spare capacity. You, as leaders, and somebody in my position, should be willing and able to take out costs as well while taking on new technologies. I would say that's the key. >> So you're saying you guys are running IT as a business. What have been some of the KPIs showing success in that transformation? >> Okay, we closely watch our operating expenses, and we measure that as a percentage of total company operating expenses: what percentage are we of that? We closely watch time to market: how soon are we providing environments to app dev teams? We closely watch stability of the underlying platforms, the uptime of those platforms. So I think those have a direct impact on our internal clients as well as end clients. >> And then as a business, who are your competitors? >> As Northern Trust? >> No, as you run IT as a business. >> Yeah, that's a great question. I would say cloud providers are competitors. But to be honest, I shouldn't say that. In the new model I want the team to think about cloud as just another endpoint, and we need to be able to safely and securely deploy the right app in the right cloud; that's the end state we want to be in. So I wouldn't say they're competitors. I think we as a firm, or any firm, should get comfortable with being able to orchestrate and move the right workloads into the right data centers. >> All right, Vijay, want to give you the final word. You're out there looking at some of the new technologies.
What's exciting you, what's on your wishlist from the vendor community, maybe you can share? >> Personally, I'm very excited about AI in ops. I know people talk about AI in financial services and the other industries, but I think the application of machine learning and AI within data center operations is relevant. And there are many things we're doing in that space: a client-facing chatbot that integrates with Link, certain add-ons to Splunk that help you with machine learning and analyzing logs, and bots that help classify tickets and put them in the right queues at the right time. So we're looking at how to take advantage of those, again, to build scale, to lower the cost of ownership, to improve experience, etc., etc. So again, that's something I'm personally very, very excited about. >> All right, well, Vijay Luthra, really appreciate you sharing your story. Great success, and look forward to catching up with you-- >> All right, thank you. >> In the future. For Keith Townsend, I'm Stu Miniman. Lots more coverage here from the New Orleans Convention Center at .NEXT, Nutanix's conference 2018. Thanks for watching theCUBE. (techno music)
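For readers following the Docker-to-Kubernetes discussion above, the unit of deployment in Kubernetes is typically described by a Deployment manifest. The sketch below is generic; the names and the image are illustrative placeholders, not Northern Trust specifics:

```yaml
# Generic Kubernetes Deployment sketch; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 3                # run three instances of the containerized workload
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      containers:
        - name: legacy-app
          image: registry.example.internal/legacy-app:1.0
          ports:
            - containerPort: 8080
```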
Bryan Smith, Rocket Software - IBM Machine Learning Launch - #IBMML - #theCUBE
>> Announcer: Live from New York, it's theCUBE, covering the IBM Machine Learning Launch Event, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Welcome back to New York City, everybody. We're here at the Waldorf Astoria covering the IBM Machine Learning Launch Event, bringing machine learning to the IBM Z. Bryan Smith is here, he's the vice president of R&D and the CTO of Rocket Software, powering the path to digital transformation. Bryan, welcome to theCUBE, thanks for coming on. >> Thanks for having me. >> So, Rocket Software, Waltham, Mass.-based, close to where we are, but a lot of people don't know about Rocket, so pretty large company, give us the background. >> It's been around for, this'll be our 27th year. Private company, we've been a partner of IBM's for the last 23 years. Almost all of that is in the mainframe space, or we've focused on the mainframe space, I'll say. We have 1,300 employees, we call ourselves Rocketeers. It's spread around the world. We're really an R&D-focused company. More than half the company is engineering, and it's spread across the world on every continent and most major countries. >> You're essentially OEM-ing your tools, as it were. Is that right, no direct sales force? >> About half, there are different lenses to look at this, but about half of our go-to-market is through IBM with IBM-labeled, IBM-branded products. On the product side, we've always been the R&D behind the products. The partnership, though, has really grown. It's more than just an R&D partnership now; now we're doing co-marketing, we're even doing some joint selling to serve IBM mainframe customers. The partnership has really grown over these last 23 years from just being the guys who write the code to doing much more. >> Okay, so how do you fit in this announcement? Machine learning on Z, where does Rocket fit? >> Part of the announcement today is a very important piece of technology that we developed.
We call it data virtualization. Data virtualization is really enabling customers to open their mainframe to allow the data to be used in ways that it was never designed to be used. You might have these data structures that were designed 10, 20, even 30 years ago for a very specific application, but today they want to use it in a very different way, and so the traditional path is to take that data and copy it, to ETL it someplace else, so they can get some new use out of it or build some new application. What data virtualization allows you to do is to leave that data in place but access it using APIs that developers want to use today. They want to use JSON access, for example, or they want to use SQL access. But they want to be able to do things like join across IMS, DB2, and VSAM, all with a single query, using an SQL statement. We can do that across relational and non-relational databases. It gets us out of this mode of having to copy data into some other data store through this ETL process; we access the data in place. We call it moving the applications or the analytics to the data, versus moving the data to the analytics or to the applications. >> Okay, so in this specific case, and I have said several times today, as Stu has heard me, two years ago IBM had a big theme around the z13, bringing analytics and transactions together, and this sort of extends that. Great, I've got this transaction data that lives behind a firewall somewhere. Why the mainframe, why now? >> Well, I would pull back to what I said, where we see more companies and organizations wanting to move applications and analytics closer to the data. The data in many of these large companies, that core business-critical data, is on the mainframe, and so being able to do more real-time analytics without having to look at old data is really important. There's this term data gravity.
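The single-statement join across IMS, DB2, and VSAM described a moment ago can be sketched roughly as below. This is a hedged illustration rather than Rocket's actual interface: the view names, columns, and data are hypothetical, and Python's built-in sqlite3 stands in for the virtualization layer purely so the sketch is runnable.

```python
import sqlite3

# Stand-in for the data-virtualization layer: in a real deployment this
# connection would target the virtualization server, which maps IMS, DB2,
# and VSAM structures to relational views. sqlite3 is used here only so
# the sketch runs; every name below is made up for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical virtualized views over three physically different stores.
cur.execute("CREATE TABLE ims_customers (cust_id INTEGER, name TEXT)")
cur.execute("CREATE TABLE db2_accounts (cust_id INTEGER, balance REAL)")
cur.execute("CREATE TABLE vsam_orders (cust_id INTEGER, order_total REAL)")
cur.execute("INSERT INTO ims_customers VALUES (1, 'Acme Corp')")
cur.execute("INSERT INTO db2_accounts VALUES (1, 2500.0)")
cur.execute("INSERT INTO vsam_orders VALUES (1, 199.99)")

# One SQL statement joining what are physically three separate data
# stores -- the shape of access that data virtualization enables.
cur.execute("""
    SELECT c.name, a.balance, o.order_total
    FROM ims_customers c
    JOIN db2_accounts a ON a.cust_id = c.cust_id
    JOIN vsam_orders  o ON o.cust_id = c.cust_id
""")
row = cur.fetchone()
print(row)  # -> ('Acme Corp', 2500.0, 199.99)
```

The point is the shape of the access: one relational query over several unrelated stores, with no ETL copy step in between.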
I love the visual that presents in my mind: you have these different masses, these different planets if you will, and the biggest, most massive planet in that solar system really is the data, and so it's pulling the smaller satellites, if you will, into this planet or this star by way of gravity, because data's a new currency; data is what the companies are running on. We're helping in this announcement with being able to unlock and open up all mainframe data sources, even some non-mainframe data sources, and using things like Spark that's running on the platform, that's running on z/OS, to access that data directly without having to write any special programming or any special code to get to all their data. >> And the preferred place to run all that data is on the mainframe, obviously, if you're a mainframe customer. One of the questions I guess people have is, okay, I get that, it's the transaction data that I'm getting access to, but if I'm bringing transaction and analytic data together, a lot of times that analytic data might be in social media, it might be somewhere else, not on the mainframe. How do you envision customers dealing with that? Do you have tooling to help them do that? >> We do, so this data virtualization solution that I'm talking about is one that is mainframe resident, but it can also access other data sources. It can access DB2 on Linux and Windows, it can access Informix, it can access Cloudant, it can access Hadoop through IBM's BigInsights. Other feeds like Twitter, like other social media, it can pull that in. The case where you'd want to do that is where you're trying to take that data and integrate it with a massive amount of mainframe data. It's going to be much more highly performant by pulling this other, smaller amount of data in next to that core business data. >> I get the performance and I get the security of the mainframe, I like those two things, but what about the economics? >> Couple of things. One, when IBM ported Spark to z/OS, they did it the right way. They leveraged the architecture; it wasn't just a simple port recompiling a bunch of open source code from Apache, it was rewritten to be highly performant on the Z architecture, taking advantage of specialty engines. We've done the same with the data virtualization component that goes along with that Spark on z/OS offering, which also leverages the architecture. We actually have different binaries that we load depending on which machine architecture we're running on, whether it be a z9, an EC12, or the big granddaddy of a z13. >> Bryan, can you speak to the developers? I think about, you're talking about all this mobile and Spark and everything like that. There's got to be certain developers that are like, "Oh my gosh, there's mainframe stuff. I don't know anything about that." How do you help bridge that gap between where it lives and the tools that they're using? >> The best example is talking about embracing this API economy. And so, developers really don't care where the stuff is at, they just want it to be easy to get to. They don't have to code up some specific interface or language to get to different types of data, right? IBM's done a great job with z/OS Connect in opening up the mainframe to the API economy with ReSTful interfaces, and so with z/OS Connect combined with Rocket data virtualization, you can come through that same z/OS Connect path using all those same ReSTful interfaces, pushing those APIs out to tools like Swagger, which the developers want to use. And not only can you get to the applications through z/OS Connect, but we're a service provider to z/OS Connect, allowing them to also get to every piece of data using those same ReSTful APIs. >> If I heard you correctly, the developer doesn't even need to worry that it's on the mainframe, or speak mainframe, or anything like that, right? >> The goal is that they never do. They simply see in their tool-set, again like Swagger, that they have data as well as different services that they can invoke using these very straightforward, simple ReSTful APIs. >> Can you speak to the customers you've talked to? You know, there's certain people out in the industry, I've had this conversation for a few years at IBM shows; there's some part of the market that's like, oh, well, the mainframe is this dusty old box sitting in a corner with nothing new. And my experience has been, with containers and cool streaming and everything like that, oh well, you know, the mainframe did virtualization and Linux and all these things really early, decades ago, and is keeping up with a lot of these trends with these new types of technologies. What do you find in the customers, how much are they driving forward on new technologies, looking for that new technology and being able to leverage the assets that they have? >> You asked a lot of questions there. The types of customers: certainly financial and insurance are the big two, but that doesn't mean that we're limited and not going after retail and helping governments and manufacturing customers as well. What I find in talking with them is that there's the folks who get it and the folks who don't, and the folks who get it are the ones who are saying, "Well, I want to be able to embrace these new technologies," and they're taking things like open source, they're looking at Spark, for example, they're looking at Anaconda. Last week at the Anaconda Conference, we stepped on stage with Continuum and IBM, and we, Rocket, stood up there talking about this partnership that we formed to create this ecosystem, because the development world changes very, very rapidly. For a while all the rage was JDBC, or all the rage was component broker, and today Spark and Anaconda are really at the forefront of developers' minds.
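From the developer's side, the ReSTful access described above, plain HTTP plus JSON against a z/OS Connect-style endpoint, might look like the following sketch. The host, port, path, and payload are all made up for illustration, and only the request object is built here, since the endpoint is fictional.

```python
import json
import urllib.request

# Hypothetical z/OS Connect-style endpoint -- host and path are invented
# for this sketch; a real deployment would publish its own API paths
# (for example, via Swagger) for developers to discover.
base_url = "https://zosconnect.example.com:9443"
endpoint = base_url + "/customers/1001/accounts"

payload = json.dumps({"include": ["balance", "orders"]}).encode("utf-8")

req = urllib.request.Request(
    endpoint,
    data=payload,
    headers={"Content-Type": "application/json",
             "Accept": "application/json"},
    method="POST",
)

# Built but deliberately not sent, since the host above is fictional;
# a real call would be urllib.request.urlopen(req).
print(req.get_method(), req.full_url)
```

Nothing in the call reveals whether the data behind the endpoint lives in DB2, VSAM, or somewhere else entirely, which is the transparency being described.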
We're constantly moving to keep up with developers because that's where the action's happening. Again, they don't care where the data is housed as long as you can open that up. We've been playing with this concept that came up from some research firm called two-speed IT, where you have maybe your core business that has been running for years, and it's designed to really be slow-moving, very high quality; it keeps everything running today. But they want to embrace some of their new technologies, they want to be able to roll out a brand-new app, and they want to be able to update that multiple times a week. And so, this two-speed IT says you're kind of breaking 'em off into two separate teams. You don't have to take your existing infrastructure team and say, "You must embrace every Agile and every DevOps type of methodology." What we're seeing customers be successful with is this two-speed IT where you can fracture these two, and now you need to create some nice integration between those two teams, so things like data virtualization really help with that. It opens up and allows the development teams to very quickly access those assets on the mainframe, in this case, while allowing those developers to very quickly crank out an application where quality is not that important, where being very quick to respond and doing lots of A/B testing with customers is really critical. >> Waterfall still has its place. As a company that predominantly, or maybe even exclusively, is involved in mainframe, I'm struck by, it must've been 2008, 2009, Paul Maritz comes in and he says VMware's vision is to build the software mainframe. And of course the world said, "Ah, mainframe's dead," we've been hearing that forever. In many respects, I credit VMware; they built sort of a form of software mainframe. But now you hear a lot of talk, Stu, about going back to bare metal. You don't hear that talk on the mainframe.
Everything's virtualized, right, so it's kind of interesting to see, and IBM uses the language of private cloud. The mainframe is, we're joking, the original private cloud. My question is, your strategy as a company has always been focused on the mainframe, and going forward I presume it's going to continue to do that. What's your outlook for that platform? >> We're not exclusively mainframe, by the way. We're not, we have a good mix. >> Okay, I'm overstating that, then. It's half and half or whatever. You don't talk about it, 'cause you're a private company. >> Maybe a little more than half is mainframe-focused. >> Dave: Significant. >> It is significant. >> You've got a large proportion of the company on mainframe, z/OS. >> So we're bullish on the mainframe. We continue to invest more every year. We increase our investment every year, and in a software company, your investment is primarily people. We increase that by double digits every year. We have license revenue increases in the double digits every year. I don't know many other mainframe-based software companies that have that. But I think that comes back to the partnership that we have with IBM, because we are more than just a technology partner. We work on strategic projects with IBM. IBM will oftentimes stand up and say Rocket is a strategic partner that works with us on hard problems, solving customer issues every day. We're bullish, we're investing more all the time. We're not backing away, we're not decreasing our interest or our bets on the mainframe. If anything, we're increasing them at a faster rate than we have in the past 10 years. >> And this trend of bringing analytics and transactions together is a huge mega-trend, I mean, why not do it on the mainframe? If the economics are there, which you're arguing that in many use cases they are, because of the value component as well, then the future looks pretty reasonable, wouldn't you say? >> I'd say it's very, very bright. At the Anaconda Conference last week, I was coming up with an analogy for these folks. It's just a bunch of data scientists, right, and during most of the breaks and the receptions, they were just asking questions: "Well, what is a mainframe? I didn't know that we still had 'em, and what do they do?" So it was fun to educate them on that. But I was trying to show them an analogy with data warehousing, where, say, in the mid-'90s it was perfectly acceptable to have a data warehouse separate from your transaction system. You would copy all this data over into the data warehouse. That was the model, right, and then slowly it became more important that the analytics or the BI against that data warehouse was looking at more real-time data. So then it became about efficiencies: how do we replicate this faster, and how do we get closer to looking not at week-old data but day-old data? And so, I explained that to them and said the days of being able to do analytics against old, copied data are going away. As for ETL, we're also bold enough to say that ETL is dead. ETL's future is very bleak. There's no place for it. It had its time, but now it's done, because with data virtualization you can access that data in place. I was telling these folks, these data scientists, as they talked about how they look at their models, that their first step is always ETL. And so I told them this story, I said ETL is dead, and they just looked at me kind of strange. >> Dave: Now the first step is load. >> Yes, there you go, right, load it in there. But having access from these platforms directly to that data, you don't have to worry about any type of delay. >> What you described, though, is still a common architecture, where you've got, let's say, a Z mainframe with an InfiniBand pipe to some Exadata warehouse or something like that, and so IBM's vision was, okay, we can collapse that, we can simplify that, consolidate it. SAP with HANA has a similar vision, we can do that. I'm sure Oracle's got their vision. What gives you confidence in IBM's approach and its legs going forward? >> Probably the advances that we see in z/OS itself in handling mixed workloads, which it's been doing for most of the 50 years that it's been around: being able to prioritize different workloads, not just at CPU dispatching, but also at memory usage and I/O, all the way down through the channel to the actual device. You don't see other operating systems that have that level of granularity for managing mixed workloads. >> And the security component, that's what to me is unique about this so-called private cloud, and I say, I was using that software mainframe example from VMware in the past, and it got a good portion of the way there, but it couldn't get that last mile, which is any workload, any application, with the performance and security that you would expect. It just never quite got there. I don't know if the pendulum is swinging, I don't know if that's the accurate way to say it, but it's certainly stabilized, wouldn't you say? >> There's certainly new eyes being opened every day, saying, wait a minute, I could do something different here. Muscle memory doesn't have to guide me in doing business the way I have been doing it before, and that's the muscle memory I'm talking about with this ETL piece. >> Right, well, a large number of workloads on the mainframe are running Linux, right, you've got Anaconda, Spark, all these modern tools. The question you asked about developers was right on. If it's independent or transparent to developers, then who cares, that's the key. That's the key lever in this day and age: the developer community. You know it well. >> That's right. Give 'em what they want. They're the customers, they're who the infrastructure's being built for. >> Bryan, we'll give you the last word, bumper sticker on the event, Rocket Software, your partnership, whatever you choose. >> We're excited to be here, it's an exciting day to talk about machine learning on z/OS. I say we're bullish on the mainframe, we are, we're especially bullish on z/OS, and that's what this event today is all about. That's where the data is, that's where we need the analytics running, that's where we need the machine learning running, and that's where we need to get the developers to access the data live. >> Excellent, Bryan, thanks very much for coming to theCUBE. >> Bryan: Thank you. >> And keep right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from New York City. Be right back. (electronic keyboard music)