

Nathan Hall, Pure Storage | Veritas Vision Solution Day


 

>> From Tavern on the Green in Central Park, New York, it's theCUBE. Covering Veritas Vision Solution Day, brought to you by Veritas. >> Welcome back to New York City, everybody. We're here in the heart of Central Park at Tavern On the Green, a beautiful facility. I'm surrounded by Yankee fans so I'm like a fish out of water. But that's okay, it's a great time of the year. We love it, we're still in it up in Boston so we're happy. Dave Vellante here, you're watching theCUBE, the leader in live tech coverage. Nathan Hall is here, he's the field CTO at Pure Storage. Nathan, good to see you. >> Good to see you too. >> Thanks for coming on. >> Thanks. >> So you guys made some announcements today with Veritas, what's that all about? >> It's pretty exciting. Veritas being the market leader in data protection software, our customers are now able to take Veritas's NetBackup software and use it to drive the policy engine of snapshots for our FlashArrays. They're also able to take Veritas and back up our data hub, which is our new strategy with FlashBlade to really unify all of data analytics onto a single platform. So Veritas, with NetBackup, really is the solution that's able to back up all the workloads, and Pure is the solution that's able to run all the workloads. >> So what if I could follow up on that, maybe push you a little bit? A lot of these announcements that you see, we call them Barney deals, I love you, you love me, we go to market together and everything's wonderful. Are we talking about deeper integration than that or is it just kind of a press release? >> Absolutely deeper integration. So you'll see not just how-to guides, white papers, et cetera, but there's actual engineering-level integration that's happening here. We're available as an advanced disk target within NetBackup, we've integrated into CloudPoint as well. We certify all of our hardware platforms with Veritas. So this is deep, deep engineering-level integration. >> Yeah, we're excited about Pure, we've followed you guys since the early days. You know, we saw Scott Dietzen, what he built, a very impressive modern architecture, you won't be a legacy for 20, 25 years, so you've got a lot going for you. Presumably it's easier to integrate with such a modern architecture, but now at the same time you've got to integrate with Veritas, which has been around for about 25 years. We heard a lot about how they're investing in API-based architectures, and microservices, and containers and the like, so what is that like in terms of integrating with a 25-year-old company? >> Well, I think from Pure's perspective we are API first, we're RESTful APIs first. We've done a ton of integrations across multiple platforms, whether it's Kubernetes, Docker, VMware, et cetera, so we have a lot of experience in terms of how to integrate with various flavors of other infrastructure. I think Veritas has done a lot of work as well in terms of maturing their API to really be this kind of cloud-first type of API, this RESTful API, and that made our cross-integration much easier. >> You guys like being first, there were a number of firsts, you guys were kind of the first, or one of the first, with flash for block. You were kind of the first for file. You guys have hit AI pretty hard, everybody's now doing that. You guys announced the first partnership with NVIDIA, everybody's now doing that. (laughs) You guys announced giving away NVMe as part of the stack for no upcharge, everybody's now doing that. So, you like to be first.
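Hall's "API first, RESTful APIs first" point is the substance behind the engineering-level integration described above: a backup policy engine can drive array snapshots with nothing more than authenticated REST calls. The sketch below is a minimal illustration of that pattern, not Pure's documented FlashArray API; the base URL, endpoint path, payload fields, token handling, and volume names are all assumptions.

```python
import requests

# Hypothetical endpoint and payload shapes, sketched for illustration only;
# they are not taken from Pure's documented FlashArray REST API.
ARRAY_API = "https://flasharray.example.com/api/storage"
HEADERS = {"Authorization": "Bearer replace-with-a-real-api-token"}

def take_snapshot(volume: str, suffix: str) -> dict:
    """Ask the array for a point-in-time snapshot of one volume.

    A backup policy engine (the role NetBackup plays in this announcement)
    would make a call like this on the schedule and retention policy it
    already manages, rather than moving the data itself.
    """
    resp = requests.post(
        f"{ARRAY_API}/volumes/{volume}/snapshots",
        json={"suffix": suffix},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()   # surface HTTP errors back to the policy engine
    return resp.json()        # snapshot metadata returned by the array

if __name__ == "__main__":
    print(take_snapshot("oracle-prod-vol01", "nbu-2018-10-11"))
```

Because array-side snapshots are typically metadata-only operations, a policy engine can call an endpoint like this as often as its schedule requires without copying data, which is what makes the "drive the policy engine of snapshots" integration practical.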
Culturally, you've worked at some other companies, what's behind that? >> Well, culturally, this is the best company I've worked at in terms of culture, period, and really it all starts with the culture of the company. I think that's why we're first in so many places, and it's not just first in terms of first to market. It's really about first in terms of customer feedback. If you look at the Gartner Magic Quadrant, we've been in the Leaders quadrant for five years in a row. But this year, we're indisputably the leader. Furthest to the right on the X-axis, furthest north on the Y-axis, and that's all driven by just a customer-obsessed culture. We've got a Net Promoter Score of 86.6, which is stratospheric. It's something that puts us in the top 1% of all business-to-business companies, not just tech companies. So, it's really that culture of customer obsession that drives us to be first. Both to market, in a lot of cases, but also just first in terms of customer perception of our technology. >> You guys were first at really reaching escape velocity, the billion-dollar unicorn status, and now you're kind of having that flywheel effect where you're able to throw off different innovations in different areas. Can you talk more about the data hub and the relevance to what you're doing with Veritas and data protection? Let's unpack that a little bit. >> Sure, sure, the data hub. We had a great keynote this morning with Jyothi, the VP of Marketing for Veritas, and he had an interesting customer tidbit. He had some sort of unnamed government agency customer that actually gets penalized when they're unable to retrieve data fast enough. That's not something that many of our customers have, but they do get penalized in terms of opportunity costs. The reason why is 'cause customers just have their data siloed into all these different split-up locations, and that prevents them from being able to get insight out of that data. If you look at AI luminaries like Andrew Ng, or even people like Dominique Brezinski at Apple, they all agree that, in order to be successful with your data strategy, you have to unify these data silos. And that's what the data hub does. For the first time we're able to unify everything from data warehousing, to data lakes, to streaming analytics, to AI, and now even backup, all onto a single platform with multidimensional performance. That's FlashBlade and that is our data hub. We think it's revolutionary and we're challenging the rest of the storage industry to follow suit. Let's make fewer silos, let's unify the data into a data hub so that our customers can get real actionable information out of their data. >> I was on a crowd chat the other day, you guys put out an open letter to the storage community, an open challenge, so that was kind of both a little controversial but also some fun. That's a very important point you're making about sort of putting data at the core. I'll make an observation, it's not so much true about Facebook anymore 'cause after the whole fake news thing their market value dropped. But if you look at the top five companies in terms of market value, include Facebook in there, they and Berkshire keep doing this, but let's assume for a second that Facebook's up there. Apple, Google, Facebook, Microsoft, and Amazon, top five in terms of US market value. Of course markets ebb and they flow, but it's no coincidence that those are data companies. They don't have a lot of hard assets, those companies.
They've got data at their core, so it's interesting to hear you talk about the data hub, because one of the challenges that we see for traditional companies, call them incumbents, is they have data in stovepipes. For them to compete they've got to put it in the digital world, they've got to put data at their core. It's not just for start-ups and people doing greenfield, it's for folks that are established and don't want to get disrupted. Long-winded question: how do they get, let's think of a traditional company, an incumbent company, how do they get from point A to point B with the data hub? >> I think Andrew Ng has a great talking point on this. He basically talks about your data strategy, and you need to think about, as a company, how do you acquire data and then how do you unify it into a single data hub? It's not just around putting it on a single platform, such as FlashBlade. A valuable byproduct of that is dealing with all the stovepiped data: if your data scientists are trying to get access to 10 different stovepipes, you've got 10 different VPs that you have to go talk to in order to get access to that data. So it really starts with stopping the bleeding and starting to have a data strategy around how do we acquire data and how do we make certain we're storing it in the same place, and have a single unified data hub in order to maximize the value we are able to get out of that data. >> You know, I'll throw my two cents in, I talk to a lot of chief data officers. To me, the ones that are most insightful talk about their five imperatives. First of all, they've got to understand how data contributes to monetization. Whether it's saving money or making money, it's not necessarily selling your data. I think a lot of people make that mistake, oh I'm going to monetize my data, I mean I'm going to sell my data, no, it's all about how it contributes to value. The second is, what about data sources? And then how do I get access to data sources? There's a lot implied there in terms of governance and security and who has access to that. And at the same time, how do I scale up my business so that I get the right people who can act on that data? Then how do I form relationships with the line of business so that I can maximize that monetization? Those are, I think, sensible steps that aren't trivial. They require a lot of thought and a lot of cultural change, and I would imagine that's what a lot of your customers are going through right now. >> I think they are, and I think as IT practitioners out there, I think that we have a duty to get closer to our business and be able to kind of educate them around these data strategies. To give them the same level of insight that you're talking about, that you see in some chief data officers. There's a recent study on the Fortune 50 CXOs, and these aren't even CIOs. We think as IT practitioners that the cloud is the most disruptive thing that we see, but the CEOs and the CFOs are actually five times more likely to talk about AI and data as being more disruptive to their business. But most of them have no data strategy, most of them don't know how AI works. It's up to us as IT practitioners to educate the business. To say here's what's possible, here's what we have to do in order to maximize the value out of data, so that you can get a business advantage out of this. It's incumbent on us as IT leaders.
>> So Nathan, I think again, that's really insightful, because let's face it, if you're moving at the speed of the CIO, which is what many companies want to do, because that's the so-called fat middle and that's where the money is, you're behind. I mean, we're moving into a new era. The cloud era, no pun intended, is here, it's solid, but we're entering that era of machine intelligence, and we built the foundation with Hadoop even; there's a lot of data, now what do we do with it? We see, and I wonder if you could comment on this, the innovation engine of the future changing. It used to be Moore's Law, we marched to the cadence of Moore's Law for years. Now it's data, applying machine intelligence, and then, of course, using the cloud for scale and attracting start-ups and innovation. That's fine because we want to program infrastructure, we don't want to deploy infrastructure. If you think about Pure, you've got data for sure. You're going hard after machine intelligence. And cloud, if I understand your cloud play, you sell to cloud providers whether they're on-prem or in the public cloud. But what do you think about that innovation sandwich that I just described, and how do you guys play? >> Well, cloud is where we get over 30% of our revenue, so we're actually selling to the cloud, cloud service providers, et cetera. For example, one of the biggest cloud service providers out there, who I think today's announcement helps out a lot from a policy perspective, actually used FlashBlade to reduce their SLAs, to reduce their restore time from, I think it was 30 hours, down to 38 minutes. They were paying money before to their customers. What we see in our cloud strategy is one of empowering cloud providers, but also we think that cloud is increasingly, at the infrastructure layer, going to be commoditized, and it's going to be about how do we enable multicloud. So how do we enable customers to get around data gravity problems? I've got this big, weighty database that I want to see if I can move up to the cloud, but that takes me forever. So how do we help customers be able to move to one cloud, or even exit a cloud to another, or back to on-prem? We think there's a lot of value in applying our, for example, deduplication technology, et cetera, to helping customers with those data gravity problems, to making a more open world in terms of sharing data to and from the cloud. >> Great, well, we looked at Pure and Veritas getting together, doing some hard-core engineering, going to market, solving some real problems. Thanks, Nathan, for hanging out at this iconic, beautiful Tavern on the Green in the heart of New York City. Appreciate you coming on theCUBE. >> Thanks Dave. >> All right, keep it right there everybody, Dave Vellante. We'll be right back right after this short break. You're watching theCUBE from Veritas Solutions Day, #VeritasVision, be right back. (digital music)

Published Date : Oct 11 2018



Bryan Smith, Rocket Software - IBM Machine Learning Launch - #IBMML - #theCUBE


 

>> Announcer: Live from New York, it's theCUBE, covering the IBM Machine Learning Launch Event, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Welcome back to New York City, everybody. We're here at the Waldorf Astoria covering the IBM Machine Learning Launch Event, bringing machine learning to the IBM Z. Bryan Smith is here, he's the vice president of R&D and the CTO of Rocket Software, powering the path to digital transformation. Bryan, welcome to theCUBE, thanks for coming on. >> Thanks for having me. >> So, Rocket Software, Waltham, Mass.-based, close to where we are, but a lot of people don't know about Rocket, so pretty large company, give us the background. >> It's been around for, this'll be our 27th year. Private company, we've been a partner of IBM's for the last 23 years. Almost all of that is in the mainframe space, or we've focused on the mainframe space, I'll say. We have 1,300 employees, we call ourselves Rocketeers. It's spread around the world. We're really an R&D-focused company. More than half the company is engineering, and it's spread across the world on every continent and in most major countries. >> You're essentially OEM-ing your tools, as it were. Is that right, no direct sales force? >> About half. There are different lenses to look at this, but about half of our go-to-market is through IBM with IBM-labeled, IBM-branded products. For that side of the products, we've always been the R&D behind the products. The partnership, though, has really grown. It's more than just an R&D partnership now, now we're doing co-marketing, we're even doing some joint selling to serve IBM mainframe customers. The partnership has really grown over these last 23 years from just being the guys who write the code to doing much more. >> Okay, so how do you fit in this announcement? Machine learning on Z, where does Rocket fit? >> Part of the announcement today is a very important piece of technology that we developed. We call it data virtualization. Data virtualization is really enabling customers to open their mainframe to allow the data to be used in ways that it was never designed to be used. You might have these data structures that were designed 10, 20, even 30 years ago for a very specific application, but today they want to use them in a very different way, and so the traditional path is to take that data and copy it, to ETL it someplace else so they can get some new use out of it or build some new application. What data virtualization allows you to do is to leave that data in place but access it using APIs that developers want to use today. They want to use JSON access, for example, or they want to use SQL access. But they want to be able to do things like join across IMS, DB2, and VSAM, all with a single query, using an SQL statement. We can do that across relational databases and non-relational databases. It gets us out of this mode of having to copy data into some other data store through this ETL process; access the data in place, we call it moving the applications or the analytics to the data versus moving the data to the analytics or to the applications. >> Okay, so in this specific case, and I have said several times today, as Stu has heard me, two years ago IBM had a big theme around the z13 bringing analytics and transactions together, and this sort of extends that. Great, I've got this transaction data that lives behind a firewall somewhere. Why the mainframe, why now?
Well, I would pull back to what I said: we see more companies and organizations wanting to move applications and analytics closer to the data. The data in many of these large companies, that core business-critical data, is on the mainframe, and so being able to do more real-time analytics without having to look at old data is really important. There's this term data gravity. I love the visual that presents in my mind: you have these different masses, these different planets if you will, and the biggest, most massive planet in that solar system really is the data, and so it's pulling the smaller satellites, if you will, into this planet or this star by way of gravity, because data is the new currency, data is what the companies are running on. We're helping in this announcement with being able to unlock and open up all mainframe data sources, even some non-mainframe data sources, and using things like Spark that's running on the platform, that's running on z/OS, to access that data directly without having to write any special programming or any special code to get to all their data. >> And the preferred place to run all that data is on the mainframe, obviously, if you're a mainframe customer. One of the questions I guess people have is, okay, I get that, it's the transaction data that I'm getting access to, but if I'm bringing transaction and analytic data together, a lot of times that analytic data might be in social media, it might be somewhere else not on the mainframe. How do you envision customers dealing with that? Do you have tooling to do that? >> We do, so this data virtualization solution that I'm talking about is one that is mainframe resident, but it can also access other data sources. It can access DB2 on Linux and Windows, it can access Informix, it can access Cloudant, it can access Hadoop through IBM's BigInsights. Other feeds like Twitter, like other social media, it can pull that in. The case where you'd want to do that is where you're trying to take that data and integrate it with a massive amount of mainframe data. It's going to be much more highly performant by pulling this other small amount of data in, next to that core business data. >> I get the performance and I get the security of the mainframe, I like those two things, but what about the economics? >> Couple of things. One, IBM, when they ported Spark to z/OS, did it the right way. They leveraged the architecture; it wasn't just a simple port of recompiling a bunch of open source code from Apache, it was rewriting it to be highly performant on the Z architecture, taking advantage of specialty engines. We've done the same with the data virtualization component that goes along with that Spark on z/OS offering; it also leverages the architecture. We actually have different binaries that we load depending on which architecture of machine we're running on, whether it be a z9, an EC12, or the big granddaddy of a z13. >> Bryan, can you speak to the developers? I think about, you're talking about all this mobile and Spark and everything like that. There's got to be certain developers that are like, "Oh my gosh, there's mainframe stuff. I don't know anything about that." How do you help bridge that gap between where it lives and the tools that they're using? >> The best example is talking about embracing this API economy. And so, developers really don't care where the stuff is at, they just want it to be easy to get to.
They don't have to code up some specific interface or language to get to different types of data, right? IBM's done a great job with z/OS Connect in opening up the mainframe to the API economy with RESTful interfaces, and so with z/OS Connect combined with Rocket data virtualization, you can come through that same z/OS Connect path using all those same RESTful interfaces, pushing those APIs out to tools like Swagger, which the developers want to use. And not only can you get to the applications through z/OS Connect, but we're a service provider to z/OS Connect, allowing them to also get to every piece of data using those same RESTful APIs. >> If I heard you correctly, the developer doesn't need to even worry about the fact that it's on a mainframe, or speak mainframe, or anything like that, right? >> The goal is that they never do. They simply see in their toolset, again like Swagger, that they have data as well as different services that they can invoke using these very straightforward, simple RESTful APIs. >> Can you speak to the customers you've talked to? You know, there's certain people out in the industry, I've had this conversation for a few years at IBM shows, there's some part of the market that are like, oh, well, the mainframe is this dusty old box sitting in a corner with nothing new. And my experience has been, with the containers and cool streaming and everything like that, oh well, you know, the mainframe did virtualization and Linux and all these things really early, decades ago, and is keeping up with a lot of these trends with these new types of technologies. What do you find in the customers, how much are they driving forward on new technologies, looking for that new technology and being able to leverage the assets that they have? >> You asked a lot of questions there. The types of customers, certainly financial and insurance are the big two, but that doesn't mean that we're limited and not going after retail and helping governments and manufacturing customers as well. What I find in talking with them is that there's the folks who get it and the folks who don't, and the folks who get it are the ones who are saying, "Well, I want to be able to embrace these new technologies," and they're taking things like open source, they're looking at Spark, for example, they're looking at Anaconda. Last week, we just announced at the Anaconda Conference, we stepped on stage with Continuum, IBM, and we, Rocket, stood up there talking about this partnership that we formed to create this ecosystem, because the development world changes very, very rapidly. For a while, all the rage was JDBC, or all the rage was component broker, and so today it's Spark and Anaconda that are really in the forefront of developers' minds. We're constantly moving to keep up with developers because that's where the action's happening. Again, they don't care where the data is housed as long as you can open that up. We've been playing with this concept that came up from some research firm called two-speed IT, where you have maybe your core business that has been running for years, and it's designed to really be slow-moving, very high quality, it keeps everything running today, but they want to embrace some of these new technologies, they want to be able to roll out a brand-new app, and they want to be able to update that multiple times a week. And so, this two-speed IT says you're kind of breaking 'em off into two separate teams.
You don't have to take your existing infrastructure team and say, "You must embrace every Agile and every DevOps type of methodology." What we're seeing customers be successful with is this two-speed IT where you can fracture these two, and now you need to create some nice integration between those two teams, so things like data virtualization really help with that. It opens up and allows the development teams to very quickly access those assets on the mainframe, in this case, while allowing those developers to very quickly crank out an application where quality is not that important, where being very quick to respond and doing lots of A/B testing with customers is really critical. >> Waterfall still has its place. As a company that predominantly, or maybe even exclusively, is involved in mainframe, I'm struck by, it must've been 2008, 2009, Paul Maritz comes in and he says VMware, our vision is to build the software mainframe. And of course the world said, "Ah, the mainframe's dead," we've been hearing that forever. In many respects, I credit VMware, they built sort of a form of software mainframe, but now you hear a lot of talk, Stu, about going back to bare metal. You don't hear that talk on the mainframe. Everything's virtualized, right, so it's kind of interesting to see, and IBM uses the language of private cloud. The mainframe's, we're joking, the original private cloud. My question is, your strategy as a company has always been focused on the mainframe, and going forward I presume it's going to continue to be. What's your outlook for that platform? >> We're not exclusively mainframe, by the way. We're not, we have a good mix. >> Okay, I'm overstating that, then. It's half and half or whatever. You don't talk about it, 'cause you're a private company. >> Maybe a little more than half is mainframe-focused. >> Dave: Significant. >> It is significant. >> You've got a large proportion of the company on mainframe, z/OS. >> So we're bullish on the mainframe. We continue to invest more every year. We invest, we increase our investment every year, and so in a software company, your investment is primarily people. We increase that by double digits every year. We have license revenue increases in the double digits every year. I don't know many other mainframe-based software companies that have that. But I think that comes back to the partnership that we have with IBM, because we are more than just a technology partner. We work on strategic projects with IBM. IBM will oftentimes stand up and say Rocket is a strategic partner that works with us on solving hard customer issues every day. We're bullish, we're investing more all the time. We're not backing away, we're not decreasing our interest or our bets on the mainframe. If anything, we're increasing them at a faster rate than we have in the past 10 years. >> And this trend of bringing analytics and transactions together is a huge mega-trend, I mean, why not do it on the mainframe? If the economics are there, which you're arguing that in many use cases they are, because of the value component as well, then the future looks pretty reasonable, wouldn't you say? >> I'd say it's very, very bright. At the Anaconda Conference last week, I was coming up with an analogy for these folks. It's just a bunch of data scientists, right, and during most of the breaks and the receptions, they were just asking questions: "Well, what is a mainframe? I didn't know that we still had 'em, and what do they do?"
So it was fun to educate them on that. But I was trying to show them an analogy with data warehousing: say that in the mid-'90s it was perfectly acceptable to have a data warehouse separate from your transaction system. You would copy all this data over into the data warehouse. That was the model, right, and then slowly it became more important that the analytics or the BI against that data warehouse was looking at more real-time data. So then it became more about efficiencies: how do we replicate this faster, and how do we get closer to looking not at week-old data but day-old data? And so, I explained that to them and said the days of being able to do analytics against old data that's copied are going away. ETL, we're also bullish in saying that ETL is dead. ETL's future is very bleak. There's no place for it. It had its time, but now it's done, because with data virtualization you can access that data in place. I was telling these folks, these data scientists, as they're talking about how they look at their models, their first step is always ETL. And so I told them this story, I said ETL is dead, and they just looked at me kind of strange. >> Dave: Now the first step is load. >> Yes, there you go, right, load it in there. But having access from these platforms directly to that data, you don't have to worry about any type of a delay. >> What you described, though, is still a common architecture where you've got, let's say, a Z mainframe, it's got an InfiniBand pipe to some Exadata warehouse or something like that, and so, IBM's vision was, okay, we can collapse that, we can simplify that, consolidate it. SAP with HANA has a similar vision, we can do that. I'm sure Oracle's got their vision. What gives you confidence in IBM's approach and legs going forward? >> Probably due to the advances that we see in z/OS itself in handling mixed workloads, which it's just been doing for many of the 50 years that it's been around: being able to prioritize different workloads, not only at the CPU dispatching, but also at the memory usage, also at the I/O, all the way down through the channel to the actual device. You don't see other operating systems that have that level of granularity for managing mixed workloads.
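The "access the data in place" argument above is easy to picture as code. Under a data virtualization layer, the different mainframe sources show up as ordinary SQL tables, so an analytics job reads them live instead of waiting on an ETL copy. The sketch below is a hedged illustration of that pattern using Spark's standard JDBC reader; the connection URL, driver class, credentials, and table names are invented for the example and are not the product's actual configuration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-place-analytics").getOrCreate()

# Hypothetical connection details for a SQL/JDBC endpoint exposed by a data
# virtualization layer sitting in front of DB2, IMS, and VSAM. None of these
# values come from the product's documentation.
jdbc_opts = {
    "url": "jdbc:dv://mainframe.example.com:1200/MDSS",
    "driver": "com.example.dv.Driver",
    "user": "analyst",
    "password": "********",
}

# One SQL statement joins a relational table (DB2) with a virtualized
# non-relational source (VSAM); the query runs against live data, with no
# ETL copy into a separate warehouse.
query = """
    (SELECT o.order_id, o.amount, c.region
       FROM DB2_ORDERS o
       JOIN VSAM_CUSTOMERS c ON c.customer_id = o.customer_id) AS live_view
"""

df = (spark.read.format("jdbc")
      .option("dbtable", query)
      .options(**jdbc_opts)
      .load())

# The heavy lifting happens where the data lives; only the aggregated result
# comes back to the caller.
df.groupBy("region").sum("amount").show()
```

The design point is the one Smith makes about the first step no longer being extract or load: the join across a relational source and a virtualized non-relational one happens in a single SQL statement against current data.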
>> Bryan, we'll give you the last word, bumper sticker on the event, Rocket Software, your partnership, whatever you choose. >> We're excited to be here, it's an exciting day to talk about machine learning on z/OS. I say we're bullish on the mainframe, we are, we're especially bullish on z/OS, and that's what this even today is all about. That's where the data is, that's where we need the analytics running, that's where we need the machine learning running, that's where we need to get the developers to access the data live. >> Excellent, Bryan, thanks very much for coming to theCUBE. >> Bryan: Thank you. >> And keep right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from New York City. Be right back. (electronic keyboard music)

Published Date : Feb 15 2017

