Doug Matthews, Veritas | CUBE Conversation, July 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, I'm Stuart Miniman, and welcome to this episode of CUBE conversations. I'm here from our Boston area studio. Happy to welcome to the program Doug Matthews. He's the vice president of product management with Veritas, coming to us from Atlanta. Doug, thanks so much for joining us. Nice to see you. >> Hey, great to see you, Stuart, and thanks for having me today. >> Yeah, so Doug, obviously, 2020, there's a lot of change going on globally, a lot of things happening financially, but one of the ongoing changes that we've been watching, and that has had huge ripple effects, is of course the impact on cloud. So why don't you bring us in a little bit. Tell us what you work on, and how cloud has been impacting what's happening with data protection and resiliency in your world. >> Sure, so, Veritas Technologies is a brand with a long focus on data protection. And we are highly focused on protecting data regardless of where it lives - whether it lives on a customer's premises, or whether it lives in a public cloud architecture, or even in a cloud application. So, for us, this has been a transformational change as more and more people begin to adopt cloud services, as the work from home trend starts, and as we see a much higher emergence of ransomware. >> Yeah, so cloud of course is unevenly distributed, if you look at countries, if you look at industries, >> Right. >> I'm wondering what you're hearing from customers - what's kind of the 2020 snapshot of where we are with the overall cloud wave? >> Sure, yeah. What we're seeing is a much more rapid adoption of cloud services as businesses and organizations begin to wrestle with the fact that they can't bring people into the office. So with the work from home trend, access to resources needs to be delivered through cloud applications, even data centers. We're now beginning to see some supply chain hiccups that are causing the fulfillment of server orders to slow down. So customers are beginning to think more broadly about how cloud gives you agility, the operational ability to react to change. People are accelerating their adoption of cloud resources because they're almost being forced to. >> Yeah, is there anything specific you're seeing - are you getting any data, maybe with coronavirus, as to which services in the cloud are being adopted and what impact that's having on your customers? >> Yeah, so dramatic change, right. So for example, Azure cloud services are up something like 775%, which is just an astounding number; VDI, virtual desktops, up over 300%; and massive adoption of these cloud resources is just a continuing trend. >> Yeah, and how about from a data protection and security standpoint? Obviously, we've seen that the malicious attacks have increased, unfortunately, and when you have more people outside of the enterprise walls, there are more things we need to do to make sure that our data is secure. >> Yeah, absolutely. And we have without a doubt seen a rise in ransomware attacks and malware attacks. What's interesting to note is that increasingly the consumer is placing the blame for these attacks less on the perpetrators and more on the organization and business leaders.
For example, over 40% of consumers actually hold the business leader responsible for a ransomware attack that their business suffers, and (indistinct) percent would actually say that they would stop buying from an organization that has suffered a malware attack or been a victim of an attack. So the mindset here is no longer to place blame on the perpetrators, but on the business leader and owner that didn't protect their data in a way that kept the user from being exposed. >> Yeah, Doug, why don't you bring us inside and explain how Veritas is helping in these environments to protect our data? >> Yeah, so I think the first thing is, as business leaders begin to think about their cloud contract, they need to understand their SLAs and how those map to what that cloud provider is going to provide for them. We actually found, recently - we produced a report called the "Truth in Cloud Report," and in that report, we talked to cloud architects and business leaders, over 1600 of them responded - and one of the things that we found pretty interesting is that 85% of the respondents said that the cloud service provider is responsible for protecting their data, but that's completely disconnected from the actual fact that over 53% or so of those that responded actually had an SLA that was higher than what their cloud service provider would provide. So they believe it's supposed to be done by the cloud provider, but it isn't being done by the cloud provider to meet their needs. So people really need to think about and analyze who's protecting their data and how they're protected when they move into that cloud architecture. >> Yeah, I have to say I'm a little surprised to hear those results. The drumbeat that I've heard from the security industry for the last couple of years has been about the shared responsibility model. There have been some rather public and highly visible failures where, say, somebody made a false assumption that something would be turned on, and the cloud service providers have come back and said, "Hey, there are these things you need to do, and just because there's a lock on the door, if you don't lock it, we're not responsible for it." That is kind of the analogy I use. Shouldn't we, by 2020, when cloud is not new, have gotten through some of these rather basic understandings of who's responsible for what, and ultimately who needs to answer for these things? >> Yeah, I think we're still in that adoption life cycle - we mapped this as a hype cycle of our own. Where are people in the adoption of cloud? We believe that the classic cloud-first architects, probably 20 to 25% of organizations, have actually fully adopted cloud at this point and are aggressively adopting cloud, but there is such a rush now to get in from those business leaders and architects who haven't really taken the time to frame and understand things, that they're now being pulled along in this journey and rediscovering these issues. So we have to keep that drumbeat up as some of the cloud laggards, or more mainstream technology adopters, begin to adopt cloud, 'cause they haven't stayed aware. I completely agree with you. We've been talking about the shared responsibility model for a long time, but these survey results showed that it's still a problem. >> Doug, you make a great point.
You talk about companies having had to compress their cycles, and while normally they would have been able to really plan things, walk through what they were going to do, they're often rushing into things a little bit more. So what advice would you give other companies that have been dipping their toe but now are jumping into cloud, or need to accelerate what they're doing? What advice would you give to make sure that people don't get a little over their skis or do something that they're going to regret? >> Sure, so the first thing I would say is, have a recovery plan and make sure you rehearse it. Again, back to the point that the blame here is falling on business leaders, so don't get caught by it: make sure that you understand your recovery plan, make sure that you rehearse it and that it works. The second thing is, I would absolutely read the fine print of your contract and make sure that your required SLAs match up with what your cloud services provider provides, or you need to adopt technology that helps you adjust to make sure that you achieve that SLA. And then the final thing, as you're doing all this - so many people look at cloud for cost optimization as an outcome - make sure you don't overpay, because there are various levels of cloud storage; cloud storage is extremely expensive, cloud resources are expensive. Typically people think about the actual host itself, or the instance itself; make sure that you think about the storage as well. So use things like deduplication or lower tiers of storage to optimize your cost efficiency. >> All right, so Doug, as we mentioned earlier in the discussion, Veritas has been around for a while and it's really well understood how you help customers; help connect us as to what you're doing for the cloud specifically. >> Sure, so specifically for cloud, let's focus on an upcoming release. I think most people that are probably watching this are familiar with our product called NetBackup, the enterprise leader in data protection. NetBackup is designed to solve the data protection challenges across all infrastructure, whether it's your typical on premises infrastructure or new cloud architectures. So in these new cloud architectures, we've done things to make sure that you efficiently utilize cloud storage. So we do things like deduplication; we also control network bandwidth and make sure that you minimize your impact on network bandwidth, so you minimize your overall cost requirements associated with cloud data protection. The other thing that we're doing in this next release, which I think is really exciting, is we're going to take our CloudPoint solution and our Resiliency Platform solution - these solutions are designed to help customers efficiently recover in cloud, as well as do it in a very quick and automated fashion - and we're going to bake those into our NetBackup product. So the NetBackup consumer will automatically have access to these two new technologies that we've been developing for the last several years. So that's really exciting for us, to be including those with our NetBackup product. >> All right, and Doug, when we talk about cloud, is this supported across any cloud, or are there specific integrations that we should understand? Just where does this fit in the entire multicloud ecosystem?
>> Yeah, so the one other thing, again, about NetBackup being a platform: it supports over 1400 different data sources, over 800 different data targets, and that includes over 60 cloud providers, so it supports this broad ecosystem of cloud architecture. But where it makes sense, we always go deep. So we go deep with the traditional cloud providers, like AWS or Azure, and provide that deeper level of capability for those cloud providers. >> All right, great. What else should we know about what's new from Veritas's cloud offering? >> Yeah, I think when we build our cloud solutions, we focus on a four stage lifecycle of a customer. For example, we realize that customers want to migrate to the cloud, they want to protect their resources in cloud, they want to be able to recover when the time comes, and then they want to optimize their cloud footprint. So we tend to focus on those four pillars to achieve success for our customers. >> Yeah, a question on that. I think about moving to the cloud - there's a lot of discussion about how do I modernize my environment, and often it's, I move to the cloud, but then how do I really become cloud native, if you will. So I'm making updates and I'm making changes. If I think about backup traditionally, it was, let me get something, let me put it in place, and I'm going to run it that way for years. So how does Veritas make sure that as I'm modernizing, as I'm making changes, my data is still going to be protected no matter where I am along that journey? >> Sure, so I think as customers are migrating to and adopting cloud, the first station that they come to on the train is that lift and shift approach: we're going to take everything from on premises and we're going to move it to the cloud. So we have technologies that will help our customers do that with automated failback - they can set up the replication solution, push a button, and now they're up and running in cloud; hey, it didn't work, push the button and they're back down in their on premises environment, to adjust and do it when it makes sense and they're ready to make it work. So we have a fairly robust set of technologies that can help in that lift and shift process. The other thing that we provide, for those infrastructure as code folks, the ones that are further out, who are thinking, how do I natively build cloud based solutions - we have a very full suite of APIs so that the customer can implement their infrastructure as code requirements right there through that Swagger interface that you would expect, and deploy infrastructure as code environments in cloud, utilizing our enterprise class API. So we're purpose built to be able to help customers get to the cloud, and then also support those cloud applications that are built there natively. >> Yeah, Doug, I'm wondering, do you have either a customer example, maybe anonymized, you can share, or just any general cloud learnings about where your customers are and how Veritas is helping them? >> Sure, so one of the first things that we see customers try to accomplish is the move of their backup storage infrastructure into longterm storage in cloud. So they might use it as a replacement for tape, they might use it as a replacement for disk, and they want to live in the cloud environment.
So we have a capability - we call it CloudCatalyst - that moves data very efficiently from on prem into the cloud, keeps it deduplicated, optimizes it for wide area network transmission, and really efficiently moves that data into the cloud. And then, really, what's important is, once it gets into the cloud, it doesn't touch that data. So we have a large customer who's got over a couple of petabytes of data in Europe that wanted to make that migration to cloud. They were using another provider at the time, so we came in, and we were actually able to save them over 98% of their overall operational cost associated with moving and migrating that data, just based on this one capability. So that's a key element, right. As people are moving that data to cloud, make sure that it stays efficient, optimized, deduplicated, and stored efficiently. >> All right, Doug, I'll give you the final word. >> Yeah, I think my warning for customers is to make sure that they are well-protected with their data estate in cloud. Understand what your cloud service provider provides; make sure that your SLOs, your service level objectives, are going to be met by the technologies that you deploy in order to solve your cloud problems. And then think about things holistically: think about it first from the migration, then how you protect it, then, once you get there, what you do to recover - and make sure you test that. And then, once you've got everything thought through and ready to implement, make sure that you've optimized it to be efficient in its cost utilization and in its operations. >> All right, well, Doug Matthews, thank you so much for the updates. We really appreciate you sharing some important tips for customers as they go along their cloud journey. >> Thank you so much, Stuart. >> All right, I'm Stuart Miniman, and thank you for watching theCUBE. (gentle music)
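A quick aside on the deduplication Doug keeps coming back to: the reason it cuts wide area network cost so sharply is that only previously unseen blocks ever cross the wire. The sketch below is purely illustrative - CloudCatalyst is a proprietary product, and this toy uses fixed-size blocks where real systems use more sophisticated variable-length chunking - but it shows the basic mechanic of hashing blocks and transmitting only new ones.

```python
# Illustrative fixed-block deduplication; a toy, not Veritas code.
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks; count unique vs. total bytes."""
    seen = set()
    unique_bytes = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:  # only unseen blocks would cross the WAN
            seen.add(digest)
            unique_bytes += len(block)
    return unique_bytes, len(data)

if __name__ == "__main__":
    payload = b"ABCD" * 1024 * 256  # 1 MiB of highly repetitive content
    unique, total = dedup_blocks(payload)
    print(f"would transmit {unique} of {total} bytes "
          f"({100 * unique / total:.1f}%)")
```

On the repetitive sample payload the savings are extreme; on already-compressed or encrypted data, dedup ratios collapse, which is one reason backup streams are deduplicated before any encryption step.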
Anne Gentle, Cisco DevNet | DevNet Create 2019
>> Live from Mountain View, California, it's theCUBE! Covering DevNet Create 2019, brought to you by Cisco. >> Hi, welcome to theCUBE's coverage of Cisco DevNet Create 2019, Lisa Martin with John Furrier. We've been here all day, talking with lots of very inspiring, educational, collaborative folks, and we're pleased to welcome to theCUBE Anne Gentle, developer experience manager for Cisco DevNet. Anne, thank you so much for joining us on theCUBE today. >> Thank you so much for having me. >> So this event - everything's like a rockstar start this morning with Susie, Mandy, and the team with the keynotes, standing room only, I know, when I was walking out. >> I loved it, yes. >> Yes, there's a lot of bodies in here, it's pretty toasty. >> Yeah. >> The momentum that you guys have created, pun intended. >> Oh, yes. >> No, I can't take credit for that, but really, you can feel it - there's a tremendous amount of collaboration. This is your second Create? >> Second Create, yeah. So I've been with DevNet itself for about a year and a half, and started at Cisco about three years ago this month, but I feel like developer experience is one of my first loves, since I really started to understand how to advocate for the developer experience. So DevNet just does a great job of not only advocating within Cisco, but outside of Cisco as well, so we make sure that the developer's voice is heard. If there's some oddity with an API - which, you know, I'm really into API design, API style - we can kind of look at that first, and kind of look at it sideways and then talk to the teams: okay, is there a better way to think about this from a developer standpoint? >> It's great, I love the API love there, it's going around a lot here. DevNet Create has a cloud native vibe that's kind of integrating and cross-pollinating into DevNet, Cisco proper. You're no stranger to cloud computing's early days, and ecosystems that have formed naturally and grown - some morph, some go different directions. So you were involved in OpenStack, we know that, we've talked before about OpenStack - some great successes and restarts, with OpenStack ultimately settling into what it did, and the CNCF, the Cloud Native Computing Foundation, is kind of the cloud native OpenStack model. >> Yeah, yeah. >> You've seen the communities grow, and the market's maturing. >> Definitely, definitely. >> So what's your take on this, because it creates kind of the creator, builder side of it - we hear builder from Amazon. >> Yeah, I feel like we're able to bring together the standards. One of the interesting things about OpenStack was, okay, can we do open standards? That's an interesting idea, right? And so, I think that's partially what we're able to do here, which is share, open up about our experiences. You know, I just went to a talk recently where the former SendGrid advocate is now working more on the SDK side, and he's like, yeah, the travel is brutal, and so I just kind of graduated into maintaining seven SDKs. So, that's kind of wandering from where you were originally talking, but it's like, we can share with each other not only our hardships, but also our wins as well. >> API marketplaces is not a new concept - Apigee was acquired-- >> Yes. >> By a big company, we know that name, Google. But now it's not just application programming interface marketplaces - with containers and serverless, and microservices, >> Right. >> The role of APIs growing up to a whole other level is happening. >> Exactly.
>> This is where you hear Cisco, and frankly I'm blown away by this, at Cisco Live, that all the portfolio (mumbles) has APIs. >> True, yes, exactly. >> This is just a whole changeover, so, APIs - I just feel a whole other 2.0 or 3.0 level is coming. >> Absolutely. >> What's your take on this, because-- >> So, yeah, in OpenStack we documented, like, two APIs to start, and then suddenly we had 15 APIs to document, right? So, learn quick, get in there and do the work. And I think that that's what's magical about APIs: we're learning from our designs in the beginning, we're bringing our users along with us, and then, okay, what's next? So, James Higginbotham - I saw one of his talks today, he's really big in the API education community - is really looking towards what's next, so he's talking about different architectures, and event-driven things that are going to happen, and even talking about, well, what's after APIs? And I think that's where we're going to start to be enabled, even as end users. So, sure, I can consume APIs, I'm pretty good at that now, but what are companies building on top of it, right? So, like, GitHub is going even further where you can have GitHub Actions, and this is what James is talking about, where it's like, well, the API enabled it, but then there's these event-driven things that go past that. So I think that's what we're starting to get into: APIs blew up, right? And we're beyond just the create and read. >> So, user experience, developer experience, back to what you do, and what Mandy was talking about. You can always make it easier, right? And so, as tools change, there's more tools, there's more workloads, there's more this, more APIs - there's more of everything coming. >> Yeah. >> It's a tsunami to the developer. What are some of the trends that you see to either abstract away complexities, and, or, standardize or reduce the toolchains? >> Love where you're going with this. So, the thing is, I really feel like, even since 2010 or so, people are starting to understand that REST APIs are really just the HTTP protocol - we can all understand it, there are very simple verbs to memorize.
So I'm actually starting to see that the documentation is a huge part of this, like a huge part of the developer experience. Because, for one, there are APIs that are designed well enough that you can memorize the entire API - it blows me away when people have memorized an API - but at the same time, if you look at it from, like, they come to your documentation every day, they're reading the exact information you can give, they're looking at your examples; of course they're going to start to just have it at their fingertips with muscle memory. So I think that's, you know, we're starting to see more with OpenAPI, which was originally called Swagger - so now the tools are Swagger, and OpenAPI is the specification - and we can get more done with our documentation if we're able to use tools like that, that start to become industry-wide, with really good tools around them. And so one of the things that I'm really excited about, what we do at DevNet, is that we have a documentation tool system that lets us not only publish the reference information from the OpenAPI - like very boring JSON, blah blah blah, machines can read it - but then you can publish it in these beautiful ways that are easy to read, easy to follow, and we can also give people tutorials, code examples - like, everything's integrated into the docs and the site, and we do it all from GitHub. So I don't know if you guys know, that's how we do our site from the back side; it's about 1000 or 2000 GitHub repos, is how we build that documentation. >> Everything's going to GitHub, the network configurations are going to GitHub, it's programmable, it's got to be in GitHub. >> Yes, it's true, and everything's Git-based, right? >> So, back to the API question, because I think I'm connecting some dots from some of the conversations we had. We heard from some of the community members there's a lot of integration touchpoints. Oh, a call center app on their collaboration side talks to another database, which talks to another database - so these disparate systems can be connected through APIs, which has been around for a while, whether it's an old school SOAP interface, to, you know, HTTP and REST APIs, to full form, cooler stuff now. But it's also more of a business model opportunity, because the point is, if your API is your connection point-- >> Yes. >> There's potential business deals that could go on, but if you don't have good documentation, it's like not having a good business model. >> Right, and the best documentation really understands a user's task, and so that's why API design is so important - because you need to make sure that your API looks like someone's daily work: get the wording right, get the actual task right, make sure that whatever workflow you've built into your API can be shown through in any tutorial I can write, right? So yeah, it's really important. >> What's the best practice, where should I go? I want to learn about APIs, so then I'm going to have a couple beers - hockey's over, it's coming back, Sharks are going to the next round, Bruins are going to the next round - I want to dig into APIs tonight. Where do I go, what are some best practices, what should I do? >> Yeah, alright, so we have DevNet learning labs, and I'm telling you because I see the web stats - like, the most popular ones are GitHub, REST API and Python, so you're in good company.
Lots of people sitting on their couches, and a lot of them are like 20 minutes at a time, and if you want to do, like, an entire set that we've kind of curated for you all together, you should go to developer.cisco.com/startnow, and that's basically everything from your one-on-ones, all the way up to, like, really deep dives into products, what they're meant to do, the best use cases. >> Okay, I got to ask you, and I'll put you on the spot: pick your favorite child. Gold standard - what are the best APIs that you like, that you think are the cleanest, tightest? >> Oh, best APIs I like, >> Best documented? >> So in the technical writing world, the gold standard that everyone talks about is the Stripe documentation - so that's in financial tech - and it's very clean. We actually can do something like it with a three column layout-- >> Stripe (mumbles) payment gateway-- >> Stripe is, yeah, the API, and so apparently, from a documentation standpoint, they're just - people just go gaga for their docs, and really try to emulate them, so yeah. And as far as an API I use: so I have a son with type one diabetes - I don't know if I've shared this before - but he has a continuous glucose monitor that's on his arm, and the neat thing is, we can use a REST API to get the data every five minutes on how his blood sugar is doing. So when you're monitoring this, to me that's my favorite right now, because I have it on my watch, I have it on my phone, I know he's safe at school, I know he's safe if he goes anywhere. So it's like, there's so many use cases of APIs, you know? >> He's got the policy-based program, yeah. >> He does, yes, yes. >> Based upon where he's at - okay, drink some orange juice now, or, you know-- >> Yes, get some juice. >> Get some juice - so, really convenient, real-time. >> Yes, definitely, and he, you know, he can see it at school too, and just kind of, not let his friends know too much, but kind of keep an eye on it, you know? >> Automation. >> Yeah, exactly, exactly. >> Sounds like great cloud native, cool. You have a Meraki hub in your house? >> I don't have one at home. >> Okay. >> Yeah, I need to set one up, so yeah, we're terrible net nannies and we monitor everything, so I think I need Meraki at home. (laughing) >> It's a status symbol now-- >> It is now! >> We're hearing in the community. Here in the community of DevNet, you got to have a Meraki hub, a switch, in your house. >> It's true, it's true. >> So if you look back at last year's Create versus - I know we're just through almost day one - what are some of the things that really excite you about where this community, now, what did they say this morning, 585,000 strong, is going? The potential that's just waiting to be unlocked? >> So I'm super excited for our Creator awards - we actually just started those last year - and so it's really neat to see someone who won a Creator award last year then give a talk about the kind of things he did in the coming year. And so I think that's what's most exciting about looking a year ahead to the next Create: not only what do the people on stage do, but what do the people sitting next to me in the talks do? Where are they being inspired? What kind of things are they going to invent based on seeing Susie's talk about Wi-Fi 6? I was like, someone invent the thing so that when I go to a hotel, and my kids' devices take all the Wi-Fi first, and then I don't have any - someone do that, you know what I mean, yeah? >> Parental rights.
>> So like, because you're on vacation and like, everybody has two devices, well, with a family of four-- [John] - They're streaming Netflix, Amazon Prime-- >> Yeah, yeah! >> Hey, where's my video? >> Like, somebody fix this, right? >> Maybe we'll hear that next year. >> That's what I'm saying, someone invent it, please. >> And thank you so much for joining John and me on theCUBE this afternoon, and bringing your wisdom and your energy and enthusiasm, we appreciate your time. >> Thank you. >> Thank you. >> For John Furrier, I am Lisa Martin, you're watching theCUBE live from Cisco DevNet Create 2019. Thanks for watching. (upbeat music)
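Anne's distinction between Swagger the tooling and OpenAPI the specification is easier to see with a concrete document in hand. The sketch below, written as a Python dict just so it runs and prints, is a minimal OpenAPI 3.0 description with one invented endpoint - the path and fields are made up for illustration, not taken from any real DevNet API - and the JSON it emits is the kind of machine-readable reference that documentation tooling such as Swagger UI renders into browsable pages.

```python
# Minimal OpenAPI 3.0 document built as a dict and dumped as JSON.
# The /devices path and its fields are hypothetical.
import json

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Device Inventory API", "version": "1.0.0"},
    "paths": {
        "/devices": {
            "get": {
                "summary": "List devices",
                "responses": {
                    "200": {
                        "description": "A JSON array of devices",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {
                                        "type": "object",
                                        "properties": {
                                            "id": {"type": "string"},
                                            "model": {"type": "string"},
                                        },
                                    },
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))
```

Because the spec is machine-readable, one file can drive the doc site, SDK generators, and gateway imports alike, which is what makes the GitHub-based publishing pipeline Anne describes practical at the scale of a thousand-plus repos.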
Stefan Voss, Dell EMC | CUBEConversation, February 2019
>> From the SiliconANGLE media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hi everybody, this is Dave Vellante, and welcome to this special Cube conversation on a very important topic: cyber security and cyber resiliency. With me today is Stefan Voss, who's the Senior Director of Product Management for Data Protection Software and Cyber Security and Compliance at Dell EMC. Stefan, thanks for coming on and helping us understand this very important topic ahead of RSA World. >> My pleasure, thanks Dave for having me. >> You're welcome. So let's talk about the environment today. We have, for years, seen back-up evolve into data protection; obviously disaster recovery is there, certainly long term retention. But increasingly, cyber resilience is part of the conversation. What are you seeing from customers? >> Yeah, definitely, we're seeing that evolution as well. It's definitely a changing market, and what a perfect fit. We have to worry about right of breach: what happens when I get attacked? How can I recover? And the technologies we have for business resiliency and back-up, they all apply, they all apply more than ever. But sometimes they have to be architected in a different way. So folks are very sensitive to that, and they realize that they have great technologies. >> I'm glad you mentioned the focus on recovery, because we have a lot of conversations on theCUBE about the CIO and how he, or she, should be communicating to the board, or the CSO, how they should be communicating to the board. That conversation has changed quite dramatically over the last 10 years. Cyber is a board-level issue. When you talk to, certainly large companies, every quarter they're talking about cyber. And not just in terms of what they're doing to keep the bad guys out, but really what the processes are to respond, what the right regime is - you know, cyber security is obviously a team sport. It's not just the responsibility of the CSO or the SecOps team, or the IT team; everybody has to be involved and be aware of it. Are you seeing that awareness at board levels within your customer base, and maybe even at smaller companies? >> 100%, I think the company size almost doesn't matter. Everybody can lose their business fairly quickly, and there's one thing that NotPetya, that very bad, sort of, attack told us: it can be very devastating. And so if we don't have a process, and if we don't treat it as a team sport, we'll be uncoordinated. So, first of all, we learned that recovery is real and we need to have a recovery strategy. That doesn't mean we don't do detection - so the NIST continuum applies - but the CSOs are much more interested in the actual data recovery than they ever were before, which is very interesting. And then, you know, you learn that the process is as important as the technology. So, in other words - Bob Bender, a fabulous quote from Founders Federal - you know, the notion of sweating before the game: being prepared, having a notion of a cyber recovery run book. Because the nature of the disasters is changing, so, therefore, we have to think about using the same technologies in a different way. >> And I said at the open that things are shifting from just a pure back-up and recovery spectrum to much broader. The ROI is changing; people are trying to get more out of their data protection infrastructure than just insurance, and, certainly, risk management and cyber resiliency and response are part of that. How is the ROI equation changing?
>> Yeah, I mean, it's a very valid question. You know, people are asking for the ROI. We have to take a risk-based approach; we are mitigating risk. It's never fun to have any data protection or business resilience topology, 'cause it's incremental cost, but we do that for a reason. We need to be able to have an operational recovery strategy, a recovery strategy from a geographic disaster, and, of course, now more so than ever, a recovery strategy from a cyber attack. And so, therefore, we have to think about not so much the ROI but, what is my risk reduction, right? By having, sort of, that process in place, but also the confidence that I can get to the data that I need to recover. >> Now we're gonna get into that a little bit later when we talk about the business impact analysis. But I wanna talk about data isolation. Obviously ransomware is a hot topic today, and this notion of creating an air gap. What is data isolation from your perspective? What are customers doing there? >> Yeah, I mean, I think almost every customer has a variant of data isolation. It's clear that it works. We've seen this from the NotPetya attack again, where a large logistics company, right, found the domain controller data on a system that underwent maintenance in Nigeria - a system that was offline. But we don't wanna operate that way. So we wanna get the principles of isolation, because we know it kind of reduces the attack surface, right - from the internal actor, from ransomware variants, you name it. All of these - when you have stuff on the network, it's theoretically fair game for the attacker. >> So that Nigeria example was, basically, by luck there was a system offline under maintenance that happened to be isolated? And so they were able to recover from that system? >> Absolutely. And another example was, of course, critical data - that domain controller, 'cause that's what this attack happened to go after - was on tape. And so, you know, this just shows and proves that isolation works. The challenge we were running into with every customer we work with was the recovery time. Especially when you have to do selective recovery more often, you know, we wanna be able to get the benefits of online media, but also get, sort of, the benefits of isolation. >> Yeah, I mean, you don't wanna recover from tape. Tape is there as a last resort, and hopefully you never have to go to it. How are customers, sort of, adopting this data isolation strategy and policy? Who's involved, what are some of the pre-requisites that they need to think about? >> Yeah, so the good thing - first things first, right. We have technology we know and love, so our data protection appliances, where we started architecting this workflow, that we can use. So, in other words, you don't have to learn a new technology or buy something else. There's an incremental investment, yes. And then we have to think about who's involved. To that earlier point, the security folks are almost always involved, and they should be involved. Sometimes they fund the project, sometimes it comes out of IT. Right, so, this is a collaborative effort, and then, to the extent it's necessary, of course, you wanna have GRC - so the risk people - involved, to make sure that we really focus on the most important critical assets. >> Now ahead of RSA, let's talk a little bit about what's going on in that world.
There are security frameworks - NIST in particular is one - that are relatively new, I mean it's 2014 it came out, and it's been revised, really focusing on prevent, detect and, very importantly, respond. Something we've talked about a lot. Are people using that framework? Are they doing the self-assessments that NIST prescribes? What's your take? >> Yeah, I think they are. So, first of all, they are realizing that leaning too much left of breach - in other words, hoping that we can always catch everything, sort of the eggshell perimeter - everybody understands that that's not enough. So we have to go in-depth, and we also have to have a recovery strategy. And so the way I always like to break it down pragmatically is: one, what do I prioritize on? We can't spend money on everything, but doing a business impact analysis, and then maybe governing that in a tool like RSA Archer, can help me be a little bit more strategic. And then, on the other end, if I can do a better job coordinating the data recovery along with the incident response, that will go a long way. You know and, of course, that doesn't forego any investment in the detection, but it is widely adopted. >> One of the key parts about the NIST framework is understanding exposure in the supply chain, where you may not have total control over one of your suppliers' policies, but yet they're embedded into your workflow. How are people handling that? Is there a high degree of awareness there? What are you seeing? >> It is, absolutely. That's why product security is such an important element, and it's the number one priority for Dell Security, even above and beyond the internal security of our data center, as crazy as it sounds. Because, you know, that can do a lot of damage right in the market. So, certainly, supply chain - making sure we have robust products all along the way - is something that every customer asks about all the time, and it's very important. >> Let's go back to business impact analysis, we've mentioned it a couple of times now. What is a business impact analysis, and how do you guys go about helping your customers conduct one? >> Yeah, I mean, let's maybe keep it to that example. Let's say I go through this analysis and I find that I'm a little bit fuzzy on the recovery, and that's an area I wanna invest in. You know, and then I buy off on the concept that I have an isolated, or cyber recovery, vault - an isolated enclave onto which I can then copy data and make sure that I can get to it when I have to recover. The question then becomes, well, what does business critical mean? And that's where the business impact analysis will help, to say: what is your business critical process, number one; number two, what are the associated applications and assets? 'Cause when you have that dependency map, it makes it a lot easier to start prioritizing what applications I put in the vault, in other words, in this specific example. And then, how can I put it into financial terms to justify the investment? >> Well we were talking about ROI before - I mean, really, we've done actually quite a few studies looking at the Global 2000 and the cost of downtime. I mean, these are real tangible metrics: if you can reduce the amount of downtime, or you can reduce the security threat, you're talking about putting money back in your pocket. Because Global 2000 organizations are losing millions and millions of dollars every year, so it is actually hard ROI, even though some people might look at it as softer.
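To put Stefan's "financial terms" point and Dave's downtime numbers together, here is a back-of-the-envelope sketch of the ranking a business impact analysis produces. This is not an RSA Archer feature, and every number is invented; the idea is simply that expected annualized loss - hourly downtime cost, times expected outage length, times likelihood - gives a defensible order for deciding which applications earn a slot in the recovery vault.

```python
# Toy business-impact ranking. Inputs are invented; in practice they
# come from business owners and the application dependency map.
apps = [
    # (name, cost_per_hour, expected_hours_down, annual_likelihood)
    ("order-processing", 50_000, 24, 0.10),
    ("payroll",          10_000, 48, 0.05),
    ("intranet-wiki",       500, 72, 0.20),
]

def annualized_risk(cost_per_hour, hours_down, likelihood):
    return cost_per_hour * hours_down * likelihood

ranked = sorted(apps, key=lambda a: annualized_risk(*a[1:]), reverse=True)

for name, cph, hrs, p in ranked:
    print(f"{name:18s} annualized risk ${annualized_risk(cph, hrs, p):>10,.0f}")
```

The output puts order-processing far ahead of the intranet wiki, which is exactly the kind of prioritization that keeps vault capacity spent on what the business genuinely cannot live without.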
I wanna talk about isolated data vaults, you know, this notion of air gaps. What are you guys specifically doing there? Do you have solutions in that area? >> Yeah, we do. So we are using, luckily, the concepts that we know from resiliency and disaster recovery - right, so our data protection storage, which is very robust, it's very secure, it has very secure replication. So we have the mechanisms to get data into the vault; we have the mechanisms to create a read-only copy, an immutable copy, that I can then go back into. So all of this is there, right, but the problem is, how do I automate that workflow? So that's software that we wrote, and it goes along with the data protection appliance sale. And what it does - it's all about ingesting that business critical data that I talked about into the secure enclave, and then rendering it into an immutable copy that I can get to when I have nowhere else to go. >> Okay, so you've got that gap, that air gap. Now, the bad guys will say, 'Hey, I can get through an air gap, I can dress somebody up as a worker and put a stick in.' And so, how much awareness is there of that exposure? And I know it's maybe, you know, we're hitting the tip of the pyramid here, but still important. Can you guys help address that, whether it's through processes or product or experience? >> 100%, so we have, of course, our consulting services that will then work with you on elements of physical security, or how do I lock down that remaining replication link? It's about raising the bar for the attacker, to make it more likely we'll catch them before they can get to, really, the prized assets. We're just raising the bar but, yes, those are things we do. So consulting, physical security, how do I do secure reporting out? How do I secure management going in? How do I secure that replication or synchronization link into the vault? All of these are topics that we then discuss, if they kind of deviate from the best practices, and we have very good answers through our many customer engagements. >> Stefan, let's talk about some of the specific offerings. RSA is a portfolio company in the Dell Technologies Group, it's a sister company of Dell EMC. What are you guys doing with RSA? Are you integrating with any of their specific products? Maybe you could talk about that a little bit? >> Yeah, I think, so when you think about recovery and incident response being so important, there's an obvious fit, right? So what RSA has found - I thought this was very interesting - is that there's a lack of coordination between, typically, the security teams and the data restoration professionals. So the more we can bridge that gap through technology, reporting, the better it is, right? So, there's a logical affinity between an incident response retainer activity and the data recovery solutions that we provide. That's one example, right? So every day counts - in that example that I talked about, NotPetya, the specific customer was losing 25 Euros every day. If I can shave off one day, it's money in the bank. Or money not out of the bank. The other area is, how do I make sure that I'm strategic about what data I protect in this way? That's the BIA and Archer. And then there's some integrations we are looking at from an analytics perspective. >> Archer being the sort of governance, risk and compliance workflow - that's sort of one of the flagship products of RSA. So you integrate to that framework.
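The workflow Stefan sketches - open the path into the vault only long enough to synchronize, then lock the copy down - can be summarized in a few stubs. This is a conceptual outline, not the Dell EMC orchestration software; each step here just logs, where the real product drives data protection appliances and their replication features.

```python
# Conceptual air-gap vault cycle. Stubs only; illustrative, not product code.
import datetime

def log(msg):
    print(f"{datetime.datetime.now():%H:%M:%S} {msg}")

def open_replication_link():
    log("replication link enabled")

def sync_critical_data():
    log("business-critical copies synced into the vault")

def close_replication_link():
    log("replication link disabled (air gap restored)")

def create_immutable_copy():
    log("read-only, retention-locked copy created")

def analyze_copy():
    log("copy handed to analytics for signs of corruption")

def vault_cycle():
    open_replication_link()
    try:
        sync_critical_data()
    finally:
        close_replication_link()  # restore the gap even if the sync fails
    create_immutable_copy()
    analyze_copy()

if __name__ == "__main__":
    vault_cycle()
```

The try/finally is the design point worth noticing: the air gap must come back even when a synchronization fails partway through, otherwise an error condition quietly leaves the vault exposed.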
And what about analytics - things like IOCs, RSA NetWitness - are those products that you're integrating to or with, or leveraging in any way? >> Yeah, first off, analytics in general is an interesting concept now that we have data inside our secure enclave, right? So what if we could actually go in and give more confidence to the actual copies that we're storing there? So we have an ecosystem from an analytics perspective. We work with one specific company; we have a REST API-based integration where we then, essentially, use them to do a vote of confidence on the copy, on the raw back-up. Is it good? Are there signs that it was corrupted by malware? And so forth. So what that helps us do is be more proactive around our recovery, because - I think you're about to say something - but if I knew there's something, you know, suspicious, then I can start my analytics activity that much sooner. >> Well the lightbulb went off in my head. Because if I have an air gap, as I was saying before, it's necessary but insufficient. If I can run analytics on the corpus of the back-up data and I can identify anomalies, I might be able to end run somebody trying to get through that air gap that I just mentioned before - maybe it's a physical, you know, security breach - and the analytics might inform me. Is that a reasonable scenario? >> It is a reasonable scenario, though we do something slightly different. So, first of all, detection mechanisms, left of breach stuff, is what it is - we love it, we sell it, you know, we use it. But, you know, when it comes to back-up, there aren't off-the-shelf tools we can just use and say, 'Hey, why don't you scan this back-up?' It doesn't typically work. So what we do is, in the vault, we have time, we have a workbench, so it's almost like sending a specimen to the lab. And then we take a look at it. Are there any signs that there was data corruption that is indicative of a ransomware attack? And when there is such a scenario, we say, 'You might wanna take a look at it, and do some further investigation.' That's when we then look at NetWitness or work with the security teams. But we can now be of service and say, 'You might wanna look at this copy over here. It's suspicious, there's an indicator of compromise.' And then they can take the next steps, rather than hoping for the best. >> You mentioned the ecosystem, you mentioned the ecosystem before. I wanna double-click on that. So, talk about the ecosystem. We've said here it's a team sport, you can't just do it alone. From a platform perspective, is it open, is it API-based? Maybe you can give some examples of how you're working with the ecosystem and how they're leveraging the platform. >> Yeah, 100%. So, like I said, we have, you know, our data protection appliances, and that's sort of our plumbing, right, to get the data to where I want. We have the orchestration software - this is the part we're talking about. The orchestration software has a REST API, everything's documented in Swagger. And the reason we did that is so that we can do these orchestrations with third party analytics vendors - that's one use case, right? So: I'm here, I have a copy here, please scan, tell me what you find, and then give me an alert if you find something. The other example would be, maybe, doing a level of resiliency orchestration, where you'd automate the recovery workflow beyond what we would have to offer. There are many examples, but that is how we are enabling the ecosystem, essentially. >> You mentioned Founders Federal earlier.
Is that a customer, is that a reference customer? What can you tell me about them? >> Yeah, it's a reference customer, and they very much saw the need for this type of protection. And, you know, we've been working with them. There's a Dell World session, from last year, that we did with them. And it was very much, like the quote said, a focus on the process, not only the product and the set of technologies, right? So that's how we've been partnering with them. >> The quote being 'Sweat before the game'? Founders Federal, that's a great quote. Alright, we've talked a lot, in general terms, about cyber recovery. What can you tell us, tell the audience - what makes Dell EMC cyber recovery different in the marketplace and, you know, relative to your competition? Pitch me. >> Yeah, I mean, I think it's a very unique capability. Because, one, you need a large install base and, sort of, a proven platform to even build it on, right? So when you look at the Data Domain technology, we have a lot to work with. We have a lot of customers using it. So that's very hard to mimic. We have the orchestration software, where we, I believe, are ahead of the game - the orchestration software that I talked about, that gets the data into the vault securely. And then our ecosystem, right? So those are really the three things. And then, of course, we have the consulting services, which are also hard to mimic - to really, you know, design the process around this whole thing. But I think the ecosystem approach is also very powerful. >> You have a big portfolio; you've got your sister company that's, sort of, well known obviously in this business. Do you also have solutions? I mean, for instance, is there an appliance as part of the portfolio that fits in here? And what is that? >> Yeah, so, you can think of this as, if I wanted to really boil it down, the two things I would buy are a Data Domain - it could be the smallest one - and a VxRail appliance that runs the software. And then I stick that in the vault. So you can think of it as an appliance that happens to go with the software that I talked about, which does the orchestration. >> Okay, so, RSA, the premier conference on cyber, coming up in a couple of weeks. What have you guys got going there? Give us a little tease. >> Yeah, absolutely. So it's gonna be an awesome show, and we will have a booth, and so we look forward to a lot of customer conversations. And we do have a panel. It's gonna be with Mastercard and RSA and myself. And we're really gonna take it from left of breach all the way to right of breach. >> Awesome, do you know when that panel is yet? >> It is, I think, on the 5th, I may have to check. >> Which is which day? >> I wanna say it's Wednesday. >> So it starts on the Monday, right? So that'll be day three. So check the conference schedule, I mean, things change at the last minute. But that's great. Mastercard is an awesome reference customer; we've worked with them in the past, and so, that's great. Stefan, thanks very much for coming to theCUBE and sharing some of your perspectives and what's coming up at RSA. It's good to have you. >> Thanks so much, Dave, I appreciate it. >> Okay, thanks for watching everybody. This is Dave Vellante from our East Coast headquarters. You're watching theCUBE.
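Stefan's "specimen to the lab" check can be approximated with a crude heuristic: ransomware-encrypted data looks statistically random, so a sharp rise in byte entropy between backup generations is one possible indicator of compromise. The sketch below is illustrative only - it is not the vendor's analytics, and real tools weigh many signals - but it shows the shape of the test.

```python
# Crude corruption heuristic: flag a copy whose Shannon entropy jumps
# sharply versus the prior backup generation. Illustrative only.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(previous: bytes, current: bytes,
                    jump_threshold: float = 2.0) -> bool:
    return shannon_entropy(current) - shannon_entropy(previous) > jump_threshold

if __name__ == "__main__":
    plain = b"quarterly report: revenue up, costs down " * 200
    # Scramble the bytes to mimic an encrypted (near-uniform) payload.
    scrambled = bytes((b * 131 + i) % 256 for i, b in enumerate(plain))
    print("entropy before:", round(shannon_entropy(plain), 2))
    print("entropy after: ", round(shannon_entropy(scrambled), 2))
    print("suspicious:", looks_encrypted(plain, scrambled))
```

A real scanner would have to account for file types that are high-entropy by nature - compressed archives, media files - so thresholds are compared like-for-like across generations rather than against a fixed bar.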
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stefan | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Stefan Voss | PERSON | 0.99+ |
Bob Bender | PERSON | 0.99+ |
Nigeria | LOCATION | 0.99+ |
Dell Technologies Group | ORGANIZATION | 0.99+ |
RSA | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.99+ |
February 2019 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Global 2000 | ORGANIZATION | 0.99+ |
Mastercard | ORGANIZATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Boston Massachusetts | LOCATION | 0.99+ |
one day | QUANTITY | 0.99+ |
Wednesday | DATE | 0.99+ |
2014 | DATE | 0.99+ |
25 Euros | QUANTITY | 0.99+ |
Monday | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Founders Federal | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.98+ |
millions of dollars | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
Dell World | ORGANIZATION | 0.97+ |
one thing | QUANTITY | 0.97+ |
Nist | ORGANIZATION | 0.96+ |
two things | QUANTITY | 0.95+ |
one example | QUANTITY | 0.95+ |
RSA Archer | TITLE | 0.94+ |
day three | QUANTITY | 0.94+ |
SECOPS | ORGANIZATION | 0.94+ |
three things | QUANTITY | 0.93+ |
NetWitness | ORGANIZATION | 0.92+ |
last 10 years | DATE | 0.88+ |
RSA World | ORGANIZATION | 0.83+ |
> 100% | QUANTITY | 0.82+ |
GRC | ORGANIZATION | 0.81+ |
Data Protection Software | ORGANIZATION | 0.76+ |
Arrest | TITLE | 0.76+ |
RSA | TITLE | 0.73+ |
Swagger | TITLE | 0.73+ |
NotPetya | TITLE | 0.71+ |
IOC | ORGANIZATION | 0.68+ |
NotPetya | ORGANIZATION | 0.68+ |
Cube | ORGANIZATION | 0.67+ |
NIS | TITLE | 0.67+ |
years | QUANTITY | 0.65+ |
CSO | ORGANIZATION | 0.65+ |
every year | QUANTITY | 0.62+ |
double | QUANTITY | 0.62+ |
SiliconANGLE | ORGANIZATION | 0.6+ |
5th | QUANTITY | 0.56+ |
Archer | ORGANIZATION | 0.55+ |
East | LOCATION | 0.53+ |
RSA NetWitness | TITLE | 0.53+ |
BIA Archer | ORGANIZATION | 0.52+ |
VxRail | ORGANIZATION | 0.39+ |
two | OTHER | 0.33+ |
Craig Stewart, SnapLogic | SnapLogic Innovation Day 2018
>> Narrator: From San Mateo, California, it's theCUBE, covering SnapLogic Innovation Day 2018. Brought to you by SnapLogic. >> Hey, welcome back here, Jeff Frick here with theCUBE. We're at the crossroads, it's 101 and 92 in San Mateo, California. A lot of popular software companies actually started here, I can always think of the Siebel sign going up, and we used to talk about the movement of Silicon Valley from the chips down in the South Bay and Sunnyvale, and Intel, really to a lot of software here in the middle of the peninsula. We're excited to be here at SnapLogic's headquarters for Innovation Day, and our next guest is Craig Stewart, he's the VP of product management. Craig, great to see you. >> Thank you very much. Welcome. >> Absolutely. So, we're talking about APIs, and we go to a lot of tech shows and the API economy is something that's talked about all the time. But really that has evolved for a couple reasons. One is the proliferation of Cloud services, and the proliferation of applications in the Cloud services. We all know if you go to Google Cloud Next or Amazon re:Invent, the logo slide of apps and services available for these things is tremendous. Give us kind of an update, you've been involved in this space for a long time, how it's evolving and what you guys are working on here at SnapLogic. >> What we've seen change of late is that not only is there a requirement for our customers to build APIs, but also to then allow those APIs to be consumed by their partners and networks out there. As a part of that, they may need to have more management of those APIs than we provide. We're very good at creating APIs with inbound and outbound payload, parameters, all of those things, so we can create those data services via our APIs, but customers then have a requirement now to add some functionality around them. What about when I have a thousand users of these, and I need to be able to throttle them, and those kinds of things. What we've seen happening is there's been this space of the full lifecycle API management technologies, which have been available for some time, and amongst those we've had Google Apigee kind of being the benchmark of those with the Apigee Edge platform, and in fact what we've done in this latest release is we've provided engineered integration into that Apigee Edge platform so that the APIs that we create, we can push those directly into the Apigee Edge platform for them to do the advanced authentication, the monetization, the developer portal, all of those kinds of things. In addition to that, we've also added the functionality to generate the OpenAPI specification, Swagger, as it's known, and to be able to take that Swagger definition, having generated it, we can then actually drop it into the API gateways provided by all of the different Cloud vendors. Whether it's Amazon with their API gateway or the Azure gateway, all you need to do is then take that generated Swagger definition, and this literally is a right-mouse-button "open API" and it generates the file for you, and from there just drop that into those platforms and now they can actually be managed in those services directly.
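To make that last step concrete: once a Swagger/OpenAPI definition has been generated, importing it into a cloud gateway is a one-call operation. Below is a minimal sketch against Amazon API Gateway using boto3; the file name and region are assumptions, and the export step on the SnapLogic side is the right-mouse "open API" action described above. Apigee Edge exposes a comparable import path through its management API, which the engineered integration mentioned above automates.

```python
# Minimal sketch: import a generated OpenAPI/Swagger definition into
# Amazon API Gateway. Assumes the definition was already exported to
# "orders-service.json" (hypothetical name) and that AWS credentials
# are configured in the environment.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

with open("orders-service.json", "rb") as spec:
    api = apigw.import_rest_api(
        body=spec.read(),     # the raw OpenAPI/Swagger document
        failOnWarnings=True,  # reject definitions the gateway cannot fully map
    )

print(f"Created API {api['id']}: {api['name']}")
```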
>> I want to unpack API lifecycle management, 'cause just as a 101 for people that aren't familiar: we think of APIs and we know applications are making calls, and it's, "I'm sending data from this app to that app, and this is pulling information from that app to this app." That's all pretty straightforward, but what are some of the nuances in lifecycle management of APIs that your typical person really hasn't thought through, that are A, super important, and only increasing in relevance as more and more of these systems are all tied together. >> The use of those APIs, some of the things around them that those platforms provide, is some advanced authentication. They may be wanting to use OAuth, two-factor authentication, those kinds of things. They may want to do some protocol translation. Many customers may know how to consume a SOAP service... generally legacy, these days-- >> So funny that SOAP is now legacy (laughs) >> It just cracks me up. I remember, the hottest thing since sliced bread >> Oh yeah! Oh yeah! I still have the Internet Explorer 4 T-shirt-- >> When it was a Windows 95 box too, I'm sure. But that's another conversation for another day. (laughs) >> The management of those APIs, adding that functionality to do advanced authentication, to do throttling... If you have an API, you don't want all of your back-end systems to suddenly be overwhelmed. >> Jeff: Right. Right. >> One of those things that those full lifecycle platforms can do is throttle, so that you can say this user may have only 10 requests a minute or something like that, so that stops the back-end system being overwhelmed in the event of a spike in usage. That helps with denial-of-service attacks and those kinds of things where you're protecting the core systems. Another thing that they can do is the monetization. If you want to actually expose an API for partners to consume but you want to charge them on that basis, you want to have a way of actually tracking those things to then be able to monetize that and to provide the analytics and the billing on top of it. There's a number of those different aspects that the full lifecycle provides on top of what we provide, which is the core API that we're actually creating.
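The "10 requests a minute" throttle Stewart describes is classically implemented as a per-consumer token bucket at the gateway. A minimal plain-Python sketch of the idea, for illustration only (not SnapLogic or Apigee code):

```python
# Minimal per-user token bucket, illustrating a "10 requests a minute"
# gateway throttle.
import time
from collections import defaultdict


class TokenBucket:
    def __init__(self, capacity=10, refill_per_sec=10 / 60):
        self.capacity = capacity      # maximum burst size
        self.refill = refill_per_sec  # 10 tokens per 60 s steady-state rate
        self.tokens = float(capacity)
        self.stamp = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Accrue tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.stamp) * self.refill)
        self.stamp = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


buckets = defaultdict(TokenBucket)  # one bucket per API key


def handle_request(api_key):
    """Return the HTTP status a gateway would produce for this call."""
    return 200 if buckets[api_key].allow() else 429  # 429 Too Many Requests
```

A fixed-window counter is the other common choice; the bucket's advantage is that it tolerates short bursts while still enforcing the steady-state rate that protects the back-end systems.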
>> Right. Is it even feasible to plug an API into a Cloud-based service if your service isn't also Cloud-based? 'Cause as you're speaking and talking about spikes, clearly that's one of the huge benefits of Cloud, is that you have the ability to spike, whether it's planned or unplanned, to massive scale depending on what you're trying to do, and to turn that back down. I would imagine (laughs) if your API is going through that platform and you're connecting to another application, and it's Pepsi running a promotion on Super Bowl Sunday, hopefully your application is running in a very similar type of infrastructure. >> Absolutely. You do have to plan for that elastic scalability. And that's one of those things with the SnapLogic platform, is it has been built to be able to scale in that way. >> Right. Now there's a lot of conversation too around iPaaS and integration platforms as a service. How do you see that mapping back to more of a straightforward API integration? >> What we're talking about in terms of API integration here, and the things that we've just recently added, this is the consumption of our APIs. The iPaaS platform that we actually provide consumes APIs, all sorts of different APIs, whether they're SOAP or REST, and different native APIs of different applications. That we do out of the box. That is what we are doing, is API integration. >> Right. >> The new functionality that we've introduced is this added capability to then manage those APIs from external systems. That's particularly where those external systems go beyond the boundaries of a company's own domain. It's when they need to expose those APIs to their partners, to other third parties that are going to want to consume those APIs. That's where you need those additional layers of protection. Most customers actually use those APIs internally within their organization, and they don't need that extra level of management. >> Right. Right. But I would imagine it's increasingly important, increasingly common, and increasingly prolific that the API integration and the API leverage is less and less inside the building and much, much more outside the building. >> It is certainly going a lot more outside the building because customers are recognizing their data is an asset. >> Right. Right. Then having it be a Cloud broker, if you will, just adds a nice integration point that's standardized, has scale, has reliability, versus having all these point-to-point solutions. >> Yeah, absolutely. >> As you look forward, I can't believe we're at May 16 of 2018 already (laughs), the year's halfway over, but what are you looking forward to next? What's kind of on the roadmap as this API economy continues to evolve, which is then going to increase the demands on those APIs' integration, those APIs in management, as you said, the lifecycle of the way all this stuff works together? What's kind of on the roadmap, if we talk a year from now, what are we going to be talking about? >> There's a lot of... settling down of what we've delivered that's going to take place, and on top of that, then the capabilities that we can add, some additional capabilities that the customers want to use, even internally. Because even internally, where they're not using a Cloud service, they have requirements to identify who in an organization is utilizing those things. So, additional capabilities without having to go beyond the boundaries of the customers' own domain. That's going to be some things like authentication, it's going to be some additional... metrics of what's actually being used in those APIs, the metrics on the APIs themselves in terms of how they are performing, how frequently they are being called, and in addition to that, what's the response time on those things? So there's additional intelligence that we're going to be providing over and above the creation of the APIs that we're looking to do for those customers, particularly inside the organization. >> It's very similar requirements but just different, right, because organizations, take a company like Boeing, or something, it's actually not just one company, there's many, many organizations, and you have all kinds of, now with GDPR coming out, data privacy and management restrictions, so even if it's inside your four walls, all those measures, all those controls are still very, very relevant. >> Very much so. Providing some additional capabilities around that is pretty important for us. >> Alright. Well Craig, you're sitting right on top of the API economy, so I think you'll keep busy for a little while. >> (laughs) That's for sure. >> Thanks for taking a few minutes to stop by. >> Thank you. >> He's Craig Stewart, I'm Jeff Frick, you're watching theCUBE from SnapLogic in San Mateo, California. Thanks for watching. (techno music)
Bryan Smith, Rocket Software - IBM Machine Learning Launch - #IBMML - #theCUBE
>> Announcer: Live from New York, it's theCUBE, covering the IBM Machine Learning Launch Event, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. >> Welcome back to New York City, everybody. We're here at the Waldorf Astoria covering the IBM Machine Learning Launch Event, bringing machine learning to the IBM Z. Bryan Smith is here, he's the vice president of R&D and the CTO of Rocket Software, powering the path to digital transformation. Bryan, welcome to theCUBE, thanks for coming on. >> Thanks for having me. >> So, Rocket Software, Waltham, Mass. based, close to where we are, but a lot of people don't know about Rocket, so, pretty large company, give us the background. >> It's been around for, this'll be our 27th year. Private company, we've been a partner of IBM's for the last 23 years. Almost all of that is in the mainframe space, or we focused on the mainframe space, I'll say. We have 1,300 employees, we call ourselves Rocketeers. It's spread around the world. We're really an R&D-focused company. More than half the company is engineering, and it's spread across the world on every continent and most major countries. >> You're essentially OEM-ing your tools as it were. Is that right, no direct sales force? >> About half. There are different lenses to look at this, but about half of our go-to-market is through IBM with IBM-labeled, IBM-branded products. For that side of the products, we've always been the R&D behind the products. The partnership, though, has really grown. It's more than just an R&D partnership now, now we're doing co-marketing, we're even doing some joint selling to serve IBM mainframe customers. The partnership has really grown over these last 23 years, from just being the guys who write the code to doing much more. >> Okay, so how do you fit in this announcement? Machine learning on Z, where does Rocket fit? >> Part of the announcement today is a very important piece of technology that we developed. We call it data virtualization. Data virtualization is really enabling customers to open their mainframe to allow the data to be used in ways that it was never designed to be used. You might have these data structures that were designed 10, 20, even 30 years ago for a very specific application, but today they want to use them in a very different way, and so, the traditional path is to take that data and copy it, to ETL it someplace else, so they can get some new use out of it or build some new application. What data virtualization allows you to do is to leave that data in place but access it using APIs that developers want to use today. They want to use JSON access, for example, or they want to use SQL access. But they want to be able to do things like join across IMS, DB2, and VSAM, all with a single query, using an SQL statement. We can do that across relational databases and non-relational databases. It gets us out of this mode of having to copy data into some other data store through this ETL process; we access the data in place. We call it moving the applications or the analytics to the data, versus moving the data to the analytics or to the applications.
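To make the "single query across IMS, DB2, and VSAM" idea concrete, here is a sketch of what such a federated query can look like from a client, assuming the virtualization layer exposes each source as a relational view over ODBC. The DSN, view names, and columns are all hypothetical, and the exact dialect varies by product; the point is the single-statement join across three very different data stores.

```python
# Sketch of a federated query through a data-virtualization layer: one
# SQL statement joining a DB2 table, IMS segments, and a VSAM file,
# each exposed as a relational view.
import pyodbc

conn = pyodbc.connect("DSN=MainframeVirtualization")  # hypothetical ODBC DSN
cursor = conn.cursor()

cursor.execute("""
    SELECT c.cust_id, c.name, o.order_total, p.policy_status
    FROM   db2_customers AS c                           -- DB2 table
    JOIN   vsam_orders   AS o ON o.cust_id = c.cust_id  -- VSAM file as a view
    JOIN   ims_policies  AS p ON p.cust_id = c.cust_id  -- IMS segments as a view
    WHERE  o.order_total > 1000
""")

for row in cursor.fetchall():
    print(row.cust_id, row.name, row.order_total, row.policy_status)
```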
>> Okay, so in this specific case, and I have said several times today, as Stu has heard me, two years ago IBM had a big theme around the z13 bringing analytics and transactions together; this sort of extends that. Great, I've got this transaction data that lives behind a firewall somewhere. Why the mainframe, why now? >> Well, I would pull back to what I said, where we see more companies and organizations wanting to move applications and analytics closer to the data. The data in many of these large companies, that core business-critical data, is on the mainframe, and so, being able to do more real-time analytics without having to look at old data is really important. There's this term data gravity. I love the visual that presents in my mind, that you have these different masses, these different planets if you will, and the biggest, most massive planet in that solar system really is the data, and so, it's pulling the smaller satellites, if you will, into this planet or this star by way of gravity, because data is, data's a new currency, data is what the companies are running on. We're helping in this announcement with being able to unlock and open up all mainframe data sources, even some non-mainframe data sources, and using things like Spark that's running on the platform, that's running on z/OS, to access that data directly without having to write any special programming or any special code to get to all their data. >> And the preferred place to run all that data is on the mainframe, obviously, if you're a mainframe customer. One of the questions I guess people have is, okay, I get that, it's the transaction data that I'm getting access to, but if I'm bringing transaction and analytic data together, a lot of times that analytic data might be in social media, it might be somewhere else not on the mainframe. How do you envision customers dealing with that? Do you have tooling to help them do that? >> We do, so this data virtualization solution that I'm talking about is one that is mainframe resident, but it can also access other data sources. It can access DB2 on Linux and Windows, it can access Informix, it can access Cloudant, it can access Hadoop through IBM's BigInsights. Other feeds like Twitter, like other social media, it can pull that in. The case where you'd want to do that is where you're trying to take that data and integrate it with a massive amount of mainframe data. It's going to be much more highly performant by pulling this other, small amount of data in next to that core business data. >> I get the performance and I get the security of the mainframe, I like those two things, but what about the economics? >> Couple of things. One, IBM, when they ported Spark to z/OS, they did it the right way. They leveraged the architecture; it wasn't just a simple port of recompiling a bunch of open source code from Apache, it was rewriting it to be highly performant on the Z architecture, taking advantage of specialty engines. We've done the same with the data virtualization component that goes along with that Spark on z/OS offering; it also leverages the architecture. We actually have different binaries that we load depending on which architecture of machine we're running on, whether it be a z9, an EC12, or the big granddaddy of a z13.
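A sketch of what "Spark running on the platform, accessing the data directly" can look like from PySpark, reading through a JDBC data-virtualization endpoint rather than ETL-ing the data off-platform. The JDBC URL, driver class, and table name are hypothetical placeholders, not the product's documented connection details.

```python
# Sketch: Spark on z/OS reading mainframe data in place through a JDBC
# data-virtualization endpoint, so the analytics run next to the data
# instead of against a copy.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytics-on-z").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:virt://zserver:1200/MAINFRAME")  # hypothetical endpoint
    .option("driver", "com.example.virt.Driver")          # hypothetical driver class
    .option("dbtable", "vsam_orders")                     # VSAM file exposed as a view
    .load()
)

# The aggregation runs on the platform; no bulk copy of the source data
# leaves the mainframe.
orders.groupBy("region").sum("order_total").show()
```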
>> Bryan, can you speak to the developers? I think about, you're talking about all this mobile and Spark and everything like that. There's got to be certain developers that are like, "Oh my gosh, there's mainframe stuff. I don't know anything about that." How do you help bridge that gap between where it lives and the tools that they're using? >> The best example is talking about embracing this API economy. And so, developers really don't care where the stuff is at, they just want it to be easy to get to. They don't have to code up some specific interface or language to get to different types of data, right? IBM's done a great job with z/OS Connect in opening up the mainframe to the API economy with RESTful interfaces, and so with z/OS Connect combined with Rocket data virtualization, you can come through that same z/OS Connect path using all those same RESTful interfaces, pushing those APIs out to tools like Swagger, which the developers want to use, and not only can you get to the applications through z/OS Connect, but we're a service provider to z/OS Connect, allowing them to also get to every piece of data using those same RESTful APIs. >> If I heard you correctly, the developer doesn't even need to worry about whether it's on the mainframe, or speak mainframe, or anything like that, right? >> The goal is that they never do. That they simply see in their tool-set, again like Swagger, that they have data as well as different services that they can invoke using these very straightforward, simple RESTful APIs.
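From the developer's side, the exchange above reduces to an ordinary REST call; nothing in the client code is mainframe-specific. A minimal sketch using Python's requests library; the host, port, path, and credentials are hypothetical placeholders, not a documented z/OS Connect endpoint.

```python
# Sketch of the developer experience: a plain REST call through
# z/OS Connect, with nothing mainframe-specific in the client code.
import requests

resp = requests.get(
    "https://zosconnect.example.com:9443/customers/12345/orders",
    headers={"Accept": "application/json"},
    auth=("apiuser", "apipassword"),  # or an OAuth bearer token
    timeout=10,
)
resp.raise_for_status()

# The payload is ordinary JSON, whether the data came from DB2, IMS, or VSAM.
for order in resp.json()["orders"]:
    print(order["orderId"], order["total"])
```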
>> Can you speak to the customers you've talked to? You know, there's certain people out in the industry, I've had this conversation for a few years at IBM shows: there's some part of the market that's like, oh, well, the mainframe is this dusty old box sitting in a corner with nothing new, and my experience has been, with the containers and cool streaming and everything like that, oh well, you know, the mainframe did virtualization and Linux and all these things really early, decades ago, and is keeping up with a lot of these trends with these new types of technologies. What do you find in the customers, how much are they driving forward on new technologies, looking for that new technology and being able to leverage the assets that they have? >> You asked a lot of questions there. The types of customers: certainly financial and insurance are the big two, but that doesn't mean that we're limited and not going after retail and helping governments and manufacturing customers as well. What I find in talking with them is that there's the folks who get it and the folks who don't, and the folks who get it are the ones who are saying, "Well, I want to be able to embrace these new technologies," and they're taking things like open source, they're looking at Spark, for example, they're looking at Anaconda. Last week, we just announced at the Anaconda Conference, we stepped on stage with Continuum, IBM, and we, Rocket, stood up there talking about this partnership that we formed to create this ecosystem, because the development world changes very, very rapidly. For a while, all the rage was JDBC, or all the rage was component broker, and today it's Spark and Anaconda that are really in the forefront of developers' minds. We're constantly moving to keep up with developers because that's where the action's happening. Again, they don't care where the data is housed as long as you can open that up. We've been playing with this concept that came up from some research firm called two-speed IT, where you have maybe your core business that has been running for years, and it's designed to really be slow-moving, very high quality, it keeps everything running today, but they want to embrace some of these new technologies, they want to be able to roll out a brand-new app, and they want to be able to update that multiple times a week. And so, this two-speed IT says you're kind of breaking 'em off into two separate teams. You don't have to take your existing infrastructure team and say, "You must embrace every Agile and every DevOps type of methodology." What we're seeing customers be successful with is this two-speed IT where you can fracture these two, and now you need to create some nice integration between those two teams, so things like data virtualization really help with that. It opens up and allows the development teams to very quickly access those assets on the mainframe, in this case while allowing those developers to very quickly crank out an application, where quality is not that important but being very quick to respond and doing lots of A/B testing with customers is really critical. >> Waterfall still has its place. As a company that predominantly, or maybe even exclusively, is involved in mainframe, I'm struck by, it must've been 2008, 2009, Paul Maritz comes in and he says VMware's vision is to build the software mainframe. And of course the world said, "Ah, the mainframe's dead," we've been hearing that forever. In many respects, I credit VMware, they built sort of a form of software mainframe, but now you hear a lot of talk, Stu, about going back to bare metal. You don't hear that talk on the mainframe. Everything's virtualized, right, so it's kind of interesting to see, and IBM uses the language of private cloud. The mainframe's, we're joking, the original private cloud. My question is, your strategy as a company has always been focused on the mainframe, and going forward I presume it's going to continue to be. What's your outlook for that platform? >> We're not exclusively mainframe, by the way. We're not, we have a good mix. >> Okay, I'm overstating that, then. It's half and half or whatever. You don't talk about it, 'cause you're a private company. >> Maybe a little more than half is mainframe-focused. >> Dave: Significant. >> It is significant. >> You've got a large proportion of the company on mainframe, z/OS. >> So we're bullish on the mainframe. We continue to invest more every year. We increase our investment every year, and in a software company, your investment is primarily people. We increase that by double digits every year. We have license revenue increases in the double digits every year. I don't know many other mainframe-based software companies that have that. But I think that comes back to the partnership that we have with IBM, because we are more than just a technology partner. We work on strategic projects with IBM. IBM will oftentimes stand up and say Rocket is a strategic partner that works with us on solving hard customer issues every day. We're bullish, we're investing more all the time. We're not backing away, we're not decreasing our interest or our bets on the mainframe. If anything, we're increasing them at a faster rate than we have in the past 10 years. >> And this trend of bringing analytics and transactions together is a huge mega-trend, I mean, why not do it on the mainframe? If the economics are there, which you're arguing that in many use cases they are, because of the value component as well, then the future looks pretty reasonable, wouldn't you say? >> I'd say it's very, very bright. At the Anaconda Conference last week, I was coming up with an analogy for these folks. It's just a bunch of data scientists, right, and during most of the breaks and the receptions, they were just asking questions: "Well, what is a mainframe? I didn't know that we still had 'em, and what do they do?"
So it was fun to educate them on that. But I was trying to show them an analogy with data warehousing where, say, in the mid-'90s it was perfectly acceptable to have a separate data warehouse, separate from your transaction system. You would copy all this data over into the data warehouse. That was the model, right, and then slowly it became more important that the analytics or the BI against that data warehouse was looking at more real-time data. So then it became about efficiencies: how do we replicate this faster, and how do we get closer, to not looking at week-old data but day-old data? And so, I explained that to them and said the days of being able to do analytics against old data that's copied are going away. We're also bullish enough to say that ETL is dead. ETL's future is very bleak. There's no place for it. It had its time, but now it's done, because with data virtualization you can access that data in place. I was telling these folks, these data scientists, as they're talking about how they look at their models, their first step is always ETL. And so I told them this story, I said ETL is dead, and they just looked at me kind of strange. >> Dave: Now the first step is load. >> Yes, there you go, right, load it in there. But having access from these platforms directly to that data, you don't have to worry about any type of a delay. >> What you described, though, is still a common architecture where you've got, let's say, a Z mainframe, and it's got an InfiniBand pipe to some Exadata warehouse or something like that, and so, IBM's vision was, okay, we can collapse that, we can simplify that, consolidate it. SAP with HANA has a similar vision, we can do that. I'm sure Oracle's got their vision. What gives you confidence in IBM's approach and legs going forward? >> Probably due to the advances that we see in z/OS itself in handling mixed workloads, which it's just been doing for many of the 50 years that it's been around: being able to prioritize different workloads, not only at the CPU dispatching, but also at the memory usage, also at the I/O, all the way down through the channel to the actual device. You don't see other operating systems that have that level of granularity for managing mixed workloads.
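The "ETL is dead" contrast above, sketched for the data-scientist workflow Smith describes: instead of extract, stage, load, and then analyze a copy, the analysis queries the live source through the virtualization layer. The DSN, table, and columns are hypothetical, and the SQL is DB2-style.

```python
# Contrast sketch: rather than extracting, staging, and loading a copy
# before analyzing day-old data, query the system of record in place.
import pandas as pd
import pyodbc

conn = pyodbc.connect("DSN=MainframeVirtualization")  # hypothetical ODBC DSN

# Old path: extract -> staging file -> load into warehouse -> query the copy.
# New path: one query against the live source, as of right now.
df = pd.read_sql(
    "SELECT region, order_total FROM vsam_orders WHERE order_date = CURRENT DATE",
    conn,
)
print(df.groupby("region")["order_total"].sum())
```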
>> Bryan, we'll give you the last word, bumper sticker on the event, Rocket Software, your partnership, whatever you choose. >> We're excited to be here, it's an exciting day to talk about machine learning on z/OS. I say we're bullish on the mainframe, we are, we're especially bullish on z/OS, and that's what this even today is all about. That's where the data is, that's where we need the analytics running, that's where we need the machine learning running, that's where we need to get the developers to access the data live. >> Excellent, Bryan, thanks very much for coming to theCUBE. >> Bryan: Thank you. >> And keep right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from New York City. Be right back. (electronic keyboard music)