Harveer Singh, Western Union | When Data Moves, Money Moves
(upbeat music) >> Welcome back to Supercloud 2, which is an open industry collaboration between technologists, consultants, analysts, and of course, practitioners, to help shape the future of cloud. And at this event, one of the key areas we're exploring is the intersection of cloud and data, and how building value on top of hyperscale clouds and across clouds is evolving, a concept we call supercloud. And we're pleased to welcome Harvir Singh, who's the chief data architect and global head of data at Western Union. Harvir, it's good to see you again. Thanks for coming on the program. >> Thanks, David, it's always a pleasure to talk to you. >> So many things stand out from when we first met, and one of the most gripping for me was when you said to me, "When data moves, money moves." And that's the world we live in today, and really have for a long time. Money has moved as bits, and when it has to move, we want it to move quickly, securely, and in a governed manner. And the pressure to do so is only growing. So tell us how that trend has evolved over the past decade in the context of your industry generally, and Western Union, specifically. >> Look, I always say to people that we are probably the first ones to introduce digital currency around the world because, hey, somebody around the world needs money, we move data to make that happen. That trend has actually accelerated quite a bit. If you look at the last 10 years, and you look at all these payment companies, digital companies, credit card companies that have evolved, the majority of them are working on the same principle. When data moves, money moves. When data is stale, the money goes away, right? I think that trend is continuing, and it's not just the trend in this space, it's also continuing in other spaces, specifically around, you know, acquisition of customers, communication with customers. It's all becoming digital, and it's, at the end of the day, it's all data being moved from one place or another. At the end of the day, you're not seeing the customer, but you're looking at, you know, the data that he's consuming, and you're making actionable items on it, and be able to respond to what they need. So I think over 10 years, it's really, really evolved. >> Hmm, you operate, Western Union operates in more than 200 countries, and you have what I would call a pseudo federated organization. You're trying to standardize wherever possible on the infrastructure, and you're curating the tooling and doing the heavy lifting in the data stack, which of course lessens the burden on the developers and the line of business consumers, so my question is, in operating in 200 countries, how do you deal with all the diversity of laws and regulations across those regions? I know you're heavily involved in AWS, but AWS isn't everywhere, you still have some on-prem infrastructure. Can you paint a picture of, you know, what that looks like? >> Yeah, a few years ago, we were primarily on-prem, and one of the biggest pain points has been managing that infrastructure around the world in those countries. Yes, we operate in 200 countries, but we don't have infrastructure in 200 countries, but we do have agent locations in 200 countries. The United Nations says there are only like 183 countries, but there are countries which, you know, declare themselves countries, and we are there as well because somebody wants to send money there, right? Somebody has an agent location down there as well.
So that infrastructure is obviously very hard to manage and maintain. We have to comply by numerous laws, you know. And the last few years, specifically with GDPR, CCPA, data localization laws in different countries, it's been a challenge, right? And one of the things that we did a few years ago, we decided that we want to be in the business of helping our customers move money faster, security, and with complete trust in us. We don't want to be able to, we don't want to be in the business of managing infrastructure. And that's one of the reasons we started to, you know, migrate and move our journey to the cloud. AWS, obviously chosen first because of its, you know, first in the game, has more locations, and more data centers around the world where we operate. But we still have, you know, existing infrastructure, which is in some countries, which is still localized because AWS hasn't reached there, or we don't have a comparable provider there. We still manage those. And we have to comply by those laws. Our data privacy and our data localization tech stack is pretty good, I would say. We manage our data very well, we manage our customer data very well, but it comes with a lot of complexity. You know, we get a lot of requests from European Union, we get a lot of requests from Asia Pacific every pretty much on a weekly basis to explain, you know, how we are taking controls and putting measures in place to make sure that the data is secured and is in the right place. So it's a complex environment. We do have exposure to other clouds as well, like Google and Azure. And as much as we would love to be completely, you know, very, very hybrid kind of an organization, it's still at a stage where we are still very heavily focused on AWS yet, but at some point, you know, we would love to see a world which is not reliant on a single provider, but it's more a little bit more democratized, you know, as and when what I want to use, I should be able to use, and pay-per-use. And the concept started like that, but it's obviously it's now, again, there are like three big players in the market, and, you know, they're doing their own thing. Would love to see them come collaborate at some point. >> Yeah, wouldn't we all. I want to double-click on the whole multi-cloud strategy, but if I understand it correctly, and in a perfect world, everything on-premises would be in the cloud is, first of all, is that a correct statement? Is that nirvana for you or not necessarily? >> I would say it is nirvana for us, but I would also put a caveat, is it's very tricky because from a regulatory perspective, we are a regulated entity in many countries. The regulators would want to see some control if something happens with a relationship with AWS in one country, or with Google in another country, and it keeps happening, right? For example, Russia was a good example where we had to switch things off. We should be able to do that. But if let's say somewhere in Asia, this country decides that they don't want to partner with AWS, and majority of our stuff is on AWS, where do I go from there? So we have to have some level of confidence in our own infrastructure, so we do maintain some to be able to fail back into and move things it needs to be. So it's a tricky question. Yes, it's nirvana state that I don't have to manage infrastructure, but I think it's far less practical than it said. We will still own something that we call it our own where we have complete control, being a financial entity. 
>> And so do you try to, I'm sure you do, standardize between all the different on-premise, and in this case, the AWS cloud or maybe even other clouds. How do you do that? Do you work with, you know, different vendors at the various places of the stack to try to do that? Some of the vendors, you know, like a Snowflake is only in the cloud. You know, others, you know, whether it's whatever, analytics, or storage, or database, might be hybrid. What's your strategy with regard to creating as common an experience as possible between your on-prem and your clouds? >> You asked a question which I asked when I joined as well, right? Which question, this is one of the most important questions, is how soon can I fail back, if I need to fail back? And how quickly can I, because not everything that is sitting on the cloud is comparable to on-prem or is backward compatible. And the reason I say backward compatible is, you know, there are, our on-prem cloud is obviously behind. We haven't taken enough time to kind of put it to a state where, because we started to migrate and now we have access to infrastructure on the cloud, most of the new things are being built there. But for critical applications, I would say we have technology that could be used to move back if need be. So, you know, technologies like Couchbase, technologies like PostgreSQL, technologies like Db2, et cetera. We still have and maintain a fairly large portion of it on-prem where critical applications could potentially be serviced. I'll give you one example. We use Neo4j very heavily for our AML use cases. And that's an important one because if Neo4j on the cloud goes down, and it's happened in the past, again, even with three clusters, having all three clusters going down with a DR, we still need some accessibility of that because that's one of the biggest, you know, fraud and risk applications it supports. So we do still maintain some comparable technology. Snowflake is an odd one. It's obviously there is none on-prem. But then, you know, Snowflake, I also feel it's more of an analytical-based technology, not a transactional-based technology, at least in our ecosystem. So for me to replicate that, yes, it'll probably take time, but I can live with that. But my business will not stop because our transactional applications can potentially move over if need be. >> Yeah, and of course, you know, all these big market cap companies, so the Snowflake or Databricks, which is not public yet, but they've got big aspirations. And so, you know, we've seen things like Snowflake do a deal with Dell for on-prem object store. I think they do the same thing with Pure. And so over time, you see, Mongo, you know, extending its estate. And so over time all these things are coming together. I want to step out of this conversation for a second. I just want to ask you, given the current macroeconomic climate, what are the priorities? You know, obviously, people are, CIOs are tapping the brakes on spending, we've reported on that, but what is it? Is it security? Is it analytics? Is it modernization of the on-prem stack, which you were saying is a little bit behind? Where are the priorities today given the economic headwinds? >> So the most important priority right now is growing the business, I would say. It's a different, I know this is more, this is not a very techy or a tech answer that, you know, you would expect, but it's growing the business. We want to acquire more customers and be able to service them as best needed.
So the majority of our investment is going in the space where tech can support that initiative. During our earnings call, we released the new pillars of our organization where we will focus on, you know, omnichannel digital experience, and then one experience for customer, whether it's retail, whether it's digital. We want to open up our own experience stores, et cetera. So we are investing in technology where it's going to support those pillars. But the spend is in a way that we are obviously taking away from the things that do not support those. So it's, I would say it's flat for us. We are not like in heavily investing or aggressively increasing our tech budget, but it's more like, hey, switch this off because it doesn't make us money, but now switch this on because this is going to support what we can do with money, right? So that's kind of where we are heading towards. So it's not not driven by technology, but it's driven by business and how it supports our customers and our ability to compete in the market. >> You know, I think Harvir, that's consistent with what we heard in some other work that we've done, our ETR partner who does these types of surveys. We're hearing the same thing, is that, you know, we might not be spending on modernizing our on-prem stack. Yeah, we want to get to the cloud at some point and modernize that. But if it supports revenue, you know, we'll invest in that, and get the, you know, instant ROI. I want to ask you about, you know, this concept of supercloud, this abstracted layer of value on top of hyperscale infrastructure, and maybe on-prem. But we were talking about the integration, for instance, between Snowflake and Salesforce, where you got different data sources and you were explaining that you had great interest in being able to, you know, have a kind of, I'll say seamless, sorry, I know it's an overused word, but integration between the data sources and those two different platforms. Can you explain that and why that's attractive to you? >> Yeah, I'm a big supporter of action where the data is, right? Because the minute you start to move, things are already lost in translation. The time is lost, you can't get to it fast enough. So if, for example, for us, Snowflake, Salesforce, is our actionable platform where we action, we send marketing campaigns, we send customer communication via SMS, in app, as well as via email. Now, we would like to be able to interact with our customers pretty much on a, I would say near real time, but the concept of real time doesn't work well with me because I always feel that if you're observing something, it's not real time, it's already happened. But how soon can I react? That's the question. And given that I have to move that data all the way from our, let's say, engagement platforms like Adobe, and particles of the world into Snowflake first, and then do my modeling in some way, and be able to then put it back into Salesforce, it takes time. Yes, you know, I can do it in a few hours, but that few hours makes a lot of difference. Somebody sitting on my website, you know, couldn't find something, walked away, how soon do you think he will lose interest? Three hours, four hours, he'll probably gone, he will never come back. I think if I can react to that as fast as possible without too much data movement, I think that's a lot of good benefit that this kind of integration will bring. 
Yes, I can potentially take data directly into Salesforce, but I then now have two copies of data, which is, again, something that I'm not a big (indistinct) of. Let's keep the source of the data simple, clean, and a single source. I think this kind of integration will help a lot if the actions can be brought very close to where the data resides. >> Thank you for that. And so, you know, it's funny, we sometimes try to define real time as before you lose the customer, so that's kind of real time. But I want to come back to this idea of governed data sharing. You mentioned some other clouds, a little bit of Azure, a little bit of Google. In a world where, let's say you go more aggressively, and we know that for instance, if you want to use Google's AI tools, you got to use BigQuery. You know, today, anyway, they're not sort of so friendly with Snowflake, maybe different for the AWS, maybe Microsoft's going to be different as well. But in an ideal world, what I'm hearing is you want to keep the data in place. You don't want to move the data. Moving data is expensive, making copies is badness. It's expensive, and it's also, you know, changes the state, right? So you got governance issues. So this idea of supercloud is that you can leave the data in place and actually have a common experience across clouds. Let's just say, let's assume for a minute Google kind of wakes up, my words, not yours, and says, "Hey, maybe, you know what, partnering with a Snowflake or a Databricks is better for our business. It's better for the customers," how would that affect your business and the value that you can bring to your customers? >> Again, I would say that would be the nirvana state that, you know, we want to get to. Because I would say not everyone's perfect. They have great engineers and great products that they're developing, but that's where they compete as well, right? I would like to use the best of breed as much as possible. And I've been a person who has done this in the past as well. I've used, you know, tools to integrate. And the reason why this integration has worked is primarily because sometimes you do pick the best thing for that job. And Google's AI products are definitely doing really well, but, you know, that accessibility, if it's a problem, then I really can't depend on them, right? I would love to move some of that down there, but they have to make it possible for us. Azure is doing really, really good at investing, so I think they're a little bit more and more closer to getting to that state, and I know seeking our attention than Google at this point of time. But I think there will be a revelation moment because more and more people that I talk to like myself, they're also talking about the same thing. I'd like to be able to use Google's AdSense, I would like to be able to use Google's advertising platform, but you know what? I already have all this data, why do I need to move it? Can't they just go and access it? That question will keep haunting them (indistinct). >> You know, I think, obviously, Microsoft has always known, you know, understood ecosystems. I mean, AWS is nailing it, when you go to re:Invent, it's all about the ecosystem. And they think they realized they can make a lot more money, you know, together, than trying to have, and Google's got to figure that out. I think Google thinks, "All right, hey, we got to have the best tech." And that tech, they do have the great tech, and that's our competitive advantage. 
They got to wake up to the ecosystem and what's happening in the field and the go-to-market. I want to ask you about how you see data and cloud evolving in the future. You mentioned that things that are driving revenue are the priorities, and maybe you're already doing this today, but my question is, do you see a day when companies like yours are increasingly offering data and software services? You've been around for a long time as a company, you've got, you know, first party data, you've got proprietary knowledge, and maybe tooling that you've developed, and you're becoming more, you're already a technology company. Do you see someday pointing that at customers, or again, maybe you're doing it already, or is that not practical in your view? >> So data monetization has always been on the charts. The reason why it hasn't seen the light is regulatory pressure at this point of time. We are partnering up with certain agencies, again, you know, some pilots are happening to see the value of that and be able to offer that. But I think, you know, eventually, we'll get to a state where our, because we are trying to build accessible financial services, we will be in a state that we will be offering those to partners, which could then extended to their customers as well. So we are definitely exploring that. We are definitely exploring how to enrich our data with other data, and be able to complete a super set of data that can be used. Because frankly speaking, the data that we have is very interesting. We have trends of people migrating, we have trends of people migrating within the US, right? So if a new, let's say there's a new, like, I'll give you an example. Let's say New York City, I can tell you, at any given point of time, with my data, what is, you know, a dominant population in that area from migrant perspective. And if I see a change in that data, I can tell you where that is moving towards. I think it's going to be very interesting. We're a little bit, obviously, sometimes, you know, you're scared of sharing too much detail because there's too much data. So, but at the end of the day, I think at some point, we'll get to a state where we are confident that the data can be used for good. One simple example is, you know, pharmacies. They would love to get, you know, we've been talking to CVS and we are talking to Walgreens, and trying to figure out, if they would get access to this kind of data demographic information, what could they do be better? Because, you know, from a gene pool perspective, there are diseases and stuff that are very prevalent in one community versus the other. We could probably equip them with this information to be able to better, you know, let's say, staff their pharmacies or keep better inventory of products that could be used for the population in that area. Similarly, the likes of Walmarts and Krogers, they would like to have more, let's say, ethnic products in their aisles, right? How do you enable that? That data is primarily, I think we are the biggest source of that data. So we do take pride in it, but you know, with caution, we are obviously exploring that as well. >> My last question for you, Harvir, is I'm going to ask you to do a thought exercise. So in that vein, that whole monetization piece, imagine that now, Harvir, you are running a P&L that is going to monetize that data. And my question to you is a there's a business vector and a technology vector. So from a business standpoint, the more distribution channels you have, the better. 
So running on AWS cloud, partnering with Microsoft, partnering with Google, going to market with them, going to give you more revenue. Okay, so there's a motivation for multi-cloud or supercloud. That's indisputable. But from a technical standpoint, is there an advantage to running on multiple clouds or is that a disadvantage for you? >> It's, I would say it's a disadvantage because if my data is distributed, I have to combine it at some place. So the very first step that we had taken was obviously we brought in Snowflake. The reason, we wanted our analytical data and we want our historical data in the same place. So we are already there and ready to share. And we are actually participating in the data share, but in a private setting at the moment. So we are technically enabled to share, unless there is a significant, I would say, upside to moving that data to another cloud. I don't see any reason because I can enable anyone to come and get it from Snowflake. It's already enabled for us. >> Yeah, or if somehow, magically, several years down the road, some standard developed so you don't have to move the data. Maybe there's a new, Mogli is talking about a new data architecture, and, you know, that's probably years away, but, Harvir, you're an awesome guest. I love having you on, and really appreciate you participating in the program. >> I appreciate it. Thank you, and good luck (indistinct) >> Ah, thank you very much. This is Dave Vellante for John Furrier and the entire Cube community. Keep it right there for more great coverage from Supercloud 2. (uplifting music)
Bill Andrews, ExaGrid | VeeamON 2022
(upbeat music) >> We're back at VeeamON 2022. We're here at the Aria in Las Vegas Dave Vellante with Dave Nicholson. Bill Andrews is here. He's the president and CEO of ExaGrid, mass boy. Bill, thanks for coming on theCUBE. >> Thanks for having me. >> So I hear a lot about obviously data protection, cyber resiliency, what's the big picture trends that you're seeing when you talk to customers? >> Well, I think clearly we were talking just a few minutes ago, data's growing like crazy, right This morning, I think they said it was 28% growth a year, right? So data's doubling almost just a little less than every three years. And then you get the attacks on the data which was the keynote speech this morning as well, right. All about the ransomware attacks. So we've got more and more data, and that data is more and more under attack. So I think those are the two big themes. >> So ExaGrid as a company been around for a long time. You've kind of been the steady kind of Eddy, if you will. Tell us about ExaGrid, maybe share with us some of the differentiators that you share with customers. >> Sure, so specifically, let's say in the Veeam world you're backing up your data, and you really only have two choices. You can back that up to disc. So some primary storage disc from a Dell, or a Hewlett Packard, or an NetApp or somebody, or you're going to back it up to what's called an inline deduplication appliance maybe a Dell Data Domain or an HPE StoreOnce, right? So what ExaGrid does is we've taken the best of both those but not the challenges of both those and put 'em together. So with disc, you're going to get fast backups and fast restores, but because in backup you keep weekly's, monthly's, yearly retention, the cost of this becomes exorbitant. If you go to a deduplication appliance, and let's say the Dell or the HPs, the data comes in, has to be deduplicated, compare one backup to the next to reduce that storage, which lowers the cost. So fixes that problem, but the fact that they do it inline slows the backups down dramatically. All the data is deduplicated so the restores are slow, and then the backup window keeps growing as the data grows 'cause they're all scale up technologies. >> And the restores are slow 'cause you got to rehydrate. >> You got to rehydrate every time. So what we did is we said, you got to have both. So our appliances have a front end disc cache landing zone. So you're right directed to the disc., Nothing else happens to it, whatever speed the backup app could write at that's the speed we take it in at. And then we keep the most recent backups in that landing zone ready to go. So you want to boot a VM, it's not an hour like a deduplication appliance it's a minute or two. Secondly, we then deduplicate the data into a second tier which is a repository tier, but we have all the deduplicated data for the long term retention, which gets the cost down. And on top of that, we're scale out. Every appliance has networking processor memory end disc. So if you double, triple, quadruple the data you double, triple, quadruple everything. And if the backup window is six hours at 100 terabyte it's six hours at 200 terabyte, 500 terabyte, a petabyte it doesn't matter. >> 'Cause you scale out. >> Right, and then lastly, our repository tier is non-network facing. We're the only ones in the industry with this. 
So that under a ransomware attack, if you get hold of a rogue server or you hack the media server, get to the backup storage whether it's disc or deduplication appliance, you can wipe out all the backup data. So you have nothing to recover from. In our case, you wipe it out, our landing zone will be wiped out. We're no different than anything else that's network facing. However, the only thing that talks to our repository tier is our object code. And we've set up security policies as to how long before you want us to delete data, let's say 10 days. So if you have an attack on Monday, that data doesn't get deleted till like a week from Thursday, let's say. So you can freeze the system at any time and do restores. And then we have immutable data objects and all the other stuff. But the culmination of a non-network facing tier and the fact that we do the delayed deletes makes us the only one in the industry that can actually truly recover. And that's accelerating our growth, of course. >> Wow, great description. So that disc cache layer is a memory, it's a flash? >> It's disc, it's spinning disc. >> Spinning disc, okay. >> Yeah, no different than any other disc. >> And then the tier is what, less expensive spinning disc? >> No, it's still the same. It's all SAS disc 'cause you want the quality, right? So it's all SAS, and so we use Western Digital or Seagate drives just like everybody else. The difference is that we're not doing any deduplication coming in or out of that landing zone to have fast backups and fast restores. So think of it like this, you've got disc and you say, boy it's too expensive. What I really want to do then is put maybe a deduplication appliance behind it to lower the cost or reverse it. I've got a deduplication appliance, ugh, it's too slow for backups and restores. I really want to throw this in front of it to have fast backups first. Basically, that's what we did. >> So where does the cost savings, Bill, come in though, on the tier? >> The cost savings comes in the fact that we got deduplication in that repository. So only the most recent backup >> Ah okay, so I get it. >> are the duplicated data. But let's say you had 40 copies of retention. You know, 10 weeklies, 36 monthlies, a few yearlies. All of that's deduplicated >> Okay, so you're deduping the stuff that's not as current. >> Right. >> Okay. >> And only a handful of us deduplicate at the layer we do. In other words, deduplication could be anywhere from two to one, up to 50 to one. I mean it's all over the place depending on the algorithm. Now it's what everybody's algorithms do. Some backup apps do two to one, some do five to one, we do 20 to one as well as much as 50 to one depending on the data types. >> Yeah, so the workload is going to largely determine the combination >> The content type, right. with the algos, right? >> Yeah, the content type. >> So the part of the environment that's behind the logical air gap, if you will, is deduped data. >> Yes. >> So in this case, is it fair to say that you're trading a positive economic value for a little bit longer restore from that environment? >> No, because if you think about backup, 95% of the customers' restores are from the most recent data. >> From the disc cache. >> 95% of the time, 'cause you think about why do you need fast restores? Somebody deleted a file, somebody overwrote a file. They can't go to work, they can't open a file. It's encrypted, it's corrupted. That's what IT people are trying to do, keep users productive.
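As an aside for readers, here is a minimal sketch of the delayed-delete idea Bill describes above, where a delete request only takes effect after a policy window has elapsed. This is an illustrative toy in Python, not ExaGrid's actual object code; the ten-day window, the function name, and the dates are assumptions taken from the example in the conversation.

```python
from datetime import datetime, timedelta

DELETE_DELAY = timedelta(days=10)  # assumed policy window, per the "10 days" example above


def purge_eligible(delete_requests, now=None):
    """Return the backup names whose delete request has aged past the policy window.

    A delete issued during a Monday attack is not honored until the window
    expires, so the deduplicated repository copies stay recoverable even if
    the network-facing landing zone is wiped immediately.
    """
    now = now or datetime.utcnow()
    return [name for name, requested_at in delete_requests.items()
            if now - requested_at >= DELETE_DELAY]


# A delete command arrives during a Monday attack (May 16, 2022 was a Monday).
requests = {"backup-2022-05-16": datetime(2022, 5, 16)}

print(purge_eligible(requests, now=datetime(2022, 5, 20)))  # [] -- still recoverable
print(purge_eligible(requests, now=datetime(2022, 5, 27)))  # ['backup-2022-05-16'] -- window has passed
```

The only point of the sketch is the ordering guarantee: deletes age as requests rather than executing immediately, which is what leaves a restore path after an attack.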
When do you go for longer-term retention data? It's an SEC audit. It's a HIPAA audit. It's a legal discovery, you don't need that data right away. You have days and weeks to get that ready for that legal discovery or that audit. So we found that boundary where you keep users productive by keeping the most recent data in the disc cache landing zone, but anything that's long term. And by the way, everyone else is long term, at that point. >> Yeah, so the economics are comparable to the dedupe upfront. Are they better? Obviously you get the performance advantage. >> So we would be a lot lower. The thing we replaced the most, believe it or not, is disc, we're a lot less expensive than the disc. I was meeting with some Veeam folks this morning and we were up against Cisco 3260 disc at a children's hospital. And our quote was $500,000. The disc was 1.4 million. Just to give you an example of the savings. On a Data Domain, we're typically about half the price of a Data Domain. >> Really now? >> The reason why is their front end controllers are so expensive. They need the fastest chip on the planet 'cause they're trying to do inline deduplication. >> Yeah, so they're chasing >> They need the fastest memory >> on the planet. >> the chips all the time. They need SSD for data to move in and out of the hash table. In order to keep up with inline, they've got to throw so much compute at it that it drives their cost up. >> But now in the case of ransomware attack, are you saying that the landing zone is still available for recovery in some circumstances? Or are you expecting that that disc landing zone would be encrypted by the attacker? >> Those are two different things. One is deletion, one is encryption. So let's do the first scenario. >> I'm talking about malicious encryption. >> Yeah, absolutely. So the first scenario is the threat actor encrypts all your primary data. What does he go for next? The backup data. 'Cause he knows that's your belt and suspenders to not pay the ransom. If it's disc, he's going to go in and put delete commands at the disc, wipe out the disc. If it's a Data Domain or HPE StoreOnce, it's all going to be gone 'cause it's one tier. He's going to go after our landing zone, it's going to be gone too. It's going to wipe out our landing zone. Except behind that we have the most recent backup deduplicated in the repository as well as all the other backups. So what'll happen is they'll freeze the system, 'cause we weren't going to delete anything in the repository for X days 'cause you set up a policy, and then you restore the most recent backup into the landing zone or we can restore it directly to your primary storage area, right? >> Because that tier is not network facing. >> That's right. >> It's fenced off essentially. >> People call us every day of the week saying, you saved me, you saved me again. People are coming up to me here, you saved me, you saved me. >> Tell us a story about that, I mean don't give me the names but how so.
They rolled us out worldwide. So it's very common for that to occur. And think about why that is, everyone who's network facing you can get to the storage. You can say all the media servers are buttoned up, but I can find a rogue server and snake my way over the storage, I can. Now, we also of course support the Veeam Data Mover. So let's talk about that since we're at a Veeam conference. We were the first company to ever integrate the Veeam Data Mover. So we were the first actually ever integration with Veeam. And so that Veeam Data Mover is a protocol that goes from Veeam to the ExaGrid, and we run it on both ends. So that's a more secure protocol 'cause it's not an open format protocol like SaaS. So with running the Veeam Data Mover we get about 30% more performance, but you do have a more secure protocol layer. So if you don't get through Veeam but you get through the protocol, boom, we've got a stronger protocol. If you make it through that somehow, or you get to it from a rogue server somewhere else we still have the repository. So we have all these layers so that you can't get at it. >> So you guys have been at this for a while, I mean decade and a half plus. And you've raised a fair amount of money but in today's terms, not really. So you've just had really strong growth, sequential growth. I understand it, and double digit growth year on year. >> Yeah, about 25% a year right now >> 25%, what's your global strategy? >> So we have sales offices in about 30 countries already. So we have three sales teams in Brazil, and three in Germany, and three in the UK, and two in France, and a lot of individual countries, Chile, Argentina, Columbia, Mexico, South Africa, Saudi, Czech Republic, Poland, Dubai, Hong Kong, Australia, Singapore, et cetera. We've just added two sales territories in Japan. We're adding two in India. And we're installed in over 50 countries. So we've been international all along the way. The goal of the company is we're growing nicely. We have not raised money in almost 10 years. >> So you're self-funding. You're cash positive. >> We are cash positive and self-funded and people say, how have you done that for 10 years? >> You know what's interesting is I remember, Dave Scott, Dave Scott was the CEO of 3PAR, and he told me when he came into that job, he told the VCs, they wanted to give him 30 million. He said, I need 80 million. I think he might have raised closer to a hundred which is right around what you guys have raised. But like you said, you haven't raised it in a long time. And in today's terms, that's nothing, right? >> 100 is 500 in today's terms. >> Yeah, right, exactly. And so the thing that really hurt 3PAR, they were public companies so you could see all this stuff is they couldn't expand internationally. It was just too damn expensive to set up the channels, and somehow you guys have figured that out. >> 40% of our business comes out of international. We're growing faster internationally than we are domestically. >> What was the formula there, Bill, was that just slow and steady or? >> It's a great question. >> No, so what we did, we said let's build ExaGrid like a McDonald's franchise, nobody's ever done that before in high tech. So what does that mean? That means you have to have the same product worldwide. You have to have the same spares model worldwide. You have to have the same support model worldwide. So we early on built the installation. So we do 100% of our installs remotely. 100% of our support remotely, yet we're in large enterprises. 
Customers racks and stacks the appliances we get on with them. We do the entire install on 30 minutes to about three hours. And we've been developing that into the product since day one. So we can remotely install anywhere in the world. We keep spares depots all over the world. We can bring 'em up really quick. Our support model is we have in theater support people. So they're in Europe, they're in APAC, they're in the US, et cetera. And we assign customers to the support people. So they deal with the same support person all the time. So everything is scalable. So right now we're going to open up India. It's the same way we've opened up every other country. Once you've got the McDonald's formula we just stamp it all over the world. >> That's amazing. >> Same pricing, same product same model, same everything. >> So what was the inspiration for that? I mean, you've done this since day one, which is what like 15, 16 years ago. Or just you do engineering or? >> No, so our whole thought was, first of all you can't survive anymore in this world without being an international company. 'Cause if you're going to go after large companies they have offices all over the world. We have companies now that have 17, 18, 20, 30 locations. And there were in every country in the world, you can't go into this business without being able to ship anywhere in the world and support it for a single customer. You're not going into Singapore because of that. You're going to Singapore because some company in Germany has offices in the U.S, Mexico Singapore and Australia. You have to be international. It's a must now. So that was the initial thing is that, our goal is to become a billion dollar company. And we're on path to do that, right. >> You can see a billion. >> Well, I can absolutely see a billion. And we're bigger than everybody thinks. Everybody guesses our revenue always guesses low. So we're bigger than you think. The reason why we don't talk about it is we don't need to. >> That's the headline for our writers, ExaGrid is a billion dollar company and nobody's know about it. >> Million dollar company. >> On its way to a billion. >> That's right. >> You're not disclosing. (Bill laughing) But that's awesome. I mean, that's a great story. I mean, you kind of are a well kept secret, aren't you? >> Well, I dunno if it's a well kept secret. You know, smaller companies never have their awareness of big companies, right? The Dells of the world are a hundred billion. IBM is 70 billion, Cisco is 60 billion. Easy to have awareness, right? If you're under a billion, I got to give a funny story then I think we got to close out here. >> Oh go ahead please. >> So there's one funny story. So I was talking to the CIO of a super large Fortune 500 company. And I said to him, "Just so who do you use?" "I use IBM Db2, and I use, Cisco routers, and I use EMC primary storage, et cetera. And I use all these big." And I said, "Would you ever switch from Db2?" "Oh no, the switching costs would kill me. I could never go to Oracle." So I said to him, "Look would you ever use like a Pure Storage, right. A couple billion dollar company." He says, "Who?" >> Huh, interesting. >> I said to him, all right so skip that. I said, "VMware, would you ever think about going with Nutanix?" "Who?" Those are billion dollar plus companies. And he was saying who? >> Public companies. >> And he was saying who? That's not uncommon when I talk to CIOs. They see the big 30 and that's it. >> Oh, that's interesting. What about your partnership with Veeam? 
Tell us more about that. >> Yeah, so I would actually, and I'm going to be bold when I say this 'cause I think you can ask anybody here at the conference. We're probably closer, first of all, to the Veeam sales force than any company there is. You talk to any Veeam sales rep, they work closer with ExaGrid than any other. Yeah, we are very tight in the field and have been for a long time. We're integrated with the Veeam Data Mover. We're integrated with SOBR. We're integrated with all the integrations of the product as well. We have a lot of joint customers. We actually do a lot of selling together, where we go in as Veeam ExaGrid 'cause it's a great end to end story. Especially when we're replacing, let's say, a Dell Avamar to Dell Data Domain or a Dell NetWorker with a Dell Data Domain, very commonly Veeam and ExaGrid go in together on those types of sales. So we do a lot of co-selling together. We constantly train their systems engineers around the world, every given week we're training their inside sales teams, and we've trained their customer support teams in Columbus and Prague. So we're very tight with 'em, we've been tight for over a decade. >> Is your head count public? Can you share that with us? >> So we're just over 300 employees. >> Really, wow. >> We have 70 open positions, so. >> Yeah, what are you looking for? Yeah, everything, right? >> We are looking for engineers. We are looking for customer support people. We're looking for marketing people. We're looking for inside sales people, field people. And we've been hiring, as of late, major account reps that just focus on the Fortune 500. So we've separated that out now. >> When you hire engineers, I mean I think I saw you were, a long time ago, DG, right? Is that true? >> Yeah, way back in the '80s. >> But systems guy. >> That's how old I am. >> Right, systems guy. I mean, I remember them well, Ed de Castro and company. >> Tom West. >> MV series. >> Tom West was the hero of course. >> The MV 4000, the MV 20,000, right? >> When we were kids, "The Soul of a New Machine" was the inspirational book but anyway, >> Yeah, Tracy Kidder, it was great. >> Are you looking for systems people, what kind of talent are you looking for in engineering? >> So it's a lot of Linux programming type stuff in the product 'cause we run on a Linux base. So it's a lot of Linux programming, so it's people with Linux and storage backgrounds. >> Yeah, cool, Bill, hey, thanks for coming on to theCUBE. Well, learned a lot, great story. >> It's a pleasure. >> That was fun. >> Congratulations. >> Thanks. >> And good luck. >> All right, thank you. >> All right, and thank you for watching theCUBE's coverage of VeeamON 2022, Dave Vellante for Dave Nicholson. We'll be right back right after this short break, stay with us. (soft beat music)
Evan Kaplan, InfluxData
(upbeat music) >> Okay today, we welcome Evan Kaplan, CEO of InfluxData, the company behind InfluxDB. Welcome Evan, thanks for coming on. >> Hey John, thanks for having me. >> Great segment here on the InfluxDB story. What is the story? Take us through the history, why time series? What's the story? >> So the history history is actually pretty interesting. Paul Dix my partner in this and our founder, super passionate about developers and developer experience. And he had worked on wall street building a number of time series kind of platform, trading platforms for trading stocks. And from his point of view, it was always what he would call a yak shave. Which means you had to do a ton of work just to start doing work. Which means you had to write a bunch of extrinsic routines, you had to write a bunch of application handling on existing relational databases, in order to come up with something that was optimized for a trading platform or a time series platform. And he sort of, he just developed this real clear point of view. This is not how developers should work. And so in 2013, he went through Y Combinator, and he built something for, he made his first commit to open source InfluxDB in the end of 2013. And he basically, you know from my point of view, he invented modern time series, which is you start with a purpose built time series platform to do these kind of workloads, and you get all the benefits of having something right out of the box. So a developer can be totally productive right away. >> And how many people are in the company? What's the history of employees is there? >> Yeah, I think we're, you know, I always forget the number but something like 230 or 240 people now. I joined the company in 2016, and I love Paul's vision. And I just had a strong conviction about the relationship between time series and IOT. 'Cause if you think about it, what sensors do is they speak time series. Pressure, temperature, volume, humidity, light, they're measuring, they're instrumenting something over time. And so I thought that would be super relevant over the long term, and I've not regretted it. >> Oh no, and it's interesting at that time if you go back in history, you know, the role of database. It's all relational database, the one database to rule the world. And then as cloud started coming in, you started to see more databases proliferate, types of databases. And time series in particular is interesting 'cause real time has become super valuable from an application standpoint. IOT which speaks time series, means something. It's like time matters >> Times yeah. >> And sometimes data's not worth it after the time, sometimes it's worth it. And then you get the data lake, so you have this whole new evolution. Is this the momentum? What's the momentum? I guess the question is what's the momentum behind it? >> You mean what's causing us to grow so fast? >> Yeah the time series, why is time series- >> And the category- >> Momentum, what's the bottom line? >> Well think about it, you think about it from a broad sort of frame which is, what everybody's trying to do is build increasingly intelligent systems. whether it's a self-driving car or a robotic system that does what you want to do, or a self-healing software system. Everybody wants to build increasing intelligent systems. And so in order to build these increasing intelligent systems, you have to instrument the system well. And you have to instrument it over time, better and better. 
And so you need a tool, a fundamental tool to drive that instrumentation. And that's become clear to everybody that that instrumentation is all based on time. And so what happened, what happened, what happened, what's going to happen. And so you get to these applications like predictive maintenance, or smarter systems, and increasingly you want to do that stuff not just intelligently, but fast in real time. So millisecond response, so that when you're driving a self-driving car, and the system realizes that you're about to do something, essentially you want to be able to act in something that looks like real time. All systems want to do that, they want to be more intelligent, and they want to be more real time. And so we just happen to, you know, we happen to show up at the right time in the evolution of a market. >> It's interesting near real time isn't good enough when you need real time. >> Yeah, it's not, it's not. And it's like everybody wants real even when you don't need it, ironically you want it. It's like having the feature for, you know you buy a new television, you want that one feature, even though you're not going to use it. You decide that's your buying criteria. Real time is criteria for people. >> So I mean, what you're saying then is near realtime is getting closer to real time as fast as possible? >> Right. >> Okay, so talk about the aspect of data, 'cause we're hearing a lot of conversations on theCUBE in particular around how people are implementing and actually getting better. So iterating on data, but you have to know when it happened to get know how to fix it. So this is a big part of what we're seeing with people saying, "Hey, you know I want to "make my machine learning algorithms better "after the fact, I want to learn from the data." How do you see that evolving? Is that one of the use cases of sensors as people bring data in off the network, getting better with the data, knowing when it happened? >> Well, for sure what you're saying is, is none of this is non-linear, it's all incremental. And so if you take something, you know just as an easy example, if you take a self-driving car, what you're doing is you're instrumenting that car to understand where it can perform in the real world in real time. And if you do that, if you run the loop which is, I instrument it, I watch what happens, oh that's wrong, oh I have to correct for that. I correct for that in the software. If you do that for a billion times, you get a self-driving car. But every system moves along that evolution. And so you get the dynamic of constantly instrumenting, watching the system behave and do it. And so a self driving car is one thing, but even in the human genome, if you look at some of our customers, you know, people like, people doing solar arrays, people doing power walls like all of these systems are getting smarter and smarter. >> Well, let's get into that. What are the top applications? What are you seeing with InfluxDB, the time series, what's the sweet spot for the application use case and some customers? Give some examples. >> Yeah so it's pretty easy to understand on one side of the equation, that's the physical side is, sensors are getting cheap obviously we know that. The whole physical world is getting instrumented, your home, your car, the factory floor, your wrist watch, your healthcare, you name it, it's getting instrumented in the physical world. We're watching the physical world in real time. 
And so there are three or four sweet spots for us, but they're all on that side, they're all about IOT. So they're thinking about consumer IOT kind of projects like Google's Nest, Tudor, Particle sensors, even delivery engines like Rappi, sort of the Instacart of South America. Like anywhere there's a physical location, and that's on the consumer side. And then another exciting space is the industrial side. Factories are changing dramatically over time. Increasingly moving away from proprietary equipment to developer-driven systems that run operations. Because what has to get smarter when you're building a factory is the systems, they all have to get smarter. And then lastly, a lot in the renewables, so sustainability. So a lot, you know, Tesla, Lucid Motors, Nikola Motors, you know, lots to do with electric cars, solar arrays, windmill arrays, just anything that's going to get instrumented, where that instrumentation becomes part of what the purpose is. >> It's interesting, the convergence of physical and digital is happening with the data. IOT you mentioned, you know, you think of IOT, look at the use cases there. It was proprietary OT systems, now becoming more IP enabled, internet protocol. And now edge compute, getting smaller, faster, cheaper. AI going to the edge. Now you have all kinds of new capabilities that bring that real time and time series opportunity. Are you seeing IOT going to a new level? Where are the IOT, OT dots connecting to? Because, you know, as these two cultures merge, operations basically, industrial, factory, car, they got to get smarter. Intelligent edge is a buzzword but I mean, it has to be more intelligent. Where's the action in all this? >> So the action, really, it's really at the core, it's at the developer, right? Because you're looking at these things, it's very hard to get an off the shelf system to do the kinds of physical and software interaction. So the actions really happen at the developer. And so what you're seeing is a movement in the world that maybe you and I grew up in, with IT or OT, moving increasingly to that developer-driven capability. And so all of these IOT systems, they're bespoke, they don't come out of the box. And so the developer, the architect, the CTO, they define what's my business? What am I trying to do? Am I trying to sequence a human genome and figure out when these genes express themselves? Or am I trying to figure out when the next heart rate monitor is going to show up in my Apple Watch? Right, what am I trying to do? What's the system I need to build? And so starting with the developer is where all of the good stuff happens here. Which is different than it used to be, right. It used to be you'd buy an application or a service or a SaaS thing for it, but with this dynamic, with this integration of systems, it's all about bespoke, it's all about building something. >> So let's get to the developer real quick. Real highlight point here is the data, I mean, I could see a developer saying, "Okay, I need to have an application for the edge," IOT edge or car, I mean we're going to have, I mean Tesla's got applications on the car, it's right there. I mean, there's the modern application life cycle now. So take us through how this impacts the developer. Does it impact their CICD pipeline? Is it cloud native? I mean where does this go to?
>> Well, so first of all you're talking about, there was an internal journey that we had to go through as a company, which I think is fascinating for anybody that's interested, is we went from primarily a monolithic software that was open sourced to building a Cloud-native platform. Which means we had to move from an agile development environment to a CICD environment. So to the degree that you are moving your service, whether it's, you know, Tesla monitoring your car and updating your power walls, right. Or whether it's a solar company updating the arrays, right, to the degree that that service is cloud. Then increasingly we move from an agile development to a CICD environment, where you're shipping code to production every day. And so it's not just the developers, it's all the infrastructure to support the developers to run that service and that sort of stuff. I think that's also going to happen in a big way. >> With the customer base that you have now, and as you see it evolving with InfluxDB, is it that they're going to be writing more of the application or relying more on others? I mean obviously it's an open source component here. So when you bring in kind of old way, new way, old way was, I got a proprietary platform running all this IOT stuff, and I got to write, here's an application that's general purpose. I have some flexibility, somewhat brittle, maybe not a lot of robustness to it, but it does this job. >> A good way to think about this is- >> Versus new way which is what? >> So yeah a good way to think about this is what's the role of the developer/architect, CTO, that chain within a large, with an enterprise or a company. And so the way to think about it is, I started my career in the aerospace industry. And so when you look at what Boeing does to assemble a plane, they build very, very few of the parts. Instead what they do is they assemble. They buy the wings, they buy the engines, they assemble, actually they don't buy the wings. That's the one thing, they buy the material for the wing. They build the wings 'cause there's a lot of tech in the wings, and they end up being assemblers, smart assemblers of what ends up being a flying airplane. Which is a pretty big deal even now. And so what happens with software people is, they have the ability to pull from, you know, the best of the open source world. So they would pull a time series capability from us, then they would assemble that with potentially some ETL logic from somebody else. Or they'd assemble it with a Kafka interface to be able to stream the data in. And so they become very good integrators and assemblers, but they become masters of that bespoke application. And I think that's where it goes 'cause you're not writing native code for everything. >> So they're more flexible, they have faster time to market 'cause they're assembling. >> Way faster. >> And they get to still maintain their core competency, AKA their wings in this case. >> They become increasingly not just coders but designers and developers. They become broadly builders is what we like to think of it. People who start and build stuff. By the way, this is not different than what the people just up the road at Google have been doing for years, or the tier one Amazon building all their own. >> Well, I think one of the things that's interesting is this idea of systems development, a system architecture. I mean systems have consequences when you make changes. 
>> So when you have now cloud, data center, on-premise and edge working together, how does that work across the system? You can't have a wing that doesn't work with the other wing kind of thing. >> That's exactly right, but that's where that Boeing or that airplane building analogy comes in. For us, we've really been thoughtful about that because for IOT it's critical. So our open source edge has the same API as our cloud native stuff and as our enterprise on-prem edge. So our multiple products have the same API and they have a relationship with each other. They can talk with each other. So the builder builds it once. And so this is where, when you start thinking about the components that people have to use to build these services, you want to make sure, at least at that base layer, that database layer, that those components talk to each other. >> So I'll have to ask you, if I'm the customer, I put my customer hat on. Okay, hey, I'm dealing with a lot. >> Does that mean you have a PO for- >> (laughs) A big check, a blank check, if you can answer this question. >> Only if it's in tech. >> If you get the question right. I got all this important operation stuff, I got my factory, I got my self-driving cars, this isn't like trivial stuff, this is my business. How should I be thinking about time series? Because now I have to make these architectural decisions as you mentioned, and it's going to impact my application development. So huge decision point for your customers. What should I care about the most? What's in it for me? Why is time series important? >> Yeah, that's a great question. So chances are, if you've got a business that was 20 years old or 25 years old, you were already thinking about time series. You probably didn't call it that, you built something on Oracle, or you built something on IBM's Db2, right, and you made it work within your system. Right, and so that's what you started building. So it's already out there, there are probably hundreds of millions of time series applications out there today. But as you start to think about this increasing need for real time, and you start to think about increasing intelligence, you think about optimizing those systems over time, I hate the word, but digital transformation. Then you start with time series, it's a foundational base layer for any system that you're going to build. There's no system I can think of where time series shouldn't be the foundational base layer. If you just want to store your data and just leave it there and then maybe look it up every five years, that's fine. That's not time series. Time series is when you're building a smarter, more intelligent, more real time system. And the developers now know that. And so the more they play a role in building these systems, the more obvious it becomes. >> And since I have a PO for you and a big check. >> Yeah. >> What's the value to me when I implement this? What's the end state? What's it look like when it's up and running? What's the value proposition for me? What's in it for me? >> So when it's up and running, you're able to handle the queries, the writing of the data, the down sampling of the data, the transforming of it in near real time. So that the other dependencies that a system has, for adjusting a solar array or trading energy off of a power wall or some sort of human genome, those systems work better. So time series is foundational. It's not like it's doing every action that's above, but it's foundational to build a really compelling, intelligent system. 
I think that's what developers and architects are seeing now. >> Bottom line, final word, what's in it for the customer? What's your statement to the customer? What would you say to someone looking to do something in time series and edge? >> Yeah so it's pretty clear to us that if you're building, if you view yourself as being in the business of building systems, that you want 'em to be increasingly intelligent, self-healing, autonomous. You want 'em to operate in real time, that you start from time series. But I also want to say what's in it for us, Influx. What's in it for us is, people are doing some amazing stuff. You know, I highlighted some of the energy stuff, some of the human genome, some of the healthcare, it's hard not to be proud or feel like, "Wow." >> Yeah. >> "Somehow I've been lucky, I've arrived at the right time, "in the right place with the right people "to be able to deliver on that." That's also exciting on our side of the equation. >> Yeah, it's critical infrastructure, critical operations. >> Yeah. >> Great stuff. Evan, thanks for coming on, appreciate this segment. All right, in a moment, Brian Gilmore, director of IOT and emerging technology at InfluxData, will join me. You're watching theCUBE, leader in tech coverage. Thanks for watching. (upbeat music)
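As a concrete footnote to the segment above, the write, query and downsample loop Evan describes can be sketched in a few lines against InfluxDB's Python client. This is an editorial illustration rather than code from the interview: the URL, token, org, bucket, measurement and tag names are all placeholder assumptions, and the Flux query shows just one way to downsample.

```python
# Minimal sketch: write high-granularity sensor points, then downsample them.
# Assumes a reachable InfluxDB 2.x instance; all names below are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# Write one point per reading on the ingest path (no sampling on write).
write_api = client.write_api(write_options=SYNCHRONOUS)
point = (
    Point("solar_array")            # hypothetical measurement name
    .tag("site", "plant-7")         # hypothetical tag
    .field("output_watts", 4821.5)  # hypothetical field value
)
write_api.write(bucket="telemetry", record=point)

# Downsample: mean output per 5-minute window over the last hour.
flux = '''
from(bucket: "telemetry")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "solar_array" and r._field == "output_watts")
  |> aggregateWindow(every: 5m, fn: mean)
'''
for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())

client.close()
```

The same client call pattern would apply whether the endpoint URL points at an edge deployment or a cloud instance, which is the "builder builds it once" point made earlier in the conversation.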
Manu Parbhakar, AWS & Mike Evans, Red Hat | AWS re:Invent 2021
(upbeat music) >> Hey, welcome back everyone to theCube's coverage of AWS re:Invent 2021. I'm John Furrier, host of theCube, wall-to-wall coverage in-person and hybrid. The two great guests here, Manu Parbhakar, worldwide Leader, Linux and IBM Software Partnership at AWS, and Mike Evans, Vice President of Technical Business Development at Red Hat. Gentlemen, thanks for coming on theCube. Love this conversation, bringing Red Hat and AWS together. Two great companies, great technologies. It really is about software in the cloud, Cloud-Scale. Thanks for coming on. >> Thanks John. >> So get us into the partnership. Okay. This is super important. Red Hat, well known in open source, as cloud needs become clear, doing amazing work. Amazon, Cloud-Scale, Data is a big part of it. Modern software. Tell us about the partnership. >> Thanks John. Super excited to share about our partnership. We have been partnering for almost 14 years together. We started in the very early days of AWS. And now we have tens of thousands of customers that are running RHEL on EC2. If you look over the last three years, the pace of innovation for our joint partnership has only increased. It has manifested in three key formats. The first one is the pace at which RHEL supports new EC2 instances like Arm, Graviton. You know, think a lot of features like Nitro. The second is just the portfolio of new RHEL offerings that we have launched over the last three years. We started with RHEL for SQL Server, RHEL high availability, RHEL for SAP, and then only last month, we've launched the support for knowledge base for RHEL customers. Mike, you want to talk about what you're doing with OpenShift and Ansible as well? >> Yeah, it's good to be here. It's fascinating to me 'cause I've been at Red Hat for 21 years now. And vividly remember the start of working with AWS back in 2008, when the cloud was kind of a wild idea with a whole bunch of doubters. And it's been an interesting time, but I feel the next 14 years are going to be exciting in a different way. We now have a very large customer base from almost every industry in the world built on RHEL, and running on AWS. And our goal now is to continue to add additional elements to our offerings, to build upon that and extend it. The largest addition, which we're going to be talking a lot about here at the re:Invent show, was the partnership in April this year when we launched the Red Hat OpenShift service on AWS as a managed version of OpenShift for container-based workloads. And we're seeing a lot of the customers that have standardized on RHEL on EC2, or ones that are using OpenShift on-premise deployments, as the early adopters of ROSA, but we're also seeing a huge number of new customers who never purchased anything from Red Hat. So, in addition to the customers, we're getting great feedback from systems integrators and ISV partners who are looking to have a software application run both on-premise and in AWS, and with OpenShift being one of the pioneers in enabling both containers and harnessing Kubernetes, ROSA is just a really exciting area for us to track and continue to advance together with AWS. >> It's very interesting. Before I get to ROSA, I want to just get the update on Red Hat and IBM, obviously the acquisition part of IBM, how is that impacting the partnership? You can just quickly touch on that. >> Sure. 
>> I'll start off and, I mean, Red Hat went from a company that was about 15,000 employees competing with a lot of really large technology companies, and we added more than 100,000 field oriented people when IBM acquired Red Hat to help magnify the Red Hat solutions, and the global scale and coverage of IBM is incredible. I like to give two simple examples of people. One is, I remember our salesforce in EMEA telling me they got a $4 million order from a country in Africa they didn't even know existed. And IBM had 100 people in it. Or AT&T is one of Red Hat's largest accounts, and I think at one point we had seven full-time people on it, and AT&T is one of IBM's largest accounts and they had two seven storey buildings full of people working with AT&T. So relative to AWS, we now also see IBM embracing AWS more with both software and services, and the magnification of Red Hat based solutions combined with that embrace should create some great growth. And I think IBM is pretty excited about being able to sell Red Hat software as well. >> Yeah, go ahead. >> And Manu I think you have, yeah. >> Yeah. I think there's also, it is definitely very positive John. >> Yeah. >> You know, just the joint work that Red Hat and AWS have done for the last 14 years, working in the trenches supporting our end customers, is now also providing a lot of tailwinds for the IBM software partnership. We have done some incredible work over the last 12 months around three broad categories. The first one is around product, what we're doing around customer success, and then what we're doing around sales and marketing. So on the product side, we have listed about 15 products on Marketplace over the course of the last 12 to 15 months. And our goal is to launch all of the IBM Cloud Paks, these are containerized versions of IBM software, on Marketplace by the first half of next year. The other feedback that we are getting from our customers is that, hey, we love IBM software running at Amazon, but we'd like to have a cloud native SaaS version of the software. So there's a lot of work that's going on right now, to make sure that many of these offerings are available in a cloud-native manner, and we're not talking just Db2, Cognos, Maximo, (indistinct), on EC2. The second thing that we're doing is making sure that many of these large enterprise customers that are running IBM software are successful. So our technical teams are attached at the hip, working on the ground floor in making customers like Delta successful in running IBM software on AWS. I think the third piece around sales and marketing is just firing up a vibrant ecosystem, around how do we modernize and migrate this IBM software onto Cloud Paks on AWS? So there's a huge push going on here. So (indistinct), you know, the Red Hat partnership is providing a lot of tailwinds to accelerate our partnership with IBM software. >> You know, I always, I've been saying all this year in Red Hat summit, as well as Ansible Fest that, distributed computing is coming to large scale. And that's really the, what's happening. I mean, you looking at what you guys are doing cause it's amazing. ROSA Red Hat OpenShift on AWS, very notable to use the term on AWS, which actually means something in the partnership as we learned over the years. How is that going Mike because you launched on theCube in April, ROSA, it had great traction going in. It's in the Marketplace. You've got some integration. It's really a hand in glove situation with Cloud-Scale. Take us through what's the update? 
>> Yeah, let me, let me let Manu speak first to his AWS view and then I'll add the Red Hat picture. >> Thanks Mike. John, ROSA is part of an entire container portfolio. So if you look at it, so we have ECS, EKS, the managed Kubernetes service. We have the serverless containers with Fargate. We launched ECS and EKS Anywhere. And then ROSA is part of an entire portfolio of container services. As you know, two thirds of all container workloads run on AWS. And a big function of that is because we (indistinct) from our customers and then saw what the requirements are. There are two sets of key customers that are driving the demand and the early adoption of ROSA. The first set is customers that have standardized on OpenShift on-premises. They love the fact that everything comes out of the box, and they would love to use it on AWS. So that's the first (indistinct). The second set of customers are, you know, the large RHEL users on EC2. The tens of thousands of customers that we've talked about that want to move from VMs to containers, and want to do DevOps. So it's this set of two customers that are informing our roadmap, as well as our investments around ROSA. We are seeing solid adoption, both in terms of adoption by customers, as well as the partners, and how our partners are helping our customers in modernizing from VMs to containers. So it's a, it's a huge, it's a huge priority for our container service. And over the next few years, we continue to see, to increase our investment on the product road map here. >> Yeah, from my perspective, first off at the high level, in my mind, one of the most interesting parts of ROSA is being integrated in the AWS console, and not just for the, you know, where it shows up on the screen, but also all the work behind what that took to get there and why we did it. And we did it because customers were asking both of us, we're saying, look, OpenShift is a platform. We're going to be building and deploying serious applications at incredible scale on it. And it's really got to have joint high-quality support, joint high-quality engineering. It's got to be rock solid. And so we came to agreement with AWS that that was the best way to do that, was to build it in the console, you know, integrated in, into the core of an AWS engineering team with Red Hat engineers, arm in arm. So that's, that's a very unique service and it's not like a high level SaaS application that runs above everything, it's down in the bowels and, and really is, needs to be rock solid. So we're seeing, we're seeing great interest, both from end users, as I mentioned, existing customers, new customers, the partner base, you know, how the systems integrators are coming on board. There's lots of business and money to be made in modernizing applications as well as building new cloud native applications. People can, you know, between Red Hat and AWS, we've got some, some models around supporting POCs and customer migrations. We've got some joint investments. It's a really ripe area. >> Yeah. That's good stuff. Real quick, what do you think of ROSA versus EKS and ECS? What's, how should people think about that, Mike? (indistinct) >> You got to go for it, Manu. Your job is to position all these (indistinct). (indistinct)
And then there are large set of RHEL customers that are running RHEL on EC2, that want to use the ROSA service. So, you know, both AWS and Red Hat are now continuing to invest in accelerating the roadmap of the service on our platform. You know, we are working on improving the console experience. Also one of the things we just launched recently is the Amazon controller to Kubernetes, or what , you know, service operators for S3. So over the next few years you will see, you know, significant investment from both Red Hat and AWS in this joint service. And this is an integral part of our overall container portfolio. >> And great stuff to get in the console. That's great, great integration. That's the future. I got to ask about the graviton instances. It's been one of the most biggest success stories, I think we believe in Amazon history in the acquisition of Annapurna, has really created great differentiation. And anyone who's in the software knows if you have good chips powering apps, they go faster. And if the chips are good, they're less expensive. And that's the innovation. We saw that RHEL now supports graviton instances. Tell us more about the Red Hat strategy with graviton and Arms specifically, has that impact your (indistinct) development, and what does it mean for customers? >> Sure. Yeah, it's pretty, it's a pretty fascinating area for me. As I said, I've been a Red Hat for 21 years and my job is actually looking at new markets and new technologies now for Red Hat and work with our largest partners. So, I've been tracking the Arm dynamics for awhile, and we've been working with AWS for over two years, supporting graviton. And it's, I'm seeing more enthusiasm now in terms of developers and, especially for very horizontal, large scale applications. And we're excited to be working with AWS directly on it. And I think it's going to be a fascinating next two years on Arm, personally. >> Many of the specialized processors for training and instances, all that stuff, can be applied to web services and automation like cloud native services, right? Is that, it sounds like a good direction. Take us through that. >> John, on our partnership with Red Hat, we are continuing to iterate, as Mike mentioned, the stuff that we've done around graviton, both the last two years is pretty incredible. And the pace at which we are innovating is improving. Around the (indistinct) and the inferential instances, we are continuing to work with Red Hat and, you know, the support for RHEL should come shortly, very soon. >> Well, my prediction is that the graviton success was going to be applied to every single category. You can get that kind of innovation with this on the software side, just really kind of just, that's the magical, that's the, that's the proven form of software, right? We've been there. Good software powering with some great performance. Manu, Mike, thank you for coming on and sharing the, the news and the partnership update. Congratulations on the partnership. Really good. Thank you. >> Excellent John. Incredible (indistinct). >> Yeah, this is the future software as we see, it's all coming together. Here on theCube, we're bringing all the action, software being powered by chips, is theCube coverage of AWS re:invent 2021. I'm John Furrier, your host. Thanks for watching. (upbeat music)
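To make the Graviton thread above concrete, launching RHEL on an Arm-based EC2 instance is an ordinary API call. The sketch below uses boto3 and is purely illustrative, not from the interview; the AMI ID is a placeholder (the current RHEL arm64 AMI varies by region and version), and m6g.large is just one Graviton2 instance type.

```python
# Sketch: launch a Graviton (arm64) EC2 instance from a RHEL arm64 AMI with boto3.
# The AMI ID is a placeholder; look up the RHEL arm64 AMI for your region first.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder RHEL arm64 AMI
    InstanceType="m6g.large",          # Graviton2-based instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "rhel-on-graviton-demo"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```

The same call shape works for Intel or AMD instance types; the only Graviton-specific choices are an arm64 AMI and an arm64 instance family.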
2021 015 Laura Dubois
(gentle music) >> Welcome to this Cube Conversation, I'm Lisa Martin. Laura Dubois joins me next, VP of product management at Dell Technologies, Laura, welcome back to the program. >> Yeah, thank you so much Lisa, it's just fantastic to be here and talking about data protection now that we're coming out of COVID, it's just wonderful to be here, thank you so much. >> Isn't it so refreshing. So, you're going to provide some updates on Dell's data protection software, some of the innovation, how you're working with customers and prospects. So let's go ahead and dig right in, let's talk about some of the innovation and the enhancements that Dell is making to its data protection suite of software and also how customers are influencing that. >> Yeah, so it's a great question Lisa and you're right. We have driven a lot of innovation and enhancements in our data protection suite. And let me just level a second. So data protection suite, is a solution that is deployed by really tens of thousands of customers. And we continue to innovate and enhance that data protection suite. Data protection suite is comprised primarily of three main data protection software capabilities. So, longstanding capabilities and customer adoption of Avamar, which continues to be a central capability on our portfolio. The second one is Networker. So Networker is also an enterprise grade, highly scalable and performance data protection solution. And then a couple of years ago, we launched a new data protection capability called power protect data manager. So, all three of these capabilities, really the foundation of our data protection suite. And as I said, enterprises around the world rely on these three sets of capabilities to protect their data, regardless of wherever it resides. And it's really central now more than ever in the face of increasing security, risks and compliance and the need to be able to have an always kind of available environment that customers rely on the capabilities and data protection suite to really make sure their enterprises resilient. >> Absolutely, and make sure that that data is recoverable if anything happens, you mentioned cybersecurity. We'll get into that in a second. But so thousands of Avamar and Networker customers, what are some of the key workloads and data that these customers are protecting with these technologies? >> Yeah, I mean, so, actually tens of thousands. >> Tens of thousands. >> Tens of thousands of customers that rely on data protection suite. And it really, I think the strength and advantage of our portfolio is its breadth, breadth in terms of client operating environments, in terms of applications and databases, in terms of workloads and specifically use cases. So I mean, the breadth that we offer is unparalleled, pretty much whether Windows, Linux, OpenVMS, NetWare, kind of going back in time a long tail of kind of operating environments and then databases, right. So everything from SQL and Oracle and Sybase and DB2 to new types of databases, like the NoSQL or content store and key value store types of NoSQL schemas, if you will. And so, and then lastly is the word they use cases, right? So being able to protect data, whether that be data that's in a data center, out in remote or branch locations or data that's out in the cloud, right. And of course, increasingly customers are placing their data in a variety of locations; on Edge, on core data centers and in cloud environments. 
And we actually have over six exabytes of capacity under management, across public cloud environments. So pretty extensive deployment of our data protection suite in public clouds, you know, the leading hyperscalers, cloud environments and premises as well. >> So let's talk a little bit about the customer influence 'cause obviously there's a very cooperative relationship that Dell has with its customers that help you achieve things. Like, for example, I saw that according to IDC, Dell Technologies is number one in data protection, appliances, and software, leader in the Gartner Magic Quadrant for data center backup and recovery for over 20 years now. Talk to us a little bit more about that symbiotic customer, Dell relationship. >> Yeah, so it's a great question. We see our customers as strategic partners, and we really want to understand their business, their requirements. We engage on a quarterly basis with customers and partners in advisory councils. And then of course, we are always engaging with customers outside of those cycles on a kind of a one-on-one basis. And so we are really driving the innovation and the backlogs and the roadmap for data protection suite based upon customer feedback. And approximately 79% of the fortune 100 customers, our Dell data, Dell Technologies data protection customers. Now that's not to say that that's our only customer base. We have customers in commercial accounts, in mid-market in federal agencies, but, you know, we take our customer relationships really, really seriously, and we engage with them on a regular basis, both in a group forum to provide feedback as well as in a one-on-one basis. And we're building our roadmaps and our product release is based on feedback from customers, and again, know large customer base that we take very seriously. >> Right to the customer listening obviously it is critical for Dell. So you talked a little bit about what that cycle looks like in terms of quarterly meetings and then those individual meetings. What are some of the enhancements and advancements that customers have actually influenced? >> Yeah, so we, I mean, we, I think continuing to provide simplicity and ease of use is a key element of our portfolio and our strategy, right? So continuing to modernize and update the software in terms of workflows, in terms of, you know, common experiences also increasingly customers want to automate their data protection process. So really taking an API-first strategy for how we deliver capabilities to customers, continuing to expand our client database, hypervisor environments, continue to extend out our cloud support, you know, things like protection of cloud native applications with increasingly customers containerizing and building scale-out applications. We want to be able to protect Kubernetes environment. So that's kind of an area of focus for us. Another area of focus for us is going deeper with our key strategic partners, whether that'd be a cloud partner or a hypervisor partner. And then of course, customers, in fact, one of the top three things that we consistently hear from these councils that we do is the criticality of security, security and our data protection environment but the criticality of being able to be resilient from, and in the event of a cyber attack to be able to resilient recover from that cyber attack. So that is an area where we continue to make innovations and investments in the data protection suite as well. >> And that's so critical. 
One of the things that we saw in the last year, 15 months plus, Laura, is this massive rise in ransomware. It's now a household word, the Colonial Pipeline for example, the meat packing plant, it's now many businesses knowing it's not if we get attacked, but when. So having the ability to be resilient and recover that data is table stakes for, I imagine, a business in any organization. I want to understand a little bit more. So you talked about tens of thousands of customers using Avamar and Networker. So now they have the capability of also expanding and using more of the suite. Talk to me a little bit about that. >> Yeah, so, I mean, I think it starts with the customer environment and what workloads and use cases they have. And because of the breadth of capabilities in the data protection suite, we really optimize the solution based upon their needs, right. So if they have a large portfolio of applications that they need to maintain but they're also building applications or systems for the future, we have a solution there. If they have a single hypervisor strategy or a multiple hypervisor strategy, we have a strategy there. If they have data that's on-premise and across a range of public clouds, one large customer we have has a kind of three-plus-one strategy around cloud. So they're leveraging three different public cloud IaaS environments, and then they also have their on-premise cloud environment. So, you know, we, it really starts with the customer workload and the data, and where it lives; whether that be out in an Edge location, in a remote or branch office, on an end point somewhere they need to protect, whether it be in a core data center or multiple data centers, or whether it be in the cloud. That's how we think about optimizing the solution for the customers. >> Curious if you can give me any examples of customers, maybe by industry, that have been with Dell for a long time with Avamar and Networker and how they've expanded, being able to pick, as you say, as their environment grows, and we've got now this blur of, right, it's now work from anywhere, data centers, Edge. Talk to me about some customer examples that you think really articulate the value of what Dell is delivering. >> Yeah, so, I mean, I think one customer in the financial services sector comes to mind. They have a large amount of unstructured data that they need to protect, you know, petabytes, petabytes and petabytes of data they need to protect. And so I think that's one customer that comes to mind is someone we've been with for a long time, been partnering with for a long time. Another customer I mentioned in the, it was a kind of a three-letter software company that is a really strategic partner for us with on-premise, in the cloud. You know, healthcare is a big and important sector for Dell. We have integrations into kind of leading healthcare applications. So that's another big one, whether they be a healthcare provider or a healthcare insurance company, and I had a fourth example, but it's escaping my mind right now. But I would say going back to the cyber discussion, I mean, one thing that we, where we see really customers looking for guidance from us around cyber recovery and cyber resilience is in what the, you know, of course President Biden just released this executive order with his mandate for ensuring that the federal agencies, but also companies in critical sectors, be able to ensure resilience from cyber attacks. 
So that's companies in financial services, that's companies in healthcare, energy, oil and gas, transportation, right. Obviously in companies and industries that are critical to our economy and our infrastructure. And so that has been an area where we've seen, recently in the last, I would say, 12 months, increased engagement, you mentioned Colonial Pipeline, for example. So those are some salient highlights I think of in terms of, you know, kind of key customers. But pretty much every sector. I mean, the U.S. government, all of the agencies, whether they be civilian or DOD, are key kind of engagement partners of ours. >> Yeah, and as you said in the last year, what a year it's been. But really a business in every industry has got to be able to be resilient and recover when something happens. Can you talk a little bit about some of the specific enhancements that you guys have made to the suite? >> Yeah, sure. So, you know, we continue to enhance our hypervisor capabilities. So we continue to enhance not only the core VMware or hypervisor capabilities but we continue to enhance some of the extensions or plugins that we have for those. So whether that be things like our vRealize plugin or a vCloud Director plugin for, say, VMware. So that's kind of a big focus for us. Continuing to enhance capabilities around leveraging the cloud for long-term retention. So that's another kind of enhancement area for us. But cloud in general is an area where we continue to drive more and more enhancement. Improving performance in cloud environments for a variety of use cases, whether that be DR to the cloud, backup or replication to the cloud, or backing up workloads that are already in the cloud. Those are key use cases for us, as well as the archive to cloud use cases. So those are just some examples of areas where we've driven enhancements and you can expect to see more. You know, we have a six month release cadence for Avamar and Networker, and we continue with that momentum. And at the end of this month, we have the next major release of our data protection suite. And then six months later, we'll have the next update and so on and so forth. And we've been doing that actually for the last three to four years. This is a six month release cadence for data protection suite. We continue with that momentum. And like I said, simplicity and modernity, APIs and automation, extending our workloads and hypervisors and use cases. And then cloud is a big focus area as well, as well as security and cyber resilience. >> Right, and so a lot of flexibility in choice for Avamar and Networker customers. As things change the world continues to pivot and we know it's absolutely essential to be able to recover that data. You mentioned 70, I think 79% of the Fortune 100 are using Dell technologies for data protection software. That's probably something that's only going to continue to grow. Lots of stuff coming up. As you mention, what are some of the things that you're personally excited about as the world starts to open up and you get to actually go out and engage with customers? >> 
So there's a lot of interest in questions around, how do they protect some of these new types of workloads, whether they're deployed on premise or in the public cloud. So that continues to be an area where we continue to engage with customers. I'm also really personally excited about the extensions that we're doing in our cyber recovery capabilities so as you can expect to hear more about some of those in the next 12 months, because we're really seeing that as a key driver to kind of increased policies around and implementations around data protection is because of these, you know, the needs to be able to be resilient from cyber attacks. I would say we're also doing some very interesting integrations with VMware. We're going to have some first and only announcements around VMware and managing protection for VMware, you know, VM environments. So you can look forward to hearing more about that. And we have customers that have deployed our data protection solutions at scale. One customer has 150,000 clients who they're protecting with our data protection offerings, 150,000. And so we're continuing to improve the, and enhance the products to meet those kinds of scale requirements. And I'm excited by the fact that we've had this long standing relationship with this one particular customer and continue to help in flowing up where their needs go. >> And that's something that even a great job of talking about is just not just a longstanding relationships but really that dedication that Dell has to innovating with its customers. Laura, thank you for sharing some of the updates of what's new, what you're continuing to do with customers, and what you're looking forward to in the future. It sounds like we might hear some news around the VMworld timeframe. >> Yes, I think so. >> All right, Laura, thank you so much for joining me today. Appreciate your time. >> Yeah, it's been great to be here. Thanks so much. >> Excellent from Laura Dubois and Lisa Martin, you're watching this Cube Conversation. (soft music)
Mirko Novakovic, Instana - An IBM Company | IBM Think 2021
>> Presenter: From around the globe, it's theCUBE with digital coverage of IBM Think 2021 brought to you by IBM. >> Well, good to have you here on theCUBE. We continue our conversations here as part of the IBM Think initiative. I'm John Walls, your host here on theCUBE, joined today by Mirko Novakovic, who is the co-founder and CEO of Instana, which is an IBM company. It's specialized in enterprise observability for cloud native applications. And Mirko joins us all the way from Germany, near Cologne, Germany. Mirko, good to see you today. How are you doing? >> I'm good. Hi, John. Nice to meet you. >> You bet, yeah. Thank you for taking the time today. First off, let's just give some definitions here. Enterprise observability. What is that? What are we talking about here? >> Yes, observability is basically the next generation of monitoring, which means it provides data from a system, from an application to the outside, so that people from the outside can basically judge what's happening inside of an application. So think about you're a big e-commerce provider and you have your shop application and it doesn't work. Observability gives you the ability to really deep dive and see all the relevant metrics, logs and application flows to understand why something is not working as you would expect. >> So if I'm just listening to this, I think, okay, I'm monitoring my applications already, right. I've got APM and so forth, and I kind of know what things are going on. What's happening, where the hiccups are, all that. How, what is the enhancement here then in terms of observability? It sounds like you're kind of taking APM to a much higher level. >> Absolutely. I mean that's essentially how you can think about it. And we see three things that really make us, Instana, and enterprise observability different. And number one is automation. So the way we gather this information is fully automated. So you don't have to configure anything. We get inside of your code. We analyze the flow of the application, we get the errors, the logs and the metrics fully automatically. And the second is getting context. One of the problems with monitoring is if you have all these monitoring data silos, so you have metrics on the one side, logs in a different tool. What we built is a real context. So we tie those data automatically together so that you get real information out of all the data. And the third is that we provide actions. So basically we use AI to figure out what the problem is and then automate things. Is it a problem resolution, restarting a container or resizing your cloud? That's what we suggest automatically out of all the context and data that we gathered. >> So you're talking about automation, context, intelligence, you'd combine all of that into one big bundle here then basically, that's a big bundle, right? It's like a giant vacuum, if you will, you're ingesting all this information. You're looking for, you know, performance metrics. So you're trying to find problems. What's the complexity of tying all that together instead of keeping those functions separate, you know, what's the benefit to having all that kind of under one roof then? >> Yeah. So from the complexity point of view, for the end customer it's really easy because we do it automated. 
For us as a vendor building this it's super complex but we wanted to make it very easy for the user and I would say the benefit is that you get, we call it the meantime to repair like the time from a problem to resolve the problem gets significantly reduced because normally you have to do that correlation of data manually. And now with that context you get this automated by a machine and we even suggest you these intelligent actions to fix the problem. >> So, I'm sorry, go ahead. >> Yeah. And by the way, one of the things why IBM acquired us and why we are so excited working together with IBM is the combination of that functionality with something like Watson AIOps, because as I said we are suggesting an action and the next step is really fully automating this action with something like Watson AIOps and the automation functionality that IBM has. So that the end user not only gets the information what to do the machine even does and fix the problem automatically. >> Well, and I'm wondering too, just about the kind of the volume that we're dealing with these days in terms of software capabilities and data. You've got obviously a lot more inputs, right? A lot more interaction going on a lot more capabilities. You've got apps they're kind of broken down into microservices now. So, I mean, you've got you've got a lot more action, basically, right? You got a lot more going on and what's the challenge to not only keeping up with that but also building for the future for building for different kinds of capabilities and different kinds of interactions that maybe we can't even predict right now. >> Absolutely. Yeah. So I'm 20 years in that space. When I started, as you said it was a very simple system, right? You had an application server like WebSphere maybe a DB2 database so that was your application. It's like today applications are broken down into hundreds of little services that communicate with each other. And you can imagine if something breaks down in a system where you have two or three components it's maybe not easy, but it's handled by a human to figure out what the problem is. If you have a thousand pieces that are somehow interconnected and something is broken it is really hard to figure that out. And that's essentially the problem that we had to solve with the contacts, with the automation, with AI to figure out how all these things are tied together and then analyze automatically for the user where issues are happening. And by the way, that's also when you look into the future I think things will get more and more complicated. You can see now that people break down from microservice into functions, we get more server less. We get more into a hybrid cloud environment where you operate on premise and in multiple clouds. So things get more complex not less complex from an architectural perspective. >> You bring up clouds too. Is this agnostic, I mean, or do you work with an exclusive cloud provider or are you open for business basically? >> We are open for business but we have to support the different cloud technologies. So we support all the big public cloud vendors from IBM to Amazon, Google, Microsoft. But on the other hand, we see with enterprises maybe there's 10, 20% of the workload in the public cloud but the rest is still on premises. And there's also a lot of legacy. So you have to bring all this together in one view and in one context, and that's one of the things we do. 
We not only support the modern cloud native applications we also support the legacy on premise world so that we can bring that together. And that helps customer to migrate, right? Because if they understand the workload in the on-premise world it's easier to transform that into a cloud native world but it also gives an end to end view from the end user to we always say from mobile to mainframe, right? From a mobile app down to the mainframe application we can give you an end to end view. >> Yeah, you talk about legacy. In this case, you may be cloud services that people use but they're, but that, you know a lot of these legacy applications, right, too that are running, that are they're still very useful and still highly functional but at some point they're not going to be so would it be easier for you or what do you do in terms of talking with your clients in terms of what do they leave behind? What are they bringing with them? How, what kind of transition timeframe should they be thinking about? Because I don't think you want to be supporting forever, right? I mean, you want to be evolving into newer more efficient services and solutions. And so you've got to bring them along too, I would think. Right? >> Yeah. But to be really honest I think there are two ways of thinking. One is as a vendor you would love to support only the new technologies and don't have to support all the legacy technologies. But on the other hand, the reality is especially in bigger enterprises you will find everything in every word. And so if you want to give a holistic D view into the application stacks you have to support also the older legacy parts because they are part of the business critical systems of the customer. And yes, we suggest to upgrade and go into a cloud native world, but being realistic I think for the next decade we will have to live with a world where you have legacy and new things working together. I think that's just the reality. And in 10 years, what is new today is legacy then, right? >> John: Right exactly. >> So we will always live in a kind of hybrid world between legacy and new things. >> Yeah, you've got this technological continuum going on right? That you know, what's new and shiny today's is going to be, you know old hat in five years. But that's the beauty of it all obviously >> Yes. >> Now talk about AIOps. I mean, go into that relationship a little bit if you would , I mean eventually what is observability set you up to do in terms of your artificial intelligence operations and what are the capabilities now that you're providing in terms of the observability solutions that AIOps can benefit from? >> Yeah, so the way I think about these two categories is that observability is the system of record. That's where all the data is collected and put into context. So that's what we do as Instana is we take all the data metrics, logs, traces, profiles and put it into our system of record by the way in very high granularity, it's very important. So we do not sample, we have second granularity metrics. So very high quality data in that system of record where AIOps is the system of action. This is the system where it takes the data that we have, applies machine learning, statistical analytics et cetera, on it, to figure out, for example root cause of problems or even predict problems in the future, and then suggests actions, right? 
The next thing that AI does is it suggests or automates an action that you need to do to, for example, scale up the system, scale down the system, scaling down because you want to save costs, for example. These are all things that are happening in the system of action, which is the AIOps space. >> When I think about what you're talking about in terms of observability, I think, well, who needs it? Everybody is probably the answer to that. Can you give us maybe just a couple of examples of some clients that you've worked with in terms of particular needs that they had, and then how you applied your observability platform to provide them with these kinds of solutions? >> Yeah. I remember a big e-commerce vendor in the US approaching us last October. They were approaching the black Friday, right? Where they sell a lot of goods and they had performance issues, but they only had issues with certain types of customers, and with their existing APM solution, they couldn't figure out where the problem was, because existing solutions sample, which means if you have a thousand customers you only see one of them as an example, because the other 999 are not in your sample. And so they used us because we don't sample. With us, if they have more than a billion requests today, you see every one of the 1 billion requests, and after a few days they had all the problems figured out. And that's what, that was one of the things that we really do differently, is providing all the needed data, not sampling, and then giving the context around the problem so that you can solve issues like performance issues on your e-commerce system easily. So they switched, and you can imagine switching a system before black Friday, you only do that if it's really needed. So they were really under pressure and so they switched their APM tool to Instana to be able to fulfill the big demand they have on these black Friday days. >> All right, before I let you go, you were just saying they had a high degree of confidence. How were you sweating that one out? Because that was not a small thing at all I would assume. >> Yes. It's not a small thing and to be honest, also it's very hard to predict the traffic on black Fridays. Right? And in this case, I remember our SRE team. They had almost 20 times the traffic of a normal day during that black Friday. And because we don't sample, we need to make sure that we can handle and process all these traces, but we did, we did pretty well. So I have a high confidence in our platform that we can really handle big amounts of data. We have some of the biggest companies in the world. The biggest companies in the world, they use our tool to monitor billions of requests. So I think we have proven that it works. >> Yeah, I would say you're smiling too about it. So I think it, obviously it did work. >> It did work, but yeah, I'm sweating still. Yeah. (laughs) >> Never let them see you sweat, Mirko. I think you're very good at that. And obviously very good at enterprise observability. It's an interesting concept. Certainly putting it well into practice. And thanks for the time today to talk about it here as part of IBM Think, to share your company's success story. Thank you, Mirko. >> Thanks for having me, John. >> All right. We've been talking about enterprise observability here. IBM Think, the initiative continues here on theCUBE. I'm John Walls and thank you for joining us. (soft music)
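For readers who want a feel for what providing data from an application to the outside looks like in code, here is a small, generic tracing sketch in Python using OpenTelemetry. It is deliberately not Instana-specific, since, as Mirko notes, Instana's agents instrument applications automatically without code changes, and the service and span names below are made-up placeholders.

```python
# Generic tracing sketch with OpenTelemetry (not Instana's own API; Instana's agent
# typically auto-instruments without code changes). All names are made up.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("shop-checkout")

def checkout(order_id: str) -> None:
    # Each request becomes a trace; every span carries timing and attributes,
    # the raw material an observability backend ties together into context.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge-card"):
            pass  # payment call would go here
        with tracer.start_as_current_span("update-inventory"):
            pass  # database call would go here

checkout("order-42")
```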
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Mirko | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Google | ORGANIZATION | 0.99+ |
Mirko Novakovic | PERSON | 0.99+ |
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John Walls | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
20 years | QUANTITY | 0.99+ |
1 billion requests | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two categories | QUANTITY | 0.99+ |
US | LOCATION | 0.99+ |
10, 20% | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
more than a billion requests | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
SRE | ORGANIZATION | 0.99+ |
second | QUANTITY | 0.99+ |
three things | QUANTITY | 0.99+ |
last October | DATE | 0.99+ |
10 years | QUANTITY | 0.98+ |
two ways | QUANTITY | 0.98+ |
999 | QUANTITY | 0.98+ |
billions of requests | QUANTITY | 0.97+ |
one view | QUANTITY | 0.97+ |
Instana | ORGANIZATION | 0.97+ |
three components | QUANTITY | 0.96+ |
black Friday | EVENT | 0.95+ |
one side | QUANTITY | 0.95+ |
Cologne, Germany | LOCATION | 0.95+ |
hundreds of little services | QUANTITY | 0.95+ |
next decade | DATE | 0.94+ |
black Fridays | EVENT | 0.94+ |
First | QUANTITY | 0.94+ |
WebSphere | TITLE | 0.93+ |
one context | QUANTITY | 0.91+ |
a thousand customers | QUANTITY | 0.88+ |
one roof | QUANTITY | 0.86+ |
almost 20 times | QUANTITY | 0.86+ |
thousand pieces | QUANTITY | 0.84+ |
Instana | LOCATION | 0.82+ |
Watson AIOps | TITLE | 0.82+ |
things | QUANTITY | 0.82+ |
one big bundle | QUANTITY | 0.81+ |
AIOps | ORGANIZATION | 0.78+ |
theCUBE | ORGANIZATION | 0.7+ |
second granularity | QUANTITY | 0.7+ |
number one | QUANTITY | 0.67+ |
DB2 | TITLE | 0.53+ |
Think | COMMERCIAL_ITEM | 0.52+ |
AIOps | TITLE | 0.43+ |
2021 | DATE | 0.39+ |
think2021 | EVENT | 0.33+ |
Juan Loaiza, Oracle | CUBE Conversation 2021
(upbeat music) >> The innovation around databases has exploded over the last few years. Not only do organizations continue to rely on database technology to manage their most mission critical business data. But new use cases have emerged that process and analyze unstructured data. They share data at scale, protect data, provide greater heterogeneity. New technologies are being injected into the database equation. Not just cloud which has been a huge force in the space, but also AI to drive better insights and automation, blockchain to protect data and provide better auditability, new file formats to expand the utility of database technology and more. Debates are bound as to who's the best number one, the fastest, the most cloudy, the least expensive, et cetera. But there is no debate, when it comes to leadership and mission critical database technologies. That status goes to Oracle. And with me to talk about the developments of database technology in the market is cube alum Juan Loaiza, who's executive vice president of Mission Critical Database Technology at Oracle. Juan always great to see you, thanks for making some time. >> Thanks, great to see you Dave, always a pleasure to join you. >> Yeah and I hope you have some time because they've got a lot of questions for you. (chuckles) I want to start with- >> All right I love questions. >> Good I want to start and we'll go deep if you're up for it. I want to start with the GoldenGate announcement. We're covering that recent announcement, the service on OCI. GoldenGate it's part of this your super high availability capabilities that Oracle is so well known for. What do we need to know about the new service and what it brings for your customers? >> Yeah, so first of all, GoldenGate is all about creating real time data throughout an enterprise. So it does replication, data integration, moving data into analytic workloads, streaming analytics of data, migrating of databases and making databases highly available. All those are use cases for real-time data movement. And GoldenGate is really the leading product in the market, has been for many years. We have about 80% of the global fortune 500 running GoldenGate today, in addition to thousands and thousands of smaller customers. So it is the premier data integration, replication, high availability, anything involving moving data in real time, GoldenGate is the premier platform. And so we've had that available as a product for many years. And what we just recently done is we've released it as a cloud service, as a fully managed and automated cloud service. So that's kind of the big new thing that's happening right now. >> So is that what's unique about this, is it's now a service, or there are other attributes that are unique to Oracle? >> Yeah, so the service is kind of the most basic part to it. But the big thing about the service is it makes this product dramatically easier to use. So traditionally the data integration, replication products, although very powerful, also are very complex to use. And one of the big benefits of the service is we've made a dramatically simpler. So not just super experts can use it, but anyone can use it. And also as part of releasing it as a cloud service, we've done a number of unique things including making it completely elastically scalable, pay per use and dynamic scalability. So just in time, real time scalability. So as your workload increases we automatically increase the throughput of GoldenGate. 
So previously you had to figure all this stuff out ahead of time. It was very static. All these products have been very static. Now it's completely dynamic, a native cloud product, and that's very unique in the market. >> So, I mean, from an availability standpoint, I guess IBM sort of has this with Db2, but it doesn't offer the heterogeneity that GoldenGate has. But what about, like, AWS, Microsoft, Google, do they provide services like GoldenGate? >> There's really nothing like the GoldenGate service. When you're talking about people like Google and Azure, they really have do-it-yourself third-party products. So there'll be a third-party data integration replication product, and it's kind of available in their marketplace, and customers have to do everything. So it's basically a put-it-together-yourself kit. And it's very complicated. I mean these data integration products have always been complicated, and they're even more complicated in the cloud if you have to do everything yourself. Amazon has a product, but it's really focused on basic data migration to their cloud. It doesn't have the same capabilities as Oracle has. It doesn't have the elasticity, it doesn't have pay per use, so it's really not very cloudy at all. >> Well, so I mean the biggest customers have always glommed onto GoldenGate because they need that super ultra high availability. And they're capable of doing it themselves. So tell us how this compares to DIY. >> Yeah, so you mentioned the big customers, and you're absolutely right. The big customers have been big users of GoldenGate. Smaller customers are users as well, however, it's been challenging because it's complicated. Data integration has been a complicated area of data management, maybe the most complicated. And so one of the things this does is that it expands the market. It makes it dramatically easier for smaller companies that don't have as many IT resources to use the product. Also, smaller companies obviously don't have as much data as the really large giants. So they don't have as much data throughput. So traditionally the price has been high for a small customer. But now, with pay per use in the cloud, it eliminates the two big blockers for smaller enterprises, which are the costs, the high fixed costs, and the complexity of the products. Which, by the way, is helpful for everyone also. And big customers, they've also struggled with elasticity. So sometimes a huge batch job will kick in, the rate of change increases, and suddenly the replication product doesn't keep up, because on-prem products aren't really very elastic. So it helps large customers as well. Everybody loves these things, but the elasticity, pay per use, on-demand nature of it is really helpful for everybody. >> Well, and because it's delivered as a service, I would imagine for the large customers that you're giving them more granularity, so they can apply it maybe for a single application, as opposed to trying to have to justify it across a whole suite. And whereas the cost was higher before, now if you're allowing me to pay by the drink, is that right? I could just sort of apply it at a more granular level. >> Yes, that's exactly right. It's really pay per use. You can use it as much or as little as you want. You just pay for what you use. And as I mentioned, it's not a static payment either. So if you have a lot of data loads going on right now, you pay a little more; at night when you have less going on, you pay a lot less. So you're really just paying for what you use.
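As a rough illustration of the "grow it automatically, shrink it automatically" idea, the sketch below shows one way a managed replication service could pick capacity from the observed replication lag and meter billing by what is actually allocated. This is a hypothetical control loop, not the GoldenGate service's real API or scaling policy; the thresholds, the unit sizes, and the ReplicationStatus type are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ReplicationStatus:
    lag_seconds: float     # how far behind the target is on the change stream
    capacity_units: int    # processing units currently allocated (what is billed)

def autoscale(status: ReplicationStatus,
              high_lag: float = 30.0,
              low_lag: float = 2.0,
              min_units: int = 1,
              max_units: int = 16) -> int:
    """Pick the capacity for the next interval from the current replication lag."""
    if status.lag_seconds > high_lag and status.capacity_units < max_units:
        return min(max_units, status.capacity_units * 2)   # batch job kicked in: scale up
    if status.lag_seconds < low_lag and status.capacity_units > min_units:
        return max(min_units, status.capacity_units // 2)  # quiet period: scale down
    return status.capacity_units                           # steady state

# Hypothetical usage: billing accumulates only the units actually allocated.
status = ReplicationStatus(lag_seconds=45.0, capacity_units=2)
billed_unit_minutes = 0
for _ in range(3):                 # three one-minute control intervals
    status.capacity_units = autoscale(status)
    billed_unit_minutes += status.capacity_units
    status.lag_seconds /= 2        # pretend the backlog drains as capacity rises
print(status.capacity_units, billed_unit_minutes)
```

The design point is that the consumer never pre-sizes anything: capacity follows the workload, and the meter follows the capacity.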
It's very easy to set it up for a single application or all your applications. >> How about for things like continuous replication or real-time analytics, is the service designed to support that? >> Yes, so that's the heritage of GoldenGate. GoldenGate has been around for decades and we've worked with some of the most demanding customers in the world on exactly those things. So real time data all over the enterprise is really the goal that everyone wants. Real-time data from OLTP into analytics, from one system to another system, and for availability. That is the key benefit of GoldenGate. And that's the key technology that we've been working on for decades. And now we have it very easy to use in the cloud. >> Well what would be the overheads associated with that? I mean, for instance, you've got it, you need a second copy. You need the other database copies, and where does it make sense to incur that overhead? Obviously the super high availability apps that can exploit real time. Think like fraud detection is the obvious one, but what else can you add there? >> Well, GoldenGate itself doesn't require any extra copies of anything. However, it does enable customers that want to create, for example, an analytics system, a data warehouse, to feed data from all their systems in real time into that data warehouse, for example. And it also enables the real-time capabilities that enable high availability, and you can get high availability within the cloud with it, between on premises and the cloud, between clouds. Also, you can migrate data. Migrate databases without having to take them down. So all these capabilities are available now and they're very easy to use. >> Okay. Thanks for that clarification. What about autonomous? Is that on the roadmap or what are you thinking? >> Yeah, GoldenGate is essentially an autonomous service. And it works with the Oracle Autonomous Database. So you can both use it as a source for data and as a sink for data, as a place you're writing data. So for example, you can have an autonomous OLTP database that's replicating to another autonomous OLTP database in real time. And both of them are replicating changes to the autonomous data warehouse. But it doesn't all have to be autonomous. You can have any mix of autonomous, not autonomous, on-prem, in cloud, in anybody's cloud. So that's the beauty of GoldenGate, it's extremely flexible. >> Well, you mentioned the elasticity a couple of times. I mean, why is that so important, that GoldenGate on OCI gives you that elastic, whatever, billing, the auto-scaling; talk to me in terms of what that does for the customer. >> Yeah, there's really two big benefits. One benefit is it's very difficult to predict workloads. So normally on an on-prem configuration, you have to say, okay, what is the max possible workload that's going to happen here? And then you have to buy the product, configure the product, get hardware, basically size everything for that. And then if you guess wrong, you're either spending too much because you oversized it, or you have a big data real-time problem. The data can't keep up with the real-time because you've undersized the configuration. So that's hard to do. So the beauty of elasticity and the dynamic elasticity, the pay per use, is you don't have to figure all this stuff out. So if you have more workload, we grow it automatically. If you have less workload, we shrink it automatically. And you don't have to guess ahead of time. You don't have to price ahead of time.
So you, you just use what, what you use, right? You don't pay for something that you're not using. So it's a very big change in the whole model of how you use these data, replication, integration, high availability technologies. >> Well, I think I'm correct to say GoldenGate primarily has been for big companies. You mentioned that small companies can now take advantage of this service. We talked about the granularity. And I could definitely see, can they afford it? I guess this is part one and then, and then the other part of the question is, I can see GoldenGate really satisfying your on-prem customers and them taking advantage of it, but do you think this will attract new customers beyond your core? So two part question there. >> Yeah, absolutely. So small customers have been challenged by the complexity of data integration. And that's one of the great things about the cloud services is it's dramatically simpler. So Oracle manages everything. Oracle does the patching, the upgrades. Oracle does the monitoring. It takes care of the high availability of the product. So all that management, complexity, all the configuration set up, everything like that, that's all automated, that's owned by Oracle. So small customers were always challenged by the complexity of product, along with everything else that they had to do. And then the other of course benefit is small customers were challenged by the large fixed price. So now with pay per use, they pay only for what they use. It's really usable by easily by small customers also. So it really expands the market and makes it more broadly applicable. >> So kind of same answer for beyond your existing customer base, beyond the on-prem that that's kind of... You answered >> Right. >> my two part question with one answer, so that was pretty efficient, (chuckles) pun intended. So the bottom line for me and squinting through this announcement is you've got the heterogeneity piece with GoldenGate OCI and as such it's going to give you the capability to create what I'll call an architecturally coherent decentralized data mesh. Big on this data mesh these days, could have decentralized data. With the proviso then I going to be able to connect to OCI, which of course you can do with Azure or I guess you could bring cloud to a customer on prem, first of all, is this correct? And can we expect you over time to do this with AWS or other cloud providers? >> It can move data from Amazon or to Amazon. It can actually handle, any data wherever it lives. So, yeah, it's very flexible and it's really just the automation of all the management, that we're running in our public cloud But the data can be from anywhere to anywhere. >> Cool, all right, let's switch topics here a little bit. Just talk about some of the things that you've been working on, some of the innovation. I sat through your blockchain announcement, it was very cool. Of course I love anything blockchain and crypto, NFTs are exploding, so that Coinbase IPO. It's just really an exciting time out there. I think a lot of people don't really appreciate the innovation that's occurring. So you've been making a lot of big announcements last several months. You've been taking your R and D bringing it into product, So that's great, we love to always see that because that's where really the rubber meets the road. Just for the database side of the house, you announced 21c the next generation of the self-driving data warehouse, ADW, blockchain tables, now you got GoldenGate running on OCI. 
Take us inside the development organizations. What are the underlying drivers other than your boss. >> When we talk about our autonomous database, it is the mission critical Oracle database, but it's dramatically easier to do. So Oracle does all the management all on automation, but also we use machine learning to tune, and to make it highly available, and to make it highly secure. So that that's been one of our biggest products we've been working on for many years. And recently we enhanced our autonomous data warehouse taking it beyond being a data warehouse to complete a data analytics platform. So it includes things like ETL. So we built ETL into the autonomous data warehouse. We're building our GoldenGate replication into autonomous data warehousing. We built machine learning directly natively into the database. So now, if someone wants to run some machine learning they just run a machine learning queries. They no longer have to stand up a separate system. So a big move that we've been making is, taking it beyond just a database to a full analytic platform. And this goes beyond what anyone else in the industry is doing, because we have a lot more technology. So for example, the ML machine learning directly in the database, the ETL directly in the database. The data replication is directly in the database. All these things are very unique to Oracle. And they dramatically simplify for customers how they manage data. In addition to that, we've also been working in our database product. We've enhanced it tremendously. So our big goal there is to provide what we call it converged database. So everything you need, all the data types. Whether it's JSON, relational, spatial, graph, all that different kinds of data types, all the different kinds of workloads. Analytics, OTP, things like blockchain, microservices events, all built into the Oracle database, making it dramatically easier to both develop and deploy new applications. So those are some of our big, big goals. Make it simple, make it integrated. Take the complexity, we'll take on the complexity. So developers and customers find it easy to develop an easy to use. And we've made huge strides in all these areas in the last couple of years. >> That's awesome. I wonder if we could land on blockchain again for now it's kind of jogging, but sort of on crypto. Though you're not about crypto but you are about applying blockchain. Maybe you can help our audience understand what are some of the real use cases where blockchain tech can be used with Oracle database. >> Yeah, so that's a very interesting topic. As you mentioned, blockchain is very currently, we see a lot of cryptocurrencies. I distributed applications for blockchain. So in general, in the past, we've had two worlds. We've had the enterprise data management world and we've had the blockchain world. And these are very distinct, right? And on the blockchain side the applications have mostly centered around, distributed multi-party applications, right? So where you have multiple parties that all want to reach consensus and then that consensus is stored in a blockchain. So that's kind of been the focus of blockchain. And what we've done is very innovative. We're the first company to ever do this. Is we've taken the core architecture, ideas. And really a lot of it has to do with the cryptography of blockchain. And we've built, we've engineered that natively into the mainstream Oracle database. So now in mainstream Oracle database, we have blockchain technology built in. 
And it's very dramatically simpler to use. And the use cases, you asked about the use cases; that's what we've done. And it's taken us about five years to do this. Now it's been released into the market in our mainstream 19c Oracle database. So the use case is different from the conventional blockchain use case, which I mentioned was really multi-party consensus based apps. We're trying to make blockchain useful for mainstream enterprise and government applications. So any kind of mainstream government application, or enterprise application. And that idea of blockchain, the core concept of blockchain, is it addresses a different kind of security problem. So when you look at conventional security, it's really trying to keep people out. So we have things like firewalls, passwords, network encryption, data encryption. It's all about keeping bad people out of the data. And there's really two big problems that it doesn't address well. One problem is that there's always new security exploits being published. So you have hackers out there that are working overtime. Sometimes they're nation-states that are trying to attack data providers. And every week, every month there's a new security exploit that's discovered, and this happens all the time. So that's one big problem. So we're building up these elaborate walls of protection around our core data assets, and in the meantime we have basically barbarians attacking on every side. (chuckles) And every once in a while, they get over the walls, and this is just what's happening. So that's one big problem. And the second big problem is illicit changes made by people with credentials. So sometimes you have an insider in your, in your company, whether it's an administrator or a sales person, a support person, that has valid credentials, but then uses those valid credentials in some illicit way. They go out and change somebody's data for their own gain. And even more common than that, because there are not that many bad guys inside the company, though they exist, is stolen credentials. So what's happened in many cases is hackers or nation-states will steal, for example, administrative credentials and then use those administrative credentials to come into a system and steal data. So that's the kind of problem that is not well addressed by security mechanisms. So if you have privileges, the security mechanism says, yeah, you're fine. If somebody steals your privileges, again, you get to pass through the gate. And so what we've done with blockchain is we've taken the cryptography elements of blockchain. We call it crypto secure data management. And we've built those into the Oracle database. So think of it this way. If someone actually makes it over the walls that we built and gets into the core data, what we've done with that cryptographic technology of blockchain is we've made that immutable. So you can't change it. So even if you make it over the gate you can't get into the core data assets and change those assets. And that's now built into Oracle database and is super easy to adopt. And I think it's going to really enhance and expand the community of people that can actually use that blockchain technology. >> I mean, that's awesome. I could talk all day about blockchain. And I mean, when you think about hackers, it's all there. They're all about ROI, value over cost. And if you can increase the denominator they're going to go somewhere else, right? Because the value will decline. And this is really the intersection of software engineering and cryptography.
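To see the crypto-secure data management idea in its simplest form, here is a toy sketch of hash chaining. It is not Oracle's blockchain table implementation, just the general technique: each row stores a hash of its own content plus the previous row's hash, so an illicit change to any earlier row, even by someone with valid credentials, breaks verification of everything that follows.

```python
import hashlib
import json

def row_hash(payload: dict, prev_hash: str) -> str:
    material = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(material.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"payload": payload, "hash": row_hash(payload, prev)})

def verify(chain: list) -> bool:
    prev = "genesis"
    for row in chain:
        if row["hash"] != row_hash(row["payload"], prev):
            return False   # this row, or one before it, was tampered with
        prev = row["hash"]
    return True

ledger = []
append(ledger, {"account": "A-100", "amount": 250})
append(ledger, {"account": "B-200", "amount": -250})
print(verify(ledger))                        # True

ledger[0]["payload"]["amount"] = 9_999_999   # an illicit change by an "insider"
print(verify(ledger))                        # False: the chain exposes the edit
```

That is the "math is math" argument in miniature: detecting the tampering requires no secret, only recomputing the hashes.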
And I guess even when you bring crypto currency into it, it's like sort of the game theory. That's really kind of not what you're all about, but the first two pieces are really critical in terms of just next generation of raising that security hurdle. Love it. Now, go ahead. >> Yeah it's a different approach. I was just going to say, it's a different approach. Because think about trying to keep people out with things like passwords and firewalls, you can have basically bugs in that software that allow people to exploit and get in. When you're talking about cryptography, that's math, it's very difficult. I mean, you really can't fight pass math. Once the data is cryptographically protected on a blockchain, a hacker can't really do anything with that. It's just, math is math. There's nothing you can do to break it, right. It's very different from trying to get through some algorithm. That's really trying to keep you out. >> Awesome. I said, I could talk forever on this topic. But let me, let me go into some competitive dynamics. You recently announced Autonomous Data Warehouse. You've got service capabilities that are really trying to appeal to the line of business. I want to get your take on that announcement and specifically how you think it compares name names. I'm going to name names you don't have to. But Snowflake, obviously a lot of momentum in the marketplace. AWS with Redshift is doing very, very well. Obviously there are others. But those are two prominent ones that we've tracked in our data shows that have momentum. How do you compare? >> Yeah, so there's a number of different ways to look at the comparison. So the most simplest and straightforward is there's a lot more functionality in Oracle data warehousing. Oracle has been doing this for decades. We have a lot of built-in functionality. For example, machine learning natively built into the database makes it super easy to use. We have mixed workloads, we have spatial capabilities. We have graph capabilities. We have JSON capabilities. We have a microservice capabilities. We have-- So there's a lot more capabilities. So that's number one. Number two, our cloud service is dramatically more elastic. So with our cloud service all you really do, is you basically move the slide. You say hey, I want more resources, I want less resources. In fact, we'll do that automatically, that's called auto-scaling. In contrast when you look at people like Snowflake or Redshift they want you to stand up a new cluster. Hey you have some more workload on Monday, stand up another cluster and then we'll have two sets of clusters or maybe you'd want a third cluster, maybe you want a fourth cluster. So you end up with all these different systems which is how they scale. They say, hey, I can have multiple sets of servers access the same data. With Oracle you don't have to even think about those things. We auto scale, you get more workload. We just give it more resources. You don't even have to think about that. And then the other thing is we're looking at the whole data management end to end problem. So starting with capturing the data, moving the data in real time, transforming the data, loading the data, running machine learning and analytics on the data. Putting all kinds of data in a single place that you can do analytics on all of it together. And then having very rich screen capabilities for viewing the data, graphing the data, modeling the data, all those things. So it's all integrated. It makes it super easy to use. 
So a much easier, much more functionality, and much more elastic than any of our competitors in the market. >> Interesting, thank you for those comments. I mean, it's a different world, right? I mean, you guys got all the market share, they got all the growth, those things over time, you've been around, you see it, they come together and you fight it out, and may the best approach win. >> So we'll be watching. >> Yeah, also I forgot to mention the obvious thing, which is Oracle runs everywhere. So you can run Oracle on premises. You can run Oracle on the public cloud. You can run what we call cloud at customer. Our competitors really are just public cloud only. So customers don't get the choice of where they want to run their data warehouse. >> Now Juan, a while ago I sat down with David Floyer and Marc Staimer. We reviewed how Gartner looks at the marketplace, and it wasn't a surprise that when it came to operational workloads, Oracle stood out. I mean, that's kind of an understatement relative to the major competitors. Most of our viewers, I don't think, expected for instance Microsoft or AWS to be that far away from you. But at the same time, the database magic quadrant maybe didn't reflect that gap as widely. So there's some dissonance there, but the detailed workload drill-downs were dramatic. And I wonder what your take is on the results. I mean, obviously you're happy with them. You came out leading in virtually every category, or you were one and two, and some of that was even the non-mission critical operational stuff. But what can you add to my narrative there? >> Yeah, so Gartner, first of all, we're talking about cloud databases. >> Right. >> Right, so this is not on premises databases, this is pure cloud databases. And what they did is they did two things. One is, the main thing was a technical rating of the databases, of the cloud databases. And there's other vendors that have had databases in the cloud for longer than we have. But in the most recent Gartner analysis report, as you mentioned, Oracle came out on top for cloud database technology in almost every single operational use case, including things like Internet of Things, things like JSON data, variable data, analytics, as well as traditional OLTP and mixed workloads. So Oracle was rated the highest technology, which isn't a big surprise. We've been doing this for decades. Over 90% of the global fortune 500 run Oracle. And there's a reason, because this is what we're good at. This is our core strength. Our availability, our security, our scalability, our functionality, both for OLTP and analytics. All the capabilities, built-in machine learning, graph analytics, everything. So even when we compare narrowly things like Internet of Things or variable data against niche competitors, where that's all they do, we came out dramatically ahead. But what surprised a lot of people is how far ahead of some of the other cloud vendors, like Amazon, like Azure, like Google, Oracle came out in the cloud database category. So a lot of people think, well, some of these other pure cloud vendors must be ahead of Oracle in cloud database. But actually not. I mean, if you look at the Gartner analyst report, it was very clear: Oracle was dramatically ahead of their cloud database technologies with our cloud database. >> So I'm pretty much out of time but last question.
I've had some interesting discussions lately and we've pointed out for years in our research that of course you're delivering the entire stack, the database, part of the infrastructure the applications, you have the whole engineered system strategy. And for the most part you're kind of unique in this regard. I mean, Dell just announced that it's spinning off VMware and it could have gone the other direction. And become more integrated hardware and software player, for the data center. But look, it's working for Dell based on the reaction, from the street post announcement. Cisco they got a hardware and software model that's sort of integrated but the company's value that peaked back in the .com boom, it's been very slow to bounce back. But my point is for these companies the street doesn't value, the integrated model. Oracle is kind of the exception. You know, it's at trading at all time highs, I know you're not going to comment on the stock price, but I guess in SAP until it missed it guided conservatively, was kind of on the good trajectory. But so I'm wondering, why do you think Oracle strategy resonates with investors, but not so much those companies? Is it, because you have the applications piece? I mean, maybe that's kind of my premise for, for SAP but what's your take? Why is it working for you? >> Well, okay. I think it's pretty simple, which is some of our competitors, for example, they might have a software product and a hardware product. But mostly those are acquired in their separate products that just happen to be in a portfolio. They are not a single company with a single vision and joint engineering going on. It's really, hey, I got the software on over here. I got the hardware over there, but they don't really talk to each other, they don't really work together. They're not trying to develop something where the stack is actually not just integrated but engineered together. And that is really the key. Oracle focuses on data management top to bottom. So we have everything from our ERP, CRM applications talking to our database, talking to our engineered systems, running in our cloud. And it's all completely engineered together. So Oracle doesn't just acquire these things and kind of glue them together. We actually engineer them and that's fundamentally the difference. You can buy two things and have them as two separate divisions in your company but it doesn't really get you a whole lot. >> Juan it's always a pleasure, I love these conversations and hope we can do more in the future. Really appreciate your time. Thanks for coming to the CUBE >> Pleasure, Dave nice to talk to you. >> All right keep it right there, everybody. This is Dave Vellante for theCUBE, we'll see you next time. (upbeat musiC)
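One way to picture the converged, do-the-work-where-the-data-lives argument that runs through this conversation is to compare pulling every row out of a database and aggregating in application code with pushing the same aggregation down to the engine. The sketch below uses SQLite from the Python standard library purely as a stand-in; it illustrates the pattern, not any Oracle feature.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?)",
    [("emea", 120.0), ("emea", 80.0), ("apac", 200.0), ("amer", 50.0)] * 1000,
)

# Anti-pattern: ship every row to the client and aggregate in application code.
rows = conn.execute("SELECT region, amount FROM payments").fetchall()
totals = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount

# Push-down: let the database do the work and move only the small result set.
pushed_down = dict(
    conn.execute("SELECT region, SUM(amount) FROM payments GROUP BY region")
)

assert totals == pushed_down
print(pushed_down)
```

At 4,000 rows the difference is invisible; at billions of rows the first version drags the whole table across the network while the second moves three numbers, which is the cost the interview keeps coming back to.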
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Juan | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Google | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Monday | DATE | 0.99+ |
two things | QUANTITY | 0.99+ |
One problem | QUANTITY | 0.99+ |
Mark steamer | PERSON | 0.99+ |
One benefit | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
OCI | ORGANIZATION | 0.99+ |
fourth cluster | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
one answer | QUANTITY | 0.99+ |
third cluster | QUANTITY | 0.99+ |
one big problem | QUANTITY | 0.99+ |
two big problems | QUANTITY | 0.99+ |
two sets | QUANTITY | 0.99+ |
Coinbase | ORGANIZATION | 0.99+ |
two part | QUANTITY | 0.99+ |
about five years | QUANTITY | 0.98+ |
two big benefits | QUANTITY | 0.98+ |
first company | QUANTITY | 0.97+ |
two separate divisions | QUANTITY | 0.97+ |
Over 90% | QUANTITY | 0.97+ |
GoldenGate | ORGANIZATION | 0.97+ |
second copy | QUANTITY | 0.97+ |
David foyer | PERSON | 0.97+ |
first two pieces | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
two big blockers | QUANTITY | 0.96+ |
single application | QUANTITY | 0.96+ |
Breaking Analysis: Unpacking Oracle’s Autonomous Data Warehouse Announcement
(upbeat music) >> On February 19th of this year, Barron's dropped an article declaring Oracle, a cloud giant and the article explained why the stock was a buy. Investors took notice and the stock ran up 18% over the next nine trading days and it peaked on March 9th, the day before Oracle announced its latest earnings. The company beat consensus earnings on both top-line and EPS last quarter, but investors, they did not like Oracle's tepid guidance and the stock pulled back. But it's still, as you can see, well above its pre-Barron's article price. What does all this mean? Is Oracle a cloud giant? What are its growth prospects? Now many parts of Oracle's business are growing including Fusion ERP, Fusion HCM, NetSuite, we're talking deep into the double digits, 20 plus percent growth. It's OnPrem legacy licensed business however, continues to decline and that moderates, the overall company growth because that OnPrem business is so large. So the overall Oracle's growing in the low single digits. Now what stands out about Oracle is it's recurring revenue model. That figure, the company says now it represents 73% of its revenue and that's going to continue to grow. Now two other things stood out on the earnings call to us. First, Oracle plans on increasing its CapEX by 50% in the coming quarter, that's a lot. Now it's still far less than AWS Google or Microsoft Spend on capital but it's a meaningful data point. Second Oracle's consumption revenue for Autonomous Database and Cloud Infrastructure, OCI or Oracle Cloud Infrastructure grew at 64% and 139% respectively and these two factors combined with the CapEX Spend suggest that the company has real momentum. I mean look, it's possible that the CapEx announcements maybe just optics in they're front loading, some spend to show the street that it's a player in cloud but I don't think so. Oracle's Safra Catz's usually pretty disciplined when it comes to it's spending. Now today on March 17th, Oracle announced updates towards Autonomous Data Warehouse and with me is David Floyer who has extensively researched Oracle over the years and today we're going to unpack the Oracle Autonomous Data Warehouse, ADW announcement. What it means to customers but we also want to dig into Oracle's strategy. We want to compare it to some other prominent database vendors specifically, AWS and Snowflake. David Floyer, Welcome back to The Cube, thanks for making some time for me. >> Thank you Vellante, great pleasure to be here. >> All right, I want to get into the news but I want to start with this idea of the autonomous database which Oracle's announcement today is building on. Oracle uses the analogy of a self-driving car. It's obviously powerful metaphor as they call it the self-driving database and my takeaway is that, this means that the system automatically provisions, it upgrades, it does all the patching for you, it tunes itself. Oracle claims that all reduces labor costs or admin costs by 90%. So I ask you, is this the right interpretation of what Oracle means by autonomous database? And is it real? >> Is that the right interpretation? It's a nice analogy. It's a test to that analogy, isn't it? I would put it as the first stage of the Autonomous Data Warehouse was to do the things that you talked about, which was the tuning, the provisioning, all of that sort of thing. The second stage is actually, I think more interesting in that what they're focusing on is making it easy to use for the end user. 
Eliminating the requirement for IT staff to be there to help in the actual using of it, and that is a very big step for them, but an absolutely vital step, because all of the competition is focusing on ease of use, ease of use, ease of use, and cheapness of being able to manage and deploy. So I think that is the really important area that Oracle has focused on, and it seems to have done so very well. >> So in your view, is this, I mean you don't really hear a lot of other companies talking about this analogy of the self-driving database, is this unique? Is it differentiable for Oracle? If so, why, or maybe you could help us understand that a little bit better. >> Well, the whole strategy is unique in its breadth. It has really brought a whole number of things together and made it, of its type, the best. So it has, in a single place, a whole number of data sources and database types. So it's got a very broad range of different ways that you can look at the data, and the second thing that is also excellent is it's a platform. It is fully self provisioned and its functionality is very, very broad indeed. The quality of the original SQL and the query languages, etc., is very, very good indeed, and its ability to do joins, for example, is excellent. So all of the building blocks are there, together with its sharing of the same data with OLTP and inference and in-memory databases as well. All together the breadth of what they have is unique and very, very powerful. >> I want to come back to this but let's get into the news a little bit and the announcement. I mean, it seems like what's new in the autonomous data warehouse piece is Oracle's new tooling around four areas. Andy Mendelsohn, the head of this group, and it's sort of his baby, talked about four things. My takeaway: faster, simpler loads, simplified transforms, autonomous machine learning models which are facilitating, what do you call it, citizen data science, and then faster time to insights. So tooling to make those four things happen. What's your take and takeaways on the news? >> I think those are all correct. I would add the ease of use in terms of being able to drag and drop; the user interface has been dramatically improved. Again, I think those are strategically more important. The others are all useful and good components of it, but strategically, I think the ease of use, the use of APEX for example, are more important. >> Why are they more important strategically? >> Because they focus on the end user's capability. For example, one of the other things that they've started to introduce is Python together with their spatial databases, for example. It is really important that you reach out to developers as they are and what tools they want to use. So those types of ease of use things are respecting what the end users use. For example, they haven't come out with anything like Qlik or Tableau. They've left that marketplace there for the end user to use what they like best. >> You mean they're not trying to compete with those two tools. They indeed had a laundry list of stuff that they supported: Talend, Tableau, Looker, Qlik, Informatica, IBM, I had IBM there. So their claim was, hey, we're open. But so that's smart. That's just, hey, they realized that people use these tools. >> They're not trying to exclude other people; they're trying to be a platform and be an ecosystem for the end users.
>> Okay, so Mendelsohn, who made the announcement, said that Oracle's the smartphone of databases, and I think, I actually think Ellison kind of used that, or maybe that was us paraphrasing; I thought he did the "iPhone of" analogy when he announced Exadata way back when, the integrated hardware and software. But is that how you see it, is Oracle the smartphone of databases? >> It is, I mean, they are trying to own the complete stack, the hardware with Exadata all the way up to the databases, the data warehouses and the OLTP databases, the inference databases. They're trying to own the complete stack from top to bottom, and that's what makes the autonomous process possible. You can make it autonomous when you control all of that. It takes away all of the requirements for IT in the business itself. So it's democratizing the use of data warehouses. It is pushing it out to the lines of business, and it's simplifying it and making it possible to push out so that they can own their own data. They can manage their own data and they do not need an IT person from headquarters to help them. >> Let's stay in this a little bit more, and then I want to go into some of the competitive stuff, because Mendelsohn mentioned AWS several times. One of the things that struck me, he said, hey, we're basically one API, 'cause we're doing analytics in the cloud, we're doing data in the cloud, we're doing integration in the cloud, and that's sort of a big part of the value proposition. He made some comparisons to Redshift. Of course, I would say, if you can't find a workload where you beat your big competitor then you shouldn't be in this business, so I take those things with a grain of salt, but one of the other things that caught me is that migrating from on-prem to Oracle, Oracle Cloud, was very simple, and I think he might've made some comparisons to other platforms. And this to me is important because he also brought in that Gartner data. We looked at that Gartner data when they came out with it; in the operational database class, Oracle smoked everybody. They were like way ahead, and the reason why I think that's important is because, let's face it, the mission critical workloads, when you look at what's moving into AWS, the mission critical workloads, the high performance, high criticality OLTP stuff, that's not moving in droves. And you've made the point often that for companies with their own cloud, particularly Oracle, and you've mentioned this about IBM for certain, DB2 for instance, there should be a lower risk environment for customers moving from on-prem to their cloud, because, I mean, I don't think you can get Oracle RAC on AWS. For example, I don't think Exadata is running in AWS data centers, and so that, like, component is going to facilitate migration. What's your take on all that spiel? >> I think that's absolutely right. Your crown jewels, the most expensive and the most valuable applications, the mission-critical applications, the ones that have got to take a beating and keep on ticking. So those types of applications are where Oracle really shines. They own a very high percentage of those mission critical workloads, and if you're going to AWS, for example, you have the choice of either migrating to Oracle on AWS, and that is frankly not a good fit at all. There are a lot of constraints to running large systems on AWS, large mission critical systems.
So that's not an option, and then the option, of course, that AWS will push is move to Aurora, change your way of writing applications, make them tiny little pieces and stitch them all together with microservices, and that's okay if you're a small organization, but that has got a lot of problems of its own, right? Because then you, the user, have to stitch all those pieces together, and you're responsible for testing it and you're responsible for looking after it. And that, as you grow, becomes a bigger and bigger overhead. So AWS, in my opinion, needs to move towards a tier-one database of its own, and it's not in that position at the moment. >> Interesting, okay. So, let's talk about the competitive landscape and the choices that customers have. As I said, Mendelsohn mentioned AWS many times. Larry on the calls often takes shots; hey, it's a compliment to me. When Larry Ellison calls you out, that means you've made it, you're doing well. We've seen it over the years, whether it's IBM or Workday or Salesforce, even though Salesforce is a big Oracle customer, 'cause AWS, as we know, is an Oracle customer as well, even though AWS tells us they've moved off Oracle. When you peel the onion >> It took five years, for some of the workloads >> Well, as I said, I believe they're still using Oracle in certain workloads. Way, way, we digress. So AWS though, they take a different approach, and I want to push on this a little bit with database. It's got more than a dozen, I think, purpose-built databases. They take this kind of right-tool-for-the-right-job approach, whereas Oracle is there converging all this function into a single database: SQL, JSON, graph databases, machine learning, blockchain. I'd love to talk more about blockchain if we have time, but it seems to me that the right tool for the right job, purpose-built, very granular down to the primitives and APIs, that seems to me to be a pretty viable approach versus kind of a Swiss Army approach. How do you compare the two? >> Yes, and it does appeal to many individual programmers who are very interested, for example, in graph databases or in time series databases. They are looking for a cheap database that will do the job for a particular project, and that makes, for the programmer or for that individual piece of work, a very sensible way of doing it, and they pay for it as they use it, on clear cloud economics. The challenge as you have more and more data, and as you're building up your data warehouse and your data lakes, is that you do not want to have to move data from one place to another place. So for example, if you've got Aurora, you have to move the database, and it's a pretty complicated thing to do it, to move it to Redshift. It's five or six steps to do that, and each of those costs money, and each of those takes time. More importantly, they take time. The Oracle approach is a single database in terms of all the pieces; obviously you have multiple databases, you have different OLTP databases and data warehouse databases, but it's a single architecture and a single design, which means that all of the work in terms of moving stuff from one place to another place is within Oracle itself. It's Oracle that's doing that work for you, and as you grow, that becomes very, very important. To me, a very, very important cost saving. The overhead of all those different ones... And the databases themselves originated as open source and they've done very well with it, and then there's a large revenue stream behind the, >> The AWS, you mean?
>> Yes, the original database is in AWS and they've done a lot of work in terms of making it set with the panels, etc. But if a larger organization, especially very large ones and certainly if they want to combine, for example data warehouse with the OLTP and the inference which is in my opinion, a very good thing that they should be trying to do then that is incredibly difficult to do with AWS and in my opinion, AWS has to invest enormously in to make the whole ecosystem much better. >> Okay, so innovation required there maybe is part of the TAM expansion strategy but just to sort of digress for a second. So it seems like, and by the way, there are others that are doing, they're taking this converged approach. It seems like that is a trend. I mean, you certainly see it with single store. I mean, the name sort of implies that formerly MemSQL I think Monte Zweben of splice machine is probably headed in a similar direction, embedding AI in Microsoft's, kind of interesting. It seems like Microsoft is willing to build this abstraction layer that hides that complexity of the different tooling. AWS thus far has not taken that approach and then sort of looking at Snowflake, Snowflake's got a completely different, I think Snowflake's trying to do something completely different. I don't think they're necessarily trying to take Oracle head-on. I mean, they're certainly trying to just, I guess, let's talk about this. Snowflake simplified EDW, that's clear. Zero to snowflake in 90 minutes. It's got this data cloud vision. So you sign on to this Snowflake, speaking of layers they're abstracting the complexity of the underlying cloud. That's what the data cloud vision is all about. They, talk about this Global Mesh but they've not done a good job of explaining what the heck it is. We've been pushing them on that, but we got, >> Aspiration of moment >> Well, I guess, yeah, it seems that way. And so, but conceptually, it's I think very powerful but in reality, what snowflake is doing with data sharing, a lot of reading it's probably mostly read-only and I say, mostly read-only, oh, there you go. You'll get better but it's mostly read and so you're able to share the data, it's governed. I mean, it's exactly, quite genius how they've implemented this with its simplicity. It is a caching architecture. We've talked about that, we can geek out about that. There's good, there's bad, there's ugly but generally speaking, I guess my premise here I would love your thoughts. Is snowflakes trying to do something different? It's trying to be not just another data warehouse. It's not just trying to compete with data lakes. It's trying to create this data cloud to facilitate data sharing, put data in the hands of business owners in terms of a product build, data product builders. That's a different vision than anything I've seen thus far, your thoughts. >> I agree and even more going further, being a place where people can sell data. Put it up and make it available to whoever needs it and making it so simple that it can be shared across the country and across the world. I think it's a very powerful vision indeed. The challenge they have is that the pieces at the moment are very, very easy to use but the quality in terms of the, for example, joints, I mentioned, the joints were very powerful in Oracle. They don't try and do joints. They, they say >> They being Snowflake, snowflake. Yeah, they don't even write it. They would say use another Postgres >> Yeah. >> Database to do that. 
>> Yeah, so then they have a long way to go. >> Complex joints anyway, maybe simple joints, yeah. >> Complex joints, so they have a long way to go in terms of the functionality of their product and also in my opinion, they sure be going to have more types of databases inside it, including OLTP and they can do that. They have obviously got a great market gap and they can do that by acquisition as well as they can >> They've started. I think, I think they support JSON, right. >> Do they support JSON? And graph, I think there's a graph database that's either coming or it's there, I can't keep all that stuff in my head but there's no reason they can't go in that direction. I mean, in speaking to the founders in Snowflake they were like, look, we're kind of new. We would focus on simple. A lot of them came from Oracle so they know all database and they know how hard it is to do things like facilitate complex joints and do complex workload management and so they said, let's just simplify, we'll put it in the cloud and it will spin up a separate data warehouse. It's a virtual data warehouse every time you want one to. So that's how they handle those things. So different philosophy but again, coming back to some of the mission critical work and some of the larger Oracle customers, they said they have a thousand autonomous database customers. I think it was autonomous database, not ADW but anyway, a few stood out AON, lift, I think Deloitte stood out and as obviously, hundreds more. So we have people who misunderstand Oracle, I think. They got a big install base. They invest in R and D and they talk about lock-in sure but the CIO that I talked to and you talked to David, they're looking for business value. I would say that 75 to 80% of them will gravitate toward business value over the fear of lock-in and I think at the end of the day, they feel like, you know what? If our business is performing, it's a better business decision, it's a better business case. >> I fully agree, they've been very difficult to do business with in the past. Everybody's in dread of the >> The audit. >> The knock on the door from the auditor. >> Right. >> And that from a purchasing point of view has been really bad experience for many, many customers. The users of the database itself are very happy indeed. I mean, you talk to them and they understand why, what they're paying for. They understand the value and in terms of availability and all of the tools for complex multi-dimensional types of applications. It's pretty well, the only game in town. It's only DB2 and SQL that had any hope of doing >> Doing Microsoft, Microsoft SQL, right. >> Okay, SQL >> Which, okay, yeah, definitely competitive for sure. DB2, no IBM look, IBM lost its dominant position in database. They kind of seeded that. Oracle had to fight hard to win it. It wasn't obvious in the 80s who was going to be the database King and all had to fight. And to me, I always tell people the difference is that the chairman of Oracle is also the CTO. They spend money on R and D and they throw off a ton of cash. I want to say something about, >> I was just going to make one extra point. The simplicity and the capability of their cloud versions of all of this is incredibly good. They are better in terms of spending what you need or what you use much better than AWS, for example or anybody else. So they have really come full circle in terms of attractiveness in a cloud environment. >> You mean charging you for what you consume. Yeah, Mendelsohn talked about that. 
He made a big point about the granularity, you pay for only what you need. If you need 33 CPUs, you pay for 33; with the other databases you've got to buy a shape, so if you need 33, you've got to go to 64. I know that's true for everyone. I'm not sure if that's true too for Snowflake. It may be, I got to dig into that a little bit, but maybe >> Yes, Snowflake has got a front end it's hiding behind. >> Right, but I don't want to push that too far, because I want to go look at their pricing strategies, because I still think they make you buy, I may be wrong, I thought they make you still do a one-year or two-year or three-year term. I don't know if you can just turn it off at any time. They might allow, I should hold off, I'll do some more research on that, but I wanted to make a point about the audits, you mentioned audits before. A big mistake that a lot of Oracle customers have made many times, and we've written about this, negotiating with Oracle, you've got to bring your best and your brightest when you negotiate with Oracle. Some of the things that people didn't pay attention to, and I think they've sort of caught onto this, is that Oracle's SOW adjudicates over the MSA. A lot of legal departments and procurement departments say, oh, do we have an MSA with Oracle? Yes, you do, okay, great, and because they have an MSA they think they can just run with it, rubber stamp it, but the SOW really dictates, and Oracle's gotcha there, and they're really smart about that. So you've got to bring your best and the brightest and you've got to really negotiate hard with Oracle or you get in trouble. >> Sure. >> So it is what it is, but coming back to Oracle, let's sort of wrap on this. Dominant position in mission critical, we saw that from the Gartner research, especially for operational, giant customer base, there's a cloud-first notion, there's investing in R and D, open, we'll put a question mark around that, but hey, they're doing some cool stuff with MySQL. >> Ecosystem, I put that, ecosystem, they're promoting their ecosystem. >> Yeah, and look, I mean, for a lot of their customers, we've talked to many, they say, look, there's actually, at the end of the day, this saves us money and we don't have to migrate. >> Yeah. So interesting, so I'll give you the last word. We started sort of focusing on the announcement. So what do you want to leave us with? >> My last word is that there are platforms with a certain key application or key parts of the infrastructure which I think can differentiate themselves from the Azures or the AWSs, and Oracle owns one of those. SAP might be another one, but there are certain platforms which are big enough and important enough that they will, in my opinion, succeed in that cloud strategy. >> Great, David, thanks so much, appreciate your insights. >> Good to be here. Thank you for watching everybody, this is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)
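To make the data-sharing point in the conversation above concrete, here is a rough sketch of how a provider account might publish a governed, read-only share in Snowflake using the snowflake-connector-python client. The account identifiers, database, schema, and table names are hypothetical placeholders, and the exact statements should be checked against Snowflake's current documentation rather than taken as the product's definitive behavior.

```python
# Rough sketch: publishing a governed, read-only share from a provider account.
# Assumes the snowflake-connector-python package is installed; all identifiers
# below (account, user, database, table, consumer account) are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="provider_org-provider_acct",   # hypothetical account identifier
    user="share_admin",
    password="********",
    role="ACCOUNTADMIN",
)
cur = conn.cursor()

# Create the share and grant read-only access to one table.
cur.execute("CREATE SHARE IF NOT EXISTS dealer_share")
cur.execute("GRANT USAGE ON DATABASE sales_db TO SHARE dealer_share")
cur.execute("GRANT USAGE ON SCHEMA sales_db.public TO SHARE dealer_share")
cur.execute("GRANT SELECT ON TABLE sales_db.public.orders TO SHARE dealer_share")

# Expose the share to a consumer account; no data is copied anywhere.
cur.execute("ALTER SHARE dealer_share ADD ACCOUNTS = consumer_acct")

# On the consumer side, the share is mounted as a read-only database, e.g.:
#   CREATE DATABASE shared_orders FROM SHARE provider_acct.dealer_share;
cur.close()
conn.close()
```

The point the sketch tries to capture is exactly the claim made in the discussion: the consumer queries the provider's tables in place, read-only and governed, rather than receiving a copy of the data.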
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Mendelsohn | PERSON | 0.99+ |
Andy Mendelsohn | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
David Floyer | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
March 9th | DATE | 0.99+ |
February 19th | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
75 | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
Mendelssohn | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
90% | QUANTITY | 0.99+ |
one-year | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
73% | QUANTITY | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
two tools | QUANTITY | 0.99+ |
Michael | PERSON | 0.99+ |
64% | QUANTITY | 0.99+ |
two factors | QUANTITY | 0.99+ |
more than a dozen | QUANTITY | 0.99+ |
last quarter | DATE | 0.99+ |
SQL | TITLE | 0.99+ |
Marc Staimer, Dragon Slayer Consulting & David Floyer, Wikibon | December 2020
>> Announcer: From theCUBE studios in Palo Alto, in Boston, connecting with thought leaders all around the world. This is theCUBE conversation. >> Hi everyone, this is Dave Vellante and welcome to this CUBE conversation where we're going to dig in to this, the area of cloud databases. And Gartner just published a series of research in this space. And it's really a growing market, rapidly growing, a lot of new players, obviously the big three cloud players. And with me are three experts in the field, two long time industry analysts. Marc Staimer is the founder, president, and key principal at Dragon Slayer Consulting. And he's joined by David Floyer, the CTO of Wikibon. Gentlemen great to see you. Thanks for coming on theCUBE. >> Good to be here. >> Great to see you too Dave. >> Marc, coming from the great Northwest, I think first time on theCUBE, and so it's really great to have you. So let me set this up, as I said, you know, Gartner published these, you know, three giant tomes. These are, you know, publicly available documents on the web. I know you guys have been through them, you know, several hours of reading. And so, night... (Dave chuckles) Good night time reading. The three documents where they identify critical capabilities for cloud database management systems. And the first one we're going to talk about is, operational use cases. So we're talking about, you know, transaction oriented workloads, ERP financials. The second one was analytical use cases, sort of an emerging space to really try to, you know, the data warehouse space and the like. And, of course, the third is the famous Gartner Magic Quadrant, which we're going to talk about. So, Marc, let me start with you, you've dug into this research just at a high level, you know, what did you take away from it? >> Generally, if you look at all the players in the space they all have some basic good capabilities. What I mean by that is ultimately when you have, a transactional or an analytical database in the cloud, the goal is not to have to manage the database. Now they have different levels of where that goes to as how much you have to manage or what you have to manage. But ultimately, they all manage the basic administrative, or the pedantic tasks that DBAs have to do, the patching, the tuning, the upgrading, all of that is done by the service provider. So that's the number one thing they all aim at, from that point on every database has different capabilities and some will automate a whole bunch more than others, and will have different primary focuses. So it comes down to what you're looking for or what you need. And ultimately what I've learned from end users is what they think they need upfront, is not what they end up needing as they implement. >> David, anything you'd add to that, based on your reading of the Gartner work. >> Yes. It's a thorough piece of work. It's taking on a huge number of different types of uses and size of companies. And I think those are two parameters which really change how companies would look at it. If you're a Fortune 500 or Fortune 2000 type company, you're going to need a broader range of features, and you will need to deal with size and complexity in a much greater sense, and a lot of probably higher levels of availability, and reliability, and recoverability. Again, on the workload side, there are different types of workload and there're... 
There is as well as having the two transactional and analytic workloads, I think there's an emerging type of workload which is going to be very important for future applications where you want to combine transactional with analytic in real time, in order to automate business processes at a higher level, to make the business processes synchronous as opposed to asynchronous. And that degree of granularity, I think is missed, in a broader view of these companies and what they offer. It's in my view trying in some ways to not compare like with like from a customer point of view. So the very nuance, what you talked about, let's get into it, maybe that'll become clear to the audience. So like I said, these are very detailed research notes. There were several, I'll say analysts cooks in the kitchen, including Henry Cook, whom I don't know, but four other contributing analysts, two of whom are CUBE alum, Don Feinberg, and Merv Adrian, both really, you know, awesome researchers. And Rick Greenwald, along with Adam Ronthal. And these are public documents, you can go on the web and search for these. So I wonder if we could just look at some of the data and bring up... Guys, bring up the slide one here. And so we'll first look at the operational side and they broke it into four use cases. The traditional transaction use cases, the augmented transaction processing, stream/event processing and operational intelligence. And so we're going to show you there's a lot of data here. So what Gartner did is they essentially evaluated critical capabilities, or think of features and functions, and gave them a weighting, or a weighting, and then a rating. It was a weighting and rating methodology. On a s... The rating was on a scale of one to five, and then they weighted the importance of the features based on their assessment, and talking to the many customers they talk to. So you can see here on the first chart, we're showing both the traditional transactions and the augmented transactions and, you know, the thing... The first thing that jumps out at you guys is that, you know, Oracle with Autonomous is off the charts, far ahead of anybody else on this. And actually guys, if you just bring up slide number two, we'll take a look at the stream/event processing and operational intelligence use cases. And you can see, again, you know, Oracle has a big lead. And I don't want to necessarily go through every vendor here, but guys, if you don't mind going back to the first slide 'cause I think this is really, you know, the core of transaction processing. So let's look at this, you've got Oracle, you've got SAP HANA. You know, right there interestingly Amazon Web Services with the Aurora, you know, IBM Db2, which, you know, it goes back to the good old days, you know, down the list. But so, let me again start with Marc. So why is that? I mean, I guess this is no surprise, Oracle still owns the Mission-Critical for the database space. They earned that years ago. One that, you know, over the likes of Db2 and, you know, Informix and Sybase, and, you know, they emerged as number one there. But what do you make of this data Marc? >> If you look at this data in a vacuum, you're looking at specific functionality, I think you need to look at all the slides in total. And the reason I bring that up is because I agree with what David said earlier, in that the use case that's becoming more prevalent is the integration of transaction and analytics. 
And more importantly, it's not just your traditional data warehouse, but it's AI analytics. It's big data analytics. It's users are finding that they need more than just simple reporting. They need more in-depth analytics so that they can get more actionable insights into their data where they can react in real time. And so if you look at it just as a transaction, that's great. If you're going to just as a data warehouse, that's great, or analytics, that's fine. If you have a very narrow use case, yes. But I think today what we're looking at is... It's not so narrow. It's sort of like, if you bought a streaming device and it only streams Netflix and then you need to get another streaming device 'cause you want to watch Amazon Prime. You're not going to do that, you want one, that does all of it, and that's kind of what's missing from this data. So I agree that the data is good, but I don't think it's looking at it in a total encompassing manner. >> Well, so before we get off the horses on the track 'cause I love to do that. (Dave chuckles) I just kind of let's talk about that. So Marc, you're putting forth the... You guys seem to agree on that premise that the database that can do more than just one thing is of appeal to customers. I suppose that makes, certainly makes sense from a cost standpoint. But, you know, guys feel free to flip back and forth between slides one and two. But you can see SAP HANA, and I'm not sure what cloud that's running on, it's probably running on a combination of clouds, but, you know, scoring very strongly. I thought, you know, Aurora, you know, given AWS says it's one of the fastest growing services in history and they've got it ahead of Db2 just on functionality, which is pretty impressive. I love Google Spanner, you know, love the... What they're trying to accomplish there. You know, you go down to Microsoft is, they're kind of the... They're always good enough a database and that's how they succeed and et cetera, et cetera. But David, it sounds like you agree with Marc. I would say, I would think though, Amazon kind of doesn't agree 'cause they're like a horses for courses. >> I agree. >> Yeah, yeah. >> So I wonder if you could comment on that. >> Well, I want to comment on two vectors. The first vector is that the size of customer and, you know, a mid-sized customer versus a global $2,000 or global 500 customer. For the smaller customer that's the heart of AWS, and they are taking their applications and putting pretty well everything into their cloud, the one cloud, and Aurora is a good choice. But when you start to get to a requirements, as you do in larger companies have very high levels of availability, the functionality is not there. You're not comparing apples and... Apples with apples, it's two very different things. So from a tier one functionality point of view, IBM Db2 and Oracle have far greater capability for recovery and all the features that they've built in over there. >> Because of their... You mean 'cause of the maturity, right? maturity and... >> Because of their... Because of their focus on transaction and recovery, et cetera. >> So SAP though HANA, I mean, that's, you know... (David talks indistinctly) And then... >> Yeah, yeah. >> And then I wanted your comments on that, either of you or both of you. I mean, SAP, I think has a stated goal of basically getting its customers off Oracle that's, you know, there's always this urinary limping >> Yes, yes. >> between the two companies by 2024. Larry has said that ain't going to happen. 
You know, Amazon, we know still runs on Oracle. It's very hard to migrate Mission-Critical, David, you and I know this well, Marc you as well. So, you know, people often say, well, everybody wants to get off Oracle, it's too expensive, blah, blah, blah. But we talked to a lot of Oracle customers there, they're very happy with the reliability, availability, recoverability feature set. I mean, the core of Oracle seems pretty stable. >> Yes. >> But I wonder if you guys could comment on that, maybe Marc you go first. >> Sure. I've recently done some in-depth comparisons of Oracle and Aurora, and all their other RDS services and Snowflake and Google and a variety of them. And ultimately what surprised me is you made a statement it costs too much. It actually comes in half of Aurora for in most cases. And it comes in less than half of Snowflake in most cases, which surprised me. But no matter how you configure it, ultimately based on a couple of things, each vendor is focused on different aspects of what they do. Let's say Snowflake, for example, they're on the analytical side, they don't do any transaction processing. But... >> Yeah, so if I can... Sorry to interrupt. Guys if you could bring up the next slide that would be great. So that would be slide three, because now we get into the analytical piece Marc that you're talking about that's what Snowflake specialty is. So please carry on. >> Yeah, and what they're focused on is sharing data among customers. So if, for example, you're an automobile manufacturer and you've got a huge supply chain, you can supply... You can share the data without copying the data with any of your suppliers that are on Snowflake. Now, can you do that with the other data warehouses? Yes, you can. But the focal point is for Snowflake, that's where they're aiming it. And whereas let's say the focal point for Oracle is going to be performance. So their performance affects cost 'cause the higher the performance, the less you're paying for the performing part of the payment scale. Because you're paying per second for the CPUs that you're using. Same thing on Snowflake, but the performance is higher, therefore you use less. I mean, there's a whole bunch of things to come into this but at the end of the day what I've found is Oracle tends to be a lot less expensive than the prevailing wisdom. So let's talk value for a second because you said something, that yeah the other databases can do that, what Snowflake is doing there. But my understanding of what Snowflake is doing is they built this global data mesh across multiple clouds. So not only are they compatible with Google or AWS or Azure, but essentially you sign up for Snowflake and then you can share data with anybody else in the Snowflake cloud, that I think is unique. And I know, >> Marc: Yes. >> Redshift, for instance just announced, you know, Redshift data sharing, and I believe it's just within, you know, clusters within a customer, as opposed to across an ecosystem. And I think that's where the network effect is pretty compelling for Snowflake. So independent of costs, you and I can debate about costs and, you know, the tra... The lack of transparency of, because AWS you don't know what the bill is going to be at the end of the month. And that's the same thing with Snowflake, but I find that... And by the way guys, you can flip through slides three and four, because we've got... Let me just take a quick break and you have data warehouse, logical data warehouse. 
And then the next slide four you got data science, deep learning and operational intelligent use cases. And you can see, you know, Teradata, you know, law... Teradata came up in the mid 1980s and dominated in that space. Oracle does very well there. You can see Snowflake pop-up, SAP with the Data Warehouse, Amazon with Redshift. You know, Google with BigQuery gets a lot of high marks from people. You know, Cloud Data is in there, you know, so you see some of those names. But so Marc and David, to me, that's a different strategy. They're not trying to be just a better data warehouse, easier data warehouse. They're trying to create, Snowflake that is, an incremental opportunity as opposed to necessarily going after, for example, Oracle. David, your thoughts. >> Yeah, I absolutely agree. I mean, ease of use is a primary benefit for Snowflake. It enables you to do stuff very easily. It enables you to take data without ETL, without any of the complexity. It enables you to share a number of resources across many different users and know... And be able to bring in what that particular user wants or part of the company wants. So in terms of where they're focusing, they've got a tremendous ease of use, tremendous focus on what the customer wants. And you pointed out yourself the restrictions there are of doing that both within Oracle and AWS. So yes, they have really focused very, very hard on that. Again, for the future, they are bringing in a lot of additional functions. They're bringing in Python into it, not Python, JSON into the database. They can extend the database itself, whether they go the whole hog and put in transaction as well, that's probably something they may be thinking about but not at the moment. >> Well, but they, you know, they obviously have to have TAM expansion designs because Marc, I mean, you know, if they just get a 100% of the data warehouse market, they're probably at a third of their stock market valuation. So they had better have, you know, a roadmap and plans to extend there. But I want to come back Marc to this notion of, you know, the right tool for the right job, or, you know, best of breed for a specific, the right specific, you know horse for course, versus this kind of notion of all in one, I mean, they're two different ends of the spectrum. You're seeing, you know, Oracle obviously very successful based on these ratings and based on, you know their track record. And Amazon, I think I lost count of the number of data stores (Dave chuckles) with Redshift and Aurora and Dynamo, and, you know, on and on and on. (Marc talks indistinctly) So they clearly want to have that, you know, primitive, you know, different APIs for each access, completely different philosophies it's like Democrats or Republicans. Marc your thoughts as to who ultimately wins in the marketplace. >> Well, it's hard to say who is ultimately going to win, but if I look at Amazon, Amazon is an all-cart type of system. If you need time series, you go with their time series database. If you need a data warehouse, you go with Redshift. If you need transaction, you go with one of the RDS databases. If you need JSON, you go with a different database. Everything is a different, unique database. Moving data between these databases is far from simple. If you need to do a analytics on one database from another, you're going to use other services that cost money. So yeah, each one will do what they say it's going to do but it's going to end up costing you a lot of money when you do any kind of integration. 
And you're going to add complexity and you're going to have errors. There's all sorts of issues there. So if you need more than one, probably not your best route to go, but if you need just one, it's fine. And if, and on Snowflake, you raise the issue that they're going to have to add transactions, they're going to have to rewrite their database. They have no indexes whatsoever in Snowflake. I mean, part of the simplicity that David talked about is because they had to cut corners, which makes sense. If you're focused on the data warehouse you cut out the indexes, great. You don't need them. But if you're going to do transactions, you kind of need them. So you're going to have to do some more work there. So... >> Well... So, you know, I don't know. I have a different take on that guys. I think that, I'm not sure if Snowflake will add transactions. I think maybe, you know, their hope is that the market that they're creating is big enough. I mean, I have a different view of this in that, I think the data architecture is going to change over the next 10 years. As opposed to having a monolithic system where everything goes through that big data platform, the data warehouse and the data lake. I actually see what Snowflake is trying to do and, you know, I'm sure others will join them, is to put data in the hands of product builders, data product builders or data service builders. I think they're betting that that market is incremental and maybe they don't try to take on... I think it would maybe be a mistake to try to take on Oracle. Oracle is just too strong. I wonder David, if you could comment. So it's interesting to see how strong Gartner rated Oracle in cloud database, 'cause you don't... I mean, okay, Oracle has got OCI, but you know, you think a cloud, you think Google, or Amazon, Microsoft and Google. But if I have a transaction database running on Oracle, very risky to move that, right? And so we've seen that, it's interesting. Amazon's a big customer of Oracle, Salesforce is a big customer of Oracle. You know, Larry is very outspoken about those companies. SAP customers are many, most are using Oracle. I don't, you know, it's not likely that they're going anywhere. My question to you, David, is first of all, why do they want to go to the cloud? And if they do go to the cloud, is it logical that the least risky approach is to stay with Oracle, if you're an Oracle customer, or Db2, if you're an IBM customer, and then move those other workloads that can move whether it's more data warehouse oriented or incremental transaction work that could be done in a Aurora? >> I think the first point, why should Oracle go to the cloud? Why has it gone to the cloud? And if there is a... >> Moreso... Moreso why would customers of Oracle... >> Why would customers want to... >> That's really the question. >> Well, Oracle have got Oracle Cloud@Customer and that is a very powerful way of doing it. Where exactly the same Oracle system is running on premise or in the cloud. You can have it where you want, you can have them joined together. That's unique. That's unique in the marketplace. So that gives them a very special place in large customers that have data in many different places. The second point is that moving data is very expensive. Marc was making that point earlier on. Moving data from one place to another place between two different databases is a very expensive architecture. 
Having the data in one place where you don't have to move it where you can go directly to it, gives you enormous capabilities for a single database, single database type. And I'm sure that from a transact... From an analytic point of view, that's where Snowflake is going, to a large single database. But where Oracle is going to is where, you combine both the transactional and the other one. And as you say, the cost of migration of databases is incredibly high, especially transaction databases, especially large complex transaction databases. >> So... >> And it takes a long time. So at least a two year... And it took five years for Amazon to actually succeed in getting a lot of their stuff over. And five years they could have been doing an awful lot more with the people that they used to bring it over. So it was a marketing decision as opposed to a rational business decision. >> It's the holy grail of the vendors, they all want your data in their database. That's why Amazon puts so much effort into it. Oracle is, you know, in obviously a very strong position. It's got growth and it's new stuff, it's old stuff. It's, you know... The problem with Oracle it has like many of the legacy vendors, it's the size of the install base is so large and it's shrinking. And the new stuff is.... The legacy stuff is shrinking. The new stuff is growing very, very fast but it's not large enough yet to offset that, you see that in all the learnings. So very positive news on, you know, the cloud database, and they just got to work through that transition. Let's bring up slide number five, because Marc, this is to me the most interesting. So we've just shown all these detailed analysis from Gartner. And then you look at the Magic Quadrant for cloud databases. And, you know, despite Amazon being behind, you know, Oracle, or Teradata, or whomever in every one of these ratings, they're up to the right. Now, of course, Gartner will caveat this and say, it doesn't necessarily mean you're the best, but of course, everybody wants to be in the upper, right. We all know that, but it doesn't necessarily mean that you should go by that database, I agree with what Gartner is saying. But look at Amazon, Microsoft and Google are like one, two and three. And then of course, you've got Oracle up there and then, you know, the others. So that I found that very curious, it is like there was a dissonance between the hardcore ratings and then the positions in the Magic Quadrant. Why do you think that is Marc? >> It, you know, it didn't surprise me in the least because of the way that Gartner does its Magic Quadrants. The higher up you go in the vertical is very much tied to the amount of revenue you get in that specific category which they're doing the Magic Quadrant. It doesn't have to do with any of the revenue from anywhere else. Just that specific quadrant is with that specific type of market. So when I look at it, Oracle's revenue still a big chunk of the revenue comes from on-prem, not in the cloud. So you're looking just at the cloud revenue. Now on the right side, moving to the right of the quadrant that's based on functionality, capabilities, the resilience, other things other than revenue. So visionary says, hey how far are you on the visionary side? Now, how they weight that again comes down to Gartner's experts and how they want to weight it and what makes more sense to them. But from my point of view, the right side is as important as the vertical side, 'cause the vertical side doesn't measure the growth rate either. 
And if we look at these, some of these are growing much faster than the others. For example, Snowflake is growing incredibly fast, and that doesn't reflect in these numbers from my perspective. >> Dave: I agree. >> Oracle is growing incredibly fast in the cloud. As David pointed out earlier, it's not just in their cloud where they're growing, but it's Cloud@Customer, which is basically an extension of their cloud. I don't know if that's included these numbers or not in the revenue side. So there's... There're a number of factors... >> Should it be in your opinion, Marc, would you include that in your definition of cloud? >> Yeah. >> The things that are hybrid and on-prem would that cloud... >> Yes. >> Well especially... Well, again, it depends on the hybrid. For example, if you have your own license, in your own hardware, but it connects to the cloud, no, I wouldn't include that. If you have a subscription license and subscription hardware that you don't own, but it's owned by the cloud provider, but it connects with the cloud as well, that I would. >> Interesting. Well, you know, to your point about growth, you're right. I mean, it's probably looking at, you know, revenues looking, you know, backwards from guys like Snowflake, it will be double, you know, the next one of these. It's also interesting to me on the horizontal axis to see Cloud Data and Databricks further to the right, than Snowflake, because that's kind of the data lake cloud. >> It is. >> And then of course, you've got, you know, the other... I mean, database used to be boring, so... (David laughs) It's such a hot market space here. (Marc talks indistinctly) David, your final thoughts on all this stuff. What does the customer take away here? What should I... What should my cloud database management strategy be? >> Well, I was positive about Oracle, let's take some of the negatives of Oracle. First of all, they don't make it very easy to rum on other platforms. So they have put in terms and conditions which make it very difficult to run on AWS, for example, you get double counts on the licenses, et cetera. So they haven't played well... >> Those are negotiable by the way. Those... You bring it up on the customer. You can negotiate that one. >> Can be, yes, They can be. Yes. If you're big enough they are negotiable. But Aurora certainly hasn't made it easy to work with other plat... Other clouds. What they did very... >> How about Microsoft? >> Well, no, that is exactly what I was going to say. Oracle with adjacent workloads have been working very well with Microsoft and you can then use Microsoft Azure and use a database adjacent in the same data center, working with integrated very nicely indeed. And I think Oracle has got to do that with AWS, it's got to do that with Google as well. It's got to provide a service for people to run where they want to run things not just on the Oracle cloud. If they did that, that would in my term, and my my opinion be a very strong move and would make make the capabilities available in many more places. >> Right. Awesome. Hey Marc, thanks so much for coming to theCUBE. Thank you, David, as well, and thanks to Gartner for doing all this great research and making it public on the web. You can... If you just search critical capabilities for cloud database management systems for operational use cases, that's a mouthful, and then do the same for analytical use cases, and the Magic Quadrant. There's the third doc for cloud database management systems. 
You'll get about two hours of reading and I learned a lot and I learned a lot here too. I appreciate the context guys. Thanks so much. >> My pleasure. All right, thank you for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (upbeat music)
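Since the conversation leans heavily on Gartner's weighting-and-rating methodology, here is a minimal sketch of how such a composite score is computed. The capability names, weights, and ratings below are invented for illustration and are not Gartner's actual figures.

```python
# Minimal sketch of a critical-capabilities style composite score:
# each capability gets an importance weight, each vendor gets a 1-5 rating,
# and the use-case score is the weighted sum. All numbers are illustrative.

weights = {               # importance of each capability for one use case
    "availability": 0.30,
    "performance": 0.25,
    "manageability": 0.25,
    "ecosystem": 0.20,
}

ratings = {               # vendor ratings on a 1-5 scale (made up)
    "Vendor A": {"availability": 4.6, "performance": 4.4,
                 "manageability": 4.1, "ecosystem": 4.0},
    "Vendor B": {"availability": 3.8, "performance": 4.2,
                 "manageability": 4.5, "ecosystem": 3.9},
}

def composite(vendor_ratings, weights):
    """Weighted sum of capability ratings for a single use case."""
    return sum(weights[c] * vendor_ratings[c] for c in weights)

for vendor, r in ratings.items():
    print(f"{vendor}: {composite(r, weights):.2f}")
```

This also helps explain the dissonance discussed above between the critical-capabilities scores and the Magic Quadrant positions: the quadrant folds in revenue and vision, while these scores weight only the rated capabilities.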
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Rick Greenwald | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Marc Staimer | PERSON | 0.99+ |
Marc | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Adam Ronthal | PERSON | 0.99+ |
Don Feinberg | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Larry | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
December 2020 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Henry Cook | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Merv Adrian | PERSON | 0.99+ |
100% | QUANTITY | 0.99+ |
second point | QUANTITY | 0.99+ |
Skyla Loomis, IBM | AnsibleFest 2020
>> (upbeat music) [Narrator] From around the globe, it's theCUBE with digital coverage of AnsibleFest 2020, brought to you by Red Hat. >> Hello welcome back to theCUBE virtual coverage of AnsibleFest 2020 Virtual. We're not face to face this year. I'm John Furrier, your host. We're bringing it together remotely. We're in the Palo Alto Studios with theCUBE and we're going remote for our guests this year. And I hope you can come together online enjoy the content. Of course, go check out the events site on Demand Live. And certainly I have a lot of great content. I've got a great guest Skyla Loomis Vice president, for the Z Application Platform at IBM. Also known as IBM Z talking Mainframe. Skyla, thanks for coming on theCUBE Appreciate it. >> Thank you for having me. So, you know, I've talked many conversations about the Mainframe of being relevant and valuable in context to cloud and cloud native because if it's got a workload you've got containers and all this good stuff, you can still run anything on anything these days. By integrating it in with all this great glue layer, lack of a better word or oversimplifying it, you know, things going on. So it's really kind of cool. Plus Walter Bentley in my previous interview was talking about the success of Ansible, and IBM working together on a really killer implementation. So I want to get into that, but before that let's get into IBM Z. How did you start working with IBM Z? What's your role there? >> Yeah, so I actually just got started with Z about four years ago. I spent most of my career actually on the distributed platform, largely with data and analytics, the analytics area databases and both On-premise and Public Cloud. But I always considered myself a friend to Z. So in many of the areas that I'd worked on, we'd, I had offerings where we'd enabled it to work with COS or Linux on Z. And then I had this opportunity come up where I was able to take on the role of leading some of our really core runtimes and databases on the Z platform, IMS and z/TPF. And then recently just expanded my scope to take on CICS and a number of our other offerings related to those kind of in this whole application platform space. And I was really excited because just of how important these runtimes and this platform is to the world,really. You know, our power is two thirds of our fortune 100 clients across banking and insurance. And it's you know, some of the most powerful transaction platforms in the world. You know doing hundreds of billions of transactions a day. And you know, just something that's really exciting to be a part of and everything that it does for us. >> It's funny how distributed systems and distributed computing really enable more longevity of everything. And now with cloud, you've got new capabilities. So it's super excited. We're seeing that a big theme at AnsibleFest this idea of connecting, making things easier you know, talk about distributed computing. The cloud is one big distribute computer. So everything's kind of playing together. You have a panel discussion at AnsibleFest Virtual. Could you talk about what your topic is and share, what was some of the content in there? Content being, content as in your presentation? Not content. (laughs) >> Absolutely. Yeah, so I had the opportunity to co-host a panel with a couple of our clients. So we had Phil Allison from Black Knight and Pat Lane from Allstate and they were really joining us and talking about their experience now starting to use Ansible to manage to z/OS. 
So we just actually launched some content collections and helping to enable and accelerate, client's use of using Ansible to manage to z/OS back in March of this year. And we've just seen tremendous client uptake in this. And these are a couple of clients who've been working with us and, you know, getting started on the journey of now using Ansible with Z they're both you know, have it in the enterprise already working with Ansible on other platforms. And, you know, we got to talk with them about how they're bringing it into Z. What use cases they're looking at, the type of culture change, that it drives for their teams as they embark on this journey and you know where they see it going for them in the future. >> You know, this is one of the hot items this year. I know that events virtual so has a lot of content flowing around and sessions, but collections is the top story. A lot of people talking collections, collections collections, you know, integration and partnering. It hits so many things but specifically, I like this use case because you're talking about real business value. And I want to ask you specifically when you were in that use case with Ansible and Z. People are excited, it seems like it's working well. Can you talk about what problems that it solves? I mean, what was some of the drivers behind it? What were some of the results? Could you give some insight into, you know, was it a pain point? Was it an enabler? Can you just share why that was getting people are getting excited about this? >> Yeah well, certainly automation on Z, is not new, you know there's decades worth of, of automation on the platform but it's all often proprietary, you know, or bundled up like individual teams or individual people on teams have specific assets, right. That they've built and it's not shared. And it's certainly not consistent with the rest of the enterprise. And, you know, more and more, you're kind of talking about hybrid cloud. You know, we're seeing that, you know an application is not isolated to a single platform anymore right. It really expands. And so being able to leverage this common open platform to be able to manage Z in the same way that you manage the entire rest of your enterprise, whether that's Linux or Windows or network or storage or anything right. You know you can now actually bring this all together into a common automation plane in control plane to be able to manage to all of this. It's also really great from a skills perspective. So, it enables us to really be able to leverage. You know Python on the platform and that's whole ecosystem of Ansible skills that are out there and be able to now use that to work with Z. >> So it's essentially a modern abstraction layer of agility and people to work on it. (laughs) >> Yeah >> You know it's not the joke, Hey, where's that COBOL programmer. I mean, this is a serious skill gap issues though. This is what we're talking about here. You don't have to replace the, kill the old to bring in the new, this is an example of integration where it's classic abstraction layer and evolution. Is that, am I getting that right? >> Absolutely. I mean I think that Ansible's power as an orchestrator is part of why, you know, it's been so successful here because it's not trying to rip and replace and tell you that you have to rewrite anything that you already have. You know, it is that glue sort of like you used that term earlier right? 
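As a rough illustration of what the collection-based approach described here can look like in practice, the sketch below drives a small playbook against a z/OS host from Python. The module names (zos_operator, zos_data_set) come from the ibm.ibm_zos_core collection as it was documented around this time; treat the inventory file, host alias, and data set names as hypothetical, and verify module options against the collection's current documentation.

```python
# Sketch: run a tiny Ansible playbook against a z/OS host from Python.
# Assumes ansible-core plus the ibm.ibm_zos_core collection are installed
# (ansible-galaxy collection install ibm.ibm_zos_core) and that an inventory
# file "inventory.yml" defines a host named "zos_host" with working credentials.
# All names below are placeholders.
import subprocess
import tempfile
from pathlib import Path

PLAYBOOK = """\
- name: Small z/OS housekeeping example
  hosts: zos_host
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Ask the system for its active address spaces
      zos_operator:
        cmd: "D A,L"

    - name: Make sure a work data set exists (idempotent, no change if present)
      zos_data_set:
        name: HLQ.DEMO.WORKDS
        type: PDS
        state: present
"""

with tempfile.TemporaryDirectory() as tmp:
    playbook_path = Path(tmp) / "zos_demo.yml"
    playbook_path.write_text(PLAYBOOK)
    # Equivalent to running: ansible-playbook -i inventory.yml zos_demo.yml
    subprocess.run(
        ["ansible-playbook", "-i", "inventory.yml", str(playbook_path)],
        check=True,
    )
```

The second task is the kind of thing the speakers mean by moving off one-off, proprietary automation: the definition is declarative, lives in source control alongside everything else in the enterprise, and is safe to rerun.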
It's that glue that can span you know, whether you've got rec whether you've got ACL, whether you're using z/OSMF you know, or any other kind of custom automation on the platform, you know, it works with everything and it can start to provide that transparency into it as well, and move to that, like infrastructure as code type of culture. So you can bring it into source control. You can have visibility to it as part of the Ansible automation platform and tower and those capabilities. And so you, it really becomes a part of the whole enterprise and enables you to codify a lot of that knowledge. That, you know, exists again in pockets or in individuals and make it much more accessible to anybody new who's coming to the platform. >> That's a great point, great insight.& It's worth calling out. I'm going to make a note of that and make a highlight from that insight. That was awesome. I got to ask about this notion of client uptake. You know, when you have z/OS and Ansible kind of come in together, what are the clients area? When do they get excited? When do they know that they've got to do? And what are some of the client reactions? Are they're like, wake up one day and say, "Hey, yeah I actually put Ansible and z/OS together". You know peanut butter and chocolate is (mumbles) >> Honestly >> You know, it was just one of those things where it's not obvious, right? Or is it? >> Actually I have been surprised myself at how like resoundingly positive and immediate the reactions have been, you know we have something, one of our general managers runs a general managers advisory council and at some of our top clients on the platform and you know we meet with them regularly to talk about, you know, the future direction that we're going. And we first brought this idea of Ansible managing to Z there. And literally unanimously everybody was like yes, give it to us now. (laughs) It was pretty incredible, you know? And so it's you know, we've really just seen amazing uptake. We've had over 5,000 downloads of our core collection on galaxy. And again that's just since mid to late March when we first launched. So we're really seeing tremendous excitement with it. >> You know, I want to want to talk about some of the new announcements, but you brought that up. I wanted to kind of tie into it. It is addictive when you think modernization, people success is addictive. This is another theme coming out of AnsibleFest this year is that when the sharing, the new content you know, coders content is the theme. I got to ask you because you mentioned earlier about the business value and how the clients are kind of gravitating towards it. They want it.It is addictive, contagious. In the ivory towers in the big, you know, front office, the business. It's like, we've got to make everything as a service. Right. You know, you hear that right. You know, and say, okay, okay, boss You know, Skyla, just go do it. Okay. Okay. It's so easy. You can just do it tomorrow, but to make everything as a service, you got to have the automation, right. So, you know, to bridge that gap has everything is a service whether it's mainframe. I mean okay. Mainframe is no problem. If you want to talk about observability and microservices and DevOps, eventually everything's going to be a service. You got to have the automation. Could you share your, commentary on how you view that? Because again, it's a business objective everything is a service, then you got to make it technical then you got to make it work and so on. 
So what's your thoughts on that? >> Absolutely. I mean, agility is a huge theme that we've been focusing on. We've been delivering a lot of capabilities around a cloud native development experience for folks working on COBOL, right. Because absolutely, you know, there's a lot of languages coming to the platform. Java is incredibly powerful and it actually runs better on Z than it runs on any other platform out there. And so, you know, we're seeing a lot of clients, you know, starting to modernize and continue to evolve their applications because the platform itself is incredibly modern, right? I mean, we come out with new releases, we're leading the industry in a number of areas around resiliency, in our security and all of our, you know, pervasive encryption and a number of things that we come out with, but, you know, the applications themselves are what, you know, has not always kept pace with the rate of change in the industry. And so, you know, we're really trying to help enable our clients to make that leap and continue to evolve their applications in an important way, and the automation and the tools that go around it become very important. So, you know, one of the things that we're enabling is the self-service provisioning experience, right. So clients can, you know, from OpenShift, be able to, you know, say, "Hey, give me an IMS and z/OS Connect stack or a CICS and Db2 stack." And that all under the covers is going to be powered by Ansible automation. So that really, you know, you can get your system programmers and your talent out of having to do these manual tasks, right. Enable the development community. So they can use things like VS Code and Jenkins and GitLab, and you'll have this automated CI/CD pipeline. And again, Ansible under the covers can be there helping to provision those test environments. You know, move the data, you know, along with the application changes through the pipeline and really just help to support that so that our clients can do what they need to do. >> You guys got the collections in the hub there, so Automation Hub, I got to ask you, where do you see the future of automating within z/OS going forward? >> Yeah, so I think, you know, one of the areas that we'd like to see it go is head more towards this declarative state, so that you can, you know, have this declarative configuration defined for your Z environment and then have Ansible, really with the idempotency, right, be able to go out and ensure that the environment is always there and meeting those requirements. You know, that's partly a culture change as well which goes along with it, but that's a key area. And then also just, you know, along with that, becoming more proactive overall, part of, you know, AIOps, right. That's happening. And I think Ansible and the automation that we support can become, you know, an integral piece of supporting that more intelligent and proactive operational direction that, you know, we're all going. >> Awesome Skyla. Great to talk to you. And so insightful, appreciate it. One final question. I want to ask you a personal question because I've been doing a lot of interviews around skill gaps and cybersecurity, and there's a lot of jobs, more job openings, and there are a lot of people. And people are with COVID working at home. People are looking to get new skilled up positions, new opportunities. Again cybersecurity and spaces and event we did and want to, and for us it's huge, huge openings. 
But for people watching who are, you know, resetting getting through this COVID want to come out on the other side there's a lot of online learning tools out there. What skill sets do you think? Cause you brought up this point about modernization and bringing new people and people as a big part of this event and the role of the people in community. What areas do you think people could really double down on? If I wanted to learn a skill. Or an area of coding and business policy or integration services, solution architects, there's a lot of different personas, but what skills can I learn? What's your advice to people out there? >> Yeah sure. I mean on the Z platform overall and skills related to Z, COBOL, right. There's, you know, like two billion lines of COBOL out there in the world. And it's certainly not going away and there's a huge need for skills. And you know, if you've got experience from other platforms, I think bringing that in, right. And really being able to kind of then bridge the two things together right. For the folks that you're working for and the enterprise we're working with you know, we actually have a bunch of education out there. You got to master the mainframe program and even a competition that goes on that's happening now, for folks who are interested in getting started at any stage, whether you're a student or later in your career, but you know learning, you know, learn a lot of those platforms you're going to be able to then have a career for life. >> Yeah. And the scale on the data, this is so much going on. It's super exciting. Thanks for sharing that. Appreciate it. Want to get that plug in there. And of course, IBM, if you learn COBOL you'll have a job forever. I mean, the mainframe's not going away. >> Absolutely. >> Skyla, thank you so much for coming on theCUBE Vice President, for the Z Application Platform and IBM, thanks for coming. Appreciate it. >> Thanks for having me. >> I'm John Furrier your host of theCUBE here for AnsibleFest 2020 Virtual. Thanks for watching. (upbeat music)
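To close the loop on the declarative, idempotent direction described in this interview, here is a small, language-agnostic sketch of the idea in plain Python: state the configuration you want, compare it to what exists, and only act on the difference, so applying the same definition twice is a no-op. The resource names are invented; a real system would query and change z/OS or other platform APIs rather than an in-memory dictionary.

```python
# Sketch of declarative, idempotent reconciliation: desired state is data,
# and the engine only changes what differs from the observed state.
# Resource names are invented placeholders for illustration only.

desired_state = {
    "dataset:HLQ.APP.CONFIG": {"exists": True, "type": "PDS"},
    "started_task:CICSAPP1": {"running": True},
}

observed_state = {
    "dataset:HLQ.APP.CONFIG": {"exists": True, "type": "PDS"},
    "started_task:CICSAPP1": {"running": False},
}

def reconcile(desired, observed):
    """Return the actions needed to move observed state to desired state."""
    actions = []
    for resource, spec in desired.items():
        if observed.get(resource) != spec:
            actions.append((resource, spec))
    return actions

for resource, spec in reconcile(desired_state, observed_state):
    print(f"would change {resource} -> {spec}")

# Running reconcile() again after the changes are applied yields no actions,
# which is what makes the same definition safe to apply repeatedly.
```

That repeat-safety is the practical payoff of the culture change mentioned above: the configuration file, not a person's memory of manual steps, becomes the source of truth for the environment.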
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Phil Allison | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
AnsibleFest | ORGANIZATION | 0.99+ |
Walter Bentley | PERSON | 0.99+ |
Skyla Loomis | PERSON | 0.99+ |
Java | TITLE | 0.99+ |
Python | TITLE | 0.99+ |
tomorrow | DATE | 0.99+ |
Linux | TITLE | 0.99+ |
two things | QUANTITY | 0.99+ |
Windows | TITLE | 0.99+ |
Pat Lane | PERSON | 0.99+ |
this year | DATE | 0.99+ |
Skyla | PERSON | 0.99+ |
Ansible | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
mid | DATE | 0.98+ |
100 clients | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
One final question | QUANTITY | 0.98+ |
over 5,000 downloads | QUANTITY | 0.97+ |
Z | TITLE | 0.97+ |
two billion lines | QUANTITY | 0.97+ |
March of this year | DATE | 0.95+ |
Z. | TITLE | 0.95+ |
VS Code | TITLE | 0.95+ |
COBOL | TITLE | 0.93+ |
z/OS | TITLE | 0.92+ |
single platform | QUANTITY | 0.91+ |
hundreds of billions of transactions a day | QUANTITY | 0.9+ |
first | QUANTITY | 0.9+ |
Allstate | ORGANIZATION | 0.88+ |
Palo Alto Studios | LOCATION | 0.88+ |
Z Application Platform | TITLE | 0.86+ |
four years ago | DATE | 0.84+ |
COVID | EVENT | 0.81+ |
late March | DATE | 0.81+ |
about | DATE | 0.8+ |
Vice | PERSON | 0.79+ |
Jenkins | TITLE | 0.78+ |
Vice President | PERSON | 0.77+ |
AnsibleFest 2020 | EVENT | 0.77+ |
IBM Z. | TITLE | 0.72+ |
two thirds | QUANTITY | 0.72+ |
one big distribute computer | QUANTITY | 0.72+ |
one day | QUANTITY | 0.71+ |
z/OSMF | TITLE | 0.69+ |
Z. | ORGANIZATION | 0.69+ |
Black Knight | TITLE | 0.64+ |
ACL | TITLE | 0.64+ |
CICS | ORGANIZATION | 0.63+ |
IMS | TITLE | 0.63+ |
Victoria Stasiewicz, Harley-Davidson Motor Company | IBM DataOps 2020
>> From theCUBE studios in Palo Alto and in Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi everybody, this is Dave Vellante, and welcome to this special digital CUBE presentation sponsored by IBM. We're going to focus in on DataOps, DataOps in action. A lot of practitioners tell us that they really have challenges operationalizing and infusing AI into the data pipeline. We're going to talk to some practitioners and really understand how they're solving this problem, and I'm really pleased to bring in Victoria Stasiewicz, who's the Global Information Systems Manager for Information Management at Harley-Davidson. Vik, thanks for coming on theCUBE, great to see you. I wish we were face to face, but I really appreciate you coming on in this manner. >> That's okay, that's why technology's great, right? >> So you are steeped in a data role at Harley-Davidson. Can you describe a little bit about what you're doing and what that role is like? >> Definitely. So obviously I'm a manager of information management and governance at Harley-Davidson, and what my team is charged with is building out data governance at an enterprise level, as well as supporting the AI and machine learning technologies within my function. So I have a portfolio, and that portfolio really includes data and AI governance and also our master data, reference data, and data quality functions, if you're familiar with the DAMA wheel, of course. What I can tell you is that my team did an excellent job within this last year, in 2019, standing up the infrastructure: those technologies specific to governance, as well as the newer, more modern Warehouse on Cloud technologies and Cloud Object Storage, which also included Watson Studio and Watson Explorer. Many of the IBMers of the world might hear about it or work on it directly; we stood that up in the cloud, as well as Db2 Warehouse on Cloud and, like I said, Cloud Object Storage. We spent about the first five months of last year standing that infrastructure up, working on the workflow, and ensuring that access and security management was all set up within the platform. What we did the last half of the year was really start to collect that metadata, as well as the data itself, bring the metadata into our metadata repository, and then also bring the data into our Db2 Warehouse on Cloud environment. So we were able to start with what we would consider our dealer domain for Harley-Davidson and bring those dimensions into Db2 Warehouse on Cloud, which was never done before. A lot of the information that we were collecting and bringing together for the analytics team lived in disparate data sources throughout the enterprise, so the goal was to stop with the redundant data across the enterprise, eliminate some of those disparate source data resources, and bring it into a centralized repository for reporting. >> Okay, wow, we've got a lot to unpack here, Victoria. So let me start with sort of the macro picture. I mean, years ago the data was this thing that had to be managed, and it still does, but it was a cost, largely a liability. Governance was sort of front and center; sometimes it was the tail that wagged the value dog. And then the whole big data movement comes in and everybody wants to be data-driven, and so you saw some pretty big changes in just the way in which people looked at data: they wanted to mine that data and make it an asset versus just a straight liability. So what are the changes that you discerned in data and in your organization over the last, let's say, half a decade? >> To tell you the truth, we started looking at access management and the ability to allow some of our users to do some rapid prototyping that they could never do before. What more and more we're seeing from data citizens or data scientists, or even analysts throughout most enterprises, is that they want access to the information, they want it now, they want speed to insight at this moment, using pretty much a minimum viable product. They may not need the entire data set, and they don't want to have to go through leaps and bounds just to get access to that information, or to bring that information into necessarily a centralized location. So while I talk about our Db2 Warehouse on Cloud, and that's an excellent example of where we actually need to model data, where we know that this is data that we trust, that's going to be called upon many, many times by many, many analysts, there's other information out there that people are collecting, because there's so much big data, there are so many ways to enrich your data within your organization for your customer reporting, and people are really trying to tap into those third-party data sets. So what my team has done, and what we're seeing change throughout the industry, is that a lot of teams and a lot of enterprises are looking at us as technologists: how can we enable our scientists and our analysts to access data virtually? So instead of recreating redundant data sources, we're actually enabling data virtualization at Harley-Davidson, and we've been doing that first working with our Db2 Warehouse on Cloud and connecting to some of the other trusted versions of data warehouses that we have throughout the enterprise, that being our dealer warehouse as well, to enable analysts to do some quick reporting without having to bring all that data together. That is a big change I see. The fact that we were able to tackle that has allowed technology to get back ahead, because most organizations have given IT a bad rap: it takes too long to get what we need, my technologists cannot give me my data at my fingertips in a timely manner to allow for speed to insight and answer the business questions at the point of delivery. We've supplied data to our analysts, and they're able to calculate and aggregate the reporting metrics to get those answers back to the business, but they're a week, two weeks too late, and the information is no longer relevant. So data virtualization through DataOps is one of the ways we've been able to speed that up and act as a catalyst for data delivery. But what we've also done, and I see this quite a bit, is say: well, that's excellent, but we still need to start classifying our information and labeling it at the system level. We've seen most enterprises, and I worked at Blue Cross as well with IBM tools, have the same struggle: they were trying to eliminate their technology debt, reduce their spend, reduce the time it takes for resources working on technologies to maintain those technologies. They want to reduce the IT portfolio of assets and capabilities that they license today. So what do they do to do that? It's time to start taking a look at what systems should be classified as essential systems versus those systems that are disparate and could be eliminated, and that starts with data governance, right?
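To make the data virtualization idea above a little more concrete, here is a minimal sketch of what "query it where it lives" could look like for an analyst: one connection into Db2 Warehouse on Cloud joining a locally modeled dealer dimension to a table exposed through virtualization. The connection string, schema, and table names are hypothetical placeholders, not Harley-Davidson's, and it assumes IBM's ibm_db Python driver.

```python
# A hedged sketch: one SQL statement joins a dimension modeled in Db2 Warehouse
# on Cloud to a table surfaced through data virtualization, so the analyst gets
# one result set with no copy jobs. Names and credentials are placeholders.
import ibm_db_dbi  # from the ibm_db package (pip install ibm_db)
import pandas as pd

conn = ibm_db_dbi.connect(
    "DATABASE=BLUDB;HOSTNAME=db2w-example.cloud.ibm.com;PORT=50001;"
    "PROTOCOL=TCPIP;SECURITY=SSL;UID=analyst;PWD=changeme",
    "", "")

sql = """
SELECT d.dealer_id, d.dealer_name, s.units_sold, s.fiscal_week
FROM   DEALER.DIM_DEALER        AS d   -- modeled in the warehouse
JOIN   VIRT.DEALER_SALES_REMOTE AS s   -- virtualized view of a remote source
       ON s.dealer_id = d.dealer_id
WHERE  s.fiscal_week >= 202001
"""

report = pd.read_sql(sql, conn)
print(report.head())
```

The point of the pattern is that the analyst writes one query and never stands up a new copy of the remote data; where the remote table actually lives is the platform's problem, not theirs.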
>> Okay, so your main focus is on governance, and you talked about how people want answers now; they don't want to have to wait, they don't want to go through a big waterfall process. So what would you say were some of the top challenges in terms of just operationalizing your data pipeline and getting to the point you're at today? >> You know, I have to be quite honest: standing up the governance framework and the methodology behind it, getting the data owners, the data stewards, and a catalog established, that was not necessarily the heavy lifting. The heavy lifting really came with setting up a brand new infrastructure in the cloud, to be quite honest. We partnered with IBM and said, you know what, we're going to the cloud, and these tools had never been implemented in the cloud before; we were kind of the first to do it. So some of the struggles we took on were actually standing up the infrastructure: security and access management, network pipeline access, VPN issues, things of that nature. Those were some of the initial roadblocks we went through, but after we overcame those challenges, with the help of IBM and the patience of both the Harley and IBM teams, it became quite easy to roll out these technologies to other users. The nice thing is, we at Harley-Davidson have been taking the time to educate our users. For example, we had what we call Data Bytes, a lunch and learn, and in that lunch and learn what we did is take our entire GIS team, our Global Information Services team, which is all of IT, through these new technologies. It was a forum of over 250 people, with our CIO and CTO on, taking them through how we use these tools, what the purpose of these tools is, why we need governance to maintain these tools, and why metadata management is important to the organization. That piece of it seems to be much easier than just the initial standing it up so that it's good enough to start letting users in. >> Well, it sounds like you had real sponsorship from leadership and input from leadership, and they were kind of leaning into the whole process. First of all, is that true, and how important is that for success? >> Oh, it's essential. We often asked, when we were first standing up the tools, to be quite honest: does our CIO really understand what it is that we're standing up? Does our CIO really understand governance? Because we didn't have the time to really get that face-to-face interaction with our leadership. So I myself made it a mandate, having done this previously at Blue Cross, to get in front of my CIO and my CTO and educate them on what exactly it is we're standing up, and once we did that it was very easy to get an executive steering committee as well as an executive membership council on board with our governance council, and now they're the champions of it. It's never easy, though; selling governance to leadership and the ROI is never easy, because it's not something you can easily calculate. It's something that has to show its return on investment over time, and that means you're bringing dashboards, you're educating your CIO and CTO on how you're bringing people together, how groups are now talking about solutions and technologies in a domain-like environment, where you have people at an international level. We have people from Asia, from Europe, from China that join calls every Thursday to talk about the data quality issues specific to dealer, for example: what systems we're using, what solutions are on the horizon to solve them. So now, instead of having people from other countries that work for Harley, as well as just even within the US, creating one-off solutions that answer the same business questions using the same data, creating multiple solutions to solve the same problem, we're now bringing them together, we're solving together, and we're prioritizing those as well. So down the line you can show that return on investment: you know what, instead of this turning into five projects, we've now turned this into one, and instead of implementing four systems, we've now implemented one. And guess what, we have the business rules and we have the classifications tied to this system, so that your CIO or CTO can now go in and reference this information: a glossary, a user interface, something that a C-level can read, interpret, understand quickly, and dissect for their own needs without having to take the long, lengthy time to talk to a technologist about what this information means and how to use it. >> You know, what's interesting, the takeaway based on what you just said: Harley-Davidson is an iconic brand, a cool company with, you know, motorcycles, but you came out of an insurance background, which is a regulated industry where governance is sort of de rigueur; I mean, it's table stakes. So how were you able, at Harley, to balance that tension between governance and business flexibility? >> So there are different levers, I would call them. Obviously within healthcare and insurance the importance becomes compliance and risk and regulatory; those are the big pushes: gosh, I don't want to pay millions of dollars in fines, start classifying this information, enabling security, reducing risk, all that good stuff. For Harley-Davidson it was much different. It was more or less: we have a mission. We want to invest in our technologies, yet we want to save money. How do we cut down the technologies that we have today and reduce our technology spend, yet enable our users to have access to more information in a timely manner? That's not an easy path. So what I did is I married governance to our TIME model, and our TIME model is specifically: are we going to tolerate an application, are we going to invest in an application, are we going to migrate an application, or are we going to eliminate it? So in talking to my CIO, I said, you know, we can use governance to classify the systems and help act as a catalyst when we start to implement what we're doing with our technologies, which technologies we're going to eliminate tomorrow. We as IG cannot do that unless we discuss some sort of business impact, unless you look at a system and say: how many users are using this, what reports are essential to the business teams, do they need this system, is this something that's critical for users today, is this duplicative? We have many systems that are solving the same capability. That is how I sold it to my CIO, and it made it important to the rest of the organization. They knew we had a mandate in front of us, we had to reduce technology spend, and that really, for me, made it quite easy in talking to other technologists as well as business users about why governance is important and why it's going to help Harley-Davidson in its mission to save money going forward.
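As a rough illustration of the TIME exercise described above, here is a toy sketch of using the business impact captured through governance (active users, essential reports, overlapping capabilities) to suggest a tolerate, invest, migrate, or eliminate bucket for each application. The fields and thresholds are invented for illustration, not Harley-Davidson's actual rules.

```python
# Toy scoring of an application portfolio into the TIME buckets
# (Tolerate / Invest / Migrate / Eliminate). Fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    active_users: int
    essential_reports: int        # reports the business flagged as essential
    duplicates_capability: bool   # another system already covers this capability
    annual_license_cost: float

def time_bucket(app: App) -> str:
    if app.duplicates_capability and app.active_users < 25:
        return "eliminate"
    if app.duplicates_capability:
        return "migrate"   # keep the users, move them to the surviving system
    if app.essential_reports > 0 and app.active_users >= 100:
        return "invest"
    return "tolerate"

portfolio = [
    App("Legacy profiling tool", 12, 0, True, 85_000.0),
    App("Dealer warehouse (CDR)", 430, 22, False, 240_000.0),
    App("SharePoint metadata sheets", 60, 1, True, 0.0),
]

for app in portfolio:
    print(f"{app.name}: {time_bucket(app)}  (${app.annual_license_cost:,.0f}/yr)")
```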
I will tell you, though, that the biggest value to the business is the fact that they now own the data; they're more likely to use your master data management systems. Like I said, I'm the owner of our MDM services today, as well as our customer knowledge center, and they're more likely to access and reference those systems if they feel that they built the rules and they own the rules in those systems. So that's another big value-add too, because many business users will say: okay, you think I need access to this system? I don't know, I'm not sure, I don't know what the data looks like within it, is it easily accessible, is it going to give me the reporting metrics that I need? That's where governance will help them. For example, our data scientist team, using a catalog, can browse the metadata: you can look at your server, your database, your tables, your fields, understand what those mean, understand the classifications and the formulas within them, because they're all documented in a glossary, versus having to go and ask for access to six different systems throughout the enterprise, hoping that Sally next to you, who told you you needed access to those systems, was right, just to find out that you don't need the access, and it took you three days to get the access anyway. That's why a glossary is really a catalyst for a lot of that. >> Well, it's really interesting what you just said. You went through essentially an application rationalization exercise, which saved your organization money. That's not always easy, because even though IT may be spending money on these systems, businesses don't want to give them up. But it sounds like you were able to use data to actually inform which applications you should invest in versus sunset, and it sounds like you were giving the business a real incentive to go through this exercise, because they ended up, as you said, owning the data. >> Well, that's what's great, right? Who wants to keep owning the old car, driving the old car, if they can truly own a new car for a cheaper price? Nobody wants to do that. I've even looked at Teslas: I can buy a Tesla for the same price as I can buy a minivan these days, so I think I might buy the Tesla. But what I will say is that we also built out a capabilities model with our enterprise architecture team, and in building that capabilities model we started to bucket our technologies within those capability models: AI and machine learning, Warehouse on Cloud technologies or even warehousing technologies, governance technologies, those types of classifications, integration technologies, reporting technologies. By grouping all those into a capabilities matrix, it was easy for us to then start identifying: all right, we're the system owners for these when it comes to technologies, so who are the business users for these? Based on that, let's go talk to this team, the dealer management team, about access to this new profiling capability within IBM, or this new catalog within IBM, that they can use today, versus the SharePoint and Excel spreadsheets they were using for their metadata management, or the profiling tools that were old, you know, ten years old, that they were using before. Let's sell them on the new tools and start migrating them. That becomes pretty easy, because unless you're buying some really old technology, when you give people a purview into those new tools and those new capabilities, especially with some of IBM's new tools we have today, the buy-in is pretty quick. It's pretty easy to sell somebody on something shiny, and it's much easier to use than some of the older technologies. >> Let's talk about the business impact. My understanding is you were trying to improve the effectiveness of the dealers, not just go out and brute-force sign up more dealers. Were you able to achieve that outcome, and what has it meant for your business? >> Yes, actually, we were. What we did is we stood up something called a CDR, and that's our consumer and dealer development repository; that's where a lot of our dealer information resides today, and it's actually our dealer warehouse. We had some other systems that were collecting that information as well, and we were able to bring all that reporting into one location, sunset some of those other technologies, but then also enable that centralized reporting layer, where we've also used data virtualization to start to marry some of that information to Db2 Warehouse on Cloud for users. So we're allowing those that want to access CDR and our Db2 Warehouse on Cloud dealer information to do that within one reporting layer. In doing so, we were able to create something called a dealer harmonized ID. We have so many dealers today, and some of those dealers actually sell bikes, some of those dealers sell just apparel, some of those dealers sell just parts. So for those dealers, can we have certain unique IDs, kind of a golden record of mastered information, if you will, brought back into reporting so that we can accurately assess dealer performance? Up to two years ago it was really hard to do that. We had information spread out all over, and it was really hard to get a good handle on which dealers were performing and which dealers weren't, because it was tough for our analysts to wrangle that information and bring it together. It took time, and many times you would get multiple answers to one business question, which is never good; one question should have one answer if it's accurate. That is what we worked on this last year, and that's where really our CEO saw the value: now we can start to act on which dealers are performing at an optimal level versus which dealers are struggling. And that's allowed even our account reps, our field staff, to go work with those struggling dealers and start to share with them the information: you know, these are the things some of our stronger-performing dealers are doing today that are making them more effective at selling bikes, and these are some of the best practices you can implement. That's where we make our field staff smarter and our dealers smarter. We're not looking to shut down dealers; we just want to educate them on how to do better.
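The dealer harmonized ID described above is essentially a golden-record problem: the same dealer shows up in several source systems under slightly different names, and those records need to collapse onto one survivor ID. A real MDM match engine uses probabilistic scoring and steward review; the sketch below only illustrates the basic idea with a normalized match key and made-up records.

```python
# Collapse dealer records from several systems onto one harmonized ID using a
# normalized match key. Illustrative only: real matching is probabilistic and
# reviewed by data stewards, and these records are invented.
import re
from collections import defaultdict

NOISE = ("harleydavidson", "harley", "davidson", "motorcycles", "apparel", "hd", "inc", "llc")

def match_key(name: str, postal_code: str) -> str:
    normalized = re.sub(r"[^a-z0-9]", "", name.lower())
    for word in NOISE:                      # strip brand and legal noise words
        normalized = normalized.replace(word, "")
    return f"{normalized}|{postal_code}"

source_records = [
    {"src": "CDR",     "id": "D-1001", "name": "Badger H-D",             "zip": "53201"},
    {"src": "POS",     "id": "88412",  "name": "Badger Harley-Davidson", "zip": "53201"},
    {"src": "Apparel", "id": "AP-77",  "name": "Badger HD Apparel",      "zip": "53201"},
    {"src": "CDR",     "id": "D-2044", "name": "Lakeside Motorcycles",   "zip": "54601"},
]

groups = defaultdict(list)
for rec in source_records:
    groups[match_key(rec["name"], rec["zip"])].append(rec)

for i, members in enumerate(groups.values(), start=1):
    harmonized_id = f"DLR-{i:05d}"
    print(harmonized_id, "->", [f"{m['src']}:{m['id']}" for m in members])
```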
>> Well, and to your point about a single version of the truth, the lines of business kind of owning their own data, that's critical, because you're not spending all your time pointing fingers trying to understand the data; if the users own it, then they own it. And so how does self-service fit in? Were you able to achieve some level of self-service, and how far can you go there? >> We were. We did use some other tools, I'll be quite honest, aside from just the IBM tools, that have enabled some of that self-service analytics; Alteryx is a big one that our analyst team likes to use today to wrangle and bring that data together. That really allowed our analysts, spread across our reporting teams, to start to build their own derivations and transformations for reporting themselves, because those tools are more user-interface based, versus going into the back-end systems and having to write straight SQL queries, things of that nature, which usually takes time and requires a deeper level of knowledge than what we'd like to require of our analysts today. I can say the same thing for the data scientist team: they use a lot of R and Python coding today, and what we've tried to do is make sure that the tools are available so that they can do everything they need to do without us really having to touch anything. And I will be quite honest, we have not had to touch much of anything; we have a very skilled data scientist team. So I will tell you that the tools we put in place today, Watson Explorer and some of the other tools as well, have enabled the data scientists to move really quickly and do what they need to do for reporting. And even in cases where maybe Watson or Explorer may not be the optimal technology for them to use, we've also allowed them to use some of our other open source resources to build some of the models that they were looking to build. >> Well, I'm glad you brought that up, Victoria, because IBM makes a big deal out of being open, and so you're kind of confirming that you can use third-party tools, and if you like tool vendor ABC, you can use them as part of this framework. >> Yeah, it's really about TCO. So take a look at what you have today: if it's giving you at least 80% of what you need for the business, or for your data scientists or reporting analysts, to do what they need to do, to me it's good enough; it's giving you what you need, and it's pretty hard to find anything that's exactly 100 percent. It's about being open, though, to when your scientists or your analysts find another reporting tool that requires minimal maintenance, or let's just say a data science flow that requires minimal maintenance and is free because it's open source. IBM can integrate with that, and we can enable that to be a quicker way for them to do what they need to do, versus telling them no, you can't use the other technologies or the other open source offerings out there, you've got to use just these tools. That's pretty tough to do, and I think that would shut most IT shops down pretty quickly within larger enterprises, because it would really act as a roadblock to most of our teams doing the reporting they need to do. >> Well, last question. A big part of this DataOps, borrowing from DevOps, is continuous integration, continuous improvement, kind of an ongoing raising of the bar, if you will. What do you see going on from here? >> Oh, I definitely see a world where we're allowing for that rapid prototyping, like I was talking about earlier. I see a very big change in the data industry. You said it yourself: we are on the brink of big data and it's only going to get bigger. There are organizations right now that have literally understood how much of an asset their data really is, and they're starting to sell their data to other, similar vendors within the industry, similar spaces, so they can make money off of it, because data truly is an asset. Now, the key to it is obviously making sure that it's curated, that it's cleansed, that it's trusted, so that when you are selling it back you can really make money off of it. What I really see on the horizon, though, is the ability to vet that data. In the past, what have we been doing for the past decade? We were just buying big data sets and trusting that it's good information; we're not doing a lot of profiling at most organizations. You're going to pay big top dollar, you're going to receive this third-party data set, and you're not going to be able to use it the way you need to. What I see on the horizon is us being able to do that vetting. We're building data lakehouses, if you will, really those Hadoop-like environments, those data lakes, where we can land information, quickly access it, and quickly profile it with tools, where it would otherwise take hours for an analyst to write a bunch of queries just to understand what the profile of that data looks like. We did that recently at Harley-Davidson: we bought some third-party data and evaluated it quickly through our agile scrum team, and within a week we determined that the data was not as good as the vendor selling it had pretty much sold it to be. So we told the vendor: we want our money back, the data is not what we thought it would be, please take the data sets back. Now, that's just one use case, but to me that was golden. It's a way to save money and start vetting the data that we're buying. Otherwise, what I've seen in the past is many organizations just buying up big third-party data sets and saying, okay, it's good enough, we think that just because it comes from the motorcycle industry council it's good enough. It may not be. It's up to us to start vetting that, and that's where technology is going to change, data is going to change, and analytics is going to change. >> That's a great example. You're really on the cutting edge of this whole DataOps trend. I really appreciate you coming on theCUBE and sharing your insights, and there's more in the CrowdChat. Thank you, Victoria, for coming on theCUBE. >> Well, thank you, Dave, nice to meet you, it was a pleasure speaking with you. >> Yeah, really, the pleasure was all ours. And thank you for watching, everybody. As I say, there's a CrowdChat on DataOps for more detail and more Q&A. This is Dave Vellante for theCUBE. Keep it right there, we'll be right back right after this short break. [Music]
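The third-party data vetting Victoria describes can be as simple as an automated profile run before a purchase is accepted. Below is a hypothetical sketch with pandas; the file name, columns, and acceptance thresholds are all assumptions made for illustration, not Harley-Davidson's actual checks.

```python
# Quick profile of a purchased data set: row count, duplicates, null rates,
# and a simple validity check, followed by an invented accept/reject rule.
import pandas as pd

df = pd.read_csv("thirdparty_registrations.csv")   # hypothetical vendor file

profile = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "null_pct": (df.isna().mean() * 100).round(1).to_dict(),
    "bad_zip_pct": float((~df["zip"].astype(str).str.fullmatch(r"\d{5}")).mean() * 100),
}
print(profile)

# Invented acceptance rule: send it back if key fields are too sparse or dirty.
key_fields = ("vin", "zip", "model_year")
too_sparse = max(profile["null_pct"].get(col, 0.0) for col in key_fields) > 20
too_dirty = profile["bad_zip_pct"] > 10 or profile["duplicate_rows"] > 0.05 * max(profile["rows"], 1)
print("ask the vendor for our money back" if too_sparse or too_dirty else "data looks usable")
```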
Ted Kummert, UiPath | The Release Show: Post Event Analysis
>> Narrator: From around the globe it's theCUBE! With digital coverage of UiPath Live, the release show. Brought to you by UiPath. >> Hi everybody, this is Dave Vellante, welcome back to our RPA Drill Down. Ted Kummert is here, he is Executive Vice President for Products and Engineering at UiPath. Ted, thanks for coming on, great to see you. >> Dave, it's great to be here, thanks so much. >> Dave your background is pretty interesting, you started as a Silicon Valley Engineer, they pulled you out, you did a huge stint at Microsoft. You got experience in SaaS, you've got VC chops with Madrona. And at Microsoft you saw it all, the NT, the CE Space, Workflow, even MSN, you did stuff with MSN, and then the all-important data. So I'm interested in what attracted you to UiPath? >> Yeah Dave, I feel super fortunate to have worked in the industry in this span of time, it's been an amazing journey, and I had a great run at Microsoft, it was fantastic. You mentioned one experience in the middle there, when I first went to the server business, the enterprise business, I owned our Integration and Workflow products, and I would say that's the first I encountered this idea. Often in the software industry there are ideas that have been around for a long time, and what we're doing is refining how we're delivering them. And we had ideas we talked about in terms of Business Process Management, Business Activity Monitoring, Workflow. The ways to efficiently enable somebody to express the business process in a piece of software. Bring systems together, make everybody productive, bring humans into it. These were the ideas we talked about. Now in reality there were some real gaps. Because what happened in the technology was pretty different from what the actual business process was. And so let's fast forward then, I met Madrona Venture Group, a Seattle-based venture capital firm. We actually made a decision to participate in one of UiPath's fundraising rounds. And that's the first I really became acquainted with the company and had to have more than an intellectual understanding of RPA. 'Cause when I first saw it, I said "oh, I think that's desktop automation," I didn't look very close, maybe that's going to run out of runway, whatever. And then I got more acquainted with it and figured out "Oh, there's a much bigger idea here". And the power is that by really considering the process and the implementation from how the humans work in it, then you have an opportunity really to automate the real work. Not that what we were doing before wasn't significant, this is just that much more powerful. And that's when I got really excited. And then the company's statistics and growth and everything else just speaks for itself, in terms of an opportunity to work, I believe, in one of the most significant platforms going in the enterprise today, and work at one of the fastest growing companies around. It was like almost an automatic decision to decide to come to the company. >> Well you know, you bring up a good point, you think about software historically through our industry, a lot of it was 'okay here's this software, now figure out how to map your processes to make it all work' and today the processes, especially you think about this pandemic, the processes are unknown. And so the software really has to be adaptable. So I'm wondering, and essentially we're talking about a fundamental shift in the way we work. And is there really a fundamental shift going on in how we write software, and how would you describe that?
>> Well there certainly are, and in a way that's the job of what we do when we build platforms for the enterprises: try and give our customers a new way to get work done that's more efficient and helps them build more powerful applications. And that's exactly what RPA does, the efficiency. It's not that this is the only way in software to express a lot of this, it just happens to be the quickest, you know, in most ways. Especially as you start thinking about initiatives like our StudioX product and what we talk about as enabling citizen developers. It's an expression that allows customers to just do what they could have done otherwise much more quickly and efficiently. And the value on that is always high, certainly in an unknown era like this, it's even more valuable. There are specific processes we've been helping automate in healthcare, in financial services, with things like SBA loan processing, that we weren't thinking about six months ago, or they weren't thinking about six months ago. We're all thinking about how we're reinventing the way we work as individuals and corporations because of what's going on with the coronavirus crisis, and having a platform like this that gives you agility in mapping the real work to what your computer estate and applications all know how to do is even more valuable in a climate like that. >> What attracted us originally to UiPath, we knew Bobby Patrick, CMO, he said "Dave, go download a copy, go build some automations and go try it with some other companies". So that really struck us as wow, this is actually quite simple. Yet at the same time, and so you've of course been automating all these simple tasks, but now you've got real aspirations, you're glomming on to this term of Hyperautomation, you've made some acquisitions, you've got a vision, that really has taken you beyond 'paving the cow path' I sometimes say, of all these existing processes. It's really trying to discover new processes and opportunities for automation, which you would think after 50 or whatever years we've been in this industry, we'd have attacked a lot of it, but wow, seems like we have a long way to go. Again, especially what we're learning through this pandemic. Your thoughts on that? >> Yeah, I'd say Hyperautomation, it's actually a Gartner term, it's not our term. But there is a bigger idea here, built around the core automation platform. So let's talk for a second about what's beyond the core platform and what Hyperautomation really means around that. And I think of that as the bookends of how do I discover and plan, how do I improve my ability to do more automations, and find the real opportunities that I have. And how do I measure and optimize? And that's a lot of what we delivered in 20.4 as a new capability. So let's talk about discover and plan. One aspect of that is the wisdom of the crowd. We have a product we call Automation Hub that is all about that. Enabling people who have ideas, they're the ones doing the work, they have the observation into what the efficiencies can be. Enabling them to either capture that with our Task Capture utility and document it, or just directly document it. And then people across the company can collaborate, eventually moving on to building the best ideas out of that. So there's capturing the crowd, and then there's a more scientific way of capturing what the opportunities actually are. So we've got two products we introduced.
One is process mining, and process mining is about going outside in from the, let's call it the larger processes, the more end-to-end processes in the enterprise. Things like order-to-cash and procure-to-pay, helping you understand, by watching the events and doing the analytics around that, where your bottlenecks are, where your opportunities are. And then task mining says "let's watch an individual, or group of individuals, what their tasks are, let's watch the log of events there, let's apply some machine learning processing to that, and say here's the repetitive things we've found." And really helping you then scientifically discover what your opportunities are. And these ideas have been around for a long time, process mining is not new. But the connection to an automation platform, we think, is a new and powerful idea, and something we plan to invest a lot in going forward. So that's the first bookend. And then the second bookend is really about attaching rich analytics, so how do I measure it, so there's operationally how are my robots doing? And then there's everything down to return on investment. How do I understand how they are performing, versus what I would have spent if I was continuing to do them the old way?
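As a loose illustration of the task mining idea Ted describes, watching a log of user events and surfacing repetitive sequences as automation candidates, here is a toy sketch. UiPath's actual task mining uses a much richer, multi-layer ML system; this is only an n-gram count over an invented event log.

```python
# Count short, frequently repeated action sequences in a desktop event log and
# surface them as candidate automations. The event log below is invented.
from collections import Counter

events = [
    "open_crm", "copy_invoice_no", "open_erp", "paste_invoice_no", "export_pdf",
    "read_email",
    "open_crm", "copy_invoice_no", "open_erp", "paste_invoice_no", "export_pdf",
    "open_crm", "copy_invoice_no", "open_erp", "paste_invoice_no", "export_pdf",
]

def repeated_sequences(log, length=4, min_count=2):
    grams = Counter(tuple(log[i:i + length]) for i in range(len(log) - length + 1))
    return [(seq, n) for seq, n in grams.most_common() if n >= min_count]

for seq, count in repeated_sequences(events):
    print(f"{count}x  {' -> '.join(seq)}")
```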
>> Yeah, it's clearly in terms of ambition, and I've been there for 10 weeks, but in terms of ambition you don't have to have been there when they started the release after Forward III in October to know that this is the most ambitious thing that this company has ever done from a release perspective. Just in terms of the surface area we're delivering across now as an organization, is substantive. We talk about 1,000 feature improvements, 100's of discreet features, new products, as well as now our automation cloud has become generally available as well. So we've had muscle building over this past time to become world class at offering SAS, in addition to on-premises. And then we've got this big surface area, and architecture is a key component of how you can do this. How do you deliver efficiently the same software on-premises and in the cloud? Well you do that by having the right architecture and making the right bets. And certainly you look forward, how are companies doing this today? It's really all about Cloud-Native Platform. But it's about an architecture such that we can do that efficiently. So there is a lot about just your technical strategy. And then it's just about a ton of discipline and customer focus. It keeps you focused on the right things. StudioX was a great example of we were led by customers through a lot of what we actually delivered, a couple of the major features in it, certainly the out of box templates, the studio governance features, came out of customer suggestions. I think we had about 100 that we have sitting in the backlog, a lot of which we've already done, and really being disciplined and really focused on what customers are telling. So make sure you have the right technical strategy and architecture, really follow your customers, and really stay disciplined and focused on what matters most as you execute on the release. >> What can we learn from previous examples, I think about for instance SQL Server, you obviously have some knowledge in it, it started out pretty simple workloads and then at the time we all said "wow, it's a lot more powerful to come from below that it is, if a Db2, or an Oracle sort of goes down market", Microsoft proved that, obviously built in the robustness necessary, is there a similar metaphor here with regard to things like governance and security, just in terms of where UiPath started and where you see it going? >> Well I think the similarities have more to do with we have an idea of a bigger platform that we're now delivering against. In the database market, that was, we started, SQL Server started out as more of just a transactional database product, and ultimately grew to all of the workloads in the data platform, including transaction for transactional apps, data warehousing and as well as business intelligence. I see the same analogy here of thinking more broadly of the needs, and what the ability of an integrated platform, what it can do to enable great things for customers, I think that's a very consistent thing. And I think another consistent thing is know who you are. SQL Server knew exactly who it had to be when it entered the database market. That it was going to set a new benchmark on simplicity, TCO, and that was going to be the way it differentiated. In this case, we're out ahead of the market, we have a vision that's broader than a lot of the market is today. I think we see a lot of people coming in to this space, but we see them building to where we were, and we're out ahead. 
So we are operating from a leadership position, and I'm not going to tell you one's easier than the other, and both you have to execute with great urgency. But we're really executing out ahead, so we've got to keep thinking about it, and there are no one else's tail lights to follow, we have to be the ones really blazing the trail on what all of this means. >> I want to ask you about this incorporation of existing systems. Some markets, they take off, it's kind of a one shot deal, and the market just embeds. I think you guys have bigger aspirations than that, I look at it like a ServiceNow, misunderstood early on, built the platform and now really is a fundamental part of a lot of enterprises. I also look at things like EDW, which again, you have some experience in. In my view it failed to live up to a lot of its promises even though it delivered a lot of value. You look at some of the big data initiatives, you know EDW still plugs in, it's the system of record, okay that's fine. How do you see RPA evolving? Are we going to incorporate, do we have to embrace existing business process systems? Or is this largely a do-over, in your opinion? >> Well I think it's certainly about a new way of building automation, and it's starting to incorporate and include the other ways, for instance in the current release we added support for long running workflow, which is about human workflow based scenarios, now the human is collaborating with the robot, and we built those capabilities. So I do see us combining some of the old and new ways. I think one of the most significant things here is also the impact that AI and ML based technologies and skills can have on the power of the automations that we deliver. I think about our AI and ML strategy in two parts: we are building first class, first party skills that we're including in the platform, and then we're building a platform for third parties and customers to bring what their data science teams have delivered, so those can also be a part of our ecosystem, and part of automations. And so things like document understanding, how do I easily extract data from more structured, semi-structured and completely unstructured documents, accurately? And include those in my automations. Computer vision, which gives us an ability to automate at a UI level across other types of systems than, say, a Windows and a browser based application. And task mining is built on a very robust, multi-layer ML system, and the innovation opportunities there, I think, just continue. If you think at a macro level, if there are aspects of machine learning that are about captured human knowledge, well, what exactly is an automation? It's something where you're capturing a lot of human knowledge. The impact of ML and AI is going to be significant going out into the future. >> Yeah, I want to ask you about that, and I think a lot of people are just afraid of AI, as a separate thing, and they have to figure out how to operationalize it. And I think companies like UiPath are really in a position to embed AI into applications everywhere, so that maybe those folks that haven't climbed on the digital bandwagon, who are now with this pandemic realizing "wow, we better accelerate this," can actually tap machine intelligence through your products and others as well. Your thoughts on that sort of narrative? >> Yeah, I agree with that point of view. AI and ML are still maturing disciplines across the industry.
And you have to build new muscle, and you build new muscle in data science, and it forces you to think about data and how you manage your data in a different way. And that's a journey we've been on as a company, to not only build our first party skills, but also to build the platform. It's what's given us the knowledge to help us figure out, well, what do we need to include here so our customers can bring their skills to our platform. And I do think this is a place where we're going to see the real impact of AI and ML in a broader way, based on the kind of apps it is and the kind of skills we can bring to bear. >> Okay last question, you're ten weeks in, when you're 50, 100, 200 weeks in, what should we be watching, what do you want to have accomplished? >> Well we're listening, we're obviously listening closely to our customers, right now we're still having a great week, 'cause there's nothing like shipping new software. So right now we're actually thinking deeply about where we're headed next. We see there's lots of opportunity in robot for every person, and that initiative, and so we've launched a bunch of important new capabilities there, and we're going to keep working with the market to understand how we can add additional capability there. We've just got the GA of our automation cloud, and I think you should expect more and more services in our automation cloud going forward. In this area we talked about, in terms of AI and ML and those technologies, I think you should expect more investment and innovation there from us and the community, helping our customers. And I think you will also see us then, as we talked about this convergence of the ways we bring together systems through integration and build business process, I think we'll see a convergence into the platform of more of those methods. I look ahead to the next releases, and want to see us making some very significant releases that are advancing all of those things, and continuing our leadership in what we talk about now as the Hyperautomation platform. >> Well Ted, lots of innovation opportunities, and of course everybody's hopping on the automation bandwagon. Everybody's going to want a piece of your RPA hide, and you're in the lead, we're really excited for you, we're excited to have you on theCUBE, so thanks very much for all your time and your insight. Really appreciate it. >> Yeah, thanks Dave, great to spend this time with you. >> All right thank you for watching everybody, this is Dave Vellante for theCUBE, and our RPA Drill Down Series, keep it right there, we'll be right back right after this short break. (calming instrumental music)
IBM Think 2020 Keynote Analysis | IBM THINK 2020
from the cube studios in Palo Alto in Boston connecting with thought leaders all around the world this is a cube conversation hello everybody welcome to the cubes exclusive coverage of IBM thanks 2020 digital event experience the cube covering wall-to-wall we've got a number of interviews planned for you going deep my name is Dave Volante I'm here with stoom in ament's - how you doing doing great Dave so we're socially distant as you can see in the studio and mohab row everybody's you know six feet apart got our masks on took them off for this for this segment so Stu let's get into it so a very interesting time obviously for IBM Arvind Krishna doing the big keynote Jim Whitehurst new president so you got a new leadership a lot of talk about resilience agility and flexibility you know which is kind of interesting obviously a lot of their clients are thinking about kovat 19 in that context iBM is trying to provide solutions and capabilities we're going to get into it but really the linchpin of all this is open shift and RedHat and we're gonna talk about what that means for the vision that Arvind Krishna laid out and let's get into it your your thoughts on think 2020 yeah so Dave of course you know last week we had Red Hat summit so Red Hat is still Red Hat you and I had a nice discussion going into Red Hat summit yes thirty four billion dollar acquisition there now under IBM Jim white her slides over in that new role as president but you know one of the questions we've had fundamentally Dave is does an acquisition like this will it change IBM will it change the cloud landscape openshift and Red Hat are doing quite well we definitely have seen some some of the financials and every audience that hasn't seen your analysis segment of IBM should really go in and see that because the Red Hat of course is one of the bright spots in the financials they're you know good growth rate on the number of customers and what they're doing in cloud and underneath a lot of those announcements you dig down and oh yeah there's openshift and there's Red Hat Enterprise Linux rel so you know I long partner for decades between IBM and Red Hat but is you know how will the IBM scale really help the Red Hat pieces there's a number of announcements underneath you know not just you know how does the entire world work on you know Z and power and all of the IBM platforms but you know I believe it's arvind says one of the enduring platforms needs to be the hybrid cloud and you heard a Red Hat summit the entire week it was the open hybrid cloud was the discussion well yes so that actually is interesting you brought up Arvin's sort of pillars there were three enduring platforms that he cited then the fourth of course is I guess open hybrid cloud but the first was mainframes the second was and I'm not sure this is the right order the second was services and then the third was middleware so basically saying excuse me we have to win the day for the architecture of hybrid cloud what's that mean to you then I'd like to chime in yeah so so Dave first of all you know when when we did our analysis when IBM bought red Hatton says you know my TL DR was does this change the cloud landscape my answer is no if I'm a Amazon I'm not sitting there saying oh geez you know the combination of IBM and Red Hat well they're partners and they're they're gonna be involved in it does IBM have huge opportunities in hybrid cloud multi cloud and edge computing absolutely one of the questions is you know how will I be M services really be transformed you 
know Dave we've watched over the last decade some of the big service organizations have really shrunk down cloud changed the marginal economics you've done so much discussion of this over the last handful of years that you need to measure yourself against the hyper scalars you need to you know see where you can add value and the question is Dave you know when and where do we think of IBM in the new era well so coming back to sort of your point about RedHat and services is it about cloud is a developer's near-term I've said it's it's more about services than it is about cloud longer-term I think it is about cloud but but IBM's definition of cloud is maybe a little different than 10 hours but when Jeannie when on the roadshow - after the redhead acquisitions you said this is gonna be a creative - free cash flow within one year and the reason why I always believe that is because they were gonna plug Red Hat and we've talked about this an open shift right into their services business and start modernizing applications right away they've actually achieved that so I think they had pretty good visibility and that was kind of a mandate so IBM's huge services organization is in a good position to do that they've got deep industry expertise we heard Arvind Krishna on his keynote talking about that Jim Whitehurst talking more about services you really didn't hear Jim you know previously in his previous roles talk a lot about services other than as part of the ecosystem so it's an interesting balancing act that that iBM has to do the real thing I want to dig into Stu is winning the day with the with with the architecture of hybrid cloud so let's start with with cloud talk about how IBM defines cloud IBM on its earning earnings call we talked about this on our Red Hat Summit analysis the cloud was you know 23 billion you know growing it whatever 20 20 plus percent when my eyes have been bleeding reading IBM financial statements in ten case for the last couple of weeks but when you go in there and you look at what's in that cloud and I shared this on my braking analysis this week a very small portion of that cloud revenue that what last year 21 billion very small portion is actually what they call cloud cloud and cognitive software it's only about 20 percent of the pie it's really services it's about 2/3 services so that is a bit of a concern but at the same time it's their greatest opportunity because they have such depth and services if IBM can increase the percentage of its business that's coming from higher margin software a business which was really the strategy go back 20 years ago it's just as services became this so big it's so pervasive that that software percentage you know maybe it grew maybe it didn't but but that's IBM's opportunities to really drive that that that software based revenues so let's talk about what that looks like how does OpenShift play in that IBM definition of cloud which includes on Prem the IBM public law everybody else's public cloud multi-cloud and the edge yeah well first of all Dave right the question is where does IBM technologies where do they live so you know look even before the Red Hat piece if we looked at IBM systems there's a number of times that you're seeing IBM software living on various public clouds and that it's goodness you know one of the things we've talked about for a number of years is you know how can you become more of a software company how can you move to more of the you know cloud consumption models you go in more op X and capex so IBM had 
done some of that and Red Hat should be able to help supercharge that when we look at some of the announcements the one that of course caught my other most Dave is the you know IBM cloud satellite would would say the shorthand of it it's IBM's version of outposts and underneath that what is it oh it's open shift underneath there and you know how can I take those pieces and we know open ship can live across you know almost any of the clouds and you know cannot live on the IB cloud IBM cloud absolutely can it be open ship be in the data center and on virtualization whether it be open source or VMware absolutely so satellite being a fundamental component underneath of open ship makes a lot of sense and of course Linux yeah Linux underneath if you look at the the one that we've heard IBM talking about for a while now is cloud packs is really how are they helping customers simplify and build that cloud native stack you start with Red Hat Enterprise Linux you put openshift on top of that and then cloud packs are that simple toolset for whether you're doing data or AI or integration that middleware that you talked about in the past iBM has way the ways that they've done middleware for decades and now they have the wonderful open source to help enable that yeah I mean WebSphere bluemix IBM cloud now but but OpenShift is really that pass layer that that IBM had coveted right and I was talking to some of IBM's partners getting ready for this event and they say if you dig through the 10k cloud packs is one of those that you know there are thousands of customers that are using this so it's good traction not just hey we have this cloud stuff and it's wonderful and we took all of these acquisitions everything from SoftLayer to software pieces but you know cloud packs is you know a nice starter for companies to help really move forward on some of their cloud native application journey yes so what whatever we talked about this past week in the braking analysis and certainly David floor has been on this as well as this notion of being able to run a Red Hat based let's call it a stack everywhere and Jim White has talked about that essentially really whether it's on Prem at the edge in the clouds but the key there stew is being able to do so natively so every layer of you know it began call it the stack IT services the data plane the control plane the management plane all the planes being able to the networking the transport etc being natively able to run wherever it is so that you can take fine-grain advantage and leverage the primitives on respective clouds the advantage that IBM has in my view would love your thoughts on this is that Red Hat based platforms it's open source and so I mean it's somebody gonna trust Amazon to be the the cloud native anybody's cloud yeah you know solution well if you're part of the Amazon stack I mean I Amazon frankly an Oracle have similar kind of mindset you know redstack Amazon stack make it all homogeneous and it'll run just fine IBM's coming at it from an open source perspective so they they in some ways will have more credibility but it's gonna take a lot of investment to really Shepherd those standards they're gonna have to put a lot of commitments in committers and they're gonna have to incent people to actually adhere to those standards yeah I mean David's the idea of pass the platform as a service that we've been chasing as an industry for more than a decade what's interesting if you listen to IBM what's underneath this well it's you know taking advantage of the 
container-based architecture with Kubernetes underneath. So can I run Kubernetes anywhere? Yeah, pretty much every cloud has their own service; OpenShift can live everywhere. The question is — what David Floyer is rightly putting out — okay, if I bake to a single type of solution, can I really take advantage of the native offerings? So the discussion we've always had for a long time is, do I virtualize something, in which case I really abstract it away and I can't take advantage of all the various pieces? Do I do multi-cloud, in which case I have some least-common-denominator way of looking at cloud? Because what I want to be able to do is get the value and differentiation out of each cloud I use, but not be stuck on any cloud. And yes, Dave, Red Hat with OpenShift, based on Kubernetes and the open source community, is definitely a leading way to do that. What you worry about is saying, okay, how much is this stuck on containerization — will it be able to take advantage of things like serverless? You talk to IBM and say, okay, underneath it's going to have all these wonderful components. Dave, when I talked to Andy Jassy, he says if I was rebuilding AWS today, it would all be serverless underneath. So what is that underlying construct? You know, is it flexible, and can it be updated? Red Hat and IBM are going to bridge between the container world and the serverless world with things like Knative, but absolutely we are not yet at the nirvana where developers can just build their apps and know that they can run anywhere and take advantage of anything. So, you know, some things we know we need to keep working on.
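To make the portability trade-off above concrete, here is a minimal sketch using the official Kubernetes Python client: the same Deployment spec can be applied to whichever conformant cluster the active kubeconfig points at — OpenShift on-prem, on IBM Cloud, or on another public cloud — whereas wiring the app to a provider-native serverless runtime would require provider-specific code. The image name and namespace are illustrative assumptions, not details from the conversation.

```python
# Minimal sketch (assumed names): apply the same Deployment to any conformant
# Kubernetes/OpenShift cluster. The target is whatever cluster the active
# kubeconfig context points at, which is what makes the workload portable.
from kubernetes import client, config

def deploy_portable_app(namespace: str = "demo",
                        image: str = "registry.example.com/myapp:1.0"):
    config.load_kube_config()  # same call for on-prem, IBM Cloud, or other clouds

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="myapp"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="myapp", image=image)]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace, deployment)

if __name__ == "__main__":
    deploy_portable_app()
```

Knative, mentioned above, is the attempt to keep the serverless layer equally portable on top of Kubernetes rather than tied to any one cloud's native runtime.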
>> So, a couple of other things there. Jim Whitehurst has talked about ingesting innovation — that the nature of innovation is such that it comes from a lot of different places, and open source obviously is a, you know, fundamental component of that. He talked about the telco edge, he gave an example of Vodafone. Arvind Krishna talked about Anthem kind of redefining healthcare post-COVID. So you're seeing some examples — of course that's good, that IBM puts forth some really, you know, proof points; it's not just, you know, slideware, which is good. I think the interesting thing is, you know, you can't just put, you know, containers out there and expect the innovation to find its way into those containers. It's gonna take a lot of work to make sure that those different layers of the stack that we were talking about before are actually going to come to fruition. So there's some other announcements in this regard to the edge — the Edge Application Manager, let's say the telco edge, a lot of automation focus, you mentioned IBM Cloud Satellite, there's the financial services cloud. So we're seeing IBM actually, you know, sprinkle around some investments there. As I said in my Breaking Analysis, I'd like to see them dial up those investments a little bit more, and maybe dial down the return of cash to shareholders, at least for the next several years. >> Yeah, I mean, Dave, the concern — you talk to most customers and you say, well, if you try to even optimize your own data center and turn it into a cloud, how can you take advantage of the innovation that the Amazons, Microsofts, Googles, and IBMs are putting out there in the world? You want to be able to plug into that, you want to be able to leverage those new services, so that is where it's definitely a shift. Dave, you think about IBM — over a hundred years, usually they're talking about their patent portfolio. I think they've actually opened up a lot of their patent portfolio to help attack, you know, COVID-19. So it is definitely a very different message and tenor that I hear under Arvind Krishna, you know, in very early days, than what I was used to for the last decade or two from IBM. >> Yeah, well, at the risk of being a little bit repetitive, one of the things that I talked about in my Breaking Analysis — I highlighted that Arvind said he wants to lead with a technical story, which I really like. Arvind's a technical visionary. His three predecessors were not considered technical visionaries, and so I think that's one of the things that's been lacking inside of IBM; I think it's one of the reasons why services has been such a dominant component. So look, Lou Gerstner — tough to argue with the performance of the company — but when he and IBM made the decision to go all-in on services, something had to give, and what gave — and I've said this many, many times on theCUBE — was product leadership. So I'd like to see IBM get back to that product leadership, and I think Red Hat gives them an opportunity to do that. Obviously Red Hat Linux, you know, open source, is a leader — the leader — and as we've talked about many times, in this multi-cloud, hybrid cloud, edge — you know, throwing out all the buzzwords — there's some interesting horses on the track. You've got VMware, we throw in AWS just because they're there — you can't talk about cloud without talking about AWS — certainly Microsoft has designs there, Cisco, Google; everybody wants a piece of that pie. And I would say that, you know, Red Hat with OpenShift is in a good position, if in fact they can make the investments necessary to build out those stacks. >> Yeah, it's funny, Dave, because IBM, for the history and the size that they are, often can get overlooked. You talk about, you know — we've probably spent more airtime talking about the VMware-Amazon relationship than almost any in the last few years. Well, we forget, we were sitting at VMworld, and two months before VMware announced the Amazon partnership, who was it that was up on the main stage with Pat Gelsinger? It was IBM, because IBM was the first partner. I believe I saw numbers that IBM was saying they have more hosted VMware environments than anyone out there — I'd love to see the data on it to understand, because, you know, IBM plays in so many different places, they just often are not, you know, aggregated and counted together, you know, when you get outside of some of the, you know, middleware, mainframe, some of the pieces that you talked about earlier, Dave. So IBM does have a strong position; they just haven't been the front-and-center leader too often. But they have a broad portfolio, and very much services-led, so they kind of get forgotten, you know, off on the sides. >> So IBM's stated strategy is to bring those mission-critical workloads into the cloud. They've said that 80% of the workloads remain on-prem, only 20% have been moved. You know, when you peel the onion on that, there's just so much growth in cloud-native workloads, so, you know, there is somewhat of a "so what" in that. But I will say this: where are the mission-critical workloads, where do they live today? They live on-prem. But whose stacks are running those? It's IBM and it's Oracle. And David Floyer has done some research that suggests that if you're gonna put stuff into the cloud that's mission-critical, you're probably better off staying with those stacks that are going to allow you to do a lower-risk move, not have to necessarily rip and
replace. And so, you know, migrating a mission-critical Oracle database into AWS, or Db2, you know, infrastructure into AWS, is gonna be much more challenging than going same-same into the IBM cloud or the respective Oracle cloud. So I guess my question to you, Stu, is why do people want to move those mission-critical workloads into the cloud? Do they? >> Well, first of all, it's unlocking the innovation that you talked about, Dave. So, you know, we've looked at it from a VMware standpoint versus a Red Hat standpoint: if you talk about building new apps, doing containerization, having that cloud-native mindset — do I have a bimodal configuration? Not a word that we talk about as much anymore, because I want to be able to modernize it. Modernizing those applications, doing any of those migrations, we know are super challenging — you know, heck, David Floyer has talked about it for a long, long time. So you bring up some great points here. You know, Microsoft might be the best at meeting customers where they are and giving people a lot of options; IBM lines up in many ways in a similar way. My biggest critique about VMware is they don't have tight ties to the application — it's mostly, you know, virtualize it, or now we have some cloud-native pieces — but other than the Pivotal group, they didn't do a lot with modernization of applications. IBM with their middleware history, and Red Hat with everything that they do with the developer communities, are well positioned to help customers along those digital journeys and going through those transformations. So, you know, applications need to be updated. You know, anybody that's used applications that are long in the tooth knows that they don't have the features that I want, they don't react the way they want. Heck, today, Dave, everybody needs to be able to access things where they are, on the go. You know, it's not a discussion anymore about, you know, virtual desktop; it's about, you know, work anywhere, have access to the data where I need it, and be much more flexible and agile. And those are some of the configurations where, you know, IBM has history, and their services arm can help customers move along those journeys. >> Yeah, so, you know, I think one of the big challenges IBM has is it's got its fingers in a lot of pies — AI, you know, they talk a lot about blockchain, there's quantum. Quantum is not gonna be here for a while; it's very cool — we have an interview coming up with Jamie Thomas, and, you know, she's all over the quantum, we've talked to her in the past about it. But I think, you know, if you think about IBM's business in terms of services and product, you know, it's, whatever it is, a 75, you know, billion-dollar organization; two-thirds, or maybe not quite two-thirds, maybe 60-plus percent, is services. Services are not an R&D-intensive business. You look at a company like Accenture, Stu — I think Accenture spent last year 800 million on R&D, and they're a forty-five, forty-six billion dollar company. So if you really isolate the IBM, you know, company to products — whatever it is, call it 25, 30 billion — they spend a large portion of that revenue on R&D to get to the six billion. But my argument is it's not enough to really drive the type of innovation that they need. Just another, again, Accenture data point, because they're kind of a gold standard, along with IBM, EY, and a couple of others in services: they return seventy-six percent of their cash to shareholders; IBM has returned consistently 50 to 60 percent to its shareholders.
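A quick back-of-the-envelope on the figures quoted above — all approximations from the conversation, not reported financials — shows the R&D-intensity gap being described:

```python
# Back-of-the-envelope R&D intensity using the rough figures quoted above.
# All inputs are approximations from the conversation, not reported financials.
accenture_revenue_b = 45.5    # ~$45-46B in revenue
accenture_rd_b = 0.8          # ~$800M in R&D

ibm_product_revenue_b = 27.5  # ~$25-30B non-services business
ibm_rd_b = 6.0                # ~$6B in R&D

print(f"Accenture R&D intensity:        {accenture_rd_b / accenture_revenue_b:.1%}")
print(f"IBM product-side R&D intensity: {ibm_rd_b / ibm_product_revenue_b:.1%}")
# Roughly 2% versus roughly 22% -- services don't need the R&D intensity that
# a product business does, which is the argument being made here.
```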
So Arvind has stated he wants to return IBM to growth — you know, every IBM CEO says that. Ginni, I used to talk about, had to shrink to grow; as I said, unfortunately she sort of ran out of time, and now it's up to Arvind to show that. But to me, growth has got to come from fueling R&D, whether it's organic or inorganic — I'd like to see, you know, organic as the real driver, for obvious reasons. And I don't think just open source in and of itself, obviously, is going to attract that. It'll attract innovation, but whether or not IBM will be able to harness it to its advantage is the real challenge, unless they're making huge, huge commitments to that open source. And in a microcosm, you know, it's kind of a proxy — we saw what happened to Hortonworks and Cloudera, because they had to fund that open source commitment. You know, with IBM, we're talking about — with hybrid, multi-cloud, edge — a much, much bigger opportunity, but also requirement. And we haven't even talked about AI. You know, I think you have a quote on, you know, data is the fuel — what was that quote? >> Yes, it was Jim Whitehurst. He said data is the fuel, cloud is the platform, AI is the accelerant, and then security — my paraphrase — is the mission control there. So it sounds a lot like your innovation cocktail that you've been talking about for the last year or so, Dave. >> I like it. But okay, so AI is the accelerant, and I agree, by the way — applying AI to all this data that we have, you know, over the years, automating it and scaling it in the cloud, it's critical. And if IBM wants to define cloud as, you know, the cloud experience anywhere, I'm fine with that. I'm not a fan of the way they break down their cloud business — I think it's bogus and I've called them on that — but okay, fine, maybe we'll get by that, I'll get over it. But really, that is the opportunity; it's just got to be funded. >> Yeah, no, Dave, absolutely. IBM has a lot of really good assets there. They've got strong leadership, as you said — can Arvind do another Satya Nadella transformation? There's the culture, there's the people, and there's the product. So, you know, IBM, you know, absolutely has a lot of great resources and, you know, smart people and some really good products out there, as well as really good ecosystem partnerships. You know, Amazon is not the enemy to IBM, Microsoft is a partner for what they're doing, and even Google is somebody that they can work with. So, you know, I always say, back in the ten years I've been working for you, Dave — I think the first time I heard the word "coopetition," I thought it was like an IBM trademark name, because they were the ones that really, you know, led the way in having a broad portfolio and working with everybody in the ecosystem, even though you don't necessarily agree or partner on every piece of what you're doing. So in a multi-cloud, AI, you know, open ecosystem, IBM's got a real shot. >> Yeah, I mean, a Satya Nadella-like move would be awesome. Of course, Satya had a much, much larger, you know, cash hoard to play with. But I guess the similarity, Stu — and notwithstanding that now we have three prominent companies run by Indian-born leaders, which is pretty astounding when you think about it — notwithstanding that, there are some similarities just in terms of culture and emphasis, and getting back to sort of the technical roots, the technical visionaries. So I'm encouraged, but I'm watching very closely, Stu, as I'm sure you are, kind of where those investments go, how it plays in the marketplace. But I think you're right, I think people underestimate IBM, and the combination of IBM and Red Hat
could be very dangerous. >> Yeah, Dave, how many times do we write the article, you know, "has the sleeping giant of IBM been awoken?" So I think it's a different era now, and absolutely, IBM has the right cards to be able to play at some of these new tables, and it's a different IBM for a different era. >> Somebody said to me the other day — and you've probably heard this too, but it was the first I'd heard of it — that within five years, IBM had better be a division of Red Hat, versus the other way around. So, all right, Stu, thanks for helping to set up the IBM Think 2020 Digital Event Experience. We're coming at you with wall-to-wall coverage; I think we've got over 40 interviews lined up. Stu, you have been doing a great job, both last week with the Red Hat Summit and helping out with IBM Think, so thanks for that. >> Dave, no rainy week at the new Moscone like we had last year — really good content from the comfort of our remote settings. >> Yeah, so keep it right there, buddy. This is Dave Vellante for Stu Miniman. Go to siliconangle.com, you'll check out all the news; thecube.net will have all of our videos, we'll be running wall-to-wall; wikibon.com has some of the research action. This is Dave Vellante for Stu Miniman. We'll be right back right after this short break. [Music]
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jim Whitehurst | PERSON | 0.99+ |
Jim Weider | PERSON | 0.99+ |
Jim White | PERSON | 0.99+ |
Andy Jesse | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Jim Whitehurst | PERSON | 0.99+ |
Jamie Thomas | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Pat Gelson | PERSON | 0.99+ |
David Flair | PERSON | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
Lou Gerstner | PERSON | 0.99+ |
50 | QUANTITY | 0.99+ |
David | PERSON | 0.99+ |
Jim | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
arvind | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Arvind | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
23 billion | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
six billion | QUANTITY | 0.99+ |
Arvin | PERSON | 0.99+ |
seventy six percent | QUANTITY | 0.99+ |
Joe Gonzalez, MassMutual | Virtual Vertica BDC 2020
(bright music) >> Announcer: It's theCUBE. Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. Hello everybody, welcome back to theCUBE's coverage of the Vertica Big Data Conference, the Virtual BDC. My name is Dave Volante, and you're watching theCUBE. And we're here with Joe Gonzalez, who is a Vertica DBA, at MassMutual Financial. Joe, thanks so much for coming on theCUBE I'm sorry that we can't be face to face in Boston, but at least we're being responsible. So thank you for coming on. >> (laughs) Thank you for having me. It's nice to be here. >> Yeah, so let's set it up. We'll talk about, you know, a little bit about MassMutual. Everybody knows it's a big financial firm, but what's your role there and kind of your mission? >> So my role is Vertica DBA. I was hired January of last year to come on and manage their Vertica cluster. They've been on Vertica for probably about a year and a half before that started out on on-prem cluster and then move to AWS Enterprise in the cloud, and brought me on just as they were considering transitioning over to Vertica's EON mode. And they didn't really have anybody dedicated to Vertica, nobody who really knew and understood the product. And I've been working with Vertica for about probably six, seven years, at that point. I was looking for something new and landed a really good opportunity here with a great company. >> Yeah, you have a lot of experience in Vertica. You had a role as a market research, so you're a data guy, right? I mean that's really what you've been doing your entire career. >> I am, I've worked with Pitney Bowes, in the postage industry, I worked with healthcare auditing, after seven years in market research. And then I've been with MassMutual for a little over a year now, yeah, quite a lot. >> So tell us a little bit about kind of what your objectives are at MassMutual, what you're kind of doing with the platform, what application just supporting, paint a picture for us if you would. >> Certainly, so my role is, MassMutual just decided to make Vertica its enterprise data warehouse. So they've really bought into Vertica. And we're moving all of our data there probably about to good 80, 90% of MassMutual's data is going to be on the Vertica platform, in EON mode. So, and we have a wide usage of that data across corporation. Right now we're about 50 terabytes and growing quickly. And a wide variety of users. So there's a lot of ETLs coming in overnight, loading a lot of data, transforming a lot of data. And a lot of reporting tools are using it. So currently, Tableau MicroStrategy. We have Alteryx using it, and we also have API's running against it throughout the day, 24/7 with people coming in, especially now these days with the, you know, some financial uncertainty going on. A lot of people coming and checking their 401k's, checking their insurance and status and what not. So we have to handle a lot of concurrent traffic on top of the normal big query. So it's a quite diverse cluster. And I'm glad they're really investing in using Vertica as their overall solution for this. >> Yeah, I mean, these days your 401k like this, right? (laughing) Afraid to look. So I wonder, Joe if you could share with our audience. 
I mean, for those who might not be as familiar with the history of just Vertica, and specifically, about MPP, you've had historically you have, you know, traditional RDBMS, whether it's Db2 or Oracle, and then you had a spate of companies that came out with this notion of MPP Vertica is the one that, I think it's probably one of the few if only brands that they've survived, but what did that bring to the industry and why is that important for people to understand, just in terms of whatever it is, scale, performance, cost. Can you explain that? >> To me, it actually brought scale at good cost. And that's why I've been a big proponent of Vertica ever since I started using it. There's a number, like you said of different platforms where you can load big data and store and house big data. But the purpose of having that big data is not just for it to sit there, but to be used, and used in a variety of ways. And that's from, you know, something small, like the first installation I was on was about 10 terabytes. And, you know, I work with the data warehouses up to 100 terabytes, and, you know, there's Vertica installations with, you know, hundreds of petabytes on them. You want to be able to use that data, so you need a platform that's going to be able to access that data and get it to the clients, get it to the customers as quickly as possible, and not paying an arm and a leg for the privilege to do so. And Vertica allows companies to do that, not only get their data to clients and you know, in company users quickly, but save money while doing so. >> So, but so, why couldn't I just use a traditional RDBMS? Why not just throw it all into Oracle? >> One, cost, Oracle is very expensive while Vertica's a lot more affordable than that. But the column-score structure of Vertica allows for a lot more optimized queries. Some of the queries that you can run in Vertica in 2, 3, 4 seconds, will take minutes and sometimes hours in an RDBMS, like Oracle, like SQL Server. They have the capability to store that amount of data, no question, but the usability really lacks when you start querying tables that are 180 billion column, 180 billion rows rather of tables in Vertica that are over 1000 columns. Those will take hours to run on a traditional RDBMS and then running them in Vertica, I get my queries back in a sec. >> You know what's interesting to me, Joe and I wonder if you could comment, it seems that Vertica has done a good job of embracing, you know, riding the waves, whether it was HDFS and the big data in our early part of the big data era, the machine learning, machine intelligence. Whether it's, you know, TensorFlow and other data science tools, it seems like Vertica somehow in the cloud is the other one, right? A lot of times cloud is super disruptive, particularly to companies that started on-prem, it seems like Vertica somehow has been able to adopt and embrace some of these trends. Why, from your standpoint, first of all, from your standpoint, as a customer, is that true? And why do you think that is? Is it architectural? Is it true mindset engineering? I wonder if you could comment on that. >> It's absolutely true, I've started out again, on an on-prem Vertica data warehouse, and we kind of, you know, rolled kind of along with them, you know, more and more people have been using data, they want to make it accessible to people on the web now. 
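As an aside, a rough illustration of why the wide-table numbers mentioned earlier favor a column store: a query that touches only a handful of columns scans a small fraction of the data a row store would read. The byte widths and column counts below are assumptions chosen only to show the shape of the argument, not measurements from any real system.

```python
# Illustrative only: why a column store reads far less data on a wide table.
# Sizes are assumptions; real savings also depend on encoding, compression,
# and (in Vertica's case) how projections are designed.
rows = 180_000_000_000      # "180 billion rows" from the conversation
total_columns = 1_000       # "over 1000 columns"
bytes_per_value = 8         # assumed average encoded width
columns_in_query = 5        # a typical analytic query touches a handful

row_store_bytes = rows * total_columns * bytes_per_value
column_store_bytes = rows * columns_in_query * bytes_per_value

print(f"Row store scans    ~{row_store_bytes / 1e12:,.1f} TB")
print(f"Column store scans ~{column_store_bytes / 1e12:,.1f} TB "
      f"({columns_in_query / total_columns:.1%} of the table)")
```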
And you know, having that, the option to provide that data from an on-prem solution, from AWS is key, and now Vertica is offering even a hybrid solution, if you want to keep some of your data behind a firewall, on-prem, and put some in the cloud as well. So data at Vertica has absolutely evolved along with the industry in ways that no other company really has that I've seen. And I think the reason for it and the reason I've stayed with Vertica, and specifically have remained at Vertica DBA for the last seven years, is because of the way Vertica stays in touch with it's persons. I've been working with the same people for the seven, eight years, I've been using Vertica, they're family. I'm part of their family, and you know, I'm good friends with some of these people. And they really are in tune not only with the customer but what they're doing. They really sit down with you and have those conversations about, you know, what are your needs? How can we make Vertica better? And they listen to their clients. You know, just having access to the data engineers who develop Vertica to be arranged on a phone call or whatnot, I've never had that with any other company. Vertica makes that available to their customers when they need it. So the personal touch is a huge for them. >> That's good, it's always good to get the confirmation from the practitioners, just not hear from the vendor. I want to ask you about the EON transition. You mentioned that MassMutual brought you in to help with that. What were some of the challenges that you faced? And how did you get over them? And what did, what is, why EON? You know, what was the goal, the outcome and some of the challenges maybe that you had to overcome? >> Right. So MassMutual had an interesting setup when I first came in. They had three different Vertica clusters to accommodate three different portions of their business. The data scientists who use the data quite extensively in very large queries, very intense queries, their work with their predictive analytics and whatnot. It was a separate one for the API's, which needed, you know, sub-second query response times. And the enterprise solution, they weren't always able to get the performance they needed, because the fast queries were being overrun by the larger queries that needed more resources. And then they had a third for starting to develop this enterprise data platform and started, you know, looking into their future. The first challenge was, first of all, bringing all those three together, and back into a single cluster, and allowing our users to have both of the heavy queries and the API queries running at the same time, on the same platform without having to completely separate them out onto different clusters. EON really helps with that because it allows to store that data in the S3 communal storage, have the main cluster set up to run the heavy queries. And then you can set up sub clusters that still point to that S3 data, but separates out the compute so that the API's really have their own resources to run and not be interfered with by the other process. >> Okay, so that, I'm hearing a couple of things. One is you're sort of busting down data silos. So you're able to have a much more coherent view of your data, which I would imagine is critical, certainly. Companies like MassMutual, have been around for 100 years, and so you've got all kinds of data dispersed. 
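A sketch of what that compute separation can look like from the application side, using the vertica-python driver: heavy analytic work and low-latency API lookups connect to different endpoints that front different EON-mode subclusters, while both read the same communal storage. The hostnames, credentials, and table below are hypothetical placeholders, not MassMutual's actual configuration.

```python
# Sketch: route workloads to different EON-mode subclusters by endpoint.
# Hostnames, credentials, database, and table are hypothetical placeholders.
import vertica_python

HEAVY = {
    "host": "analytics-subcluster.example.internal",  # main cluster: ETL / BI
    "port": 5433,
    "user": "etl_user",
    "password": "********",
    "database": "edw",
}
# Separate compute for low-latency API lookups, same communal S3 data.
API = dict(HEAVY, host="api-subcluster.example.internal", user="api_user")

def run(conn_info, sql):
    with vertica_python.connect(**conn_info) as conn:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()

# Heavy aggregation goes to the main cluster...
run(HEAVY, "SELECT account_type, COUNT(*) FROM balances GROUP BY 1;")
# ...while point lookups from the APIs hit their own subcluster.
run(API, "SELECT balance FROM balances WHERE account_id = 42;")
```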
So to the extent that you can break down those silos, that's important, but also being able to I guess have granular increments of compute and storage is what I'm hearing. What does that do for you? It make that more efficient? Well, they are other business benefits? Maybe you could elucidate. >> Well, one cost is again, a huge benefit, the cost of running three different clusters in even AWS, in the enterprise solution was a little costly, you know, you had to have your dedicated servers here and there. So you're paying for like, you know, 12, 15 different servers, for example. Whereas we bring them all back into EON, I can run everything on a six-node production cluster. And you know, when things are busy, I can spin up the three-node top cluster for the API's, only paid for when I need them, and then bring them back into the main cluster when things are slowed down a bit, and they can get that performance that they need. So that saves a ton on resource costs, you know, you're not paying for the storage, you're paying for one S3 bucket, you're only paying for the nodes, these are two instances, that are up and running when you need them., and that is huge. And again, like you said, it gives us the ability to silo our data without having to completely separate our data into different storage areas. Which is a big benefit, it gives us the ability to query everything from one single cluster without having to synchronize it to, you know, three different ones. So this one going to have there's, this one going to have there's, but everyone's still looking at the same data and replicate that in QA and Devs so that people can do it outside of production and do some testing as well. >> So EON, obviously a very important innovation. And of course, Vertica touts the difference between others who separate huge storage, and you know, they're not the only one that does that, but they are really I think the only one that does it for on-prem, and virtually across clouds. So my question is, and I think you're doing a breakout session on the Virtual BDC. We're going to be in Boston, now we're doing it online. If I'm in the audience, I'm imagining I'm a junior DBA at an organization that maybe doesn't have a Joe. I haven't been an expert for seven years. How hard is it for me to get, what do I need to do to get up to speed on EON? It sounds great, I want it. I'm going to save my company money, but I'm nervous 'cause I've only been at Vertica DBA for, you know, a year, and I'm sort of, you know, not as experienced as you. What are the things that I should be thinking about? Do I need to bring in? Do I need to hire somebody? Do I need to bring in a consultant? Can I learn it myself? What would you advise? >> It's definitely easy enough that if you have at least a little bit of work experience, you can learn it yourself, okay? 'Cause the concepts are still there. There's some you know, little bits of nuances where you do need to be aware of certain changes between the Enterprise and EON edition. But I would also say consult with your Vertica Account Manager, consult with your, you know, let them bring in the right people from Vertica to help you get up to speed and if you need to, there are also resources available as far as consultants go, that will help you get up to speed very quickly. And we did work together with Vertica and with one of their partners, Clarity, in helping us to understand EON better, set it up the right way, you know, how do we take our, the number of shards for our data warehouse? 
You know, they helped us evaluate all that and pick the right number of shards, the right number of nodes to get set up and going. And, you know, helped us figure out the best ways to get our data over from the Enterprise Edition into EON very quickly and very efficient. So different with yourself. >> I wanted to ask you about organizational, you know, issues because, you know, the guys like you practitioners always tell me, "Look, the tech, technology comes and goes, that's kind of the easy part, we're good at that. It's the people it's the processes, the skill sets." What does your, you know, team regime look like? And do you have any sort of ideal team makeup or, you know, ideal advice, is it two piece of teams? Is it what kind of skills? What kind of interaction and communications to senior leadership? I wonder if you could just give us some color on that. >> One of the things that makes me extremely proud to be working for MassMutual right now, is that they do what a lot of companies have not been doing and that is investing in IT. They have put a lot of thought, a lot of money, and a lot of support into setting up their enterprise data platform and putting Vertica at the center. And not only did they put the money into getting the software that they needed, like Vertica, you know, MicroStrategy, and all the other tools that we were using to use that, they put the money in the people. Our managers are extremely supportive of us. We hired about 40 to 45 different people within a four-month time frame, data engineers, data analysts, data modelers, a nice mix of people across who can help shape your data and bring the data in and help the users use the data properly, and allow me as the database administrator to make sure that they're doing what they're doing most efficiently and focus on my job. So you have to have that diversity among the different data skills in order to make your team successful. >> That's awesome. Kind of a side question, and it's really not Vertica's wheelhouse, but I'm curious, you know, in the early days of the big data, you know, movement, a lot of the data scientists would complain, and they still do that, "80% of my time is spent wrangling data." The tools for the data engineer, the data scientists, the database, you know, experts, they're all different. And is that changing? And to what degree is that changing? Kind of what ending are we in and just in terms of a more facile environment for all those roles? >> Again, I think it depends on company to company, you know, what resources they make available to the data scientists. And the data scientists, we have a lot of them at MassMutual. And they're very much into doing a lot of machine learning, model training, predictive analytics. And they are, you know, used to doing it outside of Vertica too, you know, pulling that data out into Python and Scalars Bar, and tools like that. And they're also now just getting into using Vertica's in-database analytics and machine learning, which is a skill that, you know, definitely nobody else out there has. So being able to have one somebody who understands Vertica like myself, and being able to train other people to use Vertica the way that is most efficient for them is key. But also just having people who understand not only the tools that you're using, but how to model data, how to architect your tables, your schemas, the interaction between your tables and schemas and whatnot, you need to have that diversity in order to make this work. 
And our data scientists have benefited immensely from the struct that MassMutual put in place by our data management delivery team. >> That's great, I think I saw, somewhere in your background, that you've trained about 100 people in Vertica. Did I get that right? >> Yes, I've, since I started here, I've gone to our Boston location, our Springfield location, and our New York City location and trained, probably about this point, about 120, 140 of our Vertica users. And I'm trying to do, you know, a couple of follow-up sessions per year. >> So adoption, obviously, is a big goal of yours. Getting people to adopt the platform, but then more importantly, I guess, deliver business value and outcomes. >> Absolutely. >> Yeah, I wanted to ask you about encryption. You know, in the perfect world, everything would be encrypted, but there are trade offs. Are you using encryption? What are you doing in that regard? >> We are actually just getting into that now due to the New York and the CCPA regulations that are now in place. We do have a lot of Person Identifiable Information in our data store that does require encryption. So we are going through a month's long process that started in December, I think, it's actually a bit earlier than that, to start identifying all the columns, not only in our Vertica database, but in, you know, the other databases that we do use, you know, we have Postgres database, SQL Server, Teradata for the time being, until that moves into Vertica. And identify where that data sits, what downstream applications, pull that data from the data sources and store it locally as well, and starts encrypting that data. And because of the tight relationship between Voltage and Vertica, we settled on Voltages as the major platform to start doing that encryption. So we're going to be implementing that in Vertica probably within the next month or two, and roll it out to all the teams that have data that requires encryption. We're going to start rolling it out to the downstream application owners to make sure that they are encrypting the data as they get it pulled over. And we're also using another product for several other applications that don't mesh well as well with both. >> Voltage being micro, focuses encryption solution, correct? >> Right, yes. >> Yes, of course, like a focus for the audience's is the, it owns Vertica and if Vertica is a separate brand. So I want to ask you kind of close on what success looks like. You've been at this for a number of years, coming into MassMutual which was great to hear. I've had some past experience with MassMutual, it's an awesome company, I've been to the Springfield facility and in Boston as well, and I have great respect for them, and they've really always been a leader. So it's great to hear that they're investing in technology as a differentiator. What does success look like for you? Let's say you're at MassMutual for a few years, you're looking back, what success look like? Go. >> A good question. It's changing every day just, you know, with more and more, you know, applications coming onboard, more and more data being pulled in, more uses being found for the data that we have. I think success for me is making sure that Vertica, first of all, is always up made, is always running at its most optimal to keep our users happy. 
I think when I started, you know, we had a lot of processes that were running, you know, six, seven hours, some of them were taking, you know, almost a day long, because they were so complicated, we've got those running in under an hour now, some of them running in a matter of minutes. I want to keep that optimization going for all of our processes. Like I said, there's a lot of users using this data. And it's been hard over the first year of me being here to get to all of them. And thankfully, you know, I'm getting a bit of help now, I have a couple of system DBAs, and I'm training up to help out with these optimizations, you know, fixing queries, fixing projections to make sure that queries do run as quickly as possible. So getting that to its optimal stage is one. Two, getting our data encrypted and protected so that even if for whatever reasons, somehow somebody breaks into our data, they're not going to be able to get anything at all, because our data is 100% protected. And I think more companies need to be focusing on that as well. And third, I want to see our data science teams using more and more of Vertica's in-database predictive analytics, in-database machine learning products, and really helping make their jobs more efficient by doing so. >> Joe, you're awesome guest I mean, we always like I said, love having the practitioners on and getting the straight, skinny and pros. You're welcome back anytime, and as I say, I wish we could have met in Boston, maybe next year at the BDC. But it's great to have you online, and thanks for coming on theCUBE. >> And thank you for having me and hopefully we'll meet next year. >> Yeah, I hope so. And thank you everybody for watching that. Remember theCUBE is running concurrent with the Vertica Virtual BDC, it's vertica.com/bdc2020. If you want to check out all the keynotes, and all the breakout sessions, I'm Dave Volante for theCUBE. We'll be going. More interviews, for people right there. Thanks for watching. (bright music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joe Gonzalez | PERSON | 0.99+ |
Vertica | ORGANIZATION | 0.99+ |
Dave Volante | PERSON | 0.99+ |
MassMutual | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
December | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Joe | PERSON | 0.99+ |
six | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
seven years | QUANTITY | 0.99+ |
12 | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
seven | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
four-month | QUANTITY | 0.99+ |
vertica.com/bdc2020 | OTHER | 0.99+ |
Springfield | LOCATION | 0.99+ |
2 | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
two instances | QUANTITY | 0.99+ |
seven hours | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Scalars Bar | TITLE | 0.99+ |
Python | TITLE | 0.99+ |
180 billion rows | QUANTITY | 0.99+ |
Two | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
15 different servers | QUANTITY | 0.99+ |
two piece | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
180 billion column | QUANTITY | 0.98+ |
over 1000 columns | QUANTITY | 0.98+ |
eight years | QUANTITY | 0.98+ |
Voltage | ORGANIZATION | 0.98+ |
three | QUANTITY | 0.98+ |
hundreds of petabytes | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
six-node | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
one single cluster | QUANTITY | 0.98+ |
Vertica Big Data Conference | EVENT | 0.98+ |
MassMutual Financial | ORGANIZATION | 0.98+ |
4 seconds | QUANTITY | 0.98+ |
EON | ORGANIZATION | 0.98+ |
New York | LOCATION | 0.97+ |
about 10 terabytes | QUANTITY | 0.97+ |
first challenge | QUANTITY | 0.97+ |
next month | DATE | 0.97+ |
Jozef de Vries, IBM | IBM Think 2019
(dramatic music) >> Live from San Francisco. It's theCUBE, covering IBM Think 2019. Brought to you by IBM. >> Welcome back to theCUBE. We are live at IBM Think 2019. I'm Lisa Martin with Dave Vellante. We're in San Francisco this year at the newly rejuved Moscone Center. Welcoming to theCUBE for the first time, Jozef de Vries, Director of IBM Cloud Databases. Jozef, it's great to have you on the program. >> Thank you very much, great to be here, great to be here. >> So as we were talking before we went live, this is, I was asking what you're excited about for this year's IBM Think. >> Yeah. >> Only the second annual IBM Think. >> Right. >> This big merger of a number of shows. >> Sure, you're right. >> Day minus one, team minus one, >> Yeah. >> everything really kicks off tomorrow. Talk to us about some of the things that you're working on. You've been at IBM for a long time. >> Mmm hmm. >> But cloud managed databases, let's talk value there for the customers. >> Yeah, definitely. Cloud managed databases really, at its core, it's about simplifying adoption of cloud provided services and reducing the capital expense that comes along with developing applications. Fundamentally what we're trying to do is abstract the overhead that is associated with running your own systems. Whether it's the infrastructure management, whether it's the network management, whether it's the configuration and deployment of you databases. Our collection of services really is about streamlining time to value of accessing and building against your databases. So we are really focused on is allowing the developer to focus on their business critical applications, their objectives, and really what they're paid for. They're paid to build applications, not paid to maintain systems. When we talk about the CIO office, the CTO office, they are looking at cost, they're looking at ways to reduce overall expenditures. And what we're able to provide with cloud managed databases is the ability not to have to staff an IT team, not to have to maintain and pay for infrastructure, not have to procure licenses, what have you, everything that goes into standing up the managing those systems yourself, we provide that and we provide the consumption based methods. So you basically pay for what you use, and we have various ways in which you can interact with your databases and the charges that are associated with that. But it really is again about alleviating all of that overhead and that expense that is associated with running systems yourself. >> 15 years ago, you're back to, before you started with IBM, >> Yeah. >> There was obviously IBM DB2, Oracle, SQL Server, >> SQL Server. >> I guess MySQL is around >> Mm hmm. >> back then, LabStack was building out the internet. But databases are pretty boring >> Yeah. >> back then. And then all of a sudden, it exploded. >> Right. >> And the NoSQL movement happened in a huge way. >> Mm hmm. >> Coincided with the big data movement. What happened? >> Yeah, I think as we saw the space of this technology evolve, and a variety of different kind of use cases cropping up. The development community kind of respond to that. And really what we try to do with our portfolio is provide that variety of database technology solutions. To me, not any number of different use cases. And we like to think about it broken down into two categories. Your primary data stores. This is where your applications are writing and reading the data that has been stored. 
And then particularly to your point, this is where we call the auxiliary data services, for example. These are your in memory caches, your message brokers, your search index, what have you. There is a plethora of different database technologies out there today that plug into any number of different use cases and application developers are attempting to fill. And more often than not, they're using more than one database at a time. And really what we're trying to do at IBM with our cloud managed database offering is provide a variety of those data services and database technologies to meet a variety of those use cases, whether they're mixing and matching, or different kind of applications workloads or what have you. We'd like to provide our customers with the choices that are out there today in the community at large. >> So many choices. >> Yeah. >> Am I hearing that its kind of horses for courses? I mean, you get things like, even niches like Cumulo with fine grain security. >> Yeah. >> Or Couchbase, obviously. >> Mm hmm. This one scales. And then this one is easy to use. You take Mongo, for text, really easy to use >> Yeah exactly. >> Sort of different specialized use cases. How do you squint through, and how does IBM match the right characteristics with the right technology? >> It's really, it's two-pronged. It's about understanding the user base. Understanding and listening to your customers. And really internalizing what are the use cases that they are looking to fulfill? It's also being in tune with the database technology in the market today. It's understanding where there are trends. Understanding where there are new use cases cropping up. And it's about building a deep enough engineering operations team where we can quickly spin up these new offerings. And again provide that technology to our end customers. And it's about working with our customers as well. And understanding the use cases and then sometimes making recommendations on what database technology or combination of databases would be best suited for their objectives. >> I'm curious. One of the things that you mentioned in terms of what the developer's day-to-day job should be, is this almost IBM's approach to aligning with the developer role and enabling it in new ways? >> It is really about, I think, having sympathy in delivering on solutions in regards that is simply for the pains that they had otherwise endured 10, 15 years ago. When the notion of cloud managed anything really wasn't a thing yet. Or was just starting to emerge. IBM in houses runs their own systems for years and years obviously and the folks on my team, they have come from other companies, they know that the pain, what pain is involved in trying to run services. So like I said it's a little bit out of sympathy, it's a bit out of knowing what your users need in a cloud managed service. Whether again it's security, or availability, or redundancy, you name it. It's about coming around to the other side of the table and I sat where you once sat. And we know what you need out of your data services. So trusting us to provide that for you. >> How are the requirements different? Things like recovery and resiliency. Do I need asset compliance in this new world? May be you could. >> Yeah. It's funny, that's a good question in that we don't necessarily deal so much with database specific requirements. Again as I mention we try to provide a variety of different database technologies. 
And by and large the users are going to know what they need, what combinations that they will need. And we'll work with them if they're navigating their way through it. Really what we see more the requirements these days are around the management characteristics. As you cited, are they highly available? Are they backed up? What's your disaster recovery policy? What security policies do you have in place? what compliance, so on and so forth. It's really about presenting the overall package of that managed solution. Not so much, whether the database is going to be high available verses consistent replication or what have you. I mean that's in there, and it's part of what we engage with our customers about, but also what we'd like to put a lot of emphasis is on providing those recognized database technologies so that there is a community behind and there's opportunity for the users to understand what it is that they need beyond just what we can sell them. It's really about selling the value proposition of again, the management characteristics of the services. >> So who do you see as the competition? Obviously the other big, the two big cloud providers, AWS and Azure. >> Yep. >> You're competing with them. >> Definitely. >> Quality of offerings. May be talk about how you fit. >> And Google's another one. Or Oracle is another emerging one. Even Alibaba is catching up quite a bit. It really feels like a neck-to-neck race in our day after day. The way we try to approach our portfolio is focusing on deep, broad and secure. Deep being that there're a core set of database technologies. We're building the database itself. Db2, Cloudant which is based off of Couchbase. Excuse me, CouchDB. And then broad. Again as I've been mentioning, having a variety of different database technologies. And they're secure across the board. Whether it's secure in how we run the systems, secure on how we certify them through external compliance certifications. Or secure in how we integrate with security based tooling that our users can take advantage of. Regarding our competitors, it really is one week it may be a new big data at scale type of database technology. Another day it may be, or another week it might be deeper integrations into the platform. It might be new open source database technologies. It might be a new proprietary database technology. But we're, it's a constant, like I say, race to who got the most robust portfolio. >> Developers are like teenagers. They're fickle. >> Yeah, that too, that too. We got to be quick in order to respond to those demands. >> In this age of hybrid multi-cloud, where the average company has five plus private cloud, public cloud, through inertia, through acquisition, et cetera. Where's IBM's advantage there as companies are, I think we heard a stat the other day, Dave, that in 2018, 80% of the companies migrated data and apps from public cloud. In terms of this reality that companies live in this multi-cloud, where is IBM's advantage there? And where does your approach to cloud managed services really differentiate IBM's capabilities? >> Really there's, for the last couple of years, a tremendous amount of investment on building on the Kubernetes open source platform. And even in particular to our cloud managed database services, we have been developing and have been recently releasing a number of different databases that run on a platform that we've developed against Kubernetes. 
It's a platform that allows us to orchestrate deployments, deletions of databases, backups, high availability, platform level integrations, all, a number of different things. What that has allowed us to do when concerning a hybrid type of strategy is it makes our platform more portable. So Kubernetes is something that can run on the cloud. It can run in a private cloud. It can run on premise. And this platform we're developing is something that can be deployed, which we do today for private, public cloud consumption, which can also be packaged up and deploy into a private cloud type environment. And ultimately it's portable and it's leveraging of that Kubernetes technology itself. So we're not hamstringing ourselves to purely public cloud type services, or only private cloud type services. We want to have something that is abstracted enough that again it can move around to these different kind of environments. >> How important is open source and how important is it for you to commit to the different open source projects? There are so many, >> Yeah. >> And you have limited resources. So how do you manage that? >> Open source is really critical both in what we're building and what we're also offering. As we've talked about our users out there, they know what they often want or sometimes we nudge them to the right or to the left, but generally speaking it's around all the open source technologies and whatever may be trending for that current month is often times what we're getting requested for. It could be a Postgres. It could be a RabbitMQ. It could be ElasticSearch. What have you. And really we put a lot of emphasis on embracing the open source community, providing those database technologies to our customers. And then it allows our customers to benefit from the community at large too. We don't become again the sole provider of education and information about that technology. We're able to expose the whole community to our customers and they're able to take advantage of that. >> I hear a lot of complaints sometimes, particularly from folks that might list themselves in a marketplace for one cloud or another, that they feel like the primary cloud vendor might be nudging the customer into their proprietary database. What's IBM's position on that? Is that fair? Is that overblown? >> We obviously have proprietary tech, particularly the Db2. And that's something we're continue investing in. It's what we view as one of our strategic top priority database technologies. We are very active developers in the Couch community as well. I wouldn't consider that proprietary, but again back to the point of-- >> CouchDB. You're as the steward of CouchDB. >> Exactly. >> Right. >> Right, exactly. But again, firm believers in open source. We want to give those opportunities to our customers to avoid those vendor lock-in type situations. We actually have quite a lot of interests from our EU customer base. And by and large EU policies are around anti-trust and what have you. They tend to gravitate towards open source technology because they know it's again portable. They can be used in Postgres by IBM one month and if they no longer are satisfied with that, they can take their Postgres workloads and move them into another cloud provider. Ideally they're coming from the other cloud providers onto IBM. >> Well I should be actually more specific, in fairness, Dynamo's often cited. I supposed Google's Spanner although that's sort of a more of a niche, >> Mm hmm. >> specialized database. 
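On the portability argument above — a managed PostgreSQL looks like any other PostgreSQL to the application, which is what makes moving workloads between providers plausible. A minimal sketch with psycopg2 follows; the endpoint, credentials, and CA bundle are placeholders, and the exact TLS settings a given provider requires are an assumption to verify against that provider's documentation.

```python
# Sketch: connect to a cloud-managed PostgreSQL over TLS with psycopg2.
# The endpoint, credentials, and CA bundle path are hypothetical placeholders;
# the same code works against any provider's managed Postgres, which is the
# portability argument for standardizing on open source engines.
import psycopg2

conn = psycopg2.connect(
    host="mydb.databases.example.cloud",
    port=31234,
    dbname="appdb",
    user="app_user",
    password="********",
    sslmode="verify-full",                        # verify the server certificate
    sslrootcert="/etc/ssl/certs/provider-ca.pem",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```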
If I understand it correctly, Db2, that's a hard core transaction >> Sure. >> system. You're not going to confused that with, I don't think, anyway CouchDB. Although, who knows? May be there are some use cases there. But it sounds like you're not nudging them to your proprietary, certainly Db2 is proprietary. CouchDB is one of many options that you offer. >> Certainly Db2 is one of our core products for our database portfolio. And we do want to push our customers to Db2 where-- >> If it makes sense. >> Exactly, where it makes sense. And where there's demand for it. If it doesn't make sense so there's not demand we will offer up any number of the other databases that we also offer. >> Excellent, here's our last question.As >> Sure. >> As IBM Think the 2nd annual kicks off really tomorrow. For this developer audience that you were talking about a lot in our conversation, what are some of the exciting things that they're going to you? Any sort of obviously not breaking news, but >> Mmm hmm. >> Where would you advise the developer community, who's attending IBM Think to go to learn more about cloud managed databases? And how they can really become far more efficient to do their jobs better. >> Sure. Databases are hard, plain and simple. They are particularly hard to run, and developers who are not necessarily database admins, they're not database operators, that they want to focus on building the applications, are going to want to find solutions that alleviate that overhead of running those systems themselves. So to your question we've got sessions all throughout the week where we're talking about our Cloudant offerings and the future of where we're going with that. We've got a couple of different sessions around our IBM cloud database portfolio. This is a lot of the open source database technology we're running. We have demos in the solution center and Db2's strided all around the conference as well. So there's lots of different sessions focused on talking the value proposition of IBM's cloud managed database portfolio across the board. >> A lot of opportunities for learning. Well, Jozef de Vries, Thank you so much for joining Dave and me on theCube this afternoon. >> Thank you very much, it was great. And for Dave Vallente, I am Lisa Martin. You're watching theCube, live from IBM Think 2019. Day 1 stick around. We'll be right back with our next guest. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave Vallente | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jozef de Vries | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
2018 | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Jozef | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
one week | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
MySQL | TITLE | 0.98+ |
one month | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
IBM Cloud Databases | ORGANIZATION | 0.98+ |
two categories | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Dynamo | ORGANIZATION | 0.97+ |
CouchDB | TITLE | 0.96+ |
15 years ago | DATE | 0.96+ |
EU | ORGANIZATION | 0.96+ |
IBM Think | ORGANIZATION | 0.96+ |
LabStack | ORGANIZATION | 0.96+ |
IBM Think 2019 | EVENT | 0.96+ |
more than one database | QUANTITY | 0.96+ |
10, 15 years ago | DATE | 0.95+ |
One | QUANTITY | 0.95+ |
five plus | QUANTITY | 0.95+ |
one | QUANTITY | 0.94+ |
Postgres | ORGANIZATION | 0.94+ |
SQL Server | TITLE | 0.93+ |
Day 1 | QUANTITY | 0.92+ |
Moscone Center | LOCATION | 0.92+ |
second annual | QUANTITY | 0.91+ |
Db2 | TITLE | 0.9+ |
this afternoon | DATE | 0.9+ |
two big cloud | QUANTITY | 0.89+ |
Couch | TITLE | 0.89+ |
one cloud | QUANTITY | 0.88+ |
last couple of years | DATE | 0.87+ |
Azure | ORGANIZATION | 0.84+ |
Cloudant | ORGANIZATION | 0.82+ |
NoSQL | TITLE | 0.81+ |
2019 | DATE | 0.8+ |
Think 2019 | EVENT | 0.8+ |
Day minus one | QUANTITY | 0.79+ |
Markus Strauss, McAfee | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hi everybody, welcome back to Las Vegas. I'm Dave Vellante with theCUBE, the leader in live tech coverages. This is day three from AWS re:Invent, #reInvent18, amazing. We have four sets here this week, two sets on the main stage. This is day three for us, our sixth year at AWS re:Invent, covering all the innovations. Markus Strauss is here as a Product Manager for database security at McAfee. Markus, welcome. >> Hi Dave, thanks very much for having me. >> You're very welcome. Topic near and dear to my heart, just generally, database security, privacy, compliance, governance, super important topics. But I wonder if we can start with some of the things that you see as an organization, just general challenges in securing database. Why is it important, why is it hard, what are some of the critical factors? >> Most of our customers, one of the biggest challenges they have is the fact that whenever you start migrating databases into the cloud, you inadvertently lose some of the controls that you might have on premise. Things like monitoring the data, things like being able to do real time access monitoring and real time data monitoring, which is very, very important, regardless of where you are, whether you are in the cloud or on premise. So these are probably really the biggest challenges that we see for customers, and also a point that holds them back a little, in terms of being able to move database workloads into the cloud. >> I want to make sure I understand that. So you're saying, if I can rephrase or reinterpret, and tell me if I'm wrong. You're saying, you got great visibility on prem and you're trying to replicate that degree of visibility in the cloud. >> Correct. >> It's almost the opposite of what you hear oftentimes, how people want to bring the cloud while on premise. >> Exactly. >> It's the opposite here. >> It's the opposite, yeah. 'Cause traditionally, we're very used to monitoring databases on prem, whether that's native auditing, whether that is in memory monitoring, network monitoring, all of these things. But once you take that database workload, and push it into the cloud, all of those monitoring capabilities essentially disappear, 'cause none of that technology was essentially moved over into the cloud, which is a really, really big point for customers, 'cause they cannot take that and just have a gap in their compliance. >> So database discovery is obviously a key step in that process. >> Correct, correct. >> What is database discovery? Why is it important and where does it fit? >> One of the main challenges most customers have is the ability to know where the data sits, and that begins with knowing where the database and how many databases customers have. Whenever we talk to customers and we ask how many databases are within an organization, generally speaking, the answer is 100, 200, 500, and when the actual scanning happens, very often the surprise is it's a lot more than what the customer initially thought, and that's because it's so easy to just spin off a database, work with it, and then forget about it, but from a compliance point of view, that means you're now sitting there, having data, and you're not monitoring it, you're not compliant. You don't even know it exists. So data discovery in terms of database discovery means you got to be able to find where your database workload is and be able to start monitoring that. 
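As a purely illustrative aside, the discovery step Strauss describes can be sketched as a combination of a network scan for well-known database ports and a query against the cloud provider's inventory API. This is not McAfee's scanner; the subnet, port list, and use of boto3 against RDS are assumptions made for the example.

```python
# Hypothetical sketch of database discovery: probe a subnet for common
# database ports and list managed instances in an AWS account.
import socket

import boto3

DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1433: "SQL Server", 27017: "MongoDB"}


def scan_host(host: str, timeout: float = 0.5) -> list:
    """Return the database engines that appear to be listening on a host."""
    found = []
    for port, engine in DB_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.append(engine)
    return found


def discover(hosts):
    inventory = {host: scan_host(host) for host in hosts}
    # Managed databases will not show up on a scan of your own subnets,
    # so ask the cloud provider's API as well.
    rds = boto3.client("rds")
    for db in rds.describe_db_instances()["DBInstances"]:
        if "Endpoint" in db:
            inventory[db["Endpoint"]["Address"]] = [db["Engine"]]
    return inventory


if __name__ == "__main__":
    print(discover(["10.0.0.%d" % i for i in range(1, 20)]))
```

A real tool would also fingerprint versions and feed this inventory into the monitoring and compliance steps discussed next.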
>> You know, it's interesting. 10 years ago, database was kind of boring. I mean it was like Oracle, SQL Server, maybe DB2, maybe a couple of others, then all of a sudden, the NoSQL explosion occurred. So when we talk about moving databases into the cloud, what are you seeing there? Obviously Oracle is the commercial database market share leader. Maybe there's some smaller players. Well, Microsoft SQL Server obviously a very big... Those are the two big ones. Are we talking about moving those into the cloud? Kind of a lift and shift. Are we talking about conversion? Maybe you could give us some color on that. >> I think there's a bit of both, right? A lot of organizations who have proprietary applications that run since many, many years, there's a certain amount of lift and shift, right, because they don't want to rewrite the applications that run on these databases. But wherever there is a chance for organizations to move into some of their, let's say, more newer database systems, most organizations would take that opportunity, because it's easier to scale, it's quicker, it's faster, they get a lot more out of it, and it's obviously commercially more valuable as well, right? So, we see quite a big shift around NoSQL, but also some of the open source engines, like MySQL, ProsgreSQL, Percona, MariaDB, a lot of the other databases that, traditionally within the enterprise space, we probably wouldn't have seen that much in the past, right? >> And are you seeing that in a lot of those sort of emerging databases, that the attention to security detail is perhaps not as great as it has been in the traditional transaction environment, whether it's Oracle, DB2, even certainly, SQL Server. So, talk about that potential issue and how you guys are helping solve that. >> Yeah, I mean, one of the big things, and I think it was two years ago, when one of the open source databases got discovered essentially online via some, and I'm not going to name names, but the initial default installation had admin as username and no password, right? And it's very easy to install it that way, but unfortunately it means you potentially leave a very, very big gaping hole open, right? And that's one of the challenges with having open source and easily deployable solutions, because Oracle, SQLServer, they don't let you do that that quickly, right? But it might happen with other not as large database instances. One of the things that McAfee for instance does is helps customers making sure that configuration scans are done, so that once you have set up a database instance, that as an organization, you can go in and can say, okay, I need to know whether it's up to patch level, whether we have any sort of standard users with standard passwords, whether we have any sort of very weak passwords that are within the database environment, just to make sure that you cover all of those points, but because it's also important from a compliance point of view, right? It brings me always back to the compliance point of view of the organization being the data steward, the owner of the data, and it has to be our, I suppose, biggest point to protect the data that sits on those databases, right? >> Yeah, well there's kind of two sides of the same coin. The security and then compliance, governance, privacy, it flips. For those edicts, those compliance and governance edicts, I presume your objective is to make sure that those carry over when you move to the cloud. How do you ensure that? 
>> So, I suppose the biggest point to make that happen is to ensure that you have one set of controls that applies to both environments. It brings us back to the hybrid point, right? Because you've got to be able to reuse the same policies, and measures, and controls that you have on prem, and be able to shift these into the cloud and apply them with the same rigor to the cloud databases as you would have been used to on prem, right? So that means being able to use the same set of policies, the same set of access controls, whether you're on prem or in the cloud. >> Yeah, so I don't know if our folks in our audience saw it today, but Werner Vogels gave a really, really detailed overview of Aurora. He went back to 2004, when their Oracle database went down because they were trying to do things that were unnatural. They were scaling up, and the global distribution. But anyway, he talked about how they re-architected their systems and gave inside baseball on Aurora. Huge emphasis on recovery. So you know, being very important to them, data accessibility, obviously security is a big piece of that. You're working with AWS on Aurora, and RDS as well. Can you talk specifically about what you're doing there as a partnership? >> So, AWS has, I think it was two days ago, essentially put the Aurora database activity stream into private preview, which is essentially a way for third party vendors to be able to read an activity stream off Aurora, enabling McAfee, for instance, to consume that data and bring customers the same level of real-time monitoring in the database-as-a-service world as we're used to on prem or even in an EC2 environment, where it's a lot easier because customers have access to the infrastructure, install things. That's always been a challenge within the database-as-a-service world because that access is not there, right? So, customers need to have an ability to get the same level of detail, and with the database activity stream and the ability for McAfee to read that, we give customers the same ability with Aurora PostgreSQL at the moment as customers have on premise with any of the other databases that we support. >> So you're bringing your expertise, some of which is really being able to identify anomalies, and sifting through all this noise, and identifying the signal that's dangerous, and then obviously helping people respond to that. That's what you're enabling through that connection point. >> Correct, 'cause for organizations, using something like Aurora is a big saving, and the scalability that comes with it is fantastic. But if I can't have the same level of data control that I have on premise, it's going to stop me as an organization from moving critical data into that, 'cause I can't protect it, and I have to be able to. So, with this step, it's a great first step into being able to provide that same level of activity monitoring in real time as we're used to on prem. >> Same for RDS, is that pretty much what you're doing there? >> It's the same for RDS, yes. There's a certain set of levels, obviously, you know, that we go through before things go into GA, but RDS is part of that program as well, yes. >> So, I wonder if we can step back a little bit and talk about some of the big picture trends in security. You know, we've gone from a world of hacktivists to organized crime, which is very lucrative. There's even state-sponsored terrorism. I think Stuxnet is interesting. You probably can't talk about Stuxnet. Anyway-- >> No, not really.
>> But, conceptually, now the bar is raised and the sophistication goes up. It's an arms race. How are you keeping pace? What role does data have? What's the state of security technology? >> It's very interesting, because traditionally, databases, nobody wanted to touch the areas. We were all very, very good at building walls around and being very perimeter-oriented when it comes to data center and all of that. I think that has changed little bit with the, I suppose the increased focus on the actual data. Since a lot of the legislations have changed since the threat of what if GDPR came in, a lot of companies had to rethink their take on protecting data at source. 'Cause when we start looking at the exfiltration path of data breaches, almost all the exfiltration happens essentially out of the database. Of course, it makes sense, right? I mean I get into the environment through various different other ways, but essentially, my main goal is not to see the network traffic. My main goal as any sort of hacker is essentially get onto the data, get that out, 'cause that's where the money sits. That's what essentially brings the most money in the open market. So being able to protect that data at source is going to help a lot of companies make sure that that doesn't happen, right? >> Now, the other big topic I want to touch on in the minute we have remaining is ransomware. It's a hot topic. People are talking about creating air gaps, but even air gaps, you can get through an air gap with a stick. Yeah, people get through. Your thoughts on ransomware, how are you guys combating that? >> There is very specific strains, actually, developed for databases. It's a hugely interesting topic. But essentially what it does is it doesn't encrypt the whole database, it encrypts very specific key fields, leaves the public key present for a longer period of time than what we're used to see on the endpoint board, where it's a lot more like a shotgun approach and you know somebody is going to pick it up, and going to pay the $200, $300, $400, whatever it is. On the database side, it's a lot more targeted, but generally it's a lot more expensive, right? So, that essentially runs for six months, eight months, make sure that all of the backups are encrypted as well, and then the public key gets removed, and essentially, you have lost access to all of your data, 'cause even the application that access the data can't talk to the database anymore. So, we have put specific controls in place that monitor for changes in the encryption level, so even if only one or two key fields starting to get encrypted with a different encryption key, we're able to pick that up, and alert you on it, and say hey, hang on, there is something different to what you usually do in terms of your encryption. And that's a first step to stopping that, and being able to roll back and bring in a backup, and change, and start looking where the attacker essentially gained access into the environment. >> Markus, are organizations at the point where they are automating that process, or is it still too dangerous? >> A lot of it is still too dangerous, although, having said that, we would like to go more into the automation space, and I think it's something as an industry we have to, because there is so much pressure on any security personnel to follow through and do all of the rules, and sift through, and find the needle in the haystack. 
But especially on a database, the risk of automating some of those points is very great, because if you make a mistake, you might break a connection, or you might break something that's essentially very, very valuable, and that's the crown jewels, the data within the company. >> Right. All right, we got to go. Thanks so much. This is a really super important topic. >> Appreciate all the good work you're doing. >> Thanks for having me. >> You're very welcome. All right, keep it right there, everybody. You're watching theCUBE. We'll be right back, right after this short break from AWS re:Invent 2018, from Las Vegas. We'll be right back. (techno music)
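As an editorial aside, the field-level encryption signal Strauss describes near the end of the conversation can be approximated with a small amount of code: sample a key field, measure how random its values look, and flag a sharp jump against the baseline. The thresholds, sample data, and use of Shannon entropy here are illustrative assumptions, not McAfee's detection logic.

```python
# Hypothetical sketch: flag a jump in randomness that may indicate a key
# field is silently being encrypted (a database ransomware pattern).
import math
from collections import Counter


def shannon_entropy(value: str) -> float:
    """Bits of entropy per character in a string."""
    if not value:
        return 0.0
    counts = Counter(value)
    total = len(value)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def field_entropy(samples) -> float:
    return sum(shannon_entropy(s) for s in samples) / max(len(samples), 1)


def looks_encrypted(baseline_samples, current_samples, jump: float = 1.5) -> bool:
    """True if the average entropy of a field rose sharply versus its baseline."""
    return field_entropy(current_samples) - field_entropy(baseline_samples) > jump


if __name__ == "__main__":
    baseline = ["alice@example.com", "bob@example.com", "carol@example.com"]
    current = ["p8Zt/Qk2VxLr9sJ1mA==", "Yw3hN0cT7fKqBd5eXg==", "Rr6LuVz41oPsmH8kTn=="]
    print("possible field-level encryption:", looks_encrypted(baseline, current))
```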
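Similarly, the Aurora database activity stream integration discussed earlier in the interview amounts to tailing a Kinesis stream that the database writes its audit events to. The sketch below shows the general shape of that consumer with boto3; the stream name is a placeholder assumption, and real activity stream records are KMS-encrypted, so the decryption step is deliberately omitted.

```python
# Hypothetical sketch of tailing a database activity stream from Kinesis.
import boto3

STREAM_NAME = "aws-rds-das-cluster-EXAMPLE"  # placeholder name


def tail_activity_stream(limit: int = 100) -> None:
    kinesis = boto3.client("kinesis")
    shards = kinesis.describe_stream(StreamName=STREAM_NAME)["StreamDescription"]["Shards"]
    for shard in shards:
        iterator = kinesis.get_shard_iterator(
            StreamName=STREAM_NAME,
            ShardId=shard["ShardId"],
            ShardIteratorType="LATEST",
        )["ShardIterator"]
        records = kinesis.get_records(ShardIterator=iterator, Limit=limit)["Records"]
        for record in records:
            # record["Data"] holds the (encrypted) activity event payload;
            # a monitoring tool would decrypt it and evaluate policy rules here.
            print(record["SequenceNumber"], len(record["Data"]), "bytes")


if __name__ == "__main__":
    tail_activity_stream()
```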
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon Web Services | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
eight months | QUANTITY | 0.99+ |
Markus Strauss | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Markus | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
$200 | QUANTITY | 0.99+ |
2004 | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
McAfee | ORGANIZATION | 0.99+ |
MySQL | TITLE | 0.99+ |
$300 | QUANTITY | 0.99+ |
$400 | QUANTITY | 0.99+ |
100 | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
sixth year | QUANTITY | 0.99+ |
NoSQL | TITLE | 0.99+ |
two sides | QUANTITY | 0.99+ |
two years ago | DATE | 0.98+ |
both environments | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
Werner Vogels | PERSON | 0.98+ |
two days ago | DATE | 0.98+ |
ProsgreSQL | TITLE | 0.98+ |
two sets | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
today | DATE | 0.98+ |
MariaDB | TITLE | 0.98+ |
SQL Server | TITLE | 0.97+ |
Aurora | TITLE | 0.97+ |
#reInvent18 | EVENT | 0.96+ |
GDPR | TITLE | 0.96+ |
One | QUANTITY | 0.96+ |
500 | QUANTITY | 0.96+ |
four sets | QUANTITY | 0.95+ |
200 | QUANTITY | 0.95+ |
DB2 | TITLE | 0.95+ |
SQL | TITLE | 0.94+ |
day three | QUANTITY | 0.94+ |
this week | DATE | 0.93+ |
Aurora PostgreSQL | TITLE | 0.89+ |
two key fields | QUANTITY | 0.89+ |
Percona | TITLE | 0.88+ |
one set | QUANTITY | 0.87+ |
re:Invent | EVENT | 0.86+ |
prem | ORGANIZATION | 0.84+ |
AWS re:Invent | EVENT | 0.83+ |
two big ones | QUANTITY | 0.79+ |
AWS re:Invent 2018 | EVENT | 0.77+ |
RDS | TITLE | 0.76+ |
EC2 | TITLE | 0.73+ |
Invent 2018 | TITLE | 0.7+ |
Invent 2018 | EVENT | 0.68+ |
Stuxnet | ORGANIZATION | 0.63+ |
theCUBE | ORGANIZATION | 0.59+ |
Stuxnet | PERSON | 0.57+ |
ttacker | TITLE | 0.52+ |
SQLServer | ORGANIZATION | 0.5+ |
challenges | QUANTITY | 0.49+ |
Evan Kaplan, InfluxData | CUBEConversation, Sept 2018
(intense orchestral music) >> Hey welcome back everybody, Jeff Frick here with theCUBE. We are taking a short break from the madness of the conference season to do some CUBE Conversations here in the Palo Alto studio, which we always like to do, and meet new people, and hear new stories, learn about new companies. And today we've got a new company, we've never had 'em on theCUBE before, it's Evan Kaplan, he's the CEO of InfluxData. Evan, great to see you. >> Yeah, hey thanks for having me. >> Absolutely. So for people that aren't familiar with the company, give 'em kind of the 101 on Influx. >> Yeah so, InfluxData is an open source platform for collecting metrics and events at scale. The company is almost four years old, has a large selection of tier one customers, is broadly accepted by developers as the number one time-series platform out there, so. >> So a lot of people talk about collecting data, so we've been doing Splunk since 2012, and they really found something interesting on log files, and took it to a whole 'nother level, so there's a lot of people that are capturing events. So what do you guys do that's a little bit different, how are you slicing and dicing this opportunity? >> Yeah, to put this in an even broader context, what we're looking at is the 20-year break-up of the Oracle, DB2 and Informix franchise that dominated when relational databases were the answer to all problems, and so if you look at a company like Splunk working on logs, they optimized a platform for those logs, for that data set, Elastic also, really interesting space. I think our innovation has been in saying "Hey, where is the world going, where are all of these complex systems going?" Particularly IoT, it's toward a real-time view of the data, and so, rather than collect verbose logs, historical views of the data and things like that, real system operators, real developers and builders want to instrument their applications, their infrastructure, so you can view 'em in real time. The place where the rubber hits the road is IoT. Sensors spit out metrics and events, period, full stop. And so if you want to be performant in how you handle your instrumentation of the physical world, and how you do your machine learning, and how you want to manage these systems, you use a fundamentally time-series based database. As opposed to Splunk or Elastic, which are primarily search-based databases. >> And are you primarily capturing and standardizing the data to feed other analytics tools, or do you have the whole suite, where you're doing some of the analytics as well? >> Yeah, such a great question. So, the fundamental platform is called the TICK Stack, and it stands for Telegraf, which is a collector, which has about 200 different collectors that sit out there in the world and collect everything from SNMP data, to Oracle data, to application, to micro-service data, to Kubernetes, to that sort of stuff. There's Influx, which is the DB, which is highly optimized for millions and millions of writes a second, so collecting data points and samples. There's Chronograf, which is the visualization engine, and so, as soon as the data comes in you can see how it's graphed, see it on time-series oriented graphing, and then there's Kapacitor, which takes action on the data. What we don't do is the super high sophisticated analytics. There are lots of companies in Silicon Valley who take our data, pump it up, and then we put it back on the platform to build a control loop for it. >> Right.
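To make the instrumentation idea concrete, here is a minimal, hypothetical sketch of writing a point into InfluxDB over its 1.x HTTP API using line protocol. The host, database name, measurement, and tags are placeholder assumptions; Telegraf normally does this collection for you, so this is only to show the shape of the data the TICK stack stores.

```python
# Hypothetical sketch: write one time-series point to InfluxDB 1.x via the
# /write endpoint using line protocol. Numeric field values are assumed.
import time

import requests

INFLUX_URL = "http://localhost:8086/write"  # placeholder host
DATABASE = "telemetry"                      # placeholder database


def write_point(measurement: str, tags: dict, fields: dict) -> None:
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    timestamp = int(time.time() * 1e9)  # nanosecond precision
    line = f"{measurement},{tag_str} {field_str} {timestamp}"
    resp = requests.post(INFLUX_URL, params={"db": DATABASE}, data=line)
    resp.raise_for_status()


if __name__ == "__main__":
    write_point("cpu_load", {"host": "edge-01", "region": "us-west"}, {"value": 0.64})
```

Once points like this are flowing in, Chronograf can graph the measurement as it arrives and Kapacitor can attach alerts to it, which is the division of labor described above.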
So when the Kapacitor, does your application then take action on those things? >> Yes. Yeah, so, it'd do everything from alerting, to sending out another machine request, to spinning up a new Kubernetes pod, to basically scaling the application, self healing. >> Right. So does it fit in between a lot of those other types of applications that are sending off notifications, and those types of things? >> Yes, yeah. so you're in between? >> And usually, we're instrumented the way a standard developer, or an architect or CTO does is they look at a complex application, or a complex set of sensors, they instrument with Influx and Telegraf, and collect that data, they view it in real time, and then they build control loops, automation loops, to make that easier so when you see a problem, it's got a tolerance you can self adjust for. So it's the beginning of kind of the self-healing system. >> Okay, and I know that Telegraf is definitely opensource, are the other three? >> All four are open-source All four are open-source. >> Everything, in our world, everything for a developer is free, so, and a single note of Influx can handle a couple million writes a second, which is really really performant to run in production. Where our business model is, where we make money is, our closed source clustering, sharding, distributing the database, if you decide you want to run highly available in the production environment, you would buy our closed-source stuff. We have about 430 customers who run our closed source stuff on top of the opensource. >> So, it is kind of like a MapR to Hadoop if you will, where, you know, it's built on, built on the opensource, and then they've got their proprietary stuff kind of wrapped around it, almost like an open core? Or is that a? >> Yeah, it's a little It's a little different than the normal Hadoop stuff. One is, our stuff doesn't have any external dependencies. It can work with other third party projects, but just, it's a platform onto itself, there aren't 25 projects. There are four different projects, we own them all, they come across as a single binary, and it's not part of Apache. >> So they're integrated So the TICK is the full TICK >> Yes, and then you put the clustering on top. So there's some similarity, but not being part of Apache, we can control and keep clean what that experience is. And we're about, the thing that's been most successful for us is, well Paul our founder who is my partner, it's called time to awesome, the idea that a developer in 10 minutes can very quickly be up and instrumenting an application or a set of sensors, and see that data pouring in within 10 minutes from going to the site and downloading the opensource. >> So it's interesting, the giant opportunity is really around IoT, just in terms of the explosion of the sensor data, and we see that coming, and we were at AT&T show a couple weeks ago, talking about 5G which is, slowly, slowly coming down the road, (Evan laughs) they've got the standards fixed. But in terms of the, you said the shorter term, nobody has budget, I always like to joke, nobody has budget for a new platform, they do have budget for new applications, because they've got real problems. So you said you're seeing, your main success now, your go to market application, is around application monitoring? Would that be accurate, or what is kind of your? >> Yeah, there are two broad things, and they're both very similar technology as a service. 
One is the central monitoring stuff, so Tesla's Power Wall, Siemens' windmills, a variety of solar companies build Telegraf into their platforms and then use InfluxData to collect and store that information and analyze it. On the software side, people like IBM's Cloud Service running their network and their fabric, SAP with Ariba, Cisco with all their collaboration stuff, they instrument their software applications. And that's the idea, it's a general purpose platform for collecting and instrumenting the applications or the sensors, either one, or both. >> Okay, and so what are you guys working on now, what's next, kind of raise the profile, get some new stuff >> Yeah, so we are-- before the whole IoT thing completely explodes, we're not quite there yet but it's coming down the pike.
So observability is the big thought in systems application and building now, this notion of observing your system in real time, because you don't know, apriori, it's impossible to know a complex system, how it's going to behave, then it's automation, right? So like, okay now I can see these behaviors, how do I automate something that makes the experience for you, the user, better? But lastly, we can see this with self-driving cars, it's autonomy. It's the idea that the system becomes self-healing, and AI, and those sorts of things, but that's kind of the last step. There's a lot of learning in that process to get there. >> And it has to be automated because at scale there's no way for people to keep up with this stuff, and then how do you separate signal from noise and how do you know what to do? So you've got to automate a whole bunch of this. >> And you know if we had an aspiration it would be we're not going to write the applications that do these things but what we want to do is be that system of record so that people have a really efficient, effective metrics and events store so they can really track and keep track of all that engagement. Time-stamped data, for lack of a better way to say it. >> It sounds like you're in a pretty good space, Evan. >> Pretty excited (chuckles), thank you. Thanks for saying that, but yeah, we're pretty excited. >> Alright, well thanks for taking a few minutes out of your day and sharing the story, we look forward to watching the journey. >> Yeah. Thanks man. Alright, take care. >> Alright, thanks. He's Evan, I'm Jeff, you're watching theCUBE. We're having a CUBE Conversation in Palo Alto, we'll see you next time, thanks for watching. (intense orchestral music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Vadim | PERSON | 0.99+ |
Pravin Pillai | PERSON | 0.99+ |
Vadim Supitskiy | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Pravin | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Rickard Söderberg | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Thomas | PERSON | 0.99+ |
Rickard | PERSON | 0.99+ |
Evan | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Micheline Nijmeh | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Peter | PERSON | 0.99+ |
Abdul Razack | PERSON | 0.99+ |
Micheline | PERSON | 0.99+ |
Sept 2018 | DATE | 0.99+ |
March 2019 | DATE | 0.99+ |
Evan Kaplan | PERSON | 0.99+ |
Hong Kong | LOCATION | 0.99+ |
11 | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
1949 | DATE | 0.99+ |
GANT | ORGANIZATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Zscaler | ORGANIZATION | 0.99+ |
30% | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
six months | QUANTITY | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
G Suite | TITLE | 0.99+ |
Paul | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
73% | QUANTITY | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
58% | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
GDPR | TITLE | 0.99+ |
Formex | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
three years | QUANTITY | 0.99+ |
10 minutes | QUANTITY | 0.99+ |
fourth | QUANTITY | 0.99+ |
InluxData | ORGANIZATION | 0.99+ |
Abdul | PERSON | 0.99+ |
Josh Rogers, Syncsort | theCUBE NYC 2018
>> Live from New York, it's theCUBE, covering theCUBE New York City 2018. Brought to you by SiliconANGLE Media and its ecosystem partners. >> Okay, welcome back, everyone. We're here live in New York City for CUBE NYC. This is our ninth year covering the big data ecosystem, now it's AI, machine-learning, used to be Hadoop, now it's growing, ninth year covering theCUBE here in New York City. I'm John Furrier, with Dave Vellante. Our next guest, Josh Rogers, CEO of Syncsort. I'm going back, long history in theCUBE. You guys have been on every year. Really appreciate chatting with you. Been fun to watch the evolution of Syncsort and also get the insight. Thanks for coming on, appreciate it. >> Thanks for having me. It's great to see you. >> So you guys have constantly been on this wave, and it's been fun to watch. You guys had a lot of IP in your company, and then just watching you guys kind of surf the big data wave, but also make some good decisions, made some good calls. You're always out front. You guys are on the right parts of the wave. I mean now it's cloud, you guys are doing some things. Give us a quick update. You guys got a brand refresh, so you got the new logo goin' on there. Give us a quick update on Syncsort. You got some news, you got the brand refresh. Give us a quick update. >> Sure. I'll start with the brand refresh. We refreshed the brand, and you see that in the web properties and in the messaging that we use in all of our communications. And, we did that because the value proposition of the portfolio had expanded so much, and we had gained so much more insight into some of the key use cases that we're helping customers solve that we really felt we had to do a better job of telling our story and, probably most importantly, engage with the more senior level within these organizations. What we've seen is that when you think about the largest enterprises in the world, we offer a series of solutions around two fundamental value propositions that tend to be top of mind for these executives. The first is how do I take the 20, 30, 40 years of investment in infrastructure and run that as efficiently as possible. You know, I can't make any compromises on the availability of that. I certainly have to improve my governance and secureability of that environment. But, fundamentally, I need to make sure I could run those mission-critical workloads, but I need to also save some money along the way, because what I really want to do is be a data-driven enterprise. What I really want to do is take advantage of the data that gets produced in these transactional applications that run on my AS400 or IBM I-infra environment, my mainframe environment, even in my traditional data warehouse, and make sure that I'm getting the most out of that data by analyzing it in a next-generation set of-- >> I mean one of the trends I want to get your thoughts on, Josh, cause you're kind of talking through the big, meagatrend which is infrastructure agnostic from an application standpoint. So the that's the trend with dev ops, and you guys have certainly had diverse solutions across your portfolio, but, at the end of the day, this is the abstraction layer customers want. They want to run workloads on environments that they know are in production, that work well with applications, so they almost want to view the infrastructure, or cloud, if you will, same thing, as just agnostic, but let the programmability take care of itself, under the hood, if you will. 
>> Right, and what we see is that people are absolutely kind of into extending and modernizing existing applications. This is in the large enterprise, and those applications and core components will still run on mainframe environments. And so, what we see in terms of use cases is how do we help customers understand how to monitor that, the performance of those applications. If I have a tier that's sitting on the cloud, but it's transacting with the mainframe behind the firewall, how do I get an end-to-end view of application performance? How do I take the data that ultimately gets logged in a DB2 database on the mainframe and make that available in a next-generation repository, like Hadoop, so that I can do advanced analytics? When you think about solving both the optimization and the integration challenge there, you need a lot of expertise in both sides, the old and the new, and I think that's what we uniquely offer. >> You guys done a good job with integration. I want to ask quick question on the integration piece. Is this becoming more and more table stakes, but also challenging at the same time? Integration and connecting systems together, if their stateless, is no problem, you use APIs, right, and do that, but as you start to get data that needs state information, you start to think to think about some of the challenges around different, disparate systems being distributed, but networked, in some cases, even decentralized, so distributed networking is being radically changed by the data decisions on the architecture, but also integration, call it API 2.0 or this new way to connect and integrate. >> Yeah, so what we've tried to focus on is kind of solving that piece between these older applications that run these legacy platforms and making them available to whatever the consumer is. Today, we see Kafka and in Amazon we see Kinesis as kind of key buses delivering data as a service, and so the role that we see ourselves playing and what we announced this week is an ability to track changed data, deliver it in realtime in these older systems, but deliver it to these new targets: Kafka, Kinesis, and whatever comes next. Because really that's the fundamental partner we're trying to be to our customers is we will help you solve the integration challenge between this infrastructure you've been building for 30 years and this next-generation technology that lets you get the next leg of value out of your data. >> So Jim, when you think about the evolution of this whole big data space, the early narrative in the trade press was, well, NoSQL is going to replace Oracle and DB2, and the data lake is going to replace the EDW, and unstructured data is all that matters, and so forth. And now, you look at what's really happened is the EDW is a fundamental component of making decisions and insights, and SQL is the killer app for Hadoop. And I take an example of say fraud detection, and when you think and this is where you guys sit in the middle from the standpoint of data quality, data integration, in order to do what we've done in the past 10 years take fraud detection down from well, I look at my statement a month or two later and then call the credit card company, it's now gone to a text that's instantaneous. Still some false positives, and I'm sure working on that even. So maybe you could describe that use case or any other, your favorite use case, and what your role is there in terms of taking those different data sources, integrating them, improving the data quality. 
>> So, I think when you think about a use case where I'm trying to improve the SLA or the responsiveness of how do manage against or detect fraud, rather than trying to detect it on a daily basis, I'm trying to detect it at transaction time. The reality is you want to leverage the existing infrastructure you have. So if you have a data warehouse that has detailed information about transaction history, maybe that's a good source. If you have an application that's running on the mainframe that's doing those transaction realtime, the ultimate answer is how do I knit together the existing infrastructure I have and embed the additional intelligence and capability I need from these new capabilities, like, for example, using Kafka, to deliver a complete solution. What we do is we help customers kind of tie that together, Specifically, we announced this integration I mentioned earlier where we can take a changed data element in a DB2 database and publish it into Kafka. That is a key requirement in delivering this real-time fraud detection if I in fact am running transactions on a mainframe, which most of the banks are. >> Without ripping and replacing >> Why would you want to rip out an application >> You don't. >> your core customer file when you can just extend it. >> And you mentioned the Cloudera 6 certification. You guys have been early on there. Maybe talk a little about that relationship, the engineering work that has to get done for you to be able to get into the press release day one. >> We just mentioned that my first time on theCUBE was in 2013, and that was on the back of our initial product release in the big data world. When we brought the initial DMX-h release to market, we knew that we needed to have deep partnerships with Cloudera and the key platform providers. I went and saw Mike Olson, I introduced myself, he was gracious enough to give me an hour, and explain what we thought we could do to help them develop more value proposition around their platform, and it's been a terrific relationship. Our architecture and our engineering and product management relationship is such that it allows us to very rapidly certify and work on their new releases, usually within a couple a days. Not only can customers take advantage of that, which is pretty unique in the industry, but we get some some visibility from Cloudera as evidenced by Tendu's quote in the press release that was released this week, which is terrific. >> Talk about your business a little bit. You guys are like a 50-year old startup. You've had this really interesting history. I remember you from when I first started in the industry following you guys. You've restructured the company, you've done some spin outs, you've done some M and A, but it seems to be working. Talk about growth and progress that you're making. >> We're the leader in the Big Iron to Big Data market. We define that as allowing customers to optimize their traditional legacy investments for cost and performance, and then we help them maximize the value of the data that get generated in those environments by integrating it with next-generation analytic environments. To do that, we need a broad set of capability. There's a lot of different ways to optimize existing infrastructure. One is capacity management, so we made an acquisition about a year ago in the capacity management space. We're allowing customers to figure out how do I make sure I've got not too much and not too little capacity. That's an example of optimization. 
Another area of capability is data quality. If I'm to maximize the value of the data that gets produced in these older environments, it would be great if, when it lands in these next-generation repositories, it's as high quality as possible. We acquired Trillium about a year ago, or actually coming up >> How's that comin'? >> on two years ago, and we think that's a great capability for our customers. It's going terrific. We took their core data quality engine, and now it runs natively on a distributed Hadoop infrastructure. We have customers leveraging it to deliver unprecedented volumes of matching, so not only breakthrough performance, but this whole notion of write once, run anywhere. I can run it on an SMP environment. I can run it on Hadoop. I can run it on Hadoop in the cloud. We've seen terrific growth in that business based on our continued innovation, particularly pointing it at the big data space. >> One of the things that I'm impressed with you guys is you guys have transformed, so in having a transformation message to your customers you have a lot of credibility, but what's interesting is that with containers and Kubernetes now and multi-cloud, you're seeing that you don't have to kill the legacy to bring in the new stuff. You can connect systems, which is what you guys have done with legacy systems, look at connecting the data. You don't have to kill that to bring in the new. >> Right >> You can do cloud-native, you can do some really cool things. >> Right. I think there's-- >> This rip and replace concept is kind of going away. You put containers around it too. That helps. >> Right. It's expensive and it's risky, so why do that? I think that's the realization. The reality is that when people build these mission-critical systems, they stay in place for not five years, but 25 years. The question is how do you allow the customers to leverage what they have and the investment they've made, but take advantage of the next wave, and that's what we're singularly focused on, and I think we're doing a great job of that, not just for customers, but also for these next-generation partners, which has been a lot of fun for us. >> And we also heard people doing analytics, they want to have their own multi-tenant, isolated environments, which goes to don't screw this system up, if it's doing a great job on a mission-critical thing, don't bundle it, just connect it to the network, and you're good. >> And on the cloud side, we're continuing to look at our portfolio and say what capabilities will customers want to consume in a cloud-delivery model. We've been doing that in the data quality space for quite a while. We just launched and announced, about three months ago, capacity management as a service. You'll continue to see, both on the optimization side and on the integration side, us continuing to deliver new ways for customers to consume the capabilities they need. >> That's a key thing for you guys, integration. That's pretty much how you guys put the stake in the ground and engineer your activities around integration. >> Yeah, we start with the premise that you're going to need to continue to run these older investments that you made, and you're going to need to integrate the new stuff with that. >> What's next? What's goin' on the rest of the year with you guys? >> We'll continue to invest heavily in the realtime and changed-data capture space. We think that's really interesting. We're seeing a tremendous amount of demand there.
We've made a series of acquisitions in the security space. We believe that the ability to secure data in the core systems and its journey to the next-generation systems is absolutely critical, so we'll continue to invest there. And then, I'd say governance, that's an area that we think is incredibly important as people start to really take advantage of these data lakes they're building, they have to establish real governance capabilities around those. We believe we have an important role to play there. And there's other adjacencies, but those are probably the big areas we're investing in right now. >> Just continuing to move the ball down the field in the Syncsort cadence of acquisitions, organic development. Congratulations. Josh, thanks for comin' on. To John Rogers, CEO of Syncsort, here inside theCUBE. I'm John Furrier with Dave Vellante. Stay with us for more big data coverage, AI coverage, cloud coverage here. Part of CUBE NYC, we're in New York City live. We'll be right back after this short break. Stay with us. (techno music)
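The change data capture pattern Rogers describes, publishing changed DB2 records into Kafka so downstream systems can react in real time, looks roughly like the consumer sketched below. This is a hypothetical illustration, not Syncsort's product code: the topic name, message schema, and the single fraud rule are assumptions made for the example.

```python
# Hypothetical sketch: consume change-data-capture events from a Kafka topic
# and apply a trivial screening rule in real time.
import json

from kafka import KafkaConsumer

TOPIC = "db2.card_transactions.changes"  # placeholder topic name
LARGE_AMOUNT = 10_000.00


def run() -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=["localhost:9092"],
        auto_offset_reset="latest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:
        # Assumed event shape: {"account": ..., "amount": ..., "country": ...}
        change = message.value
        if change.get("amount", 0) > LARGE_AMOUNT:
            # A real fraud-scoring service would sit here; flagging stands in.
            print("review transaction:", change)


if __name__ == "__main__":
    run()
```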
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Josh | PERSON | 0.99+ |
Josh Rogers | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Jim | PERSON | 0.99+ |
Josh Rogers | PERSON | 0.99+ |
20 | QUANTITY | 0.99+ |
John Rogers | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Mike Olson | PERSON | 0.99+ |
Syncsort | ORGANIZATION | 0.99+ |
25 years | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
New York City | LOCATION | 0.99+ |
30 years | QUANTITY | 0.99+ |
five years | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
Kafka | TITLE | 0.99+ |
an hour | QUANTITY | 0.99+ |
30 | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
NoSQL | TITLE | 0.99+ |
SiliconANGLE Media | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
40 years | QUANTITY | 0.98+ |
two years ago | DATE | 0.98+ |
first time | QUANTITY | 0.98+ |
IBM | ORGANIZATION | 0.98+ |
Today | DATE | 0.98+ |
Hadoop | TITLE | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
Amazon | ORGANIZATION | 0.98+ |
ninth year | QUANTITY | 0.97+ |
NYC | LOCATION | 0.97+ |
this week | DATE | 0.96+ |
Trillium | ORGANIZATION | 0.96+ |
SQL | TITLE | 0.96+ |
this week | DATE | 0.96+ |
50-year old | QUANTITY | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
One | QUANTITY | 0.95+ |
a month | DATE | 0.94+ |
EDW | TITLE | 0.92+ |
about a year ago | DATE | 0.91+ |
Cloudera | ORGANIZATION | 0.91+ |
about | DATE | 0.9+ |
SLA | TITLE | 0.84+ |
DB2 | TITLE | 0.84+ |
one | QUANTITY | 0.82+ |
CEO | PERSON | 0.81+ |
a year ago | DATE | 0.81+ |
theCUBE | ORGANIZATION | 0.8+ |
about three months ago | DATE | 0.79+ |
AS400 | COMMERCIAL_ITEM | 0.78+ |
wave | EVENT | 0.77+ |
past 10 years | DATE | 0.74+ |
two later | DATE | 0.74+ |
two fundamental value propositions | QUANTITY | 0.72+ |
Kinesis | TITLE | 0.72+ |
couple a days | QUANTITY | 0.71+ |
Cloudera 6 | TITLE | 0.7+ |
big data | EVENT | 0.64+ |
day one | QUANTITY | 0.61+ |
2018 | DATE | 0.57+ |
API 2.0 | OTHER | 0.54+ |
Tendu | PERSON | 0.51+ |
Scott Hebner, IBM | Change the Game: Winning With AI
>> Live from Times Square in New York City, it's theCUBE. Covering IBMs Change the Game, Winning With AI. Brought to you by, IBM. >> Hi, everybody, we're back. My name is Dave Vellante and you're watching, theCUBE. The leader in live tech coverage. We're here with Scott Hebner who's the VP of marketing for IBM analytics and AI. Scott, it's good to see you again, thanks for coming back on theCUBE. >> It's always great to be here, I love doing these. >> So one of the things we've been talking about for quite some time on theCUBE now, we've been following the whole big data movement since the early Hadoop days. And now AI is the big trend and we always ask is this old wine, new bottle? Or is it something substantive? And the consensus is, it's real, it's real innovation because of the data. What's your perspective? >> I do think it's another one of these major waves, and if you kind of go back through time, there's been a series of them, right? We went from, sort of centralized computing into client server, and then we went from client server into the whole world of e-business in the internet, back around 2000 time frame or so. Then we went from internet computing to, cloud. Right? And I think the next major wave here is that next step is AI. And machine learning, and applying all this intelligent automation to the entire system. So I think, and it's not just a evolution, it's a pretty big change that's occurring here. Particularly the value that it can provide businesses is pretty profound. >> Well it seems like that's the innovation engine for at least the next decade. It's not Moore's Law anymore, it's applying machine intelligence and AI to the data and then being able to actually operationalize that at scale. With the cloud-like model, whether its OnPrem or Offprem, your thoughts on that? >> Yeah, I mean I think that's right on 'cause, if you kind of think about what AI's going to do, in the end it's going to be about just making much better decisions. Evidence based decisions, your ability to get to data that is previously unattainable, right? 'Cause it can discover things in real time. So it's about decision making and it's about fueling better, and more intelligent business processing. Right? But I think, what's really driving, sort of under the covers of that, is this idea that, are clients really getting what they need from their data? 'Cause we all know that the data's exploding in terms of growth. And what we know from our clients and from studies is only about 15% of what business leaders believe that they're getting what they need from their data. Yet most businesses are sitting on about 80% of their data, that's either inaccessible, un-analyzed, or un-trusted, right? So, what they're asking themselves is how do we first unlock the value of all this data. And they knew they have to do it in new ways, and I think the new ways starts to talk about cloud native architectures, containerization, things of that nature. Plus, artificial intelligence. So, I think what the market is starting to tell us is, AI is the way to unlock the value of all this data. And it's time to really do something significant with it otherwise, it's just going to be marginal progress over time. They need to make big progress. >> But data is plentiful, insights aren't. And part of your strategy is always been to bring insights out of that dividend and obviously focused on clients outcomes. 
But, a big part of your role is not only communicating IBM's analytics and AI strategy, but also helping shape that strategy. How do you, sort of, summarize that strategy? >> Well we talk about the ladder to AI, 'cause one thing when you look at the actual clients that are ahead of the game here, and the challenges that they've faced to get to the value of AI, what we've learned, very, very clearly, is that the hardest part of AI is actually making your data ready for AI. It's about the data. It's sort of this notion that there's no AI without an information architecture, right? You have to build that architecture to make your data ready, 'cause bad data will be paralyzing to AI. And actually there was a great MIT Sloan study that they did earlier in the year that really dives into all these challenges and if I remember correctly, about 81% of them said that the number one challenge they had is their data. Is their data ready? Do they know what data to get to? And that's really where it all starts. So we have this notion of the ladder to AI, it's several, very prescriptive steps, that we believe through best practices, you need to actually take to get to AI. And once you get to AI then it becomes about how you operationalize it in a way that it scales, that you have explainability, you have transparency, you have trust in what the model is. But it really very much is a systematic approach here that we believe clients are going to get there in a much faster way. >> So the picture of the ladder here, it starts with collect, and that's kind of what we did with Hadoop, we collected a lot of data 'cause it was inexpensive, and then organizing it, it says, create a trusted analytics foundation. Still building that sort of framework, and then analyze and actually start getting insights on demand. And then automation, that seems to be the big theme now. Is, how do I get automation? Whether it's through machine learning, infusing AI everywhere. Maybe blockchain is part of that automation, obviously. And then ultimately getting to the outcome, you call it trust, achieving trust and transparency, that's the outcome that we want here, right? >> I mean I think it all really starts with making your data simple and accessible. Which is about collecting the data. And doing it in a way you can tap into all types of data, regardless of where it lives. So the days of trying to move data around all over the place or, heavy duty replication and integration, let it sit where it is, but be able to virtualize it and collect it and containerize it, so it can be more accessible and usable. And that kind of goes to the point that 80% of the enterprise data is inaccessible, right? So it all starts first with, are you getting all the data collected appropriately, and getting it into a way that you can use it. And then we start feeding things in like IOT data, and sensors, and it becomes real time data that you have to do this against, right? So, notions of replicating and integrating and moving data around become not very practical. So that's step one. Step two is, once you collect all the data doesn't necessarily mean you trust it, right? So when we say trust, we're talking about business ready data. Do people know what the data is? Are there business entities associated with it? Has it been cleansed, right? Has all the duplicate data been taken out? What do you do in a situation with data where, you know, you have sources of data that are telling you different things.
Like, I think we've all been on a treadmill where the phone, the watch, and the treadmill will actually tell you different distances, I mean what's the truth? The whole notion of organizing is getting it ready to be used by the business, in applying the policies, the compliance, and all the protections that you need for that data. Step three is the ability to build out all this, the ability to analyze it. To do it at scale, right, and to do it in a way that everyone can leverage the data. So not just the business analysts, but you need to enable everyone through self-service. And that's the advancements that we're getting in new analytics capabilities that make mere mortals able to get to that data and do their analysis. >> And if I could interject, the challenge with the sort of traditional decision support world is you had maybe two or three people that were like, the data gods. You had to go through them, and they would get the analysis. And it's just, the agility wasn't there. >> Right. >> So you're trying to, democratizing that, putting it in the hands. >> Absolutely. >> Maybe the business user's not as much of an expert as the person who can build theCUBE, but they could find new use cases, and drive more value, right? >> Actually, from a developer, that needs to get access, and analytics infused into their applications, to the other end of the spectrum which could be, a marketing leader, a finance planner, someone who's planning budgets, supply chain planner. Right, so it's that whole spectrum, not only allowing them to tap into, and analyze the data and gain insights from it, but allow them to customize how they do it and do it in a more self-service way. So that's the notion of scale, on demand insights. It's really a cultural thing enabled through the technology. With that foundation, then you have the ability to start to infuse, where I think the real power starts to kick in here. So I mean, all that's kind of making your data ready for AI, right? Then you start to infuse machine learning, everywhere. And that's when you start to build these models that are self-learning, that start to automate the ability to get to these insights, and to the data. And uncover what has previously been unattainable, right? And that's where the whole thing starts to become automated and more real time and more intelligent. And that's where those models then allow you to do things you couldn't do before. With the data, they're saying they're not getting access to. And then of course, once you get the models, just because you have good models doesn't mean that they've been operationalized, that they've been embedded in applications, embedded in business process. That you have trust and transparency and explainability of what it's telling you. And that's that top tier of the ladder, is really about embedding it, right, into your business process in a way that you trust it. So, we have a systematic set of approaches to that, best practices. And of course we have the portfolio that would help you step up that ladder. >> So the fat middle of this bell curve, something kind of this maturity curve, is kind of the organize and analyze phase, that's probably where most people are today. And what's the big challenge of getting up that ladder, is it the algorithms, what is it? >> Well I think it, it clearly with most movements like this, starts with culture and skills, right? And the ability to just change the game within an organization.
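Hebner's treadmill example is the "organize" problem in miniature: several sources report different values for the same thing, duplicates creep in, and the business needs an agreed rule for which number is trusted before anyone analyzes it. A toy pandas sketch of that idea follows; the column names and the median-based rule are assumptions chosen purely for illustration, not anything prescribed by IBM's portfolio.

```python
# Toy illustration of the "organize" rung: reconcile sources that disagree and
# drop duplicate records before the data is treated as business-ready.
import pandas as pd

readings = pd.DataFrame({
    "workout_id": [101, 101, 101, 102, 102, 102, 102],
    "source": ["phone", "watch", "treadmill", "phone", "watch", "treadmill", "treadmill"],
    "distance_km": [5.2, 5.0, 4.8, 3.1, 3.0, 2.9, 2.9],
})

# Exact duplicate rows (same workout, source, and value) add nothing: drop them.
deduped = readings.drop_duplicates()

# Where sources still disagree, apply an agreed reconciliation rule, here the
# median across sources, and record the result as the trusted value.
trusted = (
    deduped.groupby("workout_id", as_index=False)["distance_km"]
    .median()
    .rename(columns={"distance_km": "distance_km_trusted"})
)
print(trusted)
```

The specific rule matters less than the fact that it is written down and applied consistently, which is what turns raw, conflicting feeds into the "business ready data" described above.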
But putting that aside, I think what's really needed here is an information architecture that's based in the agility of a cloud native platform, that gives you the productivity, and truly allows you to leverage your data, wherever it resides. So whether it's in the private cloud, the public cloud, on premise, dedicated, no matter where it sits, you want to be able to tap into all that data. 'Cause remember, the challenge with data is it's always changing. I don't mean the sources, but the actual data. So you need an architecture that can handle all that. Once you stabilize that, then you can start to apply better analytics to it. And so yeah, I think you're right. That is sort of the bell curve here. And with that foundation, that's when the power of infusing machine learning and deep learning and neural networks, I mean those kinds of AI technologies and models into it all, just takes it to a whole new level. But you can't do those models until you have those bottom tiers under control. >> Right, setting that foundation. Building that framework. >> Exactly. >> And then applying. >> What developers of AI applications, particularly those that have been successful, have told us pretty clearly, is that building the actual algorithms is not necessarily the hard part. The hard part is making all the data ready for that. And in fact I was reading a survey the other day of actual data scientists and AI developers and 60% of them said the thing they hate the most is all the data collection, data prep. 'Cause it's so hard. And so, a big part of our strategy is just to simplify that. Make it simple and accessible so that you can really focus on what you want to do and where the value is, which is building the algorithms and the models, and getting those deployed. >> Big challenge and hugely important, I mean IBM is a 100-year-old company that's going through its own digital transformation. You know, we've had Inderpal Bhandari on talking about how to essentially put data at the core of the company, it's a real hard problem for a lot of companies who were not born, you know, five or seven years ago. And so, putting data at that core and putting human expertise around it as opposed to maybe, having whatever as the core. Humans or the plant or the manufacturing facility, that's a big change for a lot of organizations. Now at the end of the day IBM, and IBM sells strategy, but the analytics group, you're in the software business so, what offerings do you have, to help people get there? >> Well in the collect step, it's essentially our hybrid data management portfolio. So think DB2, DB2 Warehouse, DB2 Event Store, which is about IOT data. So there's a set of, and that's where big data in Hadoop and all that with Wentworth's, that's where that all fits in. So building the ability to access all this data, virtualize it, do things like Queryplex, things of that nature, is where that all sits. >> Queryplex being the data virtualization capability. >> Yeah. >> Get to the data no matter where it is. >> Define a query and don't worry about where it resides, we'll figure that out for you, kind of thing, right? In the organize step, that is InfoSphere, so that's basically our unified governance and integration part of our portfolio. So again, that is collecting all this, taking the collected data and organizing it, and making sure you're compliant with whatever policies. And making it, you know, business ready, right? And so InfoSphere's where you should look to understand that portfolio better.
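The data virtualization idea described here, define a query and don't worry about where the data resides, amounts to sending one SQL statement to a single endpoint that fronts several physical stores. The sketch below is only a rough illustration of that pattern, not Queryplex's or Big SQL's actual API: the table names are invented, and `connect` stands in for whichever DB-API 2.0 driver a given virtualization endpoint exposes.

```python
# Illustrative only: one SQL statement spans tables that physically live in
# different systems (say, a Hadoop landing zone and a warehouse), and the
# virtualization layer works out where each piece resides. Table names are
# invented; `connect` is whatever DB-API 2.0 driver fronts the endpoint.

FEDERATED_SQL = """
    SELECT c.customer_id,
           c.segment,
           SUM(t.amount) AS total_spend
    FROM   warehouse.customers      AS c   -- assumed to live in the warehouse
    JOIN   landing.transactions_raw AS t   -- assumed to live in the Hadoop cluster
           ON t.customer_id = c.customer_id
    GROUP BY c.customer_id, c.segment
"""

def run_report(connect, dsn):
    conn = connect(dsn)  # one endpoint for the whole query, not one per data store
    try:
        cur = conn.cursor()
        cur.execute(FEDERATED_SQL)
        return cur.fetchall()
    finally:
        conn.close()
```

The application code stays the same whichever system each table actually lives in; that indirection is what "get to the data no matter where it is" means in practice.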
When you get into scale and analytics on demand, that's Cognos analytics, it is our planning analytics portfolio. And that's essentially our business analytics part of all this. And some data science tools like, SPSS, we're doing statistical analysis and SPSS modeler, if we're doing statistical modeling, things of that nature, right? When you get into the automate and the ML, everywhere, that's Watson Studio which is the integrated development environment, right? Not just for IBM Watson, but all, has a huge array of open technologies in it like, TensorFlow and Python, and all those kind of things. So that's the development environment that Watson machine learning is the runtime that will allow you to run those models anywhere. So those are the two big pieces of that. And then from there you'll see IBM building out more and more of what we already have. But we have Watson applications. Like Watson Assistant, Watson Discovery. We have a huge portfolio of Watson APIs for everything from tone to speech, things of that nature. And then the ability to infuse that all into the business processes. Sort of where you're going to see IBM heading in the future here. >> I love how you brought that home, and we talked about the ladder and it's more than just a PowerPoint slide. It actually is fundamental to your strategy, it maps with your offerings. So you can get the heads nodding, with the customers. Where are you on this maturity curve, here's how we can help with products and services. And then the other thing I'll mention, you know, we kind of learned when we spoke to some others this week, and we saw some of your announcements previously, the Red Hat component which allows you to bring that cloud experience no matter where you are, and you've got technologies to do that, obviously, you know, Red Hat, you guys have been sort of birds of a feather, an open source. Because, your data is going to live wherever it lives, whether it's on Prem, whether it's in the cloud, whether it's in the Edge, and you want to bring sort of a common model. Whether it's, containers, kubernetes, being able to, bring that cloud experience to the data, your thoughts on that? >> And this is where the big deal comes in, is for each one of those tiers, so, the DB2 family, infosphere, business analytics, Cognos and all that, and Watson Studio, you can get started, purchase those technologies and start to use them, right, as individual products or softwares that service. What we're also doing is, this is the more important step into the future, is we're building all those capabilities into one integrated unified cloud platform. That's called, IBM Cloud Private for data. Think of that as a unified, collaborative team environment for AI and data science. Completely built on a cloud native architecture of containers and micro services. That will support a multi cloud environment. So, IBM cloud, other clouds, you mention Red Hat with Openshift, so, over time by adopting IBM Cloud Private for data, you'll get those steps of the ladder all integrated to one unified environment. So you have the ability to buy the unified environment, get involved in that, and it all integrated, no assembly required kind of thought. Or, you could assemble it by buying the individual components, or some combination of both. So a big part of the strategy is, a great deal of flexibility on how you acquire these capabilities and deploy them in your enterprise. There's no one size fits all. We give you a lot of flexibility to do that. 
>> And that's a true hybrid vision, I don't have to have just IBM and IBM cloud, you're recognizing other clouds out there, you're not exclusive like some companies, but that's really important. >> It's a multi cloud strategy, it really is, it's a multi cloud strategy. And that's exactly what we need, we recognize that most businesses, there's very few that have standardized on only one cloud provider, right? Most of them have multiple clouds, and then it breaks up into dedicated, private, public. And so our strategy is to enable this capability, think of it as a cloud data platform for AI, across all these clouds, regardless of what you have. >> All right, Scott, thanks for taking us through the strategies. I've always loved talking to you 'cause you're a clear thinker, and you explain things really well in simple terms, a lot of complexity here but, it is really important as the next wave sets up. So thanks very much for your time. >> Great, always great to be here, thank you. >> All right, good to see you. All right, thanks for watching everybody. We are now going to bring it back to CubeNYC so, thanks for watching and we will see you in the afternoon. We've got the panel, the influencer panel, that I'll be running with Peter Burris and John Furrier. So, keep it right there, we'll be right back. (upbeat music)
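IBM Cloud Private for Data, as described above, packages the tiers of the ladder as containers and microservices on a Kubernetes-based platform that can sit on IBM Cloud, other clouds, or Red Hat OpenShift. As a loose illustration of what that cloud native footing looks like operationally, the official Kubernetes Python client can list such a platform's deployments; the namespace and label below are invented stand-ins, not actual product component names.

```python
# Loose sketch: list the containerized components of a data platform running
# on a Kubernetes-based cluster (OpenShift included). The namespace and label
# selector are hypothetical placeholders, not real product names.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig / login context
apps = client.AppsV1Api()

deployments = apps.list_namespaced_deployment(
    namespace="data-platform",                                  # assumed namespace
    label_selector="app.kubernetes.io/part-of=data-platform",   # assumed label
)

for d in deployments.items:
    ready = d.status.ready_replicas or 0
    print(f"{d.metadata.name}: {ready}/{d.spec.replicas} replicas ready")
```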
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Scott | PERSON | 0.99+ |
Scott Hebner | PERSON | 0.99+ |
80% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
60% | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
New York City | LOCATION | 0.99+ |
Python | TITLE | 0.99+ |
Inderpal Bhandari | PERSON | 0.99+ |
PowerPoint | TITLE | 0.99+ |
IBMs | ORGANIZATION | 0.99+ |
Peter Burris | PERSON | 0.99+ |
TensorFlow | TITLE | 0.99+ |
three people | QUANTITY | 0.99+ |
both | QUANTITY | 0.98+ |
Times Square | LOCATION | 0.98+ |
Watson | TITLE | 0.98+ |
about 80% | QUANTITY | 0.98+ |
Watson Assistant | TITLE | 0.98+ |
step one | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
MIT Sloan | ORGANIZATION | 0.97+ |
next decade | DATE | 0.97+ |
about 15% | QUANTITY | 0.97+ |
Watson Studio | TITLE | 0.97+ |
this week | DATE | 0.97+ |
Step two | QUANTITY | 0.96+ |
Watson Discovery | TITLE | 0.96+ |
two big pieces | QUANTITY | 0.96+ |
Red Hat | TITLE | 0.96+ |
about 81% | QUANTITY | 0.96+ |
Openshift | TITLE | 0.95+ |
CubeNYC | LOCATION | 0.94+ |
five | DATE | 0.94+ |
Queryplex | TITLE | 0.94+ |
first | QUANTITY | 0.93+ |
today | DATE | 0.92+ |
100 year old | QUANTITY | 0.92+ |
Wentworth | ORGANIZATION | 0.91+ |
Step three | QUANTITY | 0.91+ |
Change the Game: Winning With AI | TITLE | 0.9+ |
one cloud provider | QUANTITY | 0.9+ |
one thing | QUANTITY | 0.89+ |
DB2 | TITLE | 0.85+ |
each one | QUANTITY | 0.84+ |
seven years ago | DATE | 0.83+ |
OnPrem | ORGANIZATION | 0.83+ |
waves | EVENT | 0.82+ |
number one challenge | QUANTITY | 0.8+ |
Red Hat | TITLE | 0.78+ |
Offprem | ORGANIZATION | 0.77+ |
DB2 | ORGANIZATION | 0.76+ |
major | EVENT | 0.76+ |
major wave | EVENT | 0.75+ |
SPSS | TITLE | 0.73+ |
Moore's Law | TITLE | 0.72+ |
Cognos | TITLE | 0.72+ |
next | EVENT | 0.66+ |
Cloud | TITLE | 0.64+ |
around 2000 | QUANTITY | 0.64+ |
Hadoop | TITLE | 0.61+ |
early Hadoop days | DATE | 0.55+ |
them | QUANTITY | 0.51+ |
wave | EVENT | 0.5+ |
in | DATE | 0.49+ |
theCUBE | TITLE | 0.45+ |
theCUBE | ORGANIZATION | 0.42+ |
theCUBE Insights from VMworld 2018
(upbeat techno music) >> Live from Las Vegas, it's theCUBE covering VMworld 2018, brought to you by VMware and its ecosystem partners. >> Welcome back to theCUBE, I am Lisa Martin with Dave Vellante, John Furrier, Stu Miniman at the end of day two of our continuing coverage, guys, of VMworld 2018, huge event, 25+ thousand people here, 100,000+ expected to be engaging with the on demand and the live experiences. Our biggest show, right? 94 interviews over the next three days, two of them down. Let's go, John, to you, some of the takeaways from today from the guests we've had on both sets, what are some of the things that stick out in your mind? Really interesting? >> Well we had Michael Dell on so that's always a great interview, he comes on every year and he's very candid and this year he added a little bit more color commentary. That was great, it was one of my highlights. I thought the keynote that Sanjay Poonen did, he had an amazing guest, Nobel Peace Prize winner, the youngest ever, and her story was so inspirational and I think that sets a tone for VMware putting a cultural stake in the ground around tech for good. We've done a lot of AI for good with Intel and there's always been these initiatives but I think there's now a cultural validation that people generally want to work for and buy from companies that are mission driven, and mission driven is now part of it and people can be judged on that front so it's good to see VMware get some leadership there and put the stake in the ground. I thought that was the big news today, at least from my standpoint. The rest were like point product announcements. Sanjay Poonen went into great detail on that. Pat Gelsinger also came on, another great highlight and again we didn't have a lot of time, he was running a bit late, he had a tight schedule but it shows how smart he is, he's really super technical and he actually understands at a root level what's going on so he's actually a great CEO right now, the financial performance is there and he's also very technical, and I think it encapsulates all of it that Dell Technologies, under Michael Dell, he's making so much more money, he's going to be richer and richer. (laughing) He took an entrepreneurial bet, it wasn't hurting at the time but Dell was kind of boring, Dave. I wouldn't call it like an innovative company at the time when they were public using the 90 day shot clock. They had some things going on but they were a hardware company, a supplier to IT footprints-- >> Whoa, whoa, they were 60 billion dollars in revenue and a 20 billion dollar market cap, so something was broken. >> Well I mean it was working numbers wise but he seemed-- >> No that's opposite, a 20 billion dollar value on 60 billion of revenue is, you're sort of a failure, so anyway, at the time. >> Market conditions aside, right, at the time, he seemed like he wanted to do something entrepreneurial and the takeaway from my interview with him, our interview with him, was he took an entrepreneurial bet, put his own cash on the table and it's paying off, that horse is coming in. He's going to make more money on this transaction and it takes EMC out of the game, folds it into the operations, it really is going to be, I think, a financial success story if market conditions continue to be the way they are. Michael Dell will go down as having made a great financial maneuver and he'll be in the top epsilon of deals. >> The story people might forget is that Carl Icahn tried to take the company away from him.
Michael Dell beat the great Carl Icahn, which doesn't happen often. Why did Carl Icahn want to take Dell private? Because he knew he could make a boatload of money off of it and Michael Dell said, "No way you're taking my company. I'm going to do my thing and change the industry." >> He's going to have 90% voting control with Silver Lake Partners when the deal is all said and done, and taking a company private and executing the financial engineering plus execution is really hard to do, look at Elon Musk in the news today. He's trying to take Tesla private, he got his butt handed to him. Now he's saying, "No, we're going to stay public." (laughing) >> Wait, guys, are you saying Michael, after he gets all this money from VMware that it will help them go public, he's not going to sell off VMware or get rid of that, right? >> Well that's a joke that he would sell VMware, I mean-- >> Unless the cash is going to be good? >> No, he won't do it. >> I don't think it'll happen. I mean, maybe some day he sells some of the portion of it but you're not going to give up control of it, why would he? It's throwing off so much cash. He's got Silver Lake as a private equity company, they understand this inside and out. I mean this transaction goes down in history as one of the greatest trades ever. >> Yeah. >> Let me ask you guys a question, because I think it's one we brought up in the interview, because at that time, the pundits, we were actually right on this deal. We were very bullish on it, and we actually analyzed it. You guys did a good job at Wikibon and we on theCUBE pretty much laid out what happened. He executed it, we put the risks out there, but at the time people were saying, "This is a bad deal, EMC." The current state of IT at that time looked like it was dismal but the market forces that changed were cloud, and so what were those sideways impact points that no one understood, that really helped him lift this up? What's your thoughts, Dave, on that? >> First of all the desktop business did way better than anybody thought it would, which is amazing, and actually EMC did pretty poorly for a while and so that was kind of a head fake. And then as we knew, VMware crushed it and crushed it even more than anybody expected so that threw off so much cash they were able to deliver, they did Pivotal, they did a Pivotal IPO, sold some software assets. I mean basically Michael Dell and his team did everything they said they were going to do and it's worked out, as he said today, even better than they possibly thought. >> Well and the commentary I'd give here is when the acquisition of EMC by Dell happened, the big turn we had is the impact of cloud and we said, "Well, okay they've got VMware over there and they've got Pivotal, but Dell's just going to be a boring infrastructure company with server, network and storage." The message that we heard at Dell World, and maturing even more here, is that this is a portfolio of families. Yes, VMware's a big piece of it, NSX and the networking, but Pivotal with PKS, all of those tie in to what Dell's selling. Every time they're selling VxRail, you know that has a big VMware piece. They do the networking piece that extends across multi clouds, so Dell has a much better multi cloud story than I expected them to have when they bought EMC.
I mean as great as it is, you got to gain share in that business if you want to keep winning, number one. Number two is, the big question I have is can the core of Dell EMC continue to innovate or will it just make incremental improvements, have to do acquisitions to do innovation, inorganic acquisitions, and end up with more stovepipes? That's always been, Stu used to work there, that was always EMC's biggest challenge. Jeff Clark came in and said, "Okay, we're going to rationalize the portfolio." That has backlash as customers say, "Well wait a minute, does that mean you're not going to support my products?" No, no, we're going to support your products. So they've got to continue to innovate. As I say, VMware, because of how much cash it throws off, it's 50% of the company's profits, hides a lot of those exposures. >> And if VMware takes a turn, if market conditions change, the debt looming is exposed so again, the game's not over for Dell. He can see the finish line, but. (laughing) >> Buy low, sell high, guess who's selling right now? >> So a lot of financial impact, continued innovation but at the end of the day, guys, this is all about impacting customers' businesses. Not just from we've got to enable them to be successful in this multi cloud era, that's the norm today. They need to facilitate successful digital transformations, business outcomes, but they also have VMware, Dell EMC, Dell Technologies, great power to help customers transform their cultures. I'd love to get perspective from you guys because I love the voice of the customer, what are some of your favorite Dell EMC, VMware, partner, customer stories that you've heard the last couple days that really articulate the value of this financially successful company that they're achieving? >> Well the first thing I'll say before we get to the customer stories is on your point about what VMware's doing, is they're a technology, Robin Matlock, the CMO, was on theCUBE talking about they're a technology company, they have the hands on labs, they're a very geeky audience, which we love. But they have to get leadership on the product side, they got to maintain the R and D, they got to have best in class technical products that actually are relevant. You look at companies like Tintri that went bankrupt, great technology, cul-de-sac market. There's no market there, the world's going cloud. So to me VMware has to start pumping out really strong products and technologies that the customers are going to buy, right? (laughing) >> In conjunction with the customer to help co-develop what the customers need. >> So I was talking to a customer and he said, "Look, I'm 10 years behind where the cloud guys are with Amazon so all I want is VMware to make my life easier, continue to cut my costs. I like the way I'm operating, I just get constant pressure to cut cost, so if they keep doing that, I'm going to stay with them for a long, long time." Pete Townsend said it best, companies like VMware, Dell EMC, they move at the speed of the CIO and as long as they can move at the speed of the CIO, I've said this a million times, the rich get richer and it's why competent management that's led by founders like Larry Ellison, like Michael Dell, continues to do well in this industry. >> And Andy Jassy technically, I would say, a founder of AWS because he started it. >> Absolutely.
>> A key, the other thing I would also say from a customer, we hear a lot of customers, I won't name names because a lot of our data's in hallway conversations and at night when we go out and get the real stories. On theCUBE it's mostly, oh we've been very successful at VM, we use virtualization, blah, blah, blah and it's an IT story, but the customers in the hallways that are off the record are saying essentially this, I'm paraphrasing, look it, we have an operation to run. I love this cloud stuff and I'd love to just blink my fingers and be in the cloud and just get rid of all this and operate at a level of cloud native, I just can't. I can't get there. They see Amazon's relationship with VMware as a bridge to the future and it takes away a lot of cognitive dissonance around the feelings around VMware's lack of cloud, if you will. In this case, now that's satisfied with the AWS deal and they're focused on operations on premises and how to get their apps more cloud-like, modernized, so a lot of the blocking and tackling of the customer is I got virtualization and that's great but I don't want to miss out on the next lever of innovation. Okay, I'm looking at it, going slow, but no one's instantly migrating to the cloud. >> No way, no way. >> They're either born in the cloud or you're on migration schedules now, really evaluating the financial impact, economic impact, headcount impact of cloud. That's the reality of the cloud. >> You got to throw a flag on some of that messaging of how easy it is to migrate. I mean it's just not that easy. I've talked to customers that said, "Well we started it and we just kind of gave up. There was no point in it. The new stuff we're going to do in the cloud, but we're not going to migrate all of our apps to the cloud, it just makes no sense, there's no business case for it." >> This is where the NSX and containers and Kubernetes bet is big, I think, I think if NSX can connect the clouds with some sort of interoperable layer for whatever workloads are going to move on either Amazon or the clouds, that's good. If they want to get the developers off virtualization, into a new drug, if you will, it's going to be services, microservices, Kubernetes, because you can throw containers around those old workloads, modernize with the new stuff without killing the old, and Stu and I heard this clearly at the CNCF and the Linux Foundation, that this has changed the mindset because you don't have to kill the old to bring in the new. You can bring in the new, containerize the old and manage on your speed of the CIO. >> And that's Amazon's bet isn't it? I mean, look, even Sanjay said, if you go back five, six years, the original re:Invent, that was sweep the floor, bring it all into the cloud? I think that's in Amazon's DNA. I mean ultimately that's their vision. That's what they want to have happen and the way they get there is how you just described it, John. >> That's where this partnership between Amazon and VMware is so important because, right, Amazon has a lot of the developers but needs to be able to get deeper into the enterprise, and VMware, starting to make some progress with the developers, they've got a code initiative, they've got all of these cool projects that they announced with everything from serverless and Kubernetes and many others, Edge going to be a key use case there, but you know, VMware is not, this is not the developer show.
Most of the conversations that I had with customers, we're talking IT things, I mean customers doing some cool things but it's about simplifying in my environment, it's about helping operations. Most of the conversations are not about this cool new microservices, building these things out. >> Cisco really is the only legacy, traditional enterprise company that's crushing it with developers. You give IBM some chops, too, but I wouldn't say they're crushing it. We saw that at Cisco Live, Cisco is doing a phenomenal job with developers. >> Well the thing about the cloud, one thing I've been pointing out, observation that I have is if you look at the future of the cloud and you can look for metaphors and/or real examples, I think Amazon Web Services, obviously we know them well, but Google Cloud to me is a picture of the future. Not in the sense of what they have for the customers today, it's the way they've run their business from day one. They have developers and they have SREs, Site Reliability Engineers. This VMworld community is going down two paths. Developers are going to be rapidly iterating on real apps and operators who are going to be running systems. That's network storage, all integrated. That's like an SRE at Google. Google's running massive scale and they perfected it, hence Kubernetes, hence some of the tools coming in to services like Istio and things that we're seeing in the Linux Foundation. To me that's the future model, it's an operator and set of developers. Whoever can make that easy, completely seamless, is the winner of it all. >> And the linchpin, a linchpin, maybe not the linchpin, but a linchpin is still the database, right? We've seen that with Oracle. Why is Amazon going so hard after the database? I mean it's blatantly obvious what their strategy is. >> Database is the hill that everyone is trying to take. Capture the hill, you get the high ground with the database. >> Come on Dave, when you used to do the financial models of how much money is spent by the enterprise, that database was a big chunk. We've seen the erosion of lots of licensing out there. When I talked to Microsoft, they're like, pushing a lot of open source, they're going to cloud. Microsoft licensing isn't as much. VMware licensing is something that customers would like to shrink over time but database is even bigger. >> It's a strategic fulcrum, obviously Oracle has it. Microsoft clearly has it with SQL Server. IBM, a big part of IBM's success to this day, is DB2 running on mainframe. (laughing) So Amazon wants a piece of that action, they understand to be a major player in this business you have to have database infrastructure. >> I mean costs are going down, it's going to come down to economics. At the end of the day the operating models, as I said some things about DB2 on mainframe, the bottom line's going to come down to the cost numbers to run it, the value and cost expense involved in running the tech, that's going to be the ultimate way that things are either going to be cleared out or replaced or expanded, so the bottom line is it's going to be a cost equation at that level and then the upside's going to be revenue. >> And just a great thing for VMware, since they don't own the application, when they do things like RDS in their environment they are freeing up dollars that customers are then going to be more likely to want to spend with VMware. >> Great point I want to make real quick, three things we've been watching this week. Is the Amazon VMware deal a one way trip to the cloud?
I think it's clear not in the near term, anyway. And the second is what about the edge? The edge to me is all about data, it's like the wild, wild west. It's very unclear that there's a winner there but there's a new type of cloud emerging. And three is the Dell structure. We asked Pat, we asked VMware Ray O'Farrell, we asked Michael, if that 11 billion dollar special dividend was going to impact VMware's ability to fund it's future? Consistent answer there, no. You know, we'll see, we'll see. >> I mean what are they going to say? Yeah, that really limits my ability to buy companies, on theCUBE? No, that's the messaging so of course, 11 billion dollars gone means they can't do MNA with the cash, that means, yeah it's going to be R and D, what does that mean? Investment, so I think the answer is yes it does limit them a little bit. >> Has to. >> It's cash going out the door. >> But VMware just spent, it is rumored, around 500 million dollars for CloudHealth Technologies, Dave, Boston based company, with about 200 people You know, hey, have a billion-- >> They're going to put back a dividend anyway and do stock buybacks but I'm not sure 11 out of the 13 billion is what they would choose to do that for, so going forward, we'll see how it all plays out, obviously. I think, Floyer wrote about this, more has to go toward VMware, less toward-- >> I think it's the other way around. >> Well I think it's really good that we have one more day tomorrow. >> I think it's a one way trip to the cloud in a lot of instances, I think a lot of VMware customers are going to go off virtualization, not hypervisor and end up being in the cloud most of the business. It's going to be interesting, I think the size of customers that Amazon has now, versus VMware is what? Does VMware have more customers than Amazon right now? >> It's pretty close, right? VMware's 500,000? >> 500,000 for VMware. >> And Amazon's-- >> Over a million. >> Are they over a million, really? >> Yeah. >> A lot of smaller customers, but still. >> Yeah. >> Customer's a customer. >> But VMware might have bigger customers, see that's-- >> No question the ASP is higher, but-- >> It's not conflict, I'm just thinking like cloud is natural, right? Why wouldn't you want to use the cloud, right? I mean. >> So guys-- >> So the debate continues. >> Exactly. Good news is we have more time tomorrow to talk more about all this innovation as well as see more real world examples of how VMware is going to be enabling tech for good. Guys, thanks so much for your commentary and letting me be a part of the wrap. >> Thank you. >> Thanks, Lisa. >> Looking forward to day three tomorrow. For Dave, Stu and John, I'm Lisa Martin. You've been watching our coverage of day two VMworld 2018. We look forward to you joining us tomorrow, for day three. (upbeat techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michael | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Jeff Clark | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Sanjay Poonen | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
Pete Townsend | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
60 billion | QUANTITY | 0.99+ |
50% | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Robin Matlock | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
CloudHealth Technologies | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
Carl Icahn | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Lennox Foundation | ORGANIZATION | 0.99+ |
13 billion | QUANTITY | 0.99+ |
Sanjay | PERSON | 0.99+ |
CNCF | ORGANIZATION | 0.99+ |
Silver Lake Partners | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
90 day | QUANTITY | 0.99+ |
94 interviews | QUANTITY | 0.99+ |
Floyer | PERSON | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Pandit Prasad, IBM | DataWorks Summit 2018
>> From San Jose, in the heart of Silicon Valley, it's theCUBE. Covering DataWorks Summit 2018. Brought to you by Hortonworks. (upbeat music) >> Welcome back to theCUBE's live coverage of DataWorks here in sunny San Jose, California. I'm your host Rebecca Knight along with my co-host James Kobielus. We're joined by Pandit Prasad. He works in analytics projects, strategy, and management at IBM Analytics. Thanks so much for coming on the show. >> Thanks Rebecca, glad to be here. >> So, why don't you just start out by telling our viewers a little bit about what you do in terms of the Hortonworks relationship and the other parts of your job. >> Sure, as you said I am in Offering Management, which is also known as Product Management for IBM, and I manage the big data portfolio from an IBM perspective. I was also working with Hortonworks on developing this relationship, nurturing that relationship, so it's been a year since the Northsys partnership. We announced this partnership exactly last year at the same conference. And now it's been a year, so this year has been a journey in aligning the two portfolios together. Right, so Hortonworks had HDP and HDF. IBM also had similar products, so we have, for example, Big SQL, Hortonworks has Hive, so how do Hive and Big SQL align together. IBM has Data Science Experience, where does that come into the picture on top of HDP, so it means before this partnership if you look into the market, it has been you sell Hadoop, you sell a SQL engine, you sell data science. So what this year has given us is more of a solution sell. Now with this partnership we go to the customers and say here is an end-to-end experience for you. You start with Hadoop, you put more analytics on top of it, you then bring Big SQL for complex queries and federation and visualization stories, and then finally you put data science on top of it, so it gives you a complete end-to-end solution, the end-to-end experience for getting the value out of the data. >> Now IBM a few years back released a Watson data platform for team data science with DSX, Data Science Experience, as one of the tools for data scientists. Is Watson data platform still the core, I call it dev ops for data science and maybe that's the wrong term, that IBM provides to market or is there sort of a broader dev ops framework within which IBM goes to market these tools? >> Sure, Watson data platform one year ago was more of a cloud platform and it had many components of it, and now we are getting a lot of components on to the (mumbles) and Data Science Experience is one part of it, so Data Science Experience... >> So Watson Analytics as well for subject matter experts and so forth. >> Yes. And again Watson has a whole suite of side business based offerings, Data Science Experience is more of a particular aspect of the focus, specifically on the data science, and that's now available on-prem and now we are building this on-prem stack, so we have HDP, HDF, Big SQL, Data Science Experience and we are working towards adding more and more to that portfolio. >> Well you have a broader reference architecture and a stack of solutions, AI and Power and so forth, for more of the deep learning development. In your relationship with Hortonworks, are they reselling more of those tools into their customer base to supplement, extend what they already resell DSX or is that outside of the scope of the relationship?
>> No it is all part of the relationship, these three have been the core of what we announced last year and then there are other solutions. We have the whole governance solution, right, so again it goes back to the partnership, HDP brings with it Atlas. IBM has a whole suite of governance portfolio including the governance catalog. How do you expand the story from being a Hadoop-centric story to an enterprise data lake story, and then now we are taking that to the cloud, that's what Truata is all about. Rob Thomas came out with a blog yesterday morning talking about Truata. If you look at it, it is nothing but a governed data lake hosted offering, if you want to simplify it. That's one way to look at it; it caters to the GDPR requirements as well. >> For GDPR, for the IBM Hortonworks partnership, is the lead solution for GDPR compliance Hortonworks Data Steward Studio, or is it any number of solutions that IBM already has for data governance and curation, or is it a combination of all of that in terms of what you, as partners, propose to customers for soup to nuts GDPR compliance? Give me a sense for... >> It is a combination of all of those, so it has HDP, it has HDF, it has Big SQL, it has Data Science Experience, it has IBM governance catalog, it has IBM data quality and it has a bunch of security products, like Guardium, and it has some new IBM proprietary components that are very specific towards data (cough drowns out speaker) and how do you deal with the personal data and sensitive personal data as classified by GDPR. I'm supposed to query some high level information but I'm not allowed to query deep into the personal information, so how do you block those queries, how do you understand those, these are not necessarily part of Data Steward Studio. These are some of the proprietary components that are thrown into the mix by IBM. >> One of the requirements that is not often talked about under GDPR, Ricky of Hortonworks got into it a little bit in his presentation, was the notion that the requirement that if you are using an EU citizen's PII to drive algorithmic outcomes, that they have the right to full transparency. It's the algorithmic decision paths that were taken. I remember IBM had a tool under the Watson brand that wraps up a narrative of that sort. Is that something that IBM still, it was called Watson Curator a few years back, is that a solution that IBM still offers, because I'm getting a sense right now that Hortonworks has a specific solution, not to say that they may not be working on it, that addresses that side of GDPR, do you know what I'm referring to there? >> I'm not aware of something from the Hortonworks side beyond the Data Steward Studio, which offers basically identification of what some of the... >> Data lineage as opposed to model lineage. It's a subtle distinction. >> It can identify some of the personal information and maybe provide a way to tag it and hence, mask it, but the Truata offering is the one that is bringing some new research assets; after GDPR guidelines became clear, then they got into how do we cater to those requirements. These are relatively new proprietary components, they are not even being productized, that's why I am calling them proprietary components that are going into this hosting service. >> IBM's got a big portfolio so I'll understand if you guys are still working out the positioning. Rebecca go ahead. >> I just wanted to ask you about this new era of GDPR.
The last Hortonworks conference was sort of before it came into effect and now we're in this new era. How would you say companies are reacting? Are they in the right space for it, in the sense that they're really still understanding the ripple effects and how it's all going to play out? How would you describe your interactions with companies in terms of how they're dealing with these new requirements? >> They are still trying to understand the requirements and interpret the requirements, coming to terms with what that really means. For example I met with a customer and they are a multi-national company. They have data centers across different geos and they asked me, I have somebody from Asia trying to query the data, so the query should go to Europe, but the query processing should not happen in Asia, the query processing all should happen in Europe, and only the output of the query should be sent back to Asia. You won't be able to think in these terms before the GDPR guidance era. >> Right, exceedingly complicated. >> Decoupling storage from processing enables those kinds of fairly complex scenarios for compliance purposes. >> It's not just about the access to data, now you are getting into where the processing happens, where the results are getting displayed, so we are getting... >> Severe penalties for not doing that so your customers need to keep up. There was an announcement at this show, at Dataworks 2018, of an IBM Hortonworks solution, IBM Hosted Analytics with Hortonworks. I wonder if you could speak a little bit about that, Pandit, in terms of what's provided, is it a subscription service? If you could tell us what subset of IBM's analytics portfolio is hosted for Hortonworks customers? >> Sure, as you said, it is a hosted offering. Initially we are starting off with a base offering with three products, it will have HDP, Big SQL, that is IBM DB2 Big SQL, and DSX, Data Science Experience. Those are the three solutions, again as I said, it is hosted on IBM Cloud, so customers have a choice of different configurations they can choose, whether it be VMs or bare metal. I should say this is probably the only offering, as of today, that offers a bare metal configuration in the cloud. >> It's geared to data scientists and developers and machine-learning models, they'll build the models and train them in IBM Cloud, but in a hosted HDP in IBM Cloud. Is that correct? >> Yeah, I would rephrase that a little bit. There are several different offerings on the cloud today and we can think about them, as you said, for ad-hoc or ephemeral workloads, also geared towards low cost. You think about this offering as taking your on-prem data center experience directly onto the cloud. It is geared towards very high performance. The hardware and the software, they are all configured, optimized for providing high performance, not necessarily for ad-hoc workloads or ephemeral workloads, they are capable of handling massive workloads, sticky workloads, not meant for, I turn on this massive performance computing power for a couple of hours and then switch it off, but rather, I'm going to run these massive workloads as if it is located in my data center, that's number one. It comes with the complete set of HDP. If you think about what is currently in the cloud, you have Hive and HBase, the SQL engines and the storage are separate, security is optional, governance is optional. This comes with the whole enchilada. It has security and governance all baked in.
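Prasad's multi-geo example, where a request from Asia must be processed in Europe and only the output travels back, can be pictured as a small routing check sitting in front of the query engines. This is a hypothetical sketch for illustration only: the region names, the policy table, and the `run_in_region` helper are assumptions, not part of any IBM or Hortonworks offering.

```python
# Hypothetical sketch of the residency rule described above: datasets tagged as
# GDPR-scoped are processed only in an EU region, and only the result set is
# returned to the requesting region. All names here are illustrative.

RESIDENCY_POLICY = {
    "gdpr_personal_data": "eu-de",  # GDPR-scoped data must be processed in the EU
}

def run_in_region(region, sql):
    # Placeholder: a real deployment would submit `sql` to the engine in `region`.
    print(f"[dispatch] region={region} sql={sql.strip()}")
    return []

def execute(requesting_region, dataset_tag, sql):
    required_region = RESIDENCY_POLICY.get(dataset_tag, requesting_region)
    # Processing is pinned to the region the policy demands, no matter where the
    # request originated; only the output crosses the regional boundary.
    return run_in_region(required_region, sql)

# A user in Asia asks for an aggregate over EU personal data:
execute("ap-syd", "gdpr_personal_data",
        "SELECT country, COUNT(*) FROM transfers GROUP BY country")
```

The point is less the specific mechanism than where the policy lives: in front of the engines, which is exactly what decoupling storage from processing makes practical.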
It provides the option to use Big SQL, because once you get on Hadoop, the next experience is I want to run complex workloads. I want to run federated queries across Hadoop as well as other data storage. How do I handle those, and then it comes with Data Science Experience, also configured for best performance and integrated together. As a part of this partnership, I mentioned earlier that we have progressed towards providing this story of an end-to-end solution. The next steps of that are, yeah I can say that it's an end-to-end solution, but do the products look and feel as if they are one solution? That's what we are getting into and I have featured some of those integrations. For example Big SQL, an IBM product, we have been working on baking it in very closely with HDP. It can be deployed through Ambari, it is integrated with Atlas and Ranger for security. We are improving the integrations with Atlas for governance. >> Say you're building a Spark machine learning model inside DSX on HDP within IH (mumbles) IBM hosting with Hortonworks on HDP 3.0, can you then containerize that machine learning Spark model and then deploy it into an edge scenario? >> Sure, first was Big SQL, the next one was DSX. DSX is integrated with HDP as well. We could run DSX workloads on HDP before, but what we have done now is, if you want to run the DSX workloads, I want to run a Python workload, I need to have Python libraries on all the nodes that I want to deploy. Suppose you are running a big cluster, a 500-node cluster. I need to have Python libraries on all 500 nodes and I need to maintain the versioning of it. If I upgrade the versions then I need to go and upgrade and make sure all of them are perfectly aligned. >> In this first version will you be able to build a Spark model and a TensorFlow model and containerize them and deploy them? >> Yes. >> Across a multi-cloud and orchestrate them with Kubernetes to do all that meshing, is that a capability now or planned for the future within this portfolio? >> Yeah, we have that capability demonstrated in the pedestal today, so that is a new integration. We can run virtual, we call it a virtual Python environment. DSX can containerize it and run it against the data that's in the HDP cluster. Now we are making use of both the data in the cluster, as well as the infrastructure of the cluster itself, for running the workloads. >> In terms of the layers stacked, is it also incorporating the IBM distributed deep-learning technology that you've recently announced? Which I think is highly differentiated, because deep learning is increasingly becoming a set of capabilities that are across a distributed mesh, playing together as if they're one unified application. Is that a capability now in this solution, or will it be in the near future? DDL, distributed deep learning? >> No, we have not yet. >> I know that's on the AI Power platform currently, gotcha. >> It's what we'll be talking about at next year's conference. >> That's definitely on the roadmap. We are starting with the base configuration of bare metal and VM configurations, the next one is, depending on how the customers react to it, definitely we're thinking about bare metal with GPUs optimized for TensorFlow workloads. >> Exciting, we'll be tuned in in the coming months and years, I'm sure you guys will have that. >> Pandit, thank you so much for coming on theCUBE. We appreciate it. I'm Rebecca Knight for James Kobielus. We will have more from theCUBE's live coverage of Dataworks, just after this.
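The packaging question in this exchange, train a Spark model in DSX and carry it, together with its Python dependencies, into a container instead of installing libraries on all 500 nodes, can be sketched roughly as below. It is an illustrative example rather than IBM's or Hortonworks' actual tooling; the data path, feature columns, and model location are made up.

```python
# Rough sketch: train a Spark ML pipeline the way a DSX-style notebook might,
# then persist it so the saved artifact (plus a pinned Python environment) can
# be baked into a container image for deployment. Paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("model-packaging-sketch").getOrCreate()

# Assume a prepared table already sits in the cluster (hypothetical path).
df = spark.read.parquet("/data/transfers/featurized")

assembler = VectorAssembler(
    inputCols=["amount", "velocity_24h", "country_risk"],  # illustrative features
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(df)

# Persisting the fitted pipeline is what makes it portable: a container build
# copies in this artifact and a frozen dependency list, so worker nodes never
# need the libraries preinstalled.
model.write().overwrite().save("/models/fraud_demo/v1")
```

Shipping the environment with the workload, rather than maintaining library versions on every node, is the design choice the virtual Python environment discussion above is driving at.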
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rebecca | PERSON | 0.99+ |
James Kobielus | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Asia | LOCATION | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
San Jose | LOCATION | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Pandit | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Python | TITLE | 0.99+ |
yesterday morning | DATE | 0.99+ |
Hortonworks | ORGANIZATION | 0.99+ |
three solutions | QUANTITY | 0.99+ |
Ricky | PERSON | 0.99+ |
Northsys | ORGANIZATION | 0.99+ |
Hadoop | TITLE | 0.99+ |
Pandit Prasad | PERSON | 0.99+ |
GDPR | TITLE | 0.99+ |
IBM Analytics | ORGANIZATION | 0.99+ |
first version | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
one year ago | DATE | 0.98+ |
Hortonwork | ORGANIZATION | 0.98+ |
three | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
DSX | TITLE | 0.98+ |
Formworks | ORGANIZATION | 0.98+ |
this year | DATE | 0.98+ |
Atlas | ORGANIZATION | 0.98+ |
first | QUANTITY | 0.98+ |
Granger | ORGANIZATION | 0.97+ |
Gaurdium | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
Data Steward Studio | ORGANIZATION | 0.97+ |
two portfolios | QUANTITY | 0.97+ |
Truata | ORGANIZATION | 0.96+ |
DataWorks Summit 2018 | EVENT | 0.96+ |
one solution | QUANTITY | 0.96+ |
one way | QUANTITY | 0.95+ |
next year | DATE | 0.94+ |
500 nodes | QUANTITY | 0.94+ |
NTN | ORGANIZATION | 0.93+ |
Watson | TITLE | 0.93+ |
Hortonworks | PERSON | 0.93+ |
Arvind Krishna, IBM | Red Hat Summit 2018
>> [Announcer] 18, brought to you by Red Hat. >> Well, welcome back everyone. This is theCUBE's exclusive coverage here in San Francisco, California, for Red Hat Summit 2018. I am John Furrier, co-host of theCUBE with my analyst co-host this week, John Troyer, co-founder of the TechReckoning advisory services. And our next guest is Arvind Krishna, who is the Senior Vice President of Hybrid Cloud at IBM and Director of IBM Research. Welcome back to theCUBE, good to see you. >> Thanks John and John, great to meet you guys here. >> You can't get confused here, you've got two Johns here. Great to have you on because you guys have been doing some deals with Red Hat, obviously the leader in open source. You guys are one of them as well; contributing to Linux is well documented in the IBM history books, on your role and relationship to Linux, so check, check. But you guys are doing a lot of work with cloud, in a way that, frankly, is very specific to IBM but also has a large industry impact, not like the classic cloud. So I want to tie the knot here and put that together. So first I got to ask you, take a minute to talk about why you're here with Red Hat, what's the update with IBM with Red Hat? >> Great John, thanks for giving me the time. I'm going to talk about it in two steps: One, I'm going to talk about a few common tenets between IBM and Red Hat. Then I'll go from there to the specific news. So for the context, we both believe in Linux, I think that's easy to state. We both believe in containers, I think that is the next thing to state. We'll come back and talk about containers because, in this world, containers are linked to Linux, containers are linked to these technologies called Kubernetes. Containers are linked to how you make workloads portable across many different environments, both private and public. Then I go on from there to say that we both believe in hybrid. Hybrid meaning that people want the ability to run their workload wherever they want. Be it on a private cloud, be it on a public cloud. And do it without having to rewrite everything as you go across. Okay, so let's establish, those are the market needs. So then you come back and say: IBM has a great portfolio of middleware, names like WebSphere and DB2, and I can go on and on. And Red Hat has a great footprint of Linux in the enterprise. So now you say, we've got the market need of hybrid. We've got these two things, which between them are tens of millions, maybe hundreds of millions of endpoints. How do you make that need get fulfilled by this? And that's what we just announced here. So we announced that IBM Middleware will run containerized on Red Hat containers, on Red Hat Enterprise Linux. In addition, we said IBM Cloud Private, which is the ability to bring all of the IBM Middleware in a sort of a cloud-friendly form. Right, you click and you install it, it keeps itself up, it doesn't go down, it's elastic, in a set of technologies we call IBM Cloud Private, running in turn on the Red Hat OpenShift Container service on Red Hat Linux. So now for the first time, if you say I want private, I want public, I want to go here, I want to go there, you have a complete certified stack that is complete. I think I can say we're unique in the industry in giving you this. >> And this is where, kind of where, the fruit comes off the tree for you guys. Because we've been following you guys for years, and everyone's: Where's the cloud strategy? And first of all, it's not, you don't have a cloud strategy, you have cloud products.
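As a rough illustration of the "you click and you install it, it keeps itself up, it doesn't go down, it's elastic" claim, the sketch below describes a containerized middleware workload in plain Kubernetes terms, the way OpenShift would run it: multiple replicas, a liveness probe so failed containers get restarted, and an autoscaler for elasticity. This is not the IBM Cloud Private install path the speakers describe (which is catalog driven); the image name, namespace, port, and health endpoint are hypothetical.

```python
# Hedged illustration: a self-healing, elastic deployment of a containerized
# middleware image, using the Kubernetes Python client. Image, namespace and
# health endpoint are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

app_name = "liberty-middleware"
labels = {"app": app_name}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name=app_name),
    spec=client.V1DeploymentSpec(
        replicas=3,  # several copies, so one failure doesn't take the app down
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name=app_name,
                    image="registry.example.com/middleware/liberty-app:1.0",
                    ports=[client.V1ContainerPort(container_port=9080)],
                    # Failed health checks get the container restarted:
                    # the "it keeps itself up" part of the claim.
                    liveness_probe=client.V1Probe(
                        http_get=client.V1HTTPGetAction(path="/health", port=9080),
                        initial_delay_seconds=30,
                        period_seconds=10,
                    ),
                )
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="middleware", body=deployment)

# The "it's elastic" part: scale replicas between 3 and 10 on CPU load.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name=app_name),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name=app_name),
        min_replicas=3,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="middleware", body=hpa)
```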
Right, so you have delivered the goods. You got the, so just to replay: the market need we all know is the hybrid cloud, multi-cloud, choice, et cetera, et cetera. >> You take Red Hat's footprint, your capabilities, your combined install base, it's foundational. >> [Arvind] Right >> So, nothing needs to change. There's no lift and shift, there's no rip and replace, >> you can, it's out there, it's foundational. Now on top of it is where the action is. That's where you're kind of getting at, right? >> That's correct, so we can go into somebody running, let's say, a massive online banking application, or they're running a reservation system. It's using technologies from us, it's using Linux underneath, and today it's all a bunch of piece parts, you have huge, complex stuff, it's all hard-wired and rigidly nailed down to the floor in a few places, and now you can say: Hey, I'll take the application. I don't have to rewrite the application. I can containerize it, I can put it here. And that same app now begins to work, but in a way that's a lot more fluid and elastic. Or the other way: I want to do a bit more work. I want to expose a bit of it up as microservices. I want to insert some AI. You can go do that. You want to fully make it microservices enabled, to be able to make it into little components, >> and ultimately you can do that. >> So you can take it in sort of bite-size chunks and go from one to the other, at the pace that you want. >> [John F.] Now that's game changing. >> Yeah, that's what I really like about this announcement. It really brings best of breed together. You know, there is a lot of talk about containers and legacy, and we've been talking about what goes where, and do you have to break everything up, like you were just saying. But the announcement today, WebSphere, the battle-tested, huge enterprise-scale component, DB2, those things containerized, and also in a framework like with IBM, either with IBM microservices and application development things or others, right, that's a huge endorsement for OpenShift as a platform. >> Absolutely, it is, and look, we would be remiss if we didn't talk a little bit. I mean, we use the word containers and containerized a lot. Yes, you're right. Containers are a really, really important technology, but what containers enable is much more than prior attempts such as VMs have done. Containers really allow you to say: Hey, I solved the security problem, I solved the patching problem, the restart problem, all those problems that lie around the operations of a typical enterprise can get solved with containers. VMs solved a lot about isolating the infrastructure, but they didn't solve, as John was saying, the top half of the stack. And that's I think the huge power here. >> Yeah, I want to just double click on that because I think the containers thing is instrumental. Because, first of all, being in the media and loving what we do, we're kind of a new kind of media company, but traditional media has been throwing IBM under the bus, saying: Wow, old guard, and all these things. Here's the thing, you don't have to change anything. You got containers, you can essentially wrap it up and then bring a microservice architecture into it. So you can actually leverage it at cloud scale. So what interests me is that you can move instantly, >> value proposition wise, pre-existing market, cloudify it, if you will, with operational capabilities. >> Right. >> This is where I like the Cloud Private. So I want to kind of go there for a second.
If I have a need to take what I have at IBM, whether it is WebSphere. Now I got developers, I got installed base. I don't have to put a migration plan away. I containerize it. Thank you very much. I do some cloud native stuff but I want to make it private. My use case is very specific, maybe it's confidential, maybe it's like a government region, Whatever. I can create a cloud operations, is that right? I can cloudify it, and run it? >> Absolutely correct, so when you look at Cloud Private, to go down that path, we said Cloud Private allows you to run on your private infrastructure but I want all these abilities you just described John. I want to be able to do microservices. I want to be able to scale up and down. I want to be able to say operations happen automatically. But it gives you all that but in the private without it having to go all the way to the public. If you cared a lot about, your in a regulated industry, you went down government or confidential data. Or you say this data is so sensitive, I don't really, I am not going to take the risk of it being anywhere else. It absolutely gives you that ability to go do that and that is what brought Cloud Private to the market for and then you combine that with OpenShift and now you get the powers of both together. >> See you guys essentially have brought to the table the years of effort with Bluemix, all that good stuff going on, you can bring it in and actually run this in any industry vertical. Pretty much, right? >> Absolutely, so if you look at part what the past has been for the entire industry. It has been a lot about constructing a public cloud. Not just us, but us and our competition. And a public cloud has certain capabilities and it has certain elasticity, it has a global footprint. But it doesn't have a footprint that is in every zip code or in every town or in every city. That's not what happens to a public cloud. So we say. It's a hybrid world meaning that you're going to run some workloads on a public cloud, I'd like to run some workloads on a private and I'd like to have the ability that I don't have to pre decide which is where. And that is what the containers and microservices, the OpenShift that combination all give you to say you don't need to pre decide. You rewrite the workload onto this and then you can decide where it runs. >> Well I was having this conversation with some folks at a recent Amazon Web services conference. Well, if you go to cloud operations, then the on premise is essentially the edge. It's not necessarily. Then the definition of on premise, really doesn't even exist. >> So if you have cloud operations, in a way, what is the data center then? It's just a connected issue. >> That's right, it's the infrastructure which is set up and then, at that point, the Software Manger, at the data center, as opposed to anything else. And that's kind of been the goal that we're all been wanting. >> Sounds like this is visibly at IBM's essentially execution plan from day one. We've been seeing it and connecting the dots. Having the ability to take either pre-existing resources, foundational things like Red Hat or what not in the enterprise. Not throwing it away. Building on top of it and having a new operating model, with software, with elastic scale, horizontally scalable, Synchronous, all these good things. Enabling microservices, with Kubernetes and containers. 
Now for the first time, >> I can roll out new software development life cycles in a cloud native environment without forgoing legacy infrastructure and investment. >> Absolutely, and one more element. And if you want to insert some cloud service into the environment, be it in private or in public, you can go do that. For example, you want to insert a couple of AI services >> into the middle of your application you could go do that. So the environment allows you to, do what you described and these additionals. >> I want to talk about people for a second. The titles that we haven't mentioned CIO, Business Leader, Business Unit Leaders, how are they looking at >> digital transformation and business transformation in your client bases you go out and talk to them. >> Let's take a hypothetical bank. And every bank today is looking about simple questions. How do I improve my customer experience? And everybody want, when they say customer experience, really do mean digital customer experience to make it very tangible. And what they mean by that is how do I get my end customer engaged with me through an app. The app is probably in a device like this. Some smart phone, we won't say what it is, and so how do you do that? And so they say: Well, all obviously to check your balance. You obviously want to check your credit card. You want to do all those things. The same things we do today. So that application exists, there is not much point in rewriting it. You might do the UI up but it's an app that exists. Then you say but I also want to give you information that's useful to you in the context to what you're doing. I want say, you can get a 10 second loan, not a 30 day loan, but a 10 second loan. I want to make a offer to you in the middle of you browsing credit card. All those are new customergistics, where do you construct those apps? How do you mix and match it? How do you use all the capabilities along with the data you've got to go do that? And what we're trying to now say, here is a platform that you can go, do all that on. Right, that complete lifecycle you mentioned, the development lifecycle but I got to add to it >> the data lifecycle, as well as, here is the versioning, here are my AI models, all those things, built in, into one platform. >> And scales are huge, the new competitive advantage. You guys are enabling that. So I got to ask you a question on multi cloud. Obviously, as people start building out the cloud on PRIM and with Public Cloud and the things you're laying out. I can see that going on for a while, a lot of work being done there. We're seeing that Wikibon had a true Private Cloud report what I thought was truly telling. A lot of growth there, still not going away. Public Cloud's certainly grown in numbers are clear. However, the word multicloud's being kicked around I think it's more of a future stay obviously but people have multiple clouds Will have relationships with multiple clouds. No one's going to have one cloud. It's not a winner take all game. Winner take most but you know you're have multiple clouds. What does multi cloud mean to you guys in your architecture? Is that moving workloads in real time based upon spot pricing indexes or is that just co-locating on clouds and saying I got this app on this cloud, that app on that cloud, control plane it. These are architectural questions. What the hell is multicloud? >> So there's a today, then there is a tomorrow, then there is a long future state, right? So let's take today, let's take IBM. 
We're on Salesforce, we're on ServiceNow, we're on Workday, we're on SuccessFactors, well, all of those are different clouds. We run our own public cloud, we run our own private cloud, and we have traditional data centers. And we might have some of the other clouds also, through apps that we barter, we don't even know. Okay, so that's just us. I think every one of our clients is like this. The multicloud is here today. I begin with that first, simple statement. And I need to connect the data, and control when things go where. The next step, I think, is nobody's going to have just one public cloud. Even amongst the big public clouds, most people are going to have two if not more. That's today and tomorrow. >> Your channel partners have clouds, by the way, your global SIs all have clouds, theCUBE is a cloud, for crying out loud. >> Right, so then you go into the aspirational state, and that may be the one you said, where people just do spot pricing. But even if I step back from spot pricing and completely (mumbles) I make. And I'm worrying about network and I'm worrying about radio reach. If I just back up around to, but I may decide I have this app, I run it on private, well, but I don't have all the infrastructure, I want to burst it today, and where do I burst it? I got to decide which public and how do I go there? >> And that's a problem of today, and we're doing that, and that is why I think multicloud is here now. >> Not some point in the future. >> The prime statement there is latency, managing service level agreements between clouds, and so on and so forth. >> Access control, governance, where does my data go? Because there may be regulatory reasons to decide where the data can flow, and all those things. >> Great point about the cloud. I never thought about it that way. It is a good illustration. I would also say that I see the same arguments in the database world. Not everyone has DB2, not everyone has Oracle, not everyone has... databases are everywhere, you have databases as part of IoT devices now. So like no one makes a decision on the database. Similar with clouds, you see a similar dynamic. It's the glue layer that interests me. So how do you bring them together? So holistically, looking at the 20-mile stare into the future, what is the integration strategy long-term? If you look at a distributed system or an operating system, there has to be an architectural guiding principle for integration, your thoughts. >> This has been a world 30 years in the making. We can say networking, everyone had their own networking standard in, let's say, the '80s, probably going back to the '70s, right? You had SNA, you had TCP/IP, you had NetBIOS-- DECnet. DECnet. You can go on and on, and in the end it's TCP/IP that won out as the glue. Others, by the way, survived, but in pockets, and then TCP/IP was the glue. Then you can fast forward 15 years beyond that and HTTP became the glue, we call that the internet. Then you can fast forward and you can say, now how do I make applications portable? And I will turn around and tell you that containers on Linux with Kubernetes as orchestration is that glue layer. Now in order to make it so, just like TCP/IP, it wasn't enough to say TCP/IP, you needed routing tables, you needed DNS, you needed name repositories, you needed all those things. Similarly, you need all of those here, call it the service catalog and automation, so that's the glue layer that makes all of this work. >> This is important, I love this conversation because I have been ranting on theCUBE for years. You nailed it. A new stack is developing, and DNS is old internet infrastructure; cloud infrastructure at the global scale is seeing things like network effects, okay, we see blockchain and token economics, databases, multiple databases, unstructured data, >> a plethora of new things are happening that are building on top of, say, HTTP >> [Arvind] Correct! >> And this is the new opportunity. >> This is the new platform which is emerging, and it is going to enable business to operate, as you said, >> at scale, to be very digital, to be very nimble. Application life cycles aren't always going to be months, they're going to come down to days, and this is what gets enabled. >> So I want you to give your opinion, personal or IBM or whatever perspective, because I think you nailed the glue layer on Kubernetes, Docker, this new glue layer, and you made references to things like HTTP and TCP, which changed the industry landscape, wealth creation, new brands emerged, companies we never heard of emerged out of this, and we're all using them today. We expect a new set of brands are going to emerge, new technologies are going to emerge. In your expert opinion, how gigantic is this swarm of new innovation going to be? Just 'cause you've seen many waves before. In your view, your mind's eye, what are you expecting? >> Share your insight into how big of a shift and wave is this going to be, and add some color to that. >> I think that if I take a shorter and then a longer term view: in the short term, I think that we said that this is in the order of $100 billion, and that's not just our estimate, I think even Gartner has estimated about the same number. That will be the amount of opportunity for new technologies in what we've been describing. And that is, I think, short term. If I go longer term, I think as much as a half, but at least a fourth, of the complete IT market is going to shift around to these technologies. So then the winners are those that make the shift, and then, by conclusion, the losers are those who don't make the shift fast enough. If half the market moves, that's huge. >> It's interesting, we used to look at certain segments going back years, just company by company, oh, this company's replatforming, >> replatforming their apps, lift and shift and all this stuff. What you're talking about here is so game changing because the industry is replatforming. >> That's correct. It's not a company. >> It's an industry! That's right. And I think the internet era of 1995, to put a point on it, is perhaps the easiest analogy to what is happening. >> Not the emergence of cloud, not the emergence of all that, I think that was small steps. >> What we are talking about now is back to the 1995 statement. >> [John] Every vertical is upgrading their stack, across what, from e-commerce to whatever. >> That's right. >> It's completely modernizing. >> Correct. Around cloud. >> What we call digital transformation, in a sense, yes. >> I'm not a big fan of the word but I understand what you mean. Great insight Arvind, thanks for coming on theCUBE and sharing. We didn't even get to some of the other good stuff. But IBM and Red Hat doing some great stuff, obviously foundational, I mean, Red Hat, Tier one, first class citizen in every single enterprise and software environment, you know, now open source runs the world. You guys are no stranger to Linux, being the first billion dollar investment going back, >> so you guys have a heritage there, so congratulations on the relationship. >> I mean 18 years ago, if I remember, 1999.
>> I love the strategy, hybrid cloud here at IBM and Red Hat. This is theCUBE, bringing all the action here in San Francisco. I am John Furrier, John Troyer. More live coverage. Stay with us, here in theCUBE. We'll be right back. (upbeat music)
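One way to read the "you don't need to pre-decide where it runs" thread of this interview is that the workload description is built once and applied unchanged to whichever cluster you choose. A minimal sketch, assuming two kubeconfig contexts named onprem-openshift and public-cloud and a hypothetical image; nothing in the workload definition is specific to either cloud.

```python
# Minimal sketch: the same Deployment pushed to two different clusters via
# two kubeconfig contexts. Context names, namespace and image are assumptions.
from kubernetes import client, config


def build_deployment() -> client.V1Deployment:
    labels = {"app": "portfolio"}
    container = client.V1Container(
        name="portfolio",
        image="registry.example.com/bank/portfolio:2.3",
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="portfolio"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )


# The workload definition does not change per cluster; only the client does.
for context in ("onprem-openshift", "public-cloud"):
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client=api_client)
    apps.create_namespaced_deployment(namespace="trading", body=build_deployment())
    print(f"deployed portfolio to {context}")
```

The design choice being illustrated is the one the interview keeps circling: once the application is containerized, deciding where it runs becomes a late, reversible decision rather than an up-front architectural commitment.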
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John Troyer | PERSON | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Arvind | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
1999 | DATE | 0.99+ |
$100 billion | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
1995 | DATE | 0.99+ |
today | DATE | 0.99+ |
John F. | PERSON | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
two steps | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
first time | QUANTITY | 0.99+ |
San Francisco, California | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
tens of millions | QUANTITY | 0.99+ |
20 miles | QUANTITY | 0.99+ |
15 years | QUANTITY | 0.99+ |
Wikibon | ORGANIZATION | 0.99+ |
Cloud Private | TITLE | 0.99+ |
30 years | QUANTITY | 0.98+ |
10 second loan | QUANTITY | 0.98+ |
Linux | TITLE | 0.98+ |
first | QUANTITY | 0.98+ |
Red Hat Linux | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
18 years ago | DATE | 0.98+ |
Red Hat Enterprise Linux | TITLE | 0.98+ |
Bluemix | ORGANIZATION | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
Kubernetes | TITLE | 0.97+ |
one platform | QUANTITY | 0.97+ |
hundreds of millions | QUANTITY | 0.97+ |
TechReckoning | ORGANIZATION | 0.97+ |
30 day loan | QUANTITY | 0.97+ |
Red Hat | TITLE | 0.96+ |
zip code | QUANTITY | 0.96+ |
one more element | QUANTITY | 0.96+ |
Day One Afternoon Keynote | Red Hat Summit 2018
[Music] [Music] [Music] [Music] ladies and gentlemen please welcome Red Hat senior vice president of engineering Matt Hicks [Music] welcome back I hope you're enjoying your first day of summit you know for us it is a lot of work throughout the year to get ready to get here but I love the energy walking into someone on that first opening day now this morning we kick off with Paul's keynote and you saw this morning just how evolved every aspect of open hybrid cloud has become based on an open source innovation model that opens source the power and potential of open source so we really brought me to Red Hat but at the end of the day the real value comes when were able to make customers like yourself successful with open source and as much passion and pride as we put into the open source community that requires more than just Red Hat given the complexity of your various businesses the solution set you're building that requires an entire technology ecosystem from system integrators that can provide the skills your domain expertise to software vendors that are going to provide the capabilities for your solutions even to the public cloud providers whether it's on the hosting side or consuming their services you need an entire technological ecosystem to be able to support you and your goals and that is exactly what we are gonna talk about this afternoon the technology ecosystem we work with that's ready to help you on your journey now you know this year's summit we talked about earlier it is about ideas worth exploring and we want to make sure you have all of the expertise you need to make those ideas a reality so with that let's talk about our first partner we have him today and that first partner is IBM when I talk about IBM I have a little bit of a nostalgia and that's because 16 years ago I was at IBM it was during my tenure at IBM where I deployed my first copy of Red Hat Enterprise Linux for a customer it's actually where I did my first professional Linux development as well you and that work on Linux it really was the spark that I had that showed me the potential that open source could have for enterprise customers now iBM has always been a steadfast supporter of Linux and a great Red Hat partner in fact this year we are celebrating 20 years of partnership with IBM but even after 20 years two decades I think we're working on some of the most innovative work that we ever have before so please give a warm welcome to Arvind Krishna from IBM to talk with us about what we are working on Arvind [Applause] hey my pleasure to be here thank you so two decades huh that's uh you know I think anything in this industry to going for two decades is special what would you say that that link is made right Hatton IBM so successful look I got to begin by first seeing something that I've been waiting to say for years it's a long strange trip it's been and for the San Francisco folks they'll get they'll get the connection you know what I was just thinking you said 16 it is strange because I probably met RedHat 20 years ago and so that's a little bit longer than you but that was out in Raleigh it was a much smaller company and when I think about the connection I think look IBM's had a long long investment and a long being a long fan of open source and when I think of Linux Linux really lights up our hardware and I think of the power box that you were showing this morning as well as the mainframe as well as all other hardware Linux really brings that to life and I think that's been at the root of our relationship 
yeah absolutely now I alluded to a little bit earlier we're working on some new stuff and this time it's a little bit higher in the software stack and we have before so what do you what would you say spearheaded that right so we think of software many people know about some people don't realize a lot of the words are called critical systems you know like reservation systems ATM systems retail banking a lot of the systems run on IBM software and when I say IBM software names such as WebSphere and MQ and db2 all sort of come to mind as being some of that software stack and really when I combine that with some of what you were talking about this morning along hybrid and I think this thing called containers you guys know a little about combining the two we think is going to make magic yeah and I certainly know containers and I think for myself seeing the rise of containers from just the introduction of the technology to customers consuming at mission-critical capacities it's been probably one of the fastest technology cycles I've ever seen before look we completely agree with that when you think back to what Paul talks about this morning on hybrid and we think about it we are made of firm commitment to containers all of our software will run on containers and all of our software runs Rell and you put those two together and this belief on hybrid and containers giving you their hybrid motion so that you can pick where you want to run all the software is really I think what has brought us together now even more than before yeah and the best part I think I've liked we haven't just done the product in downstream alignment we've been so tied in our technology approach we've been aligned all the way to the upstream communities absolutely look participating upstream participating in these projects really bringing all the innovation to bear you know when I hear all of you talk about you can't just be in a single company you got to tap into the world of innovation and everybody should contribute we firmly believe that instead of helping to do that is kind of why we're here yeah absolutely now the best part we're not just going to tell you about what we're doing together we're actually going to show you so how every once you tell the audience a little bit more about what we're doing I will go get the demo team ready in the back so you good okay so look we're doing a lot here together we're taking our software and we are begging to put it on top of Red Hat and openshift and really that's what I'm here to talk about for a few minutes and then we go to show it to you live and the demo guard should be with us so it'll hopefully go go well so when we look at extending our partnership it's really based on three fundamental principles and those principles are the following one it's a hybrid world every enterprise wants the ability to span across public private and their own premise world and we got to go there number two containers are strategic to both of us enterprise needs the agility you need a way to easily port things from place to place to place and containers is more than just wrapping something up containers give you all of the security the automation the deploy ability and we really firmly believe that and innovation is the path forward I mean you got to bring all the innovation to bear whether it's around security whether it's around all of the things we heard this morning around going across multiple infrastructures right the public or private and those are three firm beliefs that both of us have 
together so then explicitly what I'll be doing here number one all the IBM middleware is going to be certified on top of openshift and rel and through cloud private from IBM so that's number one all the middleware is going to run in rental containers on OpenShift on rail with all the cloud private automation and deployability in there number two we are going to make it so that this is the complete stack when you think about from hardware to hypervisor to os/2 the container platform to all of the middleware it's going to be certified up and down all the way so that you can get comfort that this is certified against all the cyber security attacks that come your way three because we do the certification that means a complete stack can be deployed wherever OpenShift runs so that way you give the complete flexibility and you no longer have to worry about that the development lifecycle is extended all the way from inception to production and the management plane then gives you all of the delivery and operation support needed to lower that cost and lastly professional services through the IBM garages as well as the Red Hat innovation labs and I think that this combination is really speaks to the power of both companies coming together and both of us working together to give all of you that flexibility and deployment capabilities across one can't can't help it one architecture chart and that's the only architecture chart I promise you so if you look at it right from the bottom this speaks to what I'm talking about you begin at the bottom and you have a choice of infrastructure the IBM cloud as well as other infrastructure as a service virtual machines as well as IBM power and IBM mainframe as is the infrastructure choices underneath so you choose what what is best suited for the workload well with the container service with the open shift platform managing all of that environment as well as giving the orchestration that kubernetes gives you up to the platform services from IBM cloud private so it contains the catalog of all middle we're both IBM's as well as open-source it contains all the deployment capability to go deploy that and it contains all the operational management so things like come back up if things go down worry about auto scaling all those features that you want come to you from there and that is why that combination is so so powerful but rather than just hear me talk about it I'm also going to now bring up a couple of people to talk about it and what all are they going to show you they're going to show you how you can deploy an application on this environment so you can think of that as either a cloud native application but you can also think about it as how do you modernize an application using micro services but you don't want to just keep your application always within its walls you also many times want to access different cloud services from this and how do you do that and I'm not going to tell you which ones they're going to come and tell you and how do you tackle the complexity of both hybrid data data that crosses both from the private world to the public world and as well as target the extra workloads that you want so that's kind of the sense of what you're going to see through through the demonstrations but with that I'm going to invite Chris and Michael to come up I'm not going to tell you which one's from IBM which runs from Red Hat hopefully you'll be able to make the right guess so with that Chris and Michael [Music] so so thank you Arvind hopefully people can guess 
which ones from Red Hat based on the shoes I you know it's some really exciting stuff that we just heard there what I believe that I'm I'm most excited about when I look out upon the audience and the opportunity for customers is with this announcement there are quite literally millions of applications now that can be modernized and made available on any cloud anywhere with the combination of IBM cloud private and OpenShift and I'm most thrilled to have mr. Michael elder a distinguished engineer from IBM here with us today and you know Michael would you maybe describe for the folks what we're actually going to go over today absolutely so when you think about how do I carry forward existing applications how do I build new applications as well you're creating micro services that always need a mixture of data and messaging and caching so this example application shows java-based micro services running on WebSphere Liberty each of which are then leveraging things like IBM MQ for messaging IBM db2 for data operational decision manager all of which is fully containerized and running on top of the Red Hat open chip container platform and in fact we're even gonna enhance stock trader to help it understand how you feel but okay hang on so I'm a little slow to the draw sometimes you said we're gonna have an application tell me how I feel exactly exactly you think about your enterprise apps you want to improve customer service understanding how your clients feel can't help you do that okay well this I'd like to see that in action all right let's do it okay so the first thing we'll do is we'll actually take a look at the catalog and here in the IBM cloud private catalog this is all of the content that's available to deploy now into this hybrid solution so we see workloads for IBM will see workloads for other open source packages etc each of these are packaged up as helm charts that are deploying a set of images that will be certified for Red Hat Linux and in this case we're going to go through and start with a simple example with a node out well click a few actions here we'll give it a name now do you have your console up over there I certainly do all right perfect so we'll deploy this into the new old namespace and will deploy notate okay alright anything happening of course it's come right up and so you know what what I really like about this is regardless of if I'm used to using IBM clout private or if I'm used to working with open shift yeah the experience is well with the tool of whatever I'm you know used to dealing with on a daily basis but I mean you know I got to tell you we we deployed node ourselves all the time what about and what about when was the last time you deployed MQ on open shift you never I maybe never all right let's fix that so MQ obviously is a critical component for messaging for lots of highly transactional systems here we'll deploy this as a container on the platform now I'm going to deploy this one again into new worlds I'm gonna disable persistence and for my application I'm going to need a queue manager so I'm going to have it automatically setup my queue manager as well now this will deploy a couple of things what do you see I see IBM in cube all right so there's your stateful set running MQ and of course there's a couple of other components that get stood up as needed here including things like credentials and secrets and the service etc but all of this is they're out of the box ok so impressive right but that's the what I think you know what I'm really looking at is 
maybe how a well is this running you know what else does this partnership bring when I look at IBM cloud private windows inches well so that's a key reason about why it's not just about IBM middleware running on open shift but also IBM cloud private because ultimately you need that common management plane when you deploy a container the next thing you have to worry about is how do I get its logs how do I manage its help how do I manage license consumption how do I have a common security plan right so cloud private is that enveloping wrapper around IBM middleware to provide those capabilities in a common way and so here we'll switch over to our dashboard this is our Griffin and Prometheus stack that's deployed also now on cloud private running on OpenShift and we're looking at a different namespace we're looking at the stock trader namespace we'll go back to this app here momentarily and we can see all the different pieces what if you switch over to the stock trader workspace on open shipped yeah I think we might be able to do that here hey there it is alright and so what you're gonna see here all the different pieces of this op right there's d b2 over here I see the portfolio Java microservice running on Webster Liberty I see my Redis cash I see MQ all of these are the components we saw in the architecture picture a minute ago ya know so this is really great I mean so maybe let's take a look at the actual application I see we have a fine stock trader app here now we mentioned understanding how I feel exactly you know well I feel good that this is you know a brand new stock trader app versus the one from ten years ago that don't feel like we used forever so the key thing is this app is actually all of those micro services in addition to things like business rules etc to help understand the loyalty program so one of the things we could do here is actually enhance it with a a AI service from Watson this is tone analyzer it helps me understand how that user actually feels and will be able to go through and submit some feedback to understand that user ok well let's see if we can take a look at that so I tried to click on youth clearly you're not very happy right now here I'll do one quick thing over here go for it we'll clear a cache for our sample lab so look you guys don't actually know as Michael and I just wrote this no js' front end backstage while Arvin was actually talking with Matt and we deployed it real-time using continuous integration and continuous delivery that we have available with openshift well the great thing is it's a live demo right so we're gonna do it all live all the time all right so you mentioned it'll tell me how I'm feeling right so if we look at so right there it looks like they're pretty angry probably because our cache hadn't been cleared before we started the demo maybe well that would make me angry but I should be happy because I mean I have a lot of money well it's it's more than I get today for sure so but you know again I don't want to remain angry so does Watson actually understand southern I know it speaks like eighty different languages but well you know I'm from South Carolina to understand South Carolina southern but I don't know about your North Carolina southern alright well let's give it a go here y'all done a real real know no profanity now this is live I've done a real real nice job on this here fancy demo all right hey all right likes me now all right cool and the key thing is just a quick note right it's showing you've got a free trade so we can 
integrate those business rules and then decide to I do put one trade if you're angry give me more it's all bringing it together into one platform all running on open show yeah and I can see the possibilities right of we've not only deployed services but getting that feedback from our customers to understand well how well the services are being used and are people really happy with what they have hey listen Michael this was amazing I read you joining us today I hope you guys enjoyed this demo as well so all of you know who this next company is as I look out through the crowd based on what I can actually see with the sun shining down on me right now I can see their influence everywhere you know Sports is in our everyday lives and these guys are equally innovative in that space as they are with hybrid cloud computing and they use that to help maintain and spread their message throughout the world of course I'm talking about Nike I think you'll enjoy this next video about Nike and their brand and then we're going to hear directly from my twitting about what they're doing with Red Hat technology new developments in the top story of the day the world has stopped turning on its axis top scientists are currently racing to come up with a solution everybody going this way [Music] the wrong way [Music] please welcome Nike vice president of infrastructure engineering Mike witig [Music] hi everybody over the last five years at Nike we have transformed our technology landscape to allow us to connect more directly to our consumers through our retail stores through Nike comm and our mobile apps the first step in doing that was redesigning our global network to allow us to have direct connectivity into both Asia and AWS in Europe in Asia and in the Americas having that proximity to those cloud providers allows us to make decisions about application workload placement based on our strategy instead of having design around latency concerns now some of those workloads are very elastic things like our sneakers app for example that needs to burst out during certain hours of the week there's certain moments of the year when we have our high heat product launches and for those type of workloads we write that code ourselves and we use native cloud services but being hybrid has allowed us to not have to write everything that would go into that app but rather just the parts that are in that application consumer facing experience and there are other back-end systems certain core functionalities like order management warehouse management finance ERP and those are workloads that are third-party applications that we host on relevent over the last 18 months we have started to deploy certain elements of those core applications into both Azure and AWS hosted on rel and at first we were pretty cautious that we started with development environments and what we realized after those first successful deployments is that are the impact of those cloud migrations on our operating model was very small and that's because the tools that we use for monitoring for security for performance tuning didn't change even though we moved those core applications into Azure in AWS because of rel under the covers and getting to the point where we have that flexibility is a real enabler as an infrastructure team that allows us to just be in the yes business and really doesn't matter where we want to deploy different workload if either cloud provider or on-prem anywhere on the planet it allows us to move much more quickly and stay much more directed 
to our consumers and so having rel at the core of our strategy is a huge enabler for that flexibility and allowing us to operate in this hybrid model thanks very much [Applause] what a great example it's really nice to hear an IQ story of using sort of relish that foundation to enable their hybrid clout enable their infrastructure and there's a lot that's the story we spent over ten years making that possible for rel to be that foundation and we've learned a lot in that but let's circle back for a minute to the software vendors and what kicked off the day today with IBM IBM s one of the largest software portfolios on the planet but we learned through our journey on rel that you need thousands of vendors to be able to sport you across all of your different industries solve any challenge that you might have and you need those vendors aligned with your technology direction this is doubly important when the technology direction is changing like with containers we saw that two years ago bread had introduced our container certification program now this program was focused on allowing you to identify vendors that had those shared technology goals but identification by itself wasn't enough in this fast-paced world so last year we introduced trusted content we introduced our container health index publicly grading red hats images that form the foundation for those vendor images and that was great because those of you that are familiar with containers know that you're taking software from vendors you're combining that with software from companies like Red Hat and you are putting those into a single container and for you to run those in a mission-critical capacity you have to know that we can both stand by and support those deployments but even trusted content wasn't enough so this year I'm excited that we are extending once again to introduce trusted operations now last week we announced that cube con kubernetes conference the kubernetes operator SDK the goal of the kubernetes operators is to allow any software provider on kubernetes to encode how that software should run this is a critical part of a container ecosystem not just being able to find the vendors that you want to work with not just knowing that you can trust what's inside the container but knowing that you can efficiently run that software now the exciting part is because this is so closely aligned with the upstream technology that today we already have four partners that have functioning operators specifically Couchbase dynaTrace crunchy and black dot so right out of the gate you have security monitoring data store options available to you these partners are really leading the charge in terms of what it means to run their software on OpenShift but behind these four we have many more in fact this morning we announced over 60 partners that are committed to building operators they're taking their domain expertise and the software that they wrote that they know and extending that into how you are going to run that on containers in environments like OpenShift this really brings the power of being able to find the vendors being able to trust what's inside and know that you can run their software as efficiently as anyone else on the planet but instead of just telling you about this we actually want to show you this in action so why don't we bring back up the demo team to give you a little tour of what's possible with it guys thanks Matt so Matt talked about the concept of operators and when when I think about operators and what they do it's 
taking OpenShift based services and making them even smarter giving you insight into how they do things for example have we had an operator for the nodejs service that I was running earlier it would have detected the problem and fixed itself but when we look at it what really operators do when I look at it from an ecosystem perspective is for ISVs it's going to be a catalyst that's going to allow them to make their services as manageable and it's flexible and as you know maintainable as any public cloud service no matter where OpenShift is running and to help demonstrate this I've got my buddy Rob here Rob are we ready on the demo front we're ready awesome now I notice this screen looks really familiar to me but you know I think we want to give folks here a dev preview of a couple of things well we want to show you is the first substantial integration of the core OS tectonic technology with OpenShift and then the other thing is we are going to dive in a little bit more into operators and their usefulness so Rob yeah so what we're looking at here is the service catalog that you know and love and openshift and we've got a few new things in here we've actually integrated operators into the Service Catalog and I'm going to take this filter and give you a look at some of them that we have today so you can see we've got a list of operators exposed and this is the same way that your developers are already used to integrating with products they're right in your catalog and so now these are actually smarter services but how can we maybe look at that I mentioned that there's maybe a new view I'm used to seeing this as a developer but I hear we've got some really cool stuff if I'm the administrator of the console yeah so we've got a whole new side of the console for cluster administrators to get a look at under the infrastructure versus this dev focused view that we're looking at today today so let's go take a look at it so the first thing you see here is we've got a really rich set of monitoring and health status so we can see that we've got some alerts firing our control plane is up and we can even do capacity planning anything that you need to do to maintenance your cluster okay so it's it's not only for the the services in the cluster and doing things that you know I may be normally as a human operator would have to do but this this console view also gives me insight into the infrastructure itself right like maybe the nodes and maybe handling the security context is that true yes so these are new capabilities that we're bringing to open shift is the ability to do node management things like drain and unscheduled nodes to do day-to-day maintenance and then as well as having security constraints and things like role bindings for example and the exciting thing about this is this is a view that you've never been able to see before it's cross-cutting across namespaces so here we've got a number of admin bindings and we can see that they're connected to a number of namespaces and these would represent our engineering teams all the groups that are using the cluster and we've never had this view before this is a perfect way to audit your security you know it actually is is pretty exciting I mean I've been fortunate enough to be on the up and shift team since day one and I know that operations view is is something that we've you know strived for and so it's really exciting to see that we can offer that now but you know really this was a we want to get into what operators do and what they can do for us and 
so maybe you show us what the operator console looks like yeah so let's jump on over and see all the operators that we have installed on the cluster you can see that these mirror what we saw on the Service Catalog earlier now what we care about though is this Couchbase operator and we're gonna jump into the demo namespace as I said you can share a number of different teams on a cluster so it's gonna jump into this namespace okay cool so now what we want to show you guys when we think about operators you know we're gonna have a scenario here where there's going to be multiple replicas of a Couchbase service running in the cluster and then we're going to have a stateful set and what's interesting is those two things are not enough if I'm really trying to run this as a true service where it's highly available in persistent there's things that you know as a DBA that I'm normally going to have to do if there's some sort of node failure and so what we want to demonstrate to you is where operators combined with the power that was already within OpenShift are now coming together to keep this you know particular database service highly available and something that we can continue using so Rob what have you got there yeah so as you can see we've got our couch based demo cluster running here and we can see that it's up and running we've got three members we've got an off secret this is what's controlling access to a UI that we're gonna look at in a second but what really shows the power of the operator is looking at this view of the resources that it's managing you can see that we've got a service that's doing load balancing into the cluster and then like you said we've got our pods that are actually running the software itself okay so that's cool so maybe for everyone's benefit so we can show that this is happening live could we bring up the the Couchbase console please and keep up the openshift console both sides so what we see there we go so what we see on the on the right hand side is obviously the same console Rob was working in on the left-hand side as you can see by the the actual names of the pods that are there the the couch based services that are available and so Rob maybe um let's let's kill something that's always fun to do on stage yeah this is the power of the operator it's going to recover it so let's browse on over here and kill node number two so we're gonna forcefully kill this and kick off the recovery and I see right away that because of the integration that we have with operators the Couchbase console immediately picked up that something has changed in the environment now why is that important normally a human being would have to get that alert right and so with operators now we've taken that capability and we've realized that there has been a new event within the environment this is not something that you know kubernetes or open shipped by itself would be able to understand now I'm presuming we're gonna end up doing something else it's not just seeing that it failed and sure enough there we go remember when you have a stateful application rebalancing that data and making it available is just as important as ensuring that the disk is attached so I mean Rob thank you so much for you know driving this for us today and being here I mean you know not only Couchbase but as was mentioned by matt we also have you know crunchy dynaTrace and black duck I would encourage you all to go visit their booths out on the floor today and understand what they have available which are all you know 
Not only Couchbase, but as was mentioned by Matt, we also have Crunchy Data, Dynatrace, and Black Duck. I would encourage you all to go visit their booths out on the floor today and understand what they have available; they are all here as a dev preview. And then talk to the many other partners we have that are also looking at operators. So again, Rob, thank you for joining us today. Matt, come on out. Okay, this is going to make for an exciting year of just what it means to consume container-based content. I think containers change how customers can get that content; I believe operators are going to change how much they can trust running that content. Let's circle back to one more partner. This next partner has changed the landscape of computing, specifically with their work on hardware design and their work on core Linux itself. In fact, I think they've become so ubiquitous with computing that we often overlook the technological marvels that they've been able to overcome. Now, for myself, I studied computer engineering, so in the late 90s I had the chance to study processor design; I actually got to build one of my own processors. In my case it was the most trivial processor that you could imagine: an 8-bit subtractor, which means it can subtract two numbers 256 or smaller. But in that process I learned the sheer complexity that goes into processor design: things like wire placements that are so close that electrons can cut through the insulation and short, and then doing those wire placements across three dimensions, in multiple layers, jamming in as many logic components as you possibly can. And again, in my case, this was to make a processor that could subtract two numbers. Once I was done with this, the second part of the course was studying the Pentium processor. I'll remember that moment forever, because looking at what the Pentium processor was able to accomplish, it was like looking at alien technology. And the incredible thing is that Intel, our next partner, has been able to keep up that alien-like pace of innovation twenty years later. So we're excited to have Doug Fisher here; let's hear a little bit more from Intel. For business, wide open skies, an open mind; no matter the context, the idea of being open almost always suggests the potential of infinite possibilities, and that's exactly the power of open source, whether it's expanding what's possible in business, in science and technology, or for the greater good, which is why open source requires the involvement of a truly diverse community of contributors to scale and succeed, creating infinite possibilities for technology and, more importantly, what we do with it. [Music] You know, at Intel one of our core values is risk-taking, and I'm going to go just a bit off script for a second and say I was just backstage and I saw a gentleman that looked a lot like Scott Guthrie, who runs all of Microsoft's cloud and enterprise efforts, wearing a red shirt, talking to Cormier. I'm just saying; I don't know, maybe I need some more sleep, but that's what I saw. As we approach Intel's 50th anniversary, these words spoken by our co-founder Robert Noyce are as relevant today as they were decades ago: don't be encumbered by history; go off and do something wonderful. This is about breaking boundaries in technology and then going off and doing something wonderful; it is about innovation and driving innovation in our industry, and at Intel we're constantly looking to break boundaries to advance our technology. In the cloud and enterprise space, that is no different. So I'm going to talk a bit about some of the boundaries we've been breaking and the innovations we've been driving at Intel, starting with our Intel Xeon platform. Our Xeon Scalable platform, which we launched several months ago, was the biggest and most advanced movement in this technology in over a decade. We were
able to drive critical performance capabilities unmatched agility and added necessary and sufficient security to that platform I couldn't be happier with the work we do with Red Hat and ensuring that those hero features that we drive into our platform they fully expose to all of you to drive that innovation to go off and do something wonderful well there's taking advantage of the performance features or agility features like our advanced vector extensions or avx-512 or Intel quick exist those technologies are fully embraced by Red Hat Enterprise Linux or whether it's security technologies like txt or trusted execution technology are fully incorporated and we look forward to working with Red Hat on their next release to ensure that our advancements continue to be exposed and their platform and all these workloads that are driving the need for us to break boundaries and our technology are driving more and more need for flexibility and computing and that's why we're excited about Intel's family of FPGAs to help deliver that additional flexibility for you to build those capabilities in your environment we have a broad set of FPGA capabilities from our power fish at Mac's product line all the way to our performance product line on the 6/10 strat exten we have a broad set of bets FPGAs what i've been talking to customers what's really exciting is to see the combination of using our Intel Xeon scalable platform in combination with FPGAs in addition to the acceleration development capabilities we've given to software developers combining all that together to deliver better and better solutions whether it's helping to accelerate data compression well there's pattern recognition or data encryption and decryption one of the things I saw in a data center recently was taking our Intel Xeon scalable platform utilizing the capabilities of FPGA to do data encryption between servers behind the firewall all the while using the FPGA to do that they preserve those precious CPU cycles to ensure they delivered the SLA to the customer yet provided more security for their data in the data center one of the edges in cyber security is innovation and route of trust starts at the hardware we recently renewed our commitment to security with our security first pledge has really three elements to our security first pledge first is customer first urgency we have now completed the release of the micro code updates for protection on our Intel platforms nine plus years since launch to protect against things like the side channel exploits transparent and timely communication we are going to communicate timely and openly on our Intel comm website whether it's about our patches performance or other relevant information and then ongoing security assurance we drive security into every one of our products we redesigned a portion of our processor to add these partition capability which is adding additional walls between applications and user level privileges to further secure that environment from bad actors I want to pause for a second and think everyone in this room involved in helping us work through our security first pledge this isn't something we do on our own it takes everyone in this room to help us do that the partnership and collaboration was next to none it's the most amazing thing I've seen since I've been in this industry so thank you we don't stop there we continue to advance our security capabilities cross-platform solutions we recently had a conference discussion at RSA where we talked about Intel Security 
Essentials where we deliver a framework of capabilities and the end that are in our silicon available for those to innovate our customers and the security ecosystem to innovate on a platform in a consistent way delivering that assurance that those capabilities will be on that platform we also talked about things like our security threat technology threat detection technology is something that we believe in and we launched that at RSA incorporates several elements one is ability to utilize our internal graphics to accelerate some of the memory scanning capabilities we call this an accelerated memory scanning it allows you to use the integrated graphics to scan memory again preserving those precious cycles on the core processor Microsoft adopted this and are now incorporated into their defender product and are shipping it today we also launched our threat SDK which allows partners like Cisco to utilize telemetry information to further secure their environments for cloud workloads so we'll continue to drive differential experiences into our platform for our ecosystem to innovate and deliver more and more capabilities one of the key aspects you have to protect is data by 2020 the projection is 44 zettabytes of data will be available 44 zettabytes of data by 2025 they project that will grow to a hundred and eighty s data bytes of data massive amount of data and what all you want to do is you want to drive value from that data drive and value from that data is absolutely critical and to do that you need to have that data closer and closer to your computation this is why we've been working Intel to break the boundaries in memory technology with our investment in 3d NAND we're reducing costs and driving up density in that form factor to ensure we get warm data closer to the computing we're also innovating on form factors we have here what we call our ruler form factor this ruler form factor is designed to drive as much dense as you can in a 1u rack we're going to continue to advance the capabilities to drive one petabyte of data at low power consumption into this ruler form factor SSD form factor so our innovation continues the biggest breakthrough and memory technology in the last 25 years in memory media technology was done by Intel we call this our 3d crosspoint technology and our 3d crosspoint technology is now going to be driven into SSDs as well as in a persistent memory form factor to be on the memory bus giving you the speed of memory characteristics of memory as well as the characteristics of storage given a new tier of memory for developers to take full advantage of and as you can see Red Hat is fully committed to integrating this capability into their platform to take full advantage of that new capability so I want to thank Paul and team for engaging with us to make sure that that's available for all of you to innovate on and so we're breaking boundaries and technology across a broad set of elements that we deliver that's what we're about we're going to continue to do that not be encumbered by the past your role is to go off and doing something wonderful with that technology all ecosystems are embracing this and driving it including open source technology open source is a hub of innovation it's been that way for many many years that innovation that's being driven an open source is starting to transform many many businesses it's driving business transformation we're seeing this coming to light in the transformation of 5g driving 5g into the networked environment is a transformational 
moment an open source is playing a pivotal role in that with OpenStack own out and opie NFV and other open source projects were contributing to and participating in are helping drive that transformation in 5g as you do software-defined networks on our barrier breaking technology we're also seeing this transformation rapidly occurring in the cloud enterprise cloud enterprise are growing rapidly and innovation continues our work with virtualization and KVM continues to be aggressive to adopt technologies to advance and deliver more capabilities in virtualization as we look at this with Red Hat we're now working on Cube vert to help move virtualized workloads onto these platforms so that we can now have them managed at an open platform environment and Cube vert provides that so between Intel and Red Hat and the community we're investing resources to make certain that comes to product as containers a critical feature in Linux becomes more and more prevalent across the industry the growth of container elements continues at a rapid rapid pace one of the things that we wanted to bring to that is the ability to provide isolation without impairing the flexibility the speed and the footprint of a container with our clear container efforts along with hyper run v we were able to combine that and create we call cotta containers we launched this at the end of last year cotta containers is designed to have that container element available and adding elements like isolation both of these events need to have an orchestration and management capability Red Hat's OpenShift provides that capability for these workloads whether containerized or cube vert capabilities with virtual environments Red Hat openshift is designed to take that commercial capability to market and we've been working with Red Hat for several years now to develop what we call our Intel select solution Intel select solutions our Intel technology optimized for downstream workloads as we see a growth in a workload will work with a partner to optimize a solution on Intel technology to deliver the best solution that could be deployed quickly our effort here is to accelerate the adoption of these type of workloads in the market working with Red Hat's so now we're going to be deploying an Intel select solution design and optimized around Red Hat OpenShift we expect the industry's start deploying this capability very rapidly I'm excited to announce today that Lenovo is committed to be the first platform company to deliver this solution to market the Intel select solution to market will be delivered by Lenovo now I talked about what we're doing in industry and how we're transforming businesses our technology is also utilized for greater good there's no better example of this than the worked by dr. Stephen Hawking it was a sad day on March 14th of this year when dr. Stephen Hawking passed away but not before Intel had a 20-year relationship with dr. Hawking driving breakthrough capabilities innovating with him driving those robust capabilities to the rest of the world one of our Intel engineers an Intel fellow which is the highest technical achievement you can reach at Intel got to spend 10 years with dr. Hawking looking at innovative things they could do together with our technology and his breakthrough innovative thinking so I thought it'd be great to bring up our Intel fellow Lema notch Minh to talk about her work with dr. 
Hawking and what she learned in that experience come on up Elina [Music] great to see you Thanks something going on about the breakthrough breaking boundaries and Intel technology talk about how you use that in your work with dr. Hawking absolutely so the most important part was to really make that technology contextually aware because for people with disability every single interaction takes a long time so whether it was adapting for example the language model of his work predictor to understand whether he's gonna talk to people or whether he's writing a book on black holes or to even understand what specific application he might be using and then making sure that we're surfacing only enough actions that were relevant to reduce that amount of interaction so the tricky part is really to make all of that contextual awareness happen without totally confusing the user because it's constantly changing underneath it so how is that your work involving any open source so you know the problem with assistive technology in general is that it needs to be tailored to the specific disability which really makes it very hard and very expensive because it can't utilize the economies of scale so basically with the system that we built what we wanted to do is really enable unleashing innovation in the world right so you could take that framework you could tailor to a specific sensor for example a brain computer interface or something like that where you could actually then support a different set of users so that makes open-source a perfect fit because you could actually build and tailor and we you spoke with dr. Hawking what was this view of open source is it relevant to him so yeah so Stephen was adamant from the beginning that he wanted a system to benefit the world and not just himself so he spent a lot of time with us to actually build this system and he was adamant from day one that he would only engage with us if we were commit to actually open sourcing the technology that's fantastic and you had the privilege of working with them in 10 years I know you have some amazing stories to share so thank you so much for being here thank you so much in order for us to scale and that's what we're about at Intel is really scaling our capabilities it takes this community it takes this community of diverse capabilities it takes two births thought diverse thought of dr. 
Hawking couldn't be more relevant but we also are proud at Intel about leading efforts of diverse thought like women and Linux women in big data other areas like that where Intel feels that that diversity of thinking and engagement is critical for our success so as we look at Intel not to be encumbered by the past but break boundaries to deliver the technology that you all will go off and do something wonderful with we're going to remain committed to that and I look forward to continue working with you thank you and have a great conference [Applause] thank God now we have one more customer story for you today when you think about customers challenges in the technology landscape it is hard to ignore the public cloud these days public cloud is introducing capabilities that are driving the fastest rate of innovation that we've ever seen in our industry and our next customer they actually had that same challenge they wanted to tap into that innovation but they were also making bets for the long term they wanted flexibility and providers and they had to integrate to the systems that they already have and they have done a phenomenal job in executing to this so please give a warm welcome to Kerry Pierce from Cathay Pacific Kerry come on thanks very much Matt hi everyone thank you for giving me the opportunity to share a little bit about our our cloud journey let me start by telling you a little bit about Cathay Pacific we're an international airline based in Hong Kong and we serve a passenger and a cargo network to over 200 destinations in 52 countries and territories in the last seventy years and years seventy years we've made substantial investments to develop Hong Kong as one of the world's leading transportation hubs we invest in what matters most to our customers to you focusing on our exemplary service and our great product and it's both on the ground and in the air we're also investing and expanding our network beyond our multiple frequencies to the financial districts such as Tokyo New York and London and we're connecting Asia and Hong Kong with key tech hubs like San Francisco where we have multiple flights daily we're also connecting Asia in Hong Kong to places like Tel Aviv and our upcoming destination of Dublin in fact 2018 is actually going to be one of our biggest years in terms of network expansion and capacity growth and we will be launching in September our longest flight from Hong Kong direct to Washington DC and that'll be using a state-of-the-art Airbus a350 1000 aircraft so that's a little bit about Cathay Pacific let me tell you about our journey through the cloud I'm not going to go into technical details there's far smarter people out in the audience who will be able to do that for you just focus a little bit about what we were trying to achieve and the people side of it that helped us get there we had a couple of years ago no doubt the same issues that many of you do I don't think we're unique we had a traditional on-premise non-standardized fragile infrastructure it didn't meet our infrastructure needs and it didn't meet our development needs it was costly to maintain it was costly to grow and it really inhibited innovation most importantly it slowed the delivery of value to our customers at the same time you had the hype of cloud over the last few years cloud this cloud that clouds going to fix the world we were really keen on making sure we didn't get wound up and that so we focused on what we needed we started bottom up with a strategy we knew we wanted to be clouded 
Gnostic we wanted to have active active on-premise data centers with a single network and fabric and we wanted public clouds that were trusted and acted as an extension of that environment not independently we wanted to avoid single points of failure and we wanted to reduce inter dependencies by having loosely coupled designs and finally we wanted to be scalable we wanted to be able to cater for sudden surges of demand in a nutshell we kind of just wanted to make everything easier and a management level we wanted to be a broker of services so not one size fits all because that doesn't work but also not one of everything we want to standardize but a pragmatic range of services that met our development and support needs and worked in harmony with our public cloud not against it so we started on a journey with red hat we implemented Red Hat cloud forms and ansible to manage our hybrid cloud we also met implemented Red Hat satellite to maintain a manager environment we built a Red Hat OpenStack on crimson vironment to give us an alternative and at the same time we migrated a number of customer applications to a production public cloud open shift environment but it wasn't all Red Hat you love heard today that the Red Hat fits within an overall ecosystem we looked at a number of third-party tools and services and looked at developing those into our core solution I think at last count we had tried and tested somewhere past eight different tools and at the moment we still have around 62 in our environment that help us through that journey but let me put the technical solution aside a little bit because it doesn't matter how good your technical solution is if you don't have the culture and the people to get it right as a group we needed to be aligned for delivery and we focused on three core behaviors we focused on accountability agility and collaboration now I was really lucky we've got a pretty fantastic team for whom that was actually pretty easy but but again don't underestimate the importance of getting the culture and the people right because all the technology in the world doesn't matter if you don't have that right I asked the team what did we do differently because in our situation we didn't go out and hire a bunch of new people we didn't go out and hire a bunch of consultants we had the staff that had been with us for 10 20 and in some cases 30 years so what did we do differently it was really simple we just empowered and supported our staff we knew they were the smart ones they were the ones that were dealing with a legacy environment and they had the passion to make the change so as a team we encouraged suggestions and contributions from our overall IT community from the bottom up we started small we proved the case we told the story and then we got by him and only did did we implement wider the benefits the benefit through our staff were a huge increase in staff satisfaction reduction and application and platform outage support incidents risk free and failsafe application releases work-life balance no more midnight deployments and our application and infrastructure people could really focus on delivering customer value not on firefighting and for our end customers the people that travel with us it was really really simple we could provide a stable service that allowed for faster releases which meant we could deliver value faster in terms of stats we migrated 16 production b2c applications to a public cloud OpenShift environment in 12 months we decreased provisioning time from weeks or 
occasionally months we were waiting for hardware two minutes and we had a hundred percent availability of our key customer facing systems but most importantly it was about people we'd built a culture a culture of innovation that was built on a foundation of collaboration agility and accountability and that permeated throughout the IT organization not those just those people that were involved in the project everyone with an IT could see what good looked like and to see what it worked what it looked like in terms of working together and that was a key foundation for us the future for us you will have heard today everything's changing so we're going to continue to develop our open hybrid cloud onboard more public cloud service providers continue to build more modern applications and leverage the emerging technology integrate and automate everything we possibly can and leverage more open source products with the great support from the open source community so there you have it that's our journey I think we succeeded by not being over awed and by starting with the basics the technology was key obviously it's a cool component but most importantly it was a way we approached our transition we had a clear strategy that was actually developed bottom-up by the people that were involved day to day and we empowered those people to deliver and that provided benefits to both our staff and to our customers so thank you for giving the opportunity to share and I hope you enjoy the rest of the summer [Applause] I got one thanks what a great story would a great customer story to close on and we have one more partner to come up and this is a partner that all of you know that's Microsoft Microsoft has gone through an amazing transformation they've we've built an incredibly meaningful partnership with them all the way from our open source collaboration to what we do in the business side we started with support for Red Hat Enterprise Linux on hyper-v and that was truly just the beginning today we're announcing one of the most exciting joint product offerings on the market today let's please give a warm welcome to Paul correr and Scott Scott Guthrie to tell us about it guys come on out you know Scot welcome welcome to the Red Hat summer thanks for coming really appreciate it great to be here you know many surprises a lot of people when we you know published a list of speakers and then you rock you were on it and you and I are on stage here it's really really important and exciting to us exciting new partnership we've worked together a long time from the hypervisor up to common support and now around hybrid hybrid cloud maybe from your perspective a little bit of of what led us here well you know I think the thing that's really led us here is customers and you know Microsoft we've been on kind of a transformation journey the last several years where you know we really try to put customers at the center of everything that we do and you know as part of that you quickly learned from customers in terms of I'm including everyone here just you know you've got a hybrid of state you know both in terms of what you run on premises where it has a lot of Red Hat software a lot of Microsoft software and then really is they take the journey to the cloud looking at a hybrid of state in terms of how do you run that now between on-premises and a public cloud provider and so I think the thing that both of us are recognized and certainly you know our focus here at Microsoft has been you know how do we really meet customers with 
where they're at and where they want to go and make them successful in that journey and you know it's been fantastic working with Paul and the Red Hat team over the last two years in particular we spend a lot of time together and you know really excited about the journey ahead so um maybe you can share a bit more about the announcement where we're about to make today yeah so it's it's it's a really exciting announcement it's and really kind of I think first of its kind in that we're delivering a Red Hat openshift on Azure service that we're jointly developing and jointly managing together so this is different than sort of traditional offering where it's just running inside VMs and it's sort of two vendors working this is really a jointly managed service that we're providing with full enterprise support with a full SLA where the you know single throat to choke if you will although it's collectively both are choke the throats in terms of making sure that it works well and it's really uniquely designed around this hybrid world and in that it supports will support both Windows and Linux containers and it role you know it's the same open ship that runs both in the public cloud on Azure and on-premises and you know it's something that we hear a lot from customers I know there's a lot of people here that have asked both of us for this and super excited to be able to talk about it today and we're gonna show off the first demo of it just a bit okay well I'm gonna ask you to elaborate a bit more about this how this fits into the bigger Microsoft picture and I'll get out of your way and so thanks again thank you for coming here we go thanks Paul so I thought I'd spend just a few minutes talking about wouldn't you know that some of the work that we're doing with Microsoft Asher and the overall Microsoft cloud I didn't go deeper in terms of the new offering that we're announcing today together with red hat and show demo of it actually in action in a few minutes you know the high level in terms of you know some of the work that we've been doing at Microsoft the last couple years you know it's really been around this this journey to the cloud that we see every organization going on today and specifically the Microsoft Azure we've been providing really a cloud platform that delivers the infrastructure the application and kind of the core computing needs that organizations have as they want to be able to take advantage of what the cloud has to offer and in terms of our focus with Azure you know we've really focused we deliver lots and lots of different services and features but we focused really in particular on kind of four key themes and we see these four key themes aligning very well with the journey Red Hat it's been on and it's partly why you know we think the partnership between the two companies makes so much sense and you know for us the thing that we've been really focused on has been with a or in terms of how do we deliver a really productive cloud meaning how do we enable you to take advantage of cutting-edge technology and how do we kind of accelerate the successful adoption of it whether it's around the integration of managed services that we provide both in terms of the application space in the data space the analytic and AI space but also in terms of just the end-to-end management and development tools and how all those services work together so that teams can basically adopt them and be super successful yeah we deeply believe in hybrid and believe that the world is going to be a multi cloud 
and a multi distributed world and how do we enable organizations to be able to take the existing investments that they already have and be able to easily integrate them in a public cloud and with a public cloud environment and get immediate ROI on day one without how to rip and replace tons of solutions you know we're moving very aggressively in the AI space and are looking to provide a rich set of AI services both finished AI models things like speech detection vision detection object motion etc that any developer even at non data scientists can integrate to make application smarter and then we provide a rich set of AI tooling that enables organizations to build custom models and be able to integrate them also as part of their applications and with their data and then we invest very very heavily on trust Trust is sort of at the core of a sure and we now have more compliant certifications than any other cloud provider we run in more countries than any other cloud provider and we really focus around unique promises around data residency data sovereignty and privacy that are really differentiated across the industry and terms of where Iser runs today we're in 50 regions around the world so our region for us is typically a cluster of multiple data centers that are grouped together and you can see we're pretty much on every continent with the exception of Antarctica today and the beauty is you're going to be able to take the Red Hat open shift service and run it on ashore in each of these different locations and really have a truly global footprint as you look to build and deploy solutions and you know we've seen kind of this focus on productivity hybrid intelligence and Trust really resonate in the market and about 90 percent of Fortune 500 companies today are deployed on Azure and you heard Nike talked a little bit earlier this afternoon about some of their journeys as they've moved to a dot public cloud this is a small logo of just a couple of the companies that are on ashore today and what I do is actually even before we dive into the open ship demo is actually just show a quick video you know one of the companies thing there are actually several people from that organization here today Deutsche Bank who have been working with both Microsoft and Red Hat for many years Microsoft on the other side Red Hat both on the rel side and then on the OpenShift side and it's just one of these customers that have helped bring the two companies together to deliver this managed openshift service on Azure and so I'm just going to play a quick video of some of the folks that Deutsche Bank talking about their experiences and what they're trying to get out of it so we could roll the video that'd be great technology is at the absolute heart of Deutsche Bank we've recognized that the cost of running our infrastructure was particularly high there was a enormous amount of under utilization we needed a platform which was open to polyglot architecture supporting any kind of application workload across the various business lines of the third we analyzed over 60 different vendor products and we ended up with Red Hat openshift I'm super excited Microsoft or supporting Linux so strongly to adopting a hybrid approach we chose as here because Microsoft was the ideal partner to work with on constructs around security compliance business continuity as you as in all the places geographically that we need to be we have applications now able to go from a proof of concept to production in three weeks that is already breaking 
records openshift gives us given entities and containers allows us to apply the same sets of processes automation across a wide range of our application landscape on any given day we run between seven and twelve thousand containers across three regions we start see huge levels of cost reduction because of the level of multi-tenancy that we can achieve through containers open ship gives us an abstraction layer which is allows us to move our applications between providers without having to reconfigure or recode those applications what's really exciting for me about this journey is the way they're both Red Hat and Microsoft have embraced not just what we're doing but what each other are doing and have worked together to build open shift as a first-class citizen with Microsoft [Applause] in terms of what we're announcing today is a new fully managed OpenShift service on Azure and it's really the first fully managed service provided end-to-end across any of the cloud providers and it's jointly engineer operated and supported by both Microsoft and Red Hat and that means again sort of one service one SLA and both companies standing for a link firmly behind it really again focusing around how do we make customers successful and as part of that really providing the enterprise-grade not just isolates but also support and integration testing so you can also take advantage of all your rel and linux-based containers and all of your Windows server based containers and how can you run them in a joint way with a common management stack taking the advantage of one service and get maximum density get maximum code reuse and be able to take advantage of a containerized world in a better way than ever before and make this customer focus is very much at the center of what both companies are really centered around and so what if I do be fun is rather than just talk about openshift as actually kind of show off a little bit of a journey in terms of what this move to take advantage of it looks like and so I'd like to invite Brendan and Chris onstage who are actually going to show off a live demo of openshift on Azure in action and really walk through how to provision the service and basically how to start taking advantage of it using the full open ship ecosystem so please welcome Brendan and Chris we're going to join us on stage for a demo thanks God thanks man it's been a good afternoon so you know what we want to get into right now first I'd like to think Brandon burns for joining us from Microsoft build it's a busy week for you I'm sure your own stage there a few times as well you know what I like most about what we just announced is not only the business and technical aspects but it's that operational aspect the uniqueness the expertise that RedHat has for running OpenShift combined with the expertise that Microsoft has within Azure and customers are going to get this joint offering if you will with you know Red Hat OpenShift on Microsoft Azure and so you know kind of with that again Brendan I really appreciate you being here maybe talk to the folks about what we're going to show yeah so we're going to take a look at what it looks like to deploy OpenShift on to Azure via the new OpenShift service and the real selling point the really great part of this is the the deep integration with a cloud native app API so the same tooling that you would use to create virtual machines to create disks trade databases is now the tooling that you're going to use to create an open chip cluster so to show you this first we're 
going to create a resource group here. We're going to create that resource group in East US using the az tool; that's the Azure command-line tooling. A resource group is sort of a folder on Azure that holds all of your stuff, so that's going to come back in a second. I've created my resource group in East US, and now we're going to use that exact same tool, calling into Azure APIs, to provision an OpenShift cluster. So here we go: we have az openshift, that's our new command-line tool, putting it into that resource group, and I'm going to put it into East US. All right, so it's going to take a little bit of time to deploy that OpenShift cluster; it's doing a bunch of work behind the scenes, provisioning all kinds of resources as well as credentials to access a bunch of different Azure APIs. So are we actually able to see this? Yeah, in just a second we can cut over to that resource group and reload. So, Brendan, while we're waiting, the beauty of what the teams have been doing together already is the fact that OpenShift is now a first-class citizen, as it were, within Azure. So I presume not only can I do a deployment, but I can do things like scale and check my credentials, and pretty much everything that I could do with any other service? That's exactly right, anything that you were used to doing via the... my computer has locked up... there we go, the demo gods are totally with me... oh no, I hit reload. That was just evil timing on the house. This is another use for operators, as we talked about earlier today. That's right, my dashboard should be coming up. Do I dare click on something? There we go, good job. What's really interesting about this: I've also heard that it deploys in as little as five to six minutes, which is really good for customers who want to get up and running with it. All right, there we go, there it is, we managed to make it. That shows that it's real, right? You can see the sweat coming off of me. There you can see the various resources that are being created in order to create this OpenShift cluster: virtual machines, disks, all of the pieces provisioned for you automatically via that one single command-line call.
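As a rough sketch of those two provisioning steps (create a resource group, then create the cluster in it with the same az tooling), the script below mirrors what Brendan typed. The resource names are invented, and the `az openshift` invocation is a best-effort reconstruction of the preview-era command rather than an authoritative reference.

```python
# Illustrative sketch of the provisioning flow shown in the demo: create a
# resource group, then create a managed OpenShift cluster inside it.
# "demo-rg" and "demo-cluster" are made-up names; the az openshift syntax
# may differ in current releases of the CLI.
import subprocess

def az(*args):
    """Run an az CLI command and fail loudly if it errors."""
    cmd = ["az", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. A resource group is the "folder" on Azure that will hold everything.
az("group", "create", "--name", "demo-rg", "--location", "eastus")

# 2. Provision the managed OpenShift cluster into that group. Behind the
#    scenes this creates VMs, disks, networking, and the credentials the
#    cluster needs to call other Azure APIs.
az("openshift", "create", "--resource-group", "demo-rg", "--name", "demo-cluster")
```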
Now, of course, it takes a few minutes to create the cluster, so in order to show the other side of that integration, the integration between OpenShift and Azure, I'm going to cut over to an OpenShift cluster that I've already created. So here you can see my OpenShift cluster that's running on Microsoft Azure. I'm going to actually log in over here, and the first sign you're going to see of the integration is that it's actually using my credentials, my login, going through Active Directory and any corporate policies that I may have around smart cards, two-factor auth, anything like that, to authenticate myself to that OpenShift cluster. So I'll accept that, and now we're going to load up the OpenShift web console. Now this looks familiar to me. Oh yeah, if anybody's used OpenShift out there, this is the exact same console, and what we're going to show, though, is how this console, via the Open Service Broker and the Open Service Broker implementation for Azure, integrates natively with OpenShift. So we can go down here, and we can actually see: I want to deploy a database, and I'm going to deploy Mongo as the key-value store that I'm going to use. But as we talk about management, and having an OpenShift cluster that's managed for you, I don't really want to have to manage my database either, so I'm actually going to use Cosmos DB. It's a native Azure service, a multi-model database that offers me the ability to access my data in a variety of different formats, including MongoDB, fully managed and replicated around the world; a pretty incredible service. So I'm going to go ahead and create that. Now, Brendan, what's interesting to me is that we talked about the operational aspects, and clearly it's not you and I running the clusters, but you do need a way to interface with it, and when customers are able to deploy this, all of this is out of the box; there's nothing additional to configure. That's exactly right, this is what you get when you use that tool to create that OpenShift cluster, this is what you get with all of that integration. Okay, great, step through here and go ahead; we don't have any IP ranges; there we go. All right, and we create that binding. And so now, behind the scenes, OpenShift is integrated with the Azure APIs, with all of my credentials, to go ahead and create that distributed database. Once it's done provisioning, all of the credentials necessary to access the database are going to be automatically populated into Kubernetes, available for me inside of OpenShift via service discovery, to access from my application without any further work. So I think that really shows not only the power of integrating OpenShift with an Azure-based API, but actually the power of integrating Azure APIs inside of OpenShift, to make a truly seamless experience for managing and deploying your containers across a variety of different platforms.
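To make that last point concrete, here is a minimal sketch of what an application might do once the broker has finished the binding and the connection details have been surfaced to the pod, assuming they are exposed as environment variables. The variable names and the database name are invented; the real keys depend on how the binding secret is mapped into the deployment.

```python
# Minimal sketch of an application consuming broker-injected credentials.
# Assumes the ServiceBinding's secret has been mapped into the pod's
# environment; the variable names below are invented for illustration.
import os
from pymongo import MongoClient

# Connection details populated by the binding, not hard-coded by the developer.
host = os.environ["COSMOSDB_HOST"]            # e.g. <account>.documents.azure.com
port = int(os.environ.get("COSMOSDB_PORT", "10255"))
user = os.environ["COSMOSDB_USERNAME"]
password = os.environ["COSMOSDB_PASSWORD"]

# Cosmos DB's MongoDB-compatible endpoint requires TLS; pymongo talks to it
# like any other MongoDB server.
client = MongoClient(host, port, username=user, password=password,
                     tls=True, retryWrites=False)

db = client["demo"]
db["greetings"].insert_one({"message": "hello from OpenShift on Azure"})
print(db["greetings"].count_documents({}))
```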
Hey, Brendan, this is great. I know you've got a flight to catch, because I think you're back on stage in a few hours, but I really appreciate you joining us today. Absolutely, I look forward to seeing what else we do. Thank you so much. Thanks, guys. Matt, you want to come back on up? Thanks a lot, guys. If you have never had the opportunity to do a live demo in front of 8,000 people, it'll give you a new appreciation for standing up there and doing it, and that was really good. Every time I get the chance to take a step back and think about the technology we have at our command today, I'm in awe. Just the progress over the last 10 or 20 years is incredible, and to think about what might come in the next 10 or 20 years really is unthinkable; forget even 10 years, think about what might come in the next five years, even the next two years. That can create a lot of uncertainty about what's to come, but I am certain about one thing, and that is: if ever there was a time when any idea is achievable, it is now. Just think about what you've seen today across every aspect of open hybrid cloud. You have the world's infrastructure at your fingertips, and it's not stopping. You've heard about the innovation of open source, how fast that's evolving and improving this capability. You've heard this afternoon from an entire technology ecosystem that's ready to help you on this journey, and you've heard from customer after customer that has already started that journey and the successes they've had. One of the neat parts about this afternoon: later this week you will actually get to put your hands on all of this technology together in our live audience demo. This is what Summit is all about for us. It's a chance to bring together the technology experts that you can work with to help formulate how to pull off those ideas. We have the chance to bring together technology experts, our customers, and our partners, and really create an environment where everyone can experience the power of open source; that same spark I talked about from when I was at IBM, where I understood the potential that open source had for enterprise customers. We want to create the environment where you can have your own spark, where you can have that same inspiration. In tomorrow's keynote, you will actually hear a story about how open source is changing medicine as we know it and literally saving lives. It is a great example of expanding the ideas of what might be possible that we came into this event with. So let's make this the best Summit ever. Thank you very much for being here. Let's kick things off right: head down to the Welcome Reception in the expo hall, and please enjoy the Summit. Thank you all so much. [Music]
Arvind Krishna, IBM | Red Hat Summit 2018
brought to you by Red Hat well welcome back everyone this two cubes exclusive coverage here in San Francisco California for Red Hat summit 20:18 I'm John Ferreira co-host of the cube with my analyst co-host this week John Troy year co-founder of The Reckoning advisory services and our next guest is Arvind Krishna who's the senior vice president of hybrid cloud at IBM Reese and director of IBM Research welcome back to the cube good to see you hey John and John Wade you guys just kick it confuse get to John's here great to have you on because you guys are doing some deals with Red Hat obviously the leader at open source you guys are one of them as well contributing to Linux it's well documented the IBM has three books on your role relationship to Linux so yeah check check but you guys are doing a lot of work with cloud in a way that you know frankly is very specific to IBM but also has a large industry impact not like the classic cloud so I want to get who tie the knot here and put that together so first I got to ask you take a minute to talk about why you're here with red hat what's the update with IBM with Red Hat yeah great John thanks and thanks for giving me the time I'm going to talk about it in two steps one I'm going to talk about a few common Tenace between IBM and Red Hat and then I'll go from there to the specific news so for the context we both believe in Linux I think that's easy to state we both believe in containers I think that's the next thing to state and we'll come back and talk about containers because this is a world containers are linked to Linux containers are linked to these technologies called kubernetes containers are linked to how you make workloads portable across many different environments both private and public then I go on from there to say and we both believe in hybrid hybrid meaning that people want the ability to run their workload wherever they want beat on a private cloud beat on a public cloud and do it without having to rewrite everything as you go across okay so let's just average those are the market needs so then you come back and say an IBM as a great portfolio of middleware names like WebSphere and db2 and I can go on and on and rather has a great footprint of Linux in the enterprise so now you say we got the market need of hybrid we got these two things which between them of tens of millions maybe hundreds of millions of endpoints how do you make that need get fulfilled by this and that's what we just announced here so we announced that IBM middleware will run containerized on RedHat containers on Red Hat Enterprise Linux in addition we said IBM cloud private which is the ability to bring all of the IBM middleware in a sort of a cloud friendly form right you click and you install it keeps itself up it doesn't go down it's elastic in a set of technologies we call IBM cloud private running in turn on Red Hat open shift container service on Red Hat Linux so now for the first time if you say I want private I want public I want to go here I want to go there you have a complete certified stack that is complete I think I can say we are unique in the industry and giving you this this and this is where this is kind of where the fruit comes on the tree off the tree for you guys you know we've been good following you guys for years you know every where's the cloud strategy and first well it's not like you don't have a cloud strategy you have cloud products right so you have to deliver the goods you've got the system replays the market need we all knows the 
hybrid cloud multi-cloud choice cetera et cetera right you take Red Hat's footprint your capabilities your combined install base is foundational right so and nothing needs to change there's no lifting shift there's no rip and replace you can it's out there it's foundational now on top of it is where the action is that's what we're that's what were you kind of getting at right that's correct so so we can go into somebody there running let's say a massive online banking application or the running a reservation system is using technologies from Asus using Linux underneath and today it's all a bunch of piece parts you have a huge complex stuff it's all hard wired and rigidly nailed down to the floor in a few places and I can say hey I'll take the application I don't have to rewrite the application I can containerize it I can put it here and that same app now begins to work but in a way that's a lot more fluid in elastic well by the way I want to do a bit more work I want to expose a bit of it up as micro-services I want search Samia you can go do that you want to fully make it microservices enable to be able to make it as little components and digestible you can do that so you can take it in sort of bite-sized chunks and go from one to the other at the pace that you want and that's game-changing yeah that's what I really like about this announcement it really brings the best of breed together right you did you know there's a lot of talk about containers and legacy and we you know we've been talking about what goes where and do you have to break everything up like you were just saying but the the announcement today you know WebSphere the this the you know a battle-tested huge enterprise scale component db2 those things containerized and also in a framework like with IBM we either with IBM Microsoft things or others right that's um that's a huge endorsement for open shipped as a platform absolutely it is and look we would be remiss if we didn't talk a little bit I mean we use the word containers and containers a lot yes you're right containers is a really really important technology but what containers enable is much more than prior attempts such as vm's and all have done containers really allow you to say hey I saw the security problem I solved the patching problem the restart problem all those problems that lie around the operations of a typical enterprise can get solved with containers VM sold a lot about isolating the infrastructure but they didn't solve as John was saying the top half of the stack and that's I think the huge power here yeah I want to just double click on that because I think the containers thing is instrument because you know first of all being in the media and loving what we do we're kind of a new kind of media company but traditional media has been throwing IBM under the bus and saying oh you know old guard and all these things but here's the thing you don't have to change anything you could containers you can essentially wrap it up and then bring a micro-services architecture into it so you can actually leverage at cloud scale so what interests me is is that you can move instantly value proposition wise pre-existing market cloud if I if you will with operational capabilities and this is where I like the cloud private so I want to kind of go with the ever second if I have a need to take what I have an IBM when it's WebSphere now I got developers I got installed base I'd have to put a migration plan away I containerize it thank you very much I do some cloud native stuff 
but I want to make it private my use case is very specific maybe it's confidential maybe it's like a government region whatever I can create a cloud operations is that right I can cloud apply it and run it absolutely correct so when you look at about private to go down that path we said well private allows you to run on your private infrastructure but I want all these abilities you just described John I want to be able to do micro services I want to be able to scale up and down I want to be able to say operations happen automatically so it gives you all that but in the private without having to go all the way to the public so if you cared a lot about you're in a regulated industry because you went down government or confidential data or you say this data is so sensitive I don't really I'm not going to take the risk of it being anywhere else it absolutely gives you that ability to go do that and and that is what we brought to our private to the market for and then you combine it with open shift and now you get the powers of both together so you guys essentially have brought to the table the years of effort with bluemix all that good stuff going on you can bring any he'd actually run this in any industry vertical pretty much right absolutely so if you look at what what the past has been for the entire industry it has been a lot about constructing a public cloud not just to us but us and our competition and a public cloud has certain capabilities and it has certain elasticity it has a global footprint but it does not have a footprint that's in every zip code or in every town or in every city that song ought to happen to the public cloud so we say it's a hybrid world meaning that you're going to run some bulk loads on a public cloud and like to run some bulk loads on a private and I'd like to have the ability that I don't have to pre decide which is where and that is what the containers the micro services the open ship that combination all gives you to say you don't need to pre decide you fucker you rewrite the workload on to this and then you can decide where it runs well I was having this conversation with some folks at and recent Amazon Web Services conference to say well if you go to cloud operations then the on-prem is essentially the edge it's not necessary then the definition of on-premise really doesn't even exist so if you have cloud operations in a way what is the data center then it's just a connected tissue that's right it's the infrastructure which you set up and then at that point the software manages the data center as opposed to anything else and that's kind of being the goal that we are all being wanted it sounds like this is visibility into IBM's essentially execution plan from day one we've been seeing in connecting the dots having the ability to take either pre-existing resources foundational things like red hat or whatnot in the enterprise not throwing it away building on top of it and having a new operating model with software with elastic scale horizontally scalable synchronous all those good things enabling micro search with kubernetes and containers now for the first time I could roll out new software development life cycles in a cloud native environment without foregoing legacy infrastructure and investment absolutely and one more element and if you want to insert some public cloud services into the environment beat in private or in public you can go do that for example you want to insert a couple of AI services into your middle of your application you can go do that 
>> Talking about people for a second, though: the titles we haven't mentioned, the CIO, you know, business leaders, business unit leaders. How are they looking at digital transformation and business transformation in your client base as you go out and talk to them? >> So let's take a hypothetical bank. Every bank today is looking at a simple question: how do I improve my customer experience? And by customer experience they really do mean digital customer experience, to make it very tangible. What they mean by that is, how do I get my end customer engaged with me through an app? The app is probably on a device like this, some smartphone, we won't say which one. So how do you do that? They say, well, you want to check your balance, you obviously want to maybe look at your credit card, you want to do all those things, the same things we do today. So that application exists, and there is not much point in rewriting it. You might dress the UI up, but it's an app that exists. Then you say, but I also want to give you information that's useful to you in the context of what you're doing. I want to say you can get a loan, not a 30-day loan but a ten-second loan, and I want to make that offer to you in the middle of you browsing credit cards. All of those are new customer experiences. Where do you construct those apps, how do you mix and match, how do you use all the capabilities along with the data? You have to go do that, and what we are now trying to say is, here is a platform that you can do all of that on, across the complete lifecycle. You mentioned the development lifecycle, but I have to add the data lifecycle as well: here is the versioning, here are my AI models, all those things built into one platform. >> And scale is huge, the new competitive advantage, and you guys are enabling that. So I have to ask you the question on multi-cloud. As people start building out cloud on-prem and with public cloud, the things you're laying out, I can see that going on for a while, a lot of work being done there. We're seeing that; Wikibon's true private cloud report, which I thought was truly telling, showed a lot of growth there, and it's still not going away. Public cloud has certainly grown, the numbers are clear. However, the word multi-cloud is being kicked around. I think it's more of a future state, obviously, but people have multiple clouds and will have relationships with multiple clouds; no one's going to have just one cloud. It's not a winner-take-all game, more winner-take-most, but you're going to have multiple clouds. What does multi-cloud mean to you guys in your architecture? Is that moving workloads in real time based upon spot-pricing indexes, or is it just co-locating on clouds and saying I've got SAP on that cloud and that app on this cloud? Control planes, these are architectural questions. What the hell is multi-cloud? >> So there is a today, and then there is a tomorrow, and then there is a long future state, right? So let's take today. Let's take IBM: we're on Salesforce, we're on ServiceNow, we're on Workday, we're on SuccessFactors, and all these are different clouds. We run our own public cloud, we run our own private cloud, and we have traditional data centers, and we might have some other clouds also through apps that we bought that we don't even know about, okay? I think every one of our clients is like this. So multi-cloud is here today, I begin with that first simple statement, and I need to connect the data across all of it.
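The ten-second loan idea above, inserting a call to a deployed model into the middle of an existing app flow, can be sketched in a few lines. This is a hedged illustration only: the scoring URL, payload shape, and response fields are hypothetical stand-ins, not a real bank's or IBM's API.

```python
from typing import Optional

import requests

# Hypothetical endpoint for a deployed risk/offer model.
SCORING_URL = "https://scoring.example.com/v1/loan-offer"

def offer_while_browsing(customer_id: str, browsing_context: dict) -> Optional[dict]:
    """Ask the deployed model for a pre-approved offer while the customer is
    still browsing credit cards; return None if nothing qualifies."""
    response = requests.post(
        SCORING_URL,
        json={"customer_id": customer_id, "context": browsing_context},
        timeout=2,  # an offer is only useful if it comes back in seconds, not days
    )
    response.raise_for_status()
    result = response.json()
    return result if result.get("approved") else None
```

The existing balance-and-statement app stays as it is; the new experience is a thin call into a model that the data side of the platform trains, versions, and serves.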
And I need to keep it connected even as things come and go. The next step: I think nobody is going to have only one public cloud. With the big public clouds, most people are going to have two, if not more. That's today and tomorrow. Your channel partners have clouds, by the way, and your global SIs all have clouds; there's a cloud for crying out loud, right? Then you go into the aspirational state, and that may be the one you said, where people do spot pricing. But even if I step back from spot pricing and from being completely dynamic, and from worrying about the network and where I reach, I may decide I have this app and I run it on private, but I don't have all the infrastructure and I want to burst it today. If I'm going to burst it, I have to decide which public cloud and how I get there. That's a problem of today, we're doing that, and that is why I think multi-cloud is here now, not at some future point. The problem statement there is latency, managing service-level agreements between clouds and so on and so forth, and governance: where does my data go, because there may be regulatory reasons that decide where the data can flow. >> Great point about the cloud, I never thought about it that way; it's a good illustration. I would also say I see the same argument in the database world. Not everyone has DB2, not everyone has Oracle, and no one has just one database; databases are everywhere, you have databases as part of IoT devices now. So no one makes a single decision on the database, and with cloud you're seeing a similar dynamic. It's the glue layer that interests me: how do you bring them together? So, holistically, looking at the 20-mile stare into the future, what is the integration strategy long term? If you look at a distributed system or an operating system, there has to be an architectural guiding principle for integration. >> Well, that's 30 years now in the making. Take networking: everybody had their own networking standards in, let's say, the '80s, though it probably goes back to the '70s, right? You had SNA, TCP/IP, NetBIOS, DECnet, and it goes on and on, and in the end it's TCP/IP that won out as the glue. Others, by the way, survived, but in pockets; TCP/IP was the glue. Fast-forward 15 years beyond that and HTTP became the glue; we call that the internet. Then you fast-forward again and ask, how do you make applications portable? And I would turn around and tell you that containers on Linux, with Kubernetes as the orchestration, is that glue layer now. In order to make it so, just as with TCP/IP it wasn't enough to say TCP/IP, you needed routing tables, you needed DNS, you needed name repositories, you needed all those things, similarly you need all of those here. I've called those catalogs and automation. So that's the glue layer that makes all of this work. >> This is important. I love this conversation, because I've been ranting on this on theCUBE for years, and you've nailed it: a new stack is developing. DNS was old internet infrastructure; cloud infrastructure at global scale is seeing things like network effects. We see blockchain and token economics, multiple databases, unstructured data, a whole plethora of new things happening that are building on top of, say, HTTP. >> Correct. And this is the new opportunity, the new platform which is emerging, and it's going to enable businesses to operate, as you said, at scale, to be very digital, to be very nimble. Application lifecycles are not always going to be months; they're going to come down to days.
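The "catalogs and automation" point, that containers need an equivalent of DNS and routing tables, can be illustrated with a toy name-to-location registry. The class, workload names, and endpoints below are invented for illustration; a real catalog would also carry policy, versions, and credentials.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class WorkloadRecord:
    cloud: str      # e.g. "private-openshift" or "public-cloud-a" (hypothetical names)
    endpoint: str   # where the workload is currently reachable

class WorkloadCatalog:
    """A toy name-to-location registry: the role DNS and routing tables played
    for TCP/IP, but for containerized workloads spread across clouds."""

    def __init__(self) -> None:
        self._records: Dict[str, WorkloadRecord] = {}

    def register(self, name: str, record: WorkloadRecord) -> None:
        self._records[name] = record

    def resolve(self, name: str) -> WorkloadRecord:
        # Callers ask for a workload by name and never hard-code where it runs.
        return self._records[name]

catalog = WorkloadCatalog()
catalog.register("payments-api", WorkloadRecord("private-openshift", "https://payments.internal.example.com"))
catalog.register("fraud-scoring", WorkloadRecord("public-cloud-a", "https://fraud.example.com"))
print(catalog.resolve("fraud-scoring").cloud)  # -> public-cloud-a
```

Swap where a workload runs and only the catalog entry changes; callers keep resolving the same name, which is the same decoupling DNS gave the internet.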
And this is what gets enabled. >> So I want you to give your opinion, personal or IBM or whatever perspective, because I think you nailed the glue layer and the new stack. This new glue layer, and you made reference to things like HTTP and TCP/IP, changed the industry landscape: wealth creation, new brands emerged, companies we had never heard of emerged out of this, and we're all using them today. We expect a new set of brands to emerge and new technologies to emerge. In your expert opinion, how gigantic is this wave of new innovation going to be? You've seen many waves before. In your mind's eye, what are you expecting? Share your insight into how big a shift and wave this is going to be, and add some color to that. >> If I take a shorter-term and then a longer-term view: in the short term, I think we've said this is on the order of 100 billion dollars. That's not just our estimate; I think even Gartner estimated about the same number. That will be the amount of opportunity for new technologies in what we've been describing, and that, I think, is the short term. If I go longer term, I think as much as a half, but at least a quarter, of the complete IT market is going to shift onto these technologies. So the winners are those that make the shift, and by exclusion the losers are those who don't make the shift fast enough as the market moves. >> That's interesting. We used to look at certain segments going back years: oh, this company is re-platforming, that one is re-platforming, they're doing lift and shift and all this stuff. What you're talking about here is so game-changing because the industry is re-platforming. It's not just a company, it's an industry. >> That's right. And I think the internet era of 1995, to put a point on it, is perhaps the easiest analogy to what is happening. Not the emergence of cloud, not the emergence of all that; I think those were small steps. What we're talking about now is back to that 1995 moment. >> Every vertical is upgrading its stack across the board, from e-commerce to whatever. >> That's right. It's completely modernizing. >> Around cloud, what we call digital transformation, in a sense. >> Yes. I'm not a big fan of the word, but I understand what you mean. >> Great insight. Thanks for coming on theCUBE and sharing; we didn't even get to some of the other good stuff. But IBM and Red Hat are doing some great stuff, obviously foundational. I mean, Red Hat is a tier-one, first-class citizen in every single enterprise and software environment, and open source runs the world now. You guys are no stranger to Linux, with the first billion-dollar investment going back, so you have a heritage there. Congratulations on the relationship; it goes back to about '99. >> Yeah. >> And I love the strategy, hybrid cloud, here at IBM. It's theCUBE bringing you all the action here in San Francisco. I'm John Furrier with John Troyer; more live coverage, stay with us here on theCUBE, we'll be right back.
**Summary and sentiment analysis are not shown because of an improper transcript.**
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Arvind Krishna | PERSON | 0.99+ |
John Wade | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
John Ferreira | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
30-day | QUANTITY | 0.99+ |
ten-second | QUANTITY | 0.99+ |
10 second | QUANTITY | 0.99+ |
20 mile | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
100 billion dollars | QUANTITY | 0.99+ |
tens of millions | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
1995 | DATE | 0.99+ |
15 years | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Asus | ORGANIZATION | 0.99+ |
30 years | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
tomorrow | DATE | 0.99+ |
Red Hat Linux | TITLE | 0.99+ |
hundreds of millions | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
RedHat | TITLE | 0.99+ |
two things | QUANTITY | 0.99+ |
Red Hat Enterprise Linux | TITLE | 0.99+ |
Red Hat | TITLE | 0.98+ |
John Troy | PERSON | 0.98+ |
first time | QUANTITY | 0.98+ |
Red Hat | ORGANIZATION | 0.98+ |
two steps | QUANTITY | 0.98+ |
WebSphere | TITLE | 0.98+ |
this week | DATE | 0.98+ |
two cubes | QUANTITY | 0.98+ |
three books | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
70s | DATE | 0.95+ |
first billion dollar | QUANTITY | 0.95+ |
San Francisco California | LOCATION | 0.95+ |
bluemix | ORGANIZATION | 0.95+ |
first | QUANTITY | 0.94+ |
one more element | QUANTITY | 0.93+ |
80s | DATE | 0.92+ |
red hat | TITLE | 0.92+ |
Red Hat | EVENT | 0.92+ |
IBM Research | ORGANIZATION | 0.92+ |
Willie | PERSON | 0.91+ |
every zip code | QUANTITY | 0.89+ |
Amazon Web Services | EVENT | 0.88+ |
one platform | QUANTITY | 0.88+ |
Joel Horwitz, IBM | IBM CDO Summit Spring 2018
(techno music) >> Announcer: Live, from downtown San Francisco, it's theCUBE. Covering IBM Chief Data Officer Strategy Summit 2018. Brought to you by IBM. >> Welcome back to San Francisco everybody, this is theCUBE, the leader in live tech coverage. We're here at the Parc 55 in San Francisco covering the IBM CDO Strategy Summit. I'm here with Joel Horwitz who's the Vice President of Digital Partnerships & Offerings at IBM. Good to see you again Joel. >> Thanks, great to be here, thanks for having me. >> So I was just, you're very welcome- It was just, let's see, was it last month, at Think? >> Yeah, it's hard to keep track, right. >> And we were talking about your new role- >> It's been a busy year. >> the importance of partnerships. One of the things I want to, well let's talk about your role, but I really want to get into, it's innovation. And we talked about this at Think, because it's so critical, in my opinion anyway, that you can attract partnerships, innovation partnerships, startups, established companies, et cetera. >> Joel: Yeah. >> To really help drive that innovation, it takes a team of people, IBM can't do it on its own. >> Yeah, I mean look, IBM is the leader in innovation, as we all know. We're the market leader for patents, that we put out each year, and how you get that technology in the hands of the real innovators, the developers, the longtail ISVs, our partners out there, that's the challenging part at times, and so what we've been up to is really looking at how we make it easier for partners to partner with IBM. How we make it easier for developers to work with IBM. So we have a number of areas that we've been adding, so for example, we've added a whole IBM Code portal, so if you go to developer.ibm.com/code you can actually see hundreds of code patterns that we've created to help really any client, any partner, get started using IBM's technology, and to innovate. >> Yeah, and that's critical, I mean you're right, because to me innovation is a combination of invention, which is what you guys do really, and then it's adoption, which is what your customers are all about. You come from the data science world. We're here at the Chief Data Officer Summit, what's the intersection between data science and CDOs? What are you seeing there? >> Yeah, so when I was here last, it was about two years ago in 2015, actually, maybe three years ago, man, time flies when you're having fun. >> Dave: Yeah, the Spark Summit- >> Yeah Spark Technology Center and the Spark Summit, and we were here, I was here at the Chief Data Officer Summit. And it was great, and at that time, I think a lot of the conversation was really not that different than what I'm seeing today. Which is, how do you manage all of your data assets? I think a big part of doing good data science, which is my kind of background, is really having a good understanding of what your data governance is, what your data catalog is, so, you know we introduced the Watson Studio at Think, and actually, what's nice about that, is it brings a lot of this together. So if you look in the market, in the data market, today, you know we used to segment it by a few things, like data gravity, data movement, data science, and data governance. And those are kind of the four themes that I continue to see. 
And so outside of IBM, I would contend that those are relatively separate kind of tools that are disconnected, in fact Dinesh Nirmal, who's our engineer on the analytic side, Head of Development there, he wrote a great blog just recently, about how you can have some great machine learning, you have some great data, but if you can't operationalize that, then really you can't put it to use. And so it's funny to me because we've been focused on this challenge, and IBM is making the right steps, in my, I'm obviously biased, but we're making some great strides toward unifying the, this tool chain. Which is data management, to data science, to operationalizing, you know, machine learning. So that's what we're starting to see with Watson Studio. >> Well, I always push Dinesh on this and like okay, you've got a collection of tools, but are you bringing those together? And he flat-out says no, we developed this, a lot of this from scratch. Yes, we bring in the best of the knowledge that we have there, but we're not trying to just cobble together a bunch of disparate tools with a UI layer. >> Right, right. >> It's really a fundamental foundation that you're trying to build. >> Well, what's really interesting about that, that piece, is that yeah, I think a lot of folks have cobbled together a UI layer, so we formed a partnership, coming back to the partnership view, with a company called Lightbend, who's based here in San Francisco, as well as in Europe, and the reason why we did that, wasn't just because of the fact that Reactive development, if you're not familiar with Reactive, it's essentially Scala, Akka, Play, this whole framework, that basically allows developers to write once, and it kind of scales up with demand. In fact, Verizon actually used our platform with Lightbend to launch the iPhone 10. And they show dramatic improvements. Now what's exciting about Lightbend, is the fact that application developers are developing with Reactive, but if you turn around, you'll also now be able to operationalize models with Reactive as well. Because it's basically a single platform to move between these two worlds. So what we've continued to see is data science kind of separate from the application world. Really kind of, AI and cloud as different universes. The reality is that for any enterprise, or any company, to really innovate, you have to find a way to bring those two worlds together, to get the most use out of it. >> Fourier always says "Data is the new development kit". He said this I think five or six years ago, and it's barely becoming true. You guys have tried to make an attempt, and have done a pretty good job, of trying to bring those worlds together in a single platform, what do you call it? The Watson Data Platform? >> Yeah, Watson Data Platform, now Watson Studio, and I think the other, so one side of it is, us trying to, not really trying, but us actually bringing together these disparate systems. I mean we are kind of a systems company, we're IT. But not only that, but bringing our trained algorithms, and our trained models to the developers. So for example, we also did a partnership with Unity, at the end of last year, that's now just reaching some pretty good growth, in terms of bringing the Watson SDK to game developers on the Unity platform. So again, it's this idea of bringing the game developer, the application developer, in closer contact with these trained models, and these trained algorithms. And that's where you're seeing incredible things happen. 
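To make "operationalize the model" tangible outside any particular vendor stack, here is a minimal sketch of exposing a trained model as an HTTP scoring endpoint that application developers can call. It assumes a scikit-learn model saved as model.pkl and uses Flask; the file name, route, and feature format are assumptions for illustration, and this is not the Watson Studio or Lightbend deployment path itself.

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical artifact produced by the data-science side of the house.
model = joblib.load("model.pkl")

@app.route("/score", methods=["POST"])
def score():
    # Application developers send features; training and versioning stay on the model side.
    payload = request.get_json()
    features = [payload["features"]]          # expects a flat list of numeric features
    prediction = model.predict(features)[0]   # assumes a numeric prediction (regressor or probability)
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Packaged in a container, an endpoint like this is where the application world and the data-science world meet: the app calls a URL, while training, versioning, and governance remain on the data side.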
So for example, Star Trek Bridge Crew, which I don't know how many Trekkies we have here at the CDO Summit. >> A few over here probably. >> Yeah, a couple? They're using our SDK in Unity, to basically allow a gamer to use voice commands through the headset, through a VR headset, to talk to other players in the virtual game. So we're going to see more, I can't really disclose too much what we're doing there, but there's some cool stuff coming out of that partnership. >> Real immersive experience driving a lot of data. Now you're part of the Digital Business Group. I like the term digital business, because we talk about it all the time. Digital business, what's the difference between a digital business and a business? What's the, how they use data. >> Joel: Yeah. >> You're a data person, what does that mean? That you're part of the Digital Business Group? Is that an internal facing thing? An external facing thing? Both? >> It's really both. So our Chief Digital Officer, Bob Lord, he has a presentation that he'll give, where he starts out, and he goes, when I tell people I'm the Chief Digital Officer they usually think I just manage the website. You know, if I tell people I'm a Chief Data Officer, it means I manage our data, in governance over here. The reality is that I think these Chief Digital Officer, Chief Data Officer, they're really responsible for business transformation. And so, if you actually look at what we're doing, I think on both sides is we're using data, we're using marketing technology, martech, like Optimizely, like Segment, like some of these great partners of ours, to really look at how we can quickly A/B test, get user feedback, to look at how we actually test different offerings and market. And so really what we're doing is we're setting up a testing platform, to bring not only our traditional offers to market, like DB2, Mainframe, et cetera, but also bring new offers to market, like blockchain, and quantum, and others, and actually figure out how we get better product-market fit. What actually, one thing, one story that comes to mind, is if you've seen the movie Hidden Figures- >> Oh yeah. >> There's this scene where Kevin Costner, I know this is going to look not great for IBM, but I'm going to say it anyways, which is Kevin Costner has like a sledgehammer, and he's like trying to break down the wall to get the mainframe in the room. That's what it feels like sometimes, 'cause we create the best technology, but we forget sometimes about the last mile. You know like, we got to break down the wall. >> Where am I going to put it? >> You know, to get it in the room! So, honestly I think that's a lot of what we're doing. We're bridging that last mile, between these different audiences. So between developers, between ISVs, between commercial buyers. Like how do we actually make this technology, not just accessible to large enterprise, which are our main clients, but also to the other ecosystems, and other audiences out there. >> Well so that's interesting Joel, because as a potential partner of IBM, they want, obviously your go-to-market, your massive company, and great distribution channel. But at the same time, you want more than that. You know you want to have a closer, IBM always focuses on partnerships that have intrinsic value. So you talked about offerings, you talked about quantum, blockchain, off-camera talking about cloud containers. >> Joel: Yeah. 
>> I'd say cloud and containers may be a little closer than those others, but those others are going to take a lot of market development. So what are the offerings that you guys are bringing? How do they get into the hands of your partners? >> I mean, the commonality with all of these, all the emerging offerings, if you ask me, is the distributed nature of the offering. So if you look at blockchain, it's a distributed ledger. It's a distributed transaction chain that's secure. If you look at data, really and we can hark back to say, Hadoop, right before object storage, it's distributed storage, so it's not just storing on your hard drive locally, it's storing on a distributed network of servers that are all over the world and data centers. If you look at cloud, and containers, what you're really doing is not running your application on an individual server that can go down. You're using containers because you want to distribute that application over a large network of servers, so that if one server goes down, you're not going to be hosed. And so I think the fundamental shift that you're seeing is this distributed nature, which in essence is cloud. So I think cloud is just kind of a synonym, in my opinion, for distributed nature of our business. >> That's interesting and that brings up, you're right, cloud and Big Data/Hadoop, we don't talk about Hadoop much anymore, but it kind of got it all started, with that notion of leave the data where it is. And it's the same thing with cloud. You can't just stuff your business into the public cloud. You got to bring the cloud to your data. >> Joel: That's right. >> But that brings up a whole new set of challenges, which obviously, you're in a position just to help solve. Performance, latency, physics come into play. >> Physics is a rough one. It's kind of hard to avoid that one. >> I hear your best people are working on it though. Some other partnerships that you want to sort of, elucidate. >> Yeah, no, I mean we have some really great, so I think the key kind of partnership, I would say area, that I would allude to is, one of the things, and you kind of referenced this, is a lot of our partners, big or small, want to work with our top clients. So they want to work with our top banking clients. They want, 'cause these are, if you look at for example, MaRisk and what we're doing with them around blockchain, and frankly, talk about innovation, they're innovating containers for real, not virtual containers- >> And that's a joint venture right? >> Yeah, it is, and so it's exciting because, what we're bringing to market is, I also lead our startup programs, called the Global Entrepreneurship Program, and so what I'm focused on doing, and you'll probably see more to come this quarter, is how do we actually bridge that end-to-end? How do you, if you're startup or a small business, ultimately reach that kind of global business partner level? And so kind of bridging that, that end-to-end. So we're starting to bring out a number of different incentives for partners, like co-marketing, so I'll help startups when they're early, figure out product-market fit. We'll give you free credits to use our innovative technology, and we'll also bring you into a number of clients, to basically help you not burn all of your cash on creating your own marketing channel. God knows I did that when I was at a start-up. So I think we're doing a lot to kind of bridge that end-to-end, and help any partner kind of come in, and then grow with IBM. I think that's where we're headed. 
>> I think that's a critical part of your job. Because I mean, obviously IBM is known for its Global 2000, big enterprise presence, but startups, again, fuel that innovation fire. So being able to attract them, which you're proving you can, providing whatever it is, access, early access to cloud services, or like you say, these other offerings that you're producing, in addition to that go-to-market, 'cause it's funny, we always talk about how efficient, capital efficient, software is, but then you have these companies raising hundreds of millions of dollars, why? Because they got to do promotion, marketing, sales, you know, go-to-market. >> Yeah, it's really expensive. I mean, you look at most startups, like their biggest ticket item is usually marketing and sales. And building channels, and so yeah, if you're, you know we're talking to a number of partners who want to work with us because of the fact that, it's not just like, the direct kind of channel, it's also, as you kind of mentioned, there's other challenges that you have to overcome when you're working with a larger company. for example, security is a big one, GDPR compliance now, is a big one, and just making sure that things don't fall over, is a big one. And so a lot of partners work with us because ultimately, a number of the decision makers in these larger enterprises are going, well, I trust IBM, and if IBM says you're good, then I believe you. And so that's where we're kind of starting to pull partners in, and pull an ecosystem towards us. Because of the fact that we can take them through that level of certification. So we have a number of free online courses. So if you go to partners, excuse me, ibm.com/partners/learn there's a number of blockchain courses that you can learn today, and will actually give you a digital certificate, that's actually certified on our own blockchain, which we're actually a first of a kind to do that, which I think is pretty slick, and it's accredited at some of the universities. So I think that's where people are looking to IBM, and other leaders in this industry, is to help them become experts in their, in this technology, and especially in this emerging technology. >> I love that blockchain actually, because it's such a growing, and interesting, and innovative field. But it needs players like IBM, that can bring credibility, enterprise-grade, whether it's security, or just, as I say, credibility. 'Cause you know, this is, so much of negative connotations associated with blockchain and crypto, but companies like IBM coming to the table, enterprise companies, and building that ecosystem out is in my view, crucial. >> Yeah, no, it takes a village. I mean, there's a lot of folks, I mean that's a big reason why I came to IBM, three, four years ago, was because when I was in start-up land, I used to work for H20, I worked for Alpine Data Labs, Datameer, back in the Hadoop days, and what I realized was that, it's an opportunity cost. So you can't really drive true global innovation, transformation, in some of these bigger companies because there's only so much that you can really kind of bite off. And so you know at IBM it's been a really rewarding experience because we have done things like for example, we partnered with Girls Who Code, Treehouse, Udacity. So there's a number of early educators that we've partnered with, to bring code to, to bring technology to, that frankly, would never have access to some of this stuff. 
Some of this technology, if we didn't form these alliances, and if we didn't join these partnerships. So I'm very excited about the future of IBM, and I'm very excited about the future of what our partners are doing with IBM, because, geez, you know the cloud, and everything that we're doing to make this accessible, is bar none, I mean, it's great. >> I can tell you're excited. You know, spring in your step. Always a lot of energy Joel, really appreciate you coming onto theCUBE. >> Joel: My pleasure. >> Great to see you again. >> Yeah, thanks Dave. >> You're welcome. Alright keep it right there, everybody. We'll be back. We're at the IBM CDO Strategy Summit in San Francisco. You're watching theCUBE. (techno music) (touch-tone phone beeps)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joel | PERSON | 0.99+ |
Joel Horwitz | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Kevin Costner | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dinesh Nirmal | PERSON | 0.99+ |
Alpine Data Labs | ORGANIZATION | 0.99+ |
Lightbend | ORGANIZATION | 0.99+ |
Verizon | ORGANIZATION | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Hidden Figures | TITLE | 0.99+ |
Bob Lord | PERSON | 0.99+ |
Both | QUANTITY | 0.99+ |
MaRisk | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
iPhone 10 | COMMERCIAL_ITEM | 0.99+ |
2015 | DATE | 0.99+ |
Datameer | ORGANIZATION | 0.99+ |
both sides | QUANTITY | 0.99+ |
one story | QUANTITY | 0.99+ |
Think | ORGANIZATION | 0.99+ |
five | DATE | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Treehouse | ORGANIZATION | 0.99+ |
three years ago | DATE | 0.99+ |
developer.ibm.com/code | OTHER | 0.99+ |
Unity | ORGANIZATION | 0.98+ |
two worlds | QUANTITY | 0.98+ |
Reactive | ORGANIZATION | 0.98+ |
GDPR | TITLE | 0.98+ |
one side | QUANTITY | 0.98+ |
Digital Business Group | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
Udacity | ORGANIZATION | 0.98+ |
ibm.com/partners/learn | OTHER | 0.98+ |
last month | DATE | 0.98+ |
Watson Studio | ORGANIZATION | 0.98+ |
each year | QUANTITY | 0.97+ |
three | DATE | 0.97+ |
single platform | QUANTITY | 0.97+ |
Girls Who Code | ORGANIZATION | 0.97+ |
Parc 55 | LOCATION | 0.97+ |
one thing | QUANTITY | 0.97+ |
four themes | QUANTITY | 0.97+ |
Spark Technology Center | ORGANIZATION | 0.97+ |
six years ago | DATE | 0.97+ |
H20 | ORGANIZATION | 0.97+ |
four years ago | DATE | 0.97+ |
martech | ORGANIZATION | 0.97+ |
Unity | TITLE | 0.96+ |
hundreds of millions of dollars | QUANTITY | 0.94+ |
Watson Studio | TITLE | 0.94+ |
Dinesh | PERSON | 0.93+ |
one server | QUANTITY | 0.93+ |