Harveer Singh, Western Union | Western Union When Data Moves Money Moves
(upbeat music) >> Welcome back to Supercloud 2, which is an open industry collaboration between technologists, consultants, analysts, and of course, practitioners, to help shape the future of cloud. And at this event, one of the key areas we're exploring is the intersection of cloud and data, and how building value on top of hyperscale clouds and across clouds is evolving, a concept we call supercloud. And we're pleased to welcome Harvir Singh, who's the chief data architect and global head of data at Western Union. Harvir, it's good to see you again. Thanks for coming on the program. >> Thanks, David, it's always a pleasure to talk to you. >> So many things stand out from when we first met, and one of the most gripping for me was when you said to me, "When data moves, money moves." And that's the world we live in today, and really have for a long time. Money has moved as bits, and when it has to move, we want it to move quickly, securely, and in a governed manner. And the pressure to do so is only growing. So tell us how that trend has evolved over the past decade in the context of your industry generally, and Western Union, specifically. >> Look, I always say to people that we are probably the first ones to introduce digital currency around the world because, hey, somebody around the world needs money, we move data to make that happen. That trend has actually accelerated quite a bit. If you look at the last 10 years, and you look at all these payment companies, digital companies, credit card companies that have evolved, the majority of them are working on the same principle. When data moves, money moves. When data is stale, the money goes away, right? I think that trend is continuing, and it's not just in this space, it's also continuing in other spaces, specifically around, you know, acquisition of customers, communication with customers. It's all becoming digital, and at the end of the day, it's all data being moved from one place or another. At the end of the day, you're not seeing the customer, but you're looking at, you know, the data that he's consuming, and you're making actionable items on it, and being able to respond to what they need. So I think over 10 years, it's really, really evolved. >> Hmm, you operate, Western Union operates in more than 200 countries, and you have what I would call a pseudo federated organization. You're trying to standardize wherever possible on the infrastructure, and you're curating the tooling and doing the heavy lifting in the data stack, which of course lessens the burden on the developers and the line of business consumers, so my question is, in operating in 200 countries, how do you deal with all the diversity of laws and regulations across those regions? I know you're heavily involved in AWS, but AWS isn't everywhere, you still have some on-prem infrastructure. Can you paint a picture of, you know, what that looks like? >> Yeah, a few years ago, we were primarily on-prem, and one of the biggest pain points has been managing that infrastructure around the world in those countries. Yes, we operate in 200 countries, but we don't have infrastructure in 200 countries, but we do have agent locations in 200 countries. The United Nations says we only have like 183 countries, but there are countries which, you know, declare themselves countries, and we are there as well because somebody wants to send money there, right? Somebody has an agent location down there as well. 
So that infrastructure is obviously very hard to manage and maintain. We have to comply with numerous laws, you know. And the last few years, specifically with GDPR, CCPA, data localization laws in different countries, it's been a challenge, right? And one of the things that we did a few years ago, we decided that we want to be in the business of helping our customers move money faster, securely, and with complete trust in us. We don't want to be in the business of managing infrastructure. And that's one of the reasons we started to, you know, migrate and move our journey to the cloud. AWS, obviously, was chosen first because it was, you know, first in the game, has more locations, and more data centers around the world where we operate. But we still have, you know, existing infrastructure in some countries, which is still localized because AWS hasn't reached there, or we don't have a comparable provider there. We still manage those. And we have to comply with those laws. Our data privacy and our data localization tech stack is pretty good, I would say. We manage our data very well, we manage our customer data very well, but it comes with a lot of complexity. You know, we get a lot of requests from the European Union, we get a lot of requests from Asia Pacific, pretty much on a weekly basis, to explain, you know, how we are taking controls and putting measures in place to make sure that the data is secured and is in the right place. So it's a complex environment. We do have exposure to other clouds as well, like Google and Azure. And as much as we would love to be completely, you know, very, very hybrid kind of an organization, it's still at a stage where we are still very heavily focused on AWS yet, but at some point, you know, we would love to see a world which is not reliant on a single provider, but is a little bit more democratized, you know, as and when what I want to use, I should be able to use, and pay-per-use. And the concept started like that, but obviously now, again, there are like three big players in the market, and, you know, they're doing their own thing. Would love to see them come collaborate at some point. >> Yeah, wouldn't we all. I want to double-click on the whole multi-cloud strategy, but if I understand it correctly, in a perfect world, everything on-premises would be in the cloud, is, first of all, is that a correct statement? Is that nirvana for you or not necessarily? >> I would say it is nirvana for us, but I would also put a caveat: it's very tricky, because from a regulatory perspective, we are a regulated entity in many countries. The regulators would want to see some control if something happens with a relationship with AWS in one country, or with Google in another country, and it keeps happening, right? For example, Russia was a good example where we had to switch things off. We should be able to do that. But if, let's say somewhere in Asia, this country decides that they don't want to partner with AWS, and the majority of our stuff is on AWS, where do I go from there? So we have to have some level of confidence in our own infrastructure, so we do maintain some to be able to fail back into and move things if need be. So it's a tricky question. Yes, it's a nirvana state that I don't have to manage infrastructure, but I think it's far less practical than it sounds. We will still own something that we call our own where we have complete control, being a financial entity. 
>> And so do you try to, I'm sure you do, standardize between all the different on-premise systems, and in this case, the AWS cloud or maybe even other clouds. How do you do that? Do you work with, you know, different vendors at the various places of the stack to try to do that? Some of the vendors, you know, like a Snowflake, is only in the cloud. You know, others, you know, whether it's whatever, analytics, or storage, or database, might be hybrid. What's your strategy with regard to creating as common an experience as possible between your on-prem and your clouds? >> You asked a question which I asked when I joined as well, right? And this is one of the most important questions: how soon can I fail back, if I need to fail back? And how quickly can I, because not everything that is sitting on the cloud is comparable to on-prem or is backward compatible. And the reason I say backward compatible is, you know, our on-prem cloud is obviously behind. We haven't taken enough time to kind of put it to a state where, because we started to migrate and now we have access to infrastructure on the cloud, most of the new things are being built there. But for critical applications, I would say we have technology that could be used to move back if need be. So, you know, technologies like Couchbase, technologies like PostgreSQL, technologies like Db2, et cetera. We still have and maintain a fairly large portion of it on-prem where critical applications could potentially be serviced. I'll give you one example. We use Neo4j very heavily for our AML use cases. And that's an important one because if Neo4j on the cloud goes down, and it's happened in the past, again, even with three clusters, having all three clusters going down with a DR, we still need some accessibility of that because that's one of the biggest, you know, fraud and risk applications it supports. So we do still maintain some comparable technology. Snowflake is an odd one. Obviously, there is none on-prem. But then, you know, Snowflake, I also feel it's more of an analytical-based technology, not a transactional-based technology, at least in our ecosystem. So for me to replicate that, yes, it'll probably take time, but I can live with that. But my business will not stop because our transactional applications can potentially move over if need be. >> Yeah, and of course, you know, all these big market cap companies, like Snowflake or Databricks, which is not public yet, they've got big aspirations. And so, you know, we've seen things like Snowflake do a deal with Dell for on-prem object store. I think they do the same thing with Pure. And so over time, you see Mongo, you know, extending its estate. And so over time all these things are coming together. I want to step out of this conversation for a second and just ask you, given the current macroeconomic climate, what are the priorities? You know, obviously, people are, CIOs are tapping the brakes on spending, we've reported on that, but what is it? Is it security? Is it analytics? Is it modernization of the on-prem stack, which you were saying is a little bit behind? Where are the priorities today given the economic headwinds? >> So the most important priority right now is growing the business, I would say. I know this is not a very techy or a tech answer that, you know, you would expect, but it's growing the business. We want to acquire more customers and be able to service them as best needed. 
So the majority of our investment is going in the space where tech can support that initiative. During our earnings call, we released the new pillars of our organization where we will focus on, you know, omnichannel digital experience, and then one experience for customer, whether it's retail, whether it's digital. We want to open up our own experience stores, et cetera. So we are investing in technology where it's going to support those pillars. But the spend is in a way that we are obviously taking away from the things that do not support those. So it's, I would say it's flat for us. We are not like in heavily investing or aggressively increasing our tech budget, but it's more like, hey, switch this off because it doesn't make us money, but now switch this on because this is going to support what we can do with money, right? So that's kind of where we are heading towards. So it's not not driven by technology, but it's driven by business and how it supports our customers and our ability to compete in the market. >> You know, I think Harvir, that's consistent with what we heard in some other work that we've done, our ETR partner who does these types of surveys. We're hearing the same thing, is that, you know, we might not be spending on modernizing our on-prem stack. Yeah, we want to get to the cloud at some point and modernize that. But if it supports revenue, you know, we'll invest in that, and get the, you know, instant ROI. I want to ask you about, you know, this concept of supercloud, this abstracted layer of value on top of hyperscale infrastructure, and maybe on-prem. But we were talking about the integration, for instance, between Snowflake and Salesforce, where you got different data sources and you were explaining that you had great interest in being able to, you know, have a kind of, I'll say seamless, sorry, I know it's an overused word, but integration between the data sources and those two different platforms. Can you explain that and why that's attractive to you? >> Yeah, I'm a big supporter of action where the data is, right? Because the minute you start to move, things are already lost in translation. The time is lost, you can't get to it fast enough. So if, for example, for us, Snowflake, Salesforce, is our actionable platform where we action, we send marketing campaigns, we send customer communication via SMS, in app, as well as via email. Now, we would like to be able to interact with our customers pretty much on a, I would say near real time, but the concept of real time doesn't work well with me because I always feel that if you're observing something, it's not real time, it's already happened. But how soon can I react? That's the question. And given that I have to move that data all the way from our, let's say, engagement platforms like Adobe, and particles of the world into Snowflake first, and then do my modeling in some way, and be able to then put it back into Salesforce, it takes time. Yes, you know, I can do it in a few hours, but that few hours makes a lot of difference. Somebody sitting on my website, you know, couldn't find something, walked away, how soon do you think he will lose interest? Three hours, four hours, he'll probably gone, he will never come back. I think if I can react to that as fast as possible without too much data movement, I think that's a lot of good benefit that this kind of integration will bring. 
Yes, I can potentially take data directly into Salesforce, but I then now have two copies of data, which is, again, something that I'm not a big (indistinct) of. Let's keep the source of the data simple, clean, and a single source. I think this kind of integration will help a lot if the actions can be brought very close to where the data resides. >> Thank you for that. And so, you know, it's funny, we sometimes try to define real time as before you lose the customer, so that's kind of real time. But I want to come back to this idea of governed data sharing. You mentioned some other clouds, a little bit of Azure, a little bit of Google. In a world where, let's say you go more aggressively, and we know that for instance, if you want to use Google's AI tools, you got to use BigQuery. You know, today, anyway, they're not sort of so friendly with Snowflake, maybe different for the AWS, maybe Microsoft's going to be different as well. But in an ideal world, what I'm hearing is you want to keep the data in place. You don't want to move the data. Moving data is expensive, making copies is badness. It's expensive, and it's also, you know, changes the state, right? So you got governance issues. So this idea of supercloud is that you can leave the data in place and actually have a common experience across clouds. Let's just say, let's assume for a minute Google kind of wakes up, my words, not yours, and says, "Hey, maybe, you know what, partnering with a Snowflake or a Databricks is better for our business. It's better for the customers," how would that affect your business and the value that you can bring to your customers? >> Again, I would say that would be the nirvana state that, you know, we want to get to. Because I would say not everyone's perfect. They have great engineers and great products that they're developing, but that's where they compete as well, right? I would like to use the best of breed as much as possible. And I've been a person who has done this in the past as well. I've used, you know, tools to integrate. And the reason why this integration has worked is primarily because sometimes you do pick the best thing for that job. And Google's AI products are definitely doing really well, but, you know, that accessibility, if it's a problem, then I really can't depend on them, right? I would love to move some of that down there, but they have to make it possible for us. Azure is doing really, really good at investing, so I think they're a little bit more and more closer to getting to that state, and I know seeking our attention than Google at this point of time. But I think there will be a revelation moment because more and more people that I talk to like myself, they're also talking about the same thing. I'd like to be able to use Google's AdSense, I would like to be able to use Google's advertising platform, but you know what? I already have all this data, why do I need to move it? Can't they just go and access it? That question will keep haunting them (indistinct). >> You know, I think, obviously, Microsoft has always known, you know, understood ecosystems. I mean, AWS is nailing it, when you go to re:Invent, it's all about the ecosystem. And they think they realized they can make a lot more money, you know, together, than trying to have, and Google's got to figure that out. I think Google thinks, "All right, hey, we got to have the best tech." And that tech, they do have the great tech, and that's our competitive advantage. 
They got to wake up to the ecosystem and what's happening in the field and the go-to-market. I want to ask you about how you see data and cloud evolving in the future. You mentioned that things that are driving revenue are the priorities, and maybe you're already doing this today, but my question is, do you see a day when companies like yours are increasingly offering data and software services? You've been around for a long time as a company, you've got, you know, first party data, you've got proprietary knowledge, and maybe tooling that you've developed, and you're becoming more, you're already a technology company. Do you see someday pointing that at customers, or again, maybe you're doing it already, or is that not practical in your view? >> So data monetization has always been on the charts. The reason why it hasn't seen the light is regulatory pressure at this point of time. We are partnering up with certain agencies, again, you know, some pilots are happening to see the value of that and be able to offer that. But I think, you know, eventually, we'll get to a state where our, because we are trying to build accessible financial services, we will be in a state that we will be offering those to partners, which could then extended to their customers as well. So we are definitely exploring that. We are definitely exploring how to enrich our data with other data, and be able to complete a super set of data that can be used. Because frankly speaking, the data that we have is very interesting. We have trends of people migrating, we have trends of people migrating within the US, right? So if a new, let's say there's a new, like, I'll give you an example. Let's say New York City, I can tell you, at any given point of time, with my data, what is, you know, a dominant population in that area from migrant perspective. And if I see a change in that data, I can tell you where that is moving towards. I think it's going to be very interesting. We're a little bit, obviously, sometimes, you know, you're scared of sharing too much detail because there's too much data. So, but at the end of the day, I think at some point, we'll get to a state where we are confident that the data can be used for good. One simple example is, you know, pharmacies. They would love to get, you know, we've been talking to CVS and we are talking to Walgreens, and trying to figure out, if they would get access to this kind of data demographic information, what could they do be better? Because, you know, from a gene pool perspective, there are diseases and stuff that are very prevalent in one community versus the other. We could probably equip them with this information to be able to better, you know, let's say, staff their pharmacies or keep better inventory of products that could be used for the population in that area. Similarly, the likes of Walmarts and Krogers, they would like to have more, let's say, ethnic products in their aisles, right? How do you enable that? That data is primarily, I think we are the biggest source of that data. So we do take pride in it, but you know, with caution, we are obviously exploring that as well. >> My last question for you, Harvir, is I'm going to ask you to do a thought exercise. So in that vein, that whole monetization piece, imagine that now, Harvir, you are running a P&L that is going to monetize that data. And my question to you is a there's a business vector and a technology vector. So from a business standpoint, the more distribution channels you have, the better. 
So running on AWS cloud, partnering with Microsoft, partnering with Google, going to market with them, going to give you more revenue. Okay, so there's a motivation for multi-cloud or supercloud. That's indisputable. But from a technical standpoint, is there an advantage to running on multiple clouds or is that a disadvantage for you? >> It's, I would say it's a disadvantage because if my data is distributed, I have to combine it at some place. So the very first step that we had taken was obviously we brought in Snowflake. The reason, we wanted our analytical data and we want our historical data in the same place. So we are already there and ready to share. And we are actually participating in the data share, but in a private setting at the moment. So we are technically enabled to share, unless there is a significant, I would say, upside to moving that data to another cloud. I don't see any reason because I can enable anyone to come and get it from Snowflake. It's already enabled for us. >> Yeah, or if somehow, magically, several years down the road, some standard developed so you don't have to move the data. Maybe there's a new, Mogli is talking about a new data architecture, and, you know, that's probably years away, but, Harvir, you're an awesome guest. I love having you on, and really appreciate you participating in the program. >> I appreciate it. Thank you, and good luck (indistinct) >> Ah, thank you very much. This is Dave Vellante for John Furrier and the entire Cube community. Keep it right there for more great coverage from Supercloud 2. (uplifting music)
SUMMARY :
Dave Vellante of theCUBE talks with Harveer Singh, Chief Data Architect and Global Head of Data at Western Union, at Supercloud 2. They discuss how "when data moves, money moves," what it takes to operate in more than 200 countries under GDPR, CCPA, and data localization laws, Western Union's migration to AWS while keeping some on-prem infrastructure for regulatory control and failback, prioritizing technology spend on initiatives that grow the business, the appeal of acting on data where it resides through integrations such as Snowflake and Salesforce rather than moving and copying it, and the longer-term opportunity to monetize Western Union's demographic and migration data.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Walmarts | ORGANIZATION | 0.99+ |
Google | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Walgreens | ORGANIZATION | 0.99+ |
Asia | LOCATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Harvir | PERSON | 0.99+ |
Three hours | QUANTITY | 0.99+ |
four hours | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
New York City | LOCATION | 0.99+ |
United Nations | ORGANIZATION | 0.99+ |
Krogers | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
Western Union | ORGANIZATION | 0.99+ |
Harvir Singh | PERSON | 0.99+ |
10 years | QUANTITY | 0.99+ |
two copies | QUANTITY | 0.99+ |
one country | QUANTITY | 0.99+ |
183 | QUANTITY | 0.99+ |
European Union | ORGANIZATION | 0.99+ |
Mongo | ORGANIZATION | 0.99+ |
three big players | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
Snowflake | TITLE | 0.98+ |
AdSense | TITLE | 0.98+ |
more than 200 countries | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
three clusters | QUANTITY | 0.98+ |
Snowflake | ORGANIZATION | 0.98+ |
Mogli | PERSON | 0.98+ |
John Furrier | PERSON | 0.98+ |
supercloud | ORGANIZATION | 0.98+ |
one example | QUANTITY | 0.97+ |
GDPR | TITLE | 0.97+ |
Adobe | ORGANIZATION | 0.97+ |
Salesforce | ORGANIZATION | 0.97+ |
200 countries | QUANTITY | 0.97+ |
one experience | QUANTITY | 0.96+ |
Harveer Singh | PERSON | 0.96+ |
one community | QUANTITY | 0.96+ |
Pure | ORGANIZATION | 0.95+ |
One simple example | QUANTITY | 0.95+ |
two different platforms | QUANTITY | 0.95+ |
Salesforce | TITLE | 0.94+ |
first | QUANTITY | 0.94+ |
Cube | ORGANIZATION | 0.94+ |
BigQuery | TITLE | 0.94+ |
nirvana | LOCATION | 0.93+ |
single source | QUANTITY | 0.93+ |
Asia Pacific | LOCATION | 0.93+ |
first ones | QUANTITY | 0.92+ |
Pete Lilley and Ben Bromhead, Instaclustr | CUBE Conversation
(upbeat music) >> Hello, and welcome to this "CUBE" conversation. I'm John Furrier, host of "theCUBE", Here in Palo Alto, California, beginning in 2022, kicking off the new year with a great conversation. We're with folks from down under, two co-founders of Instaclustr. Peter Lilley, CEO, Ben Bromhead, the CTO, Intaclustr success. 'Cause he's been on "theCUBE" before, 2018 at Amazon re:Invent. Gentlemen, thanks for coming on "theCUBE". Thanks for piping in from Down Under into Palo Alto. >> Thanks, John, it's really good to be here, I'm looking forward to the conversation. >> So, I love the name, Instaclustr. It conjures up cloud, cloud scale, modern application, server list. It just gives me a feel of things coming together. Spin me up a cluster of these kinds of feelings. The cloud is here, open sources is growing, that's what you guys are in the middle of. Take a minute to explain what you guys do real quick and this open source cloud intersection that's just going supernova right now. >> Yeah, yeah, yeah. So, Instaclustr is on a mission to really enable the world's ambitions to use open source technology. And we do that specifically at the data layer. And we primarily do that through what we call our platform offering. And think of it as the way to make it super easy, super scalable, super reliable way to adopt open source technologies at the data layer, to build cutting edge applications in the cloud. Today used by customers all over the world. We started the business in Australia but we've very quickly become a global business. But we are the business that sits behind some of the most successful brands that are building massively scalable cloud based applications. And you did right. We sit at a real intersection of kind of four things. One is open source adoption which is an incredibly powerful journey and wave that's driving the future direction of IT. You've got managed services or managed operations and moving those onto a platform like Instaclustr. You've got the adoption of cloud and cloud as a wave, like open source is a wave. And then you've got the growth of data, everything is data-driven these days. And data is just excellent for businesses and our customers. And in a lot of cases when we work with our customers on Instaclustr today, the application and the data, the data is the business. >> Ben, I want to get your thoughts as a CTO because open source, and technology, and cloud, has been a real game changer. If you go back prior to cloud, open source is very awesome, still great, freedom, we've got code, it's just the scale of open source. And then cloud came along, changed the game, so, open source. And then new business models became, so commercial open source software is now an industry. It's not just open source, "Hey, free software." And then maybe a red hat's out there, or someone like a red hat, have some premium support. There's been innovation on the business model side. So, matching technology innovation with the business model has been a big change in the past, many, many years. And this past year in particular that's been key. And open source, open core, these are the things that people are talking about. License changes, this is a big discussion. Because you could be on the wrong side of history if you make the wrong decision here. >> Yeah, yeah, definitely. I think it's also worth, I guess, taking a step back and understanding a little bit about why have people gravitated towards open source and the cloud? 
Beyond kind of the hippie freedoms of, I can see the code and I have ownership, and everything's free and great. And I think the reason why it's really taken off in a commercial setting, in an enterprise setting is velocity. How much easier is it to go reach and grab a open-source tool? That you can download, you can grab, you can compile yourself, you can make it work the way you want it to do to solve a problem here and now. Versus the old school way of doing it which is with I have to go download a trial version. Oh no, some of the features are locked. I've got to go talk to a procurement or a salesperson to kind of go and solve the problem that I have. And then I've got to get that approved by my own purchasing department. And do we have budget? And all of a sudden it's way, way, way harder to solve the problem in front of you as an engineer. Whereas with open source I just go grab it and I move on. I've achieved something for the day. >> Basically all that friction that comes, you got a problem to solve, oh, open-source, I'm going to just get a hammer and hammer that nail. Wait, whoa, whoa. I got to stand in line, I got to jump over hoops, I got to do all these things. This is the hassle and friction. >> Exactly, and this is why it's often called one of the most impressive things about that. And I think on the cloud side it's the same thing, but for hardware, and capability, and compute, and memory. Previously, if you wanted to compute, oh, you're going to lodge a ticket. You've got to ask someone to rack a server in a data center. You've got to deal with three different departments. Oh my goodness. How painful is that just to get a server up to go run and do something? That's just pulling your hair out. Whereas with the cloud, that's an API call or clicking a few buttons on a console and off you go. You'd have to combine those two things. And I would say that software engineers are probably the most productive they've ever been in the last 20 years. I know sometimes it doesn't look like that but their ability to solve problems in front of them, especially using external stuff is way way, way better. >> Peter: I think when you put those two things together you get an- >> The fact of the matter is they are productive. They're putting security into the code right in the CICD pipeline. So, this is highly agile right now. So, coders are highly productive and efficient in changing the way people are rolling out applications. So, the game is over, open source has won, open core is winning. And this is where the people are confused. This is why I got you guys here? What's the difference between open source and open core? What's the big deal? Why is it so important? >> Yeah, no, great question. So, really the difference between open source and open core, it comes down to, really it's a business model. So, open core contains open-source software, that's a hundred percent true. So, usually what will happen is a company will take a project that is open source, that has an existing community around it, or they've built it, or they've contributed it, or however that genesis has happened. And then what they'll do is they'll look at all the edges around that open-source project. And I think what are some enterprise features that don't exist in the open-source project that we can build ourselves? And then sprinkle those around the edges and sell that as a proprietary offering. So, what you get is you get the core functionality is powered by an open-source project. 
And quite often the code is identical. But there's all these kinds of little features around the outside that might make it a little bit easier to use in an enterprise environment. Or might make it a bit easier to do some operations side of things. And they'll charge you a license for that. So, you end up in a situation where you might have adopted the open source project, but then now if you want a feature X, Y, or Z, you then need to go and fork over some money and go into that whole licensing kind of contract. So, that's the core difference between open core and open-source, right? Open core, it's got all these little proprietary bits kind of sprinkled around the outside. >> So, how would you describe your platform for your customers? Obviously, you guys are succeeding, your growth is great, we're going to get that second. But as you guys have been steadily expanding the platform of open source data technologies, what is the main solution that you guys are offering customers? Managing open source technologies? What's the main value that you guys bring to the customer? >> Yeah, definitely. So, really the main value that we bring to the customer is we allow them to, I guess, successfully adopt open source databases or database technologies without having to go down that open core path. Open core can be quite attractive, but what it does is you end up with all these many Oracles drivers. Still having to pay the toll in terms of license fees. What we do, however, is we take those open-source projects and we deliver that as a database, as a service on our managed platform. So, we take care of all the operations, the pain, the care, the feeding, patch management, backups. Everything that you need to do, whether you're running it yourself or getting someone else to run it, we'll take care of that for you. But we do it with the pure upstream open source version. So, that means you get full flexibility, full portability. And more importantly you're not paying those expensive license fees. Plus it's easy and it just works. You get that full cloud native experience and you get your database right now when you need it. >> And basically you guys solve the problem of one, I got this legacy or existing licensed technology I've got to pay for. And it may not be enabling modern applications, and they don't have a team to go do all the work (laughing). Or some companies have like a whole army of people just embedded in open-source, that's very rare. So, it sounds like you guys do both. Did I get that right, is that right? >> Yeah, definitely. So, we definitely enable it if you don't have that capability yourself. We are the outsourced option to that. It's obviously a lot more than that but it's one of those pressures that companies nowadays face. And if we take it back to that concept of developer velocity, you really want them working on your core business problems. You don't want them having to fight database infrastructure. So, you've also got the opportunity cost of having your existing engineers working on running this stuff themselves. Or running a proprietary or an open call solution themselves, when really you should be outsourcing preferably to Instaclustr. But hey, let's be honest, you should be outsourcing it to anyone so that your engineers can be focusing on your core business problems. And really letting them work on the things that make you money. >> That's very smart. You guys have a great business model. 
Because one of the things we've been reporting on "theCUBE" on SiliconANGLE as well, is that the database market is becoming so diverse for the right reasons. Databases are everywhere now and code is becoming horizontally scalable for the cloud but vertically specialized with machine learning. So, you're seeing applications and new databases, no one database rules the world anymore. It's not about Oracle anymore, or anything else. So, open source fits nicely into this kind of platform view. How do you guys decide which technologies go in to the platform that you support? >> Yeah, great question. So, we certainly live in a world of, I call it polyglot persistence. But a simple way of referring to that is the right tool for the right job. And so, we really live in this world where engineers will reach for a database that solves a specific problem and solves it well. As you mentioned, companies, they're no longer Oracle shops, or they're no longer MySQL shops. You'll quite often see services or applications of teams using two or three different databases to solve different challenges. And so, what we do at Instaclustr is we really look at what are the technologies that our existing customers are using, and using side-by-side with, say, some of the existing Instaclustr offerings. We take great lead from that. We also look at what are the different projects out there that are solving use cases that we don't address at the moment. So, it's very use case driven. Whether it's, "Hey, we need something that's better at," say, "Time series." Or we need something that's a little bit better at translatable workloads. Or something a bit of a better fit for a case, right? And we work with those. And I think importantly, we also have this view that in a world of polyglot persistence, you've also got data integration challenges. So, how do you keep data safe between these two different database types? So, we're also looking at how do we integrate those better and support our users on that particular journey. So, it really comes down to one, listening to your customers, seeing what's out there and what's the right use case for a given technology and then we look to adopt that. >> That's great, Ben, machine learning is completely on fire right now. People love it, they want more of it. AI everything, everyone's putting AI on every label. If it does any automation, it's magic, it's AI. So, really, we know what that's happening, it's just really database work and machine learning under the covers. Pete, the business model here has completely changed too, because now with open source as a platform you have more scale, you have differentiation opportunities. I'm sure business is doing great. Give us an update on the business side of Instaclustr. What's clicking for you guys, what's working? What's the success trajectory look like? >> Yeah, it's been an amazing journey for us. When you think about it we were founded it in 2013, so, we're eight years into our journey. When we started the business we were focused entirely on Cassandra. But as Ben talked about, we've gone in diversified those technologies onto the platform, that common experience that we offer customers. So, you can adopt any one to a number of open source technologies in a highly integrated way and really, really grow off the back of that. It's driving some phenomenal growth in our business and we've really enjoyed growth rates that have been 70, 80, 100 year on year since we've started the business. 
And that's led to an enormous scale and opportunities for us to invest further in the platform, invest further in additional technologies in a really highly opinionated way. I think Ben talked about that integrations, then that becomes incredibly complex as you have many, many kinds of offerings on the platform. So, Instaclustr is much more targeted in terms of how we want to take our business forward and the growth opportunity before us. We think about being deeply expert and deeply capable in a smaller subset of technologies. But those which actually integrate and inter operate for customers so they can build solutions for their applications. But do that on Instaclustr using its platform with a common experience. And, so we've grown to 270 people now around the world. We started in Australia, we've got a strong presence in the US. We recently acquired a business called credativ in Europe, which was a PostgreSQL specialist organization. And that was because, as Ben said before, talking about those technologies we bring onto our platform. PostgreSQL, huge market, disrupting Oracle, exactly the right place that we want to be as Instaclustr with pure open source offerings. We brought them into the Instaclustr family in March this year and we did that to accelerate it on our platform. And so, we think about that. We think about future technologies on their platform, what we can do, and introduced to even provide an even greater and richer experience. Cadence is new to our platform. Super exciting for us because not only is it something that provides workflow as code, as an open source experience, but as a glue technology to build a complex business technology for applications. It actually drives workloads across Cassandra, PostgreSQL and Kafka, which are kind of core technologies on our platform. Super exciting for us, a big market. Interesting kind of group of adopters. You've got Uber kind of leading the charge there with that and us partnering with them now. We see that as a massive growth opportunity for our business. And as we introduce analytics capabilities, exploration, visibility features into the platform all built on open source. So, you can build a complete top to bottom data services layer using open source technology for your platform. We think that's an incredibly exciting part of the business and a great opportunity for us. >> Opportunities to raise money, more acquisitions on the horizon? >> Well, I think acquisitions where it makes sense. I talked about credativ, where we looked at credativ, we knew that PostgreSQL was new to our market, and we were coming into that market reasonably late. So, the way we thought about that from a strategy perspective was we wanted to accelerate the richness of the capability on our platform that we introduced and became GA late last year. So, we think about when we're selecting that kind of technology, that's the perfect opportunity to consider an acquisition for us. So, as we look at what we're going to introduce in the platform over the next sort of two, three, four years, that sort of decision that will, or that sort of thinking, or frames our thinking on what we would do from an acquisition perspective. I think the other way we think about acquisitions is new markets. So, thinking about globally entering, say into the Japanese market. does that make sense because of any language requirements to be able to support customers? 
'Cause one of the things that's really, really important to us is the platform is fantastic for scaling, growing, deploying, running, operating this very powerful open source technology. But so too is the importance of having deep operational open source expertise backing and being there to call on if a customer's having an application issue. And that kind of drives the need for us to have in country kind of market support. And so, when we think about those sort of opportunities, I think we think about acquisition there, isn't it like another string to the bow in terms of getting presence in a particular or an emerging market that we're interested in. >> Awesome, Ben, final question to you is, on the technology front what do you see this year emerging? A lot of changes in 2021. We've got another year of pandemic situation going on. Hopefully it goes by fast. Hopefully it won't be three years, but again, who knows? But you're seeing the cloud open source actually taking as a tailwind from the pandemic. New opportunities, companies are refreshing, they have to, they're forced. There's going to be a lot more changes. What do you see from a tech perspective in open-source, open core, and in general for large companies as opensource continues to power the innovation? >> So, definitely the pandemic has a tailwind, particularly for those companies adopting the cloud. I think it's forced a lot of their hands as well. Their five-year plans have certainly become two or three year plans around moving to the cloud. And certainly, that contest for talent means that you really want to be keeping your engineers focused on core things. So, definitely I think we're going to see a continuation of that. We're going to say the continuation of open source dominating when it comes to a database and the database market, the same with cloud. I think we're going to see the gradual march towards different adoption models within the cloud. So, server lists, right? I think we're going to see that kind of slowly mature. I think it's still a little bit early in the hype cycle there, but we're going to start to see that mature. On the ML, AI side of things as well, people have been talking about it for the last three or four years. And I'm sure to people in the industry, they're like, "Oh, we're over that." But I think on the broader industry we're still quite early in that particular cycle as people figure out, how do they use the data that they've got? How do they use that? How do they train models on that? How do they serve inference on that? And how do they unlock other things with lower down on their data stack as well when it comes to ML and AI, right? We're seeing great research papers come out from AI powered indexes, right? So, the AI is actually speeding up queries, let alone actually solving business problems. So, I think we're going to say more and more of that kind of come out. I think we're going to see more and more process capabilities and organizational responses to this explosion of data. I'm super excited to say people talking about concepts and organizational concepts like data mesh. I think that's going to be fundamental as we move forward and have to manage the complexities of dealing with this. So, it's an old industry, data, when you think about it. As soon as you had computers you had data, and it's an old industry from that perspective. But I feel like we're only just getting started and it's just heating up. So, we're super excited to see what 2022 holds for us. 
>> Every company will be an open source AI company. It has to be, no matter what. (Ben laughing) Well, thanks for sharing the data, Pete and Ben, the co-founders of Instaclustr. We'll get our "CUBE" AI working on this data we got today from you guys. Thanks for sharing, great stuff. Thanks for sharing the open core perspective. We really appreciate it and congratulations on your success. Companies do need more Instaclustrs out there, and you guys are doing a great job. Thanks for coming on, I appreciate it. >> Thanks John, cheers mate. >> Thanks John. >> It's "theCUBE" Conversation here at Palo Alto. I'm John Furrier, thanks for watching. (bright music)
SUMMARY :
John Furrier talks with Instaclustr co-founders Peter Lilley (CEO) and Ben Bromhead (CTO) about enabling the world's ambitions to use open source technology at the data layer. They discuss why open source and cloud have made developers dramatically more productive, the difference between open source and open core business models, Instaclustr's managed platform for pure upstream open source databases such as Cassandra, Kafka, and PostgreSQL, the company's growth and its acquisition of credativ, the addition of Cadence to the platform, and what 2022 holds for cloud adoption, serverless, ML and AI, and concepts like data mesh.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Peter Lilley | PERSON | 0.99+ |
Australia | LOCATION | 0.99+ |
2013 | DATE | 0.99+ |
Ben | PERSON | 0.99+ |
John | PERSON | 0.99+ |
70 | QUANTITY | 0.99+ |
Ben Bromhead | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
five-year | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
Europe | LOCATION | 0.99+ |
Pete | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
2021 | DATE | 0.99+ |
Pete Lilley | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
US | LOCATION | 0.99+ |
three | QUANTITY | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
eight years | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
PostgreSQL | ORGANIZATION | 0.99+ |
three year | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
270 people | QUANTITY | 0.99+ |
Instaclustr | ORGANIZATION | 0.99+ |
Today | DATE | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
2018 | DATE | 0.98+ |
three years | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
80 | QUANTITY | 0.98+ |
Oracles | ORGANIZATION | 0.98+ |
One | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
100 year | QUANTITY | 0.97+ |
Cassandra | TITLE | 0.97+ |
March this year | DATE | 0.96+ |
Kafka | TITLE | 0.96+ |
MySQL | TITLE | 0.96+ |
second | QUANTITY | 0.95+ |
Intaclustr | ORGANIZATION | 0.95+ |
PostgreSQL | TITLE | 0.94+ |
hundred percent | QUANTITY | 0.93+ |
pandemic | EVENT | 0.93+ |
two co-founders | QUANTITY | 0.92+ |
past year | DATE | 0.91+ |
SiliconANGLE | ORGANIZATION | 0.9+ |
late last year | DATE | 0.9+ |
theCUBE | ORGANIZATION | 0.9+ |
credativ | ORGANIZATION | 0.88+ |
Amazon | ORGANIZATION | 0.86+ |
three different databases | QUANTITY | 0.86+ |
last 20 years | DATE | 0.84+ |
this year | DATE | 0.83+ |
Instaclustr | TITLE | 0.74+ |
Roberto Giordano, Borsa Italiana | Postgres Vision 2021
(upbeat music) >> From around the globe, it's theCUBE! With digital coverage of Postgres Vision 2021, brought to you by EDB. >> Welcome back to Postgres Vision 21, where theCUBE is covering the innovations in open source trends in this new age of application development and how to leverage open source database technologies to create world-class platforms that are cost-effective and also scale. My name is Dave Vellante, and with me is Roberto Giordano, who is the End User Computing, Corporate, and Database Services Manager at Borsa Italiana, the Italian Stock Exchange. Roberto, great to have you. Thanks for coming on. >> Thanks, Dave, and thanks for the invitation. >> Okay, and we're going to dig into the great customer story here. First, Roberto, tell us a little bit more about Borsa Italiana and your role at the organization. >> Absolutely. Well, as you mentioned, Borsa is the Italian Stock Exchange. We used to be part of the London Stock Exchange, but last month we left that group, and we joined another group called Euronext, so we are now part of another group, I would say. And right now within Euronext, Euronext provides the biggest liquidity pool in Europe, just to mention something. And basically we provide the market infrastructure to our customers across Europe and the whole world. So if it happens that you buy a little bit of, I don't know, Ferrari, for instance, you probably use our infrastructure. >> So I wonder if you could talk about the key drivers in the exchange business in Italy. I don't know how closely you follow what's going on in the United States, but it's crypto madness, there's the Reddit army driving up stocks that have big short positions, and of course the regulators have to look at that, and there's a big debate going on. Well, I don't know what it's like in Italy, but what are the key drivers that are really informing the priorities for your technology strategy? >> Well, you mentioned, for instance, the stereotypical cases that are a little bit lateral to the global markets and also to our markets. As an IT professional running market infrastructure, our first goal is to provide an infrastructure that is reliable and with the lowest possible latency. So we are very focused on performance and reliability, just to mention the two main drivers within our systems. >> Well, and you have end-user computing in your title, and we're going to get into the database discussion, but I presume with COVID you had to pivot, and that piece of your job was escalated in 2020, I would imagine. And you mentioned latency, which is obviously a key factor in database access, but that must've been a big challenge last year. >> Well, it was really a challenge, but basically we moved, just within a weekend, the whole organization to working remotely. And it has been like this since February, 2020. Think about the challenge of moving almost 1000 people that used to come to the office every day to start to work remotely. And within my team of end user computing, this was really a challenge, but it was a good one at the end. We succeeded and everything worked. It's fine from our perspective; no news is good news, you know, because normally when something doesn't work, we are on the newspapers. So if you didn't hear about us, it means that everything worked out just fine. >> Yeah. It's amazing, Roberto. 
We're both in the technology business, you as a practitioner and me as an observer, but I mean if you're in the tech business most companies actually pivoted quite well. You've always been a digital business, it's different. I mean, if you're a Ferrari and making cars and you can't get semiconductors, but most technology companies actually made the transition, you know, quite amazingly. Let's get into the case study a bit. I wonder if you could paint a picture of your organization's infrastructure and applications, what it looks like, and particularly your database infrastructure, what does that look like? >> Well, we are a multi-vendor shop. So we would like to pick the right technology for the right service. This means that my database services teams currently manage several different technologies, where Postgres plays a big role in our portfolio. We currently support both the fully open source version of Postgres but also the EDB distribution. In particular, we prefer to use the EDB distribution where we need specific functionalities that just EDB provides, and when we need a first class level of support that EDB in recent years was able to provide to us. >> When you say full functionality, are you talking about things like ACID compliance, two-phase commits? I mean, all these enterprise capabilities, is that right? Or maybe you could be- >> There's just too much; just to mention one, for instance, we recently migrated our intra-site availability solution using the EDB Failover Manager. That is an additional component that just EDB provides. >> Yeah. Okay. So, failover recovery obviously is, and so that's a solution that you get from the EDB distro as opposed to having to build it yourself with open source tooling. >> Yeah, correct. Well, basically, historically, we used to rely on OS clustering from that perspective. But over the years we found that even if it's a technology that works fine, and it has been around for four decades and so on, we faced some challenges internally, because within my team we don't also own the operating system layers. So we wanted a solution that was 100% within our control and perimeter. So just a few months ago we asked the EDB folks if they could provide something. And after a couple of meetings, also with their pre-sales engineers, we found the right solution for us. So, long story short, we launched just a quick proof of concept to actually test it together, again using the EDB consultancy. And then, beginning of this year, we went live with the first mission critical service using this brand new technology, well, brand new technology for us. You know, EDB created it a few years ago. >> And I do have some follow-up questions, but I want to understand what catalyzed the, you know, what was the motivation for going with an open source database? I mean, you're a great example because you're multi-vendor, so you have experience with all of it, the full spectrum. What was it about open source databases generally, and EDB specifically, that triggered the choice? >> Well, thanks for the question. This is one of the questions that I always like. I think what really drove us was the right combination between ease of use, so simplicity, and also good value for money. So we like to pick the right database technology for the right kind of service slash budget that the service has, and the open source solution for a specific service. 
It's our, you know, first choice. So we are not, let's say, a company that uses just one technology. We like to take the best of breed that the market can offer. In some cases, the open source, and Postgres in particular, is our choice. >> How involved was the line of business in this, both the decision and the implementation? Was it kind of invisible to them, or was this really more of a technology decision based on your interpretation of the requirements? I'm interested in who was involved and how you actually got it done. >> Well, I think this decision was transparent for the business; at the end of the day they don't really have that kind of visibility. You know, they just provide requirements, in particular in terms of performance and reliability. And so this is something they are not really involved in. And obviously, if we are in a position to save a little bit of money, everybody's happy, even the business. >> No, so what did you have to do? So that makes sense to me, I figured that was the case. Who were the stakeholders on your team? I mean, what kind of technical resources did you require, and implementation resources? Take us through what the project, if you will, looked like. How did you do it? >> Well, it's a combination of database expertise. I have the pleasure to run a team that is made up of very, very senior, very, very skilled database services professionals that are able to support more than one technology and also are very open to innovation and changes. Plus obviously we needed the relevant development teams on board when you run this kind of transformation, and it looks like they also liked the idea to use PostgreSQL for this specific service I had in mind. So it was quite easy, not a big discussion, you know. >> What was the elapsed time from when you said, okay, we're in, you know, signed the agreement, we're going here, you made the decision, to actually getting into production? >> Well, as I mentioned, we run services and applications that are really focused on high availability and performance. So generally speaking, we are not a quick organization. Also we run a business that is highly regulated. So as you can imagine, we are an organization that doesn't have a lot of appetite for risk, you know, so generally speaking, to run this kind of transformation is a matter of several months, I would say six to nine months, to have something delivered in that space. >> Okay. Well, that's, I mean, that's reasonable. I mean, if you could do it inside of a year, that's I think quite good, especially in a highly regulated industry. And then you mentioned kind of the failover, the high availability capabilities. Were there other specific EDB tools that you utilized to sort of address the objectives? >> Yeah, absolutely. In particular, we used Postgres Enterprise Manager, aka PEM. And very recently we were involved with EDB in specifically developing one functionality that we needed back in the day. I think, together with BART, these are the three EDB-specific tools that we use right now. >> And I'm interested in, I want to get to the business impact, and I know it's early days for you, but the real motivation was to save money and simplify. 
I would actually, I would imagine your developers were happy because they get to use modern tooling and open source. But, but really though if your industry is bottom line, right, I mean that's really what the, the business case was all about. But I wonder if you could add some color there in terms of the business impact that you expect. And then, I mean I don't know how much visibility you have now but anything you can share with us. >> Well, thinking about the EFM implementation that the business impact the, was that in case of a failure or the DBA team that a services team is it is able to provide a solution that is within our 100% within our perimeter. So this means that we are fully accountable for it. So in a nutshell, when you run a service, the less people the less teams you have to involve the more control you can deliver. And in some, again, very critical services that is a great value. >> Okay. So, and, and where do you want to take this? I mean, how do you see w what's your, if you're thinking about your Postgres and, and generally an EDB you know, roadmap, where do you want it to go? >> Well, I stay to, to trends within within the organization, the, the, the, the the first one is about migrating more existing services to open source solution for database is going to be, is going to be prosperous. And other trends that I see within my organization is about designing applications, not really to be, to to use PostgreSQL as the base, as it does a base layer. I think both trends are more or less surroundings at the same state right now. >> Yeah. A lot of the audience members at Postgres vision 21 is just like you they they're managing day-to-day infrastructure. They're there they're expert practitioners. What advice would you give to somebody that is thinking about, you know taking this journey, maybe if you had to do something over again maybe what would you do differently? How can you help your peers here? >> Well, I think in particular, if you are going to say a big organization that runs a highly regulated business in some cases, you are a little bit afraid of open source because there is this, I can say general consideration about the lack of enterprise level support. I would like to say that it is just about the past because they're around bunch of companies like EDB that are we're a hundred percent capable of providing enterprise level of support, even on, on, on even on the open source distribution of Paul's presser. Obviously Dan is you're going to go with their specific distribution. The level of support is going to be even more accurate but as we know, it could be currently is they across say main contributor of the pollsters community. And I think is, is that an insurance for every organization? >> Your advice is don't be afraid. >> Yeah. My advice is done is absolutely, don't be, don't be afraid. And if, if, if I can, if we can mention about also about, you know, the cloud called technologies this is also another, another topic where if possible I would like to suggest to not being afraid EDB as every every I would say organization within the it industry is really pushing for it. And I think for a very, for, for a lot of cases not all of them, but a lot of cases, there is a great value about the design services application to be cloud native or migrating existing application into the cloud. >> Okay. 
But, but being a highly regulated industry and being a, you know, very much aware of the the narrative around open source, et cetera, you, you must've had just a little piece of your mind saying, okay I have to manage this risk. So there's anything specifically you did with managing the risks that you would advise? Was it, was it or is it really just about good change management? >> I think it was mainly about a good change management when you got, you know the relevant stakeholders that you need on board and we are, everybody's going the same direction. That basically is about executing. >> Excellent. Well, Roberto, I really appreciate your time and your knowledge that you share with the audience. So thanks so much for coming on the cube. >> Thank you, Dave. It was a great pleasure. >> And thank you for watching the cubes continuous coverage of Postgres vision 21. We'll be right back. (upbeat music)
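For readers who want to see what the failover behavior Roberto describes looks like at the database level, here is a minimal, illustrative Python sketch of the kind of primary/standby health check that a tool like EDB Failover Manager automates. It is not EFM itself and not Borsa Italiana's configuration; the host names, database, and credentials are hypothetical placeholders.

```python
# Illustrative sketch only: poll a primary/standby PostgreSQL pair using plain
# Postgres primitives, the sort of check a failover manager performs automatically.
# Hosts, database name, and credentials are placeholders.
import psycopg2

NODES = ["pg-primary.example.internal", "pg-standby.example.internal"]
CONN_ARGS = {"dbname": "appdb", "user": "monitor",
             "password": "secret", "connect_timeout": 3}

def node_role(host):
    """Return 'standby' if the node is in recovery, otherwise 'primary'."""
    conn = psycopg2.connect(host=host, **CONN_ARGS)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            return "standby" if cur.fetchone()[0] else "primary"
    finally:
        conn.close()

def replication_lag(primary_host):
    """On the primary, report how far each standby is behind, in bytes."""
    conn = psycopg2.connect(host=primary_host, **CONN_ARGS)
    try:
        with conn.cursor() as cur:
            cur.execute("""
                SELECT application_name,
                       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
                  FROM pg_stat_replication
            """)
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for node in NODES:
        print(node, node_role(node))
    print("replication lag:", replication_lag(NODES[0]))
```

A production failover manager adds the pieces that matter most: witness nodes for quorum, fencing of the failed primary, and automatic promotion of the standby (PostgreSQL exposes pg_promote() for that), which is exactly the "100% within our perimeter" behavior Roberto values.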
Carl Olofson, IDC | Postgres Vision 2021
>> Narrator: From around the globe, it's theCUBE, with digital coverage of Postgres Vision 2021, brought to you by EDB. >> Welcome back to Postgres Vision 21. My name is Dave Vellante. We're thrilled to welcome Carl Olofson to theCUBE. Carl is a research vice president at IDC focused on data management. The long-time database analyst is a technologist and market observer. Carl, good to see you again. >> Thanks, Dave. Glad to be here. >> All right, let's get into it. Let's go right to the source, the open source database space. What changes have you seen over the last couple of years in that marketplace? >> Well, this is a dynamic area and it's continuing to evolve. When we first saw the initial open source products like MySQL and PostgreSQL in the early days, they were very limited in terms of functionality. They were espoused largely by sort of true believers, you know, people who said everything should be open source, and we saw that mainly they were being used for what I would call rather prosaic database applications. But as time has gone by, both of these products have improved. Now there's one key difference, of course, which is that MySQL is company-owned open source, so the IP belongs to Oracle Corporation, whereas PostgreSQL is community open source, which means that the IP belongs to the PostgreSQL community. And that can make a big difference in terms of things like licensing and so forth, which really matters now that we're coming into the cloud space, because as open source products move into the cloud, the revenue model is based on subscriptions. Of course, they were always based on subscriptions for open source, because you don't charge for the license; what you charge for is support. But in the cloud, what you can do is set up a database service and then charge for that service. And whether it's open source or not open source actually doesn't matter to the user, if you see what I mean, because they're still paying a subscription fee for a service and they get the service. The main difference between the two types is that if you're a commercial provider of PostgreSQL, like EnterpriseDB, you don't have control over where it goes, and you don't have control over the IP and how people use it in different ways; whereas Oracle owns MySQL, so they have a lot more control and they can do things to it on their own. They don't have to consult the community. Now there's also non-relational open source, including MongoDB. And as you may be aware, MongoDB has changed their license so that it's not possible for a third party to offer MongoDB as a complete managed database service without paying a license fee to MongoDB, and that's because they own the IP too. We're going to see a lot more of this sort of thing. I have conversations with open source communities all the time, and they are getting a little concerned that it has become possible for somebody to simply take their technology and make a lot of money off it, and no money goes back to the community; it just stays with the supplier. So I think, you know, it'll be interesting to see how all this plays out over time. >> So you're suggesting that the Postgres model, then, is, I guess I'll use the word, cleaner. And does that feel like a benefit, or is it a two-edged sword kind of thing? I mean, you were saying before, you know, a company controls the IP, so they can do things without having to go to the community, so maybe they can do things faster. But on the other hand, like you said, you get handcuffed: you think you're going to be able to get, you know, a managed service, but then all of a sudden you're not, and the rules change midstream. Am I correct that Postgres, the model, is cleaner for the customer? >> Well, you know, a lot of my friends who are in the open source community don't even consider company-owned open source to be true open source, because the IP is controlled by a company, not by a community. >> Dave: Right. >> So from that perspective, certainly PostgreSQL is considered, I don't know if you want to use the word cleaner, or more pure, or something along those lines. But also, because of the nature of community open source, it can be used in many different ways, and so we see Postgres popping up all over the place, sometimes partially and sometimes altogether. In other words, a cloud service will take a piece of Postgres and stick it on top of their own technology and offer it. And the reason they do that is they know there are a lot of developers out there who already know how to code for Postgres, so they are immediately first-class users of the service that's being offered. >> So talk a little bit more about what you're seeing. You just mentioned a lot of different use cases, which is interesting; I didn't realize that was happening. What are you seeing in terms of adoption in, let's say, the last 18 to 24 months, specific to Postgres? >> Yeah, we're seeing a fair amount of adoption, especially in the middle market, and of course there is rapid adoption in the tech sector. Now, why would that be? Well, it's because they have armies of technologists who know how to program this stuff. A lot of them will use PostgreSQL without a support contract; they'll just support themselves, and they can do that because they have the technicians who are capable of doing it. Most regular businesses can't do that. They don't have the staff, so they need that support contract, and that's where a company like EnterpriseDB comes in. I mention them only because they're the leading supplier of Postgres relative to all the other suppliers. >> I was talking to Josh Berkus of Red Hat, and he had just come off of KubeCon and was explaining kind of what's happening in that community: a big focus, of course, on security and the whole, you know, so-called shift left. We were having a good discussion about when it makes sense to use Postgres in a container environment, whether you should run Postgres on Kubernetes, and he sort of suggested that things have rapidly evolved. There are still, you know, considerations. But what are you seeing in terms of the adoption of microservices architectures, containers, and generally Kubernetes? How has that affected the use of things like Postgres? >> Well, those are all different things that need to be kind of unpacked. >> Pick your favorite. >> They're related, though. So microservices: the microservice concept is that you take an application, break it up into little pieces, and each one becomes a microservice that's invoked through an API. And then you have this whole structured API system that you use to drive the application. They typically run in containers, usually Kubernetes-governed containers, but the reason you do this is basically efficiency, because especially in the cloud you only want to pay for what you use. So when you're running a microservice-based application, and applications have lots of little pieces, when something needs to be done, a microservice fires up, does the thing that needs to be done, and goes away. You only pay for the fraction of a second that the microservice is running. Whereas in a conventional application, you load this big heavyweight application; it does stuff, it sits and waits for things, it does more stuff and sits and waits again, and you pay for compute for that entire period. So it's much more cost-effective to use a microservices application. The thing is that the concept of microservices is based on the idea that the code is stateless, but database code isn't stateless, because it has its attachment to the database, which is the ultimate kind of stateful environment, right? So it's a tricky business. Most database technologies that are claimed to be container-based actually run in containers the way they run on servers. In other words, they're not microservice-based; they just run in containers. And the reason they're doing that is portability, so that you can deploy them anywhere and you can move them around. >> Right. And so talk about that; again, when we were talking to Josh, it was clear that Kubernetes has evolved, you know, quite rapidly. At the same time, there were cautions. In other words, he suggested things like, you know, at one point there were known flaws and known bugs that shipped in the code; that's been remediated or moderated in terms of that practice, but there are still considerations just in terms of the frequency of updates. I think he gave the example of, when was the last time, you know, the JVM got overhauled. So what kind of considerations should customers think about? They want Kubernetes, they want the flexibility and the agility, but at the same time, if they're going to put it in production, they've got to be careful, right? >> Yeah, I think you need to make sure you're using functions that are well established. You wouldn't want to put something into production that's new: they say, oh, here's a new operation, let's try that, and then, you know, you get in trouble. So you want to be conservative that way. Kubernetes is open source, so the updates and the testing and all that follow a rather slow, formal process from the time a submission comes in to the time it goes out. Whereas, you mentioned JVMs, the JVM is owned by Oracle, and so JVMs are managed like products. Now, there's a whole sort of legal thing I don't want to get into as to whether it's legal for third parties to build JVMs without paying a license; I don't want to talk about that. But it's based on a very stable base, whereas this area of Kubernetes and governed containers is still rapidly evolving. But this is like any technology, right? If you're going to commit your enterprise to functions that run on an emerging technology, then you are accepting some risk. There's no question about it.
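To make the stateless-code-versus-stateful-database distinction Carl draws here concrete, below is a toy, illustrative Python sketch, not anything discussed in the interview: a stateless service handler keeps nothing in memory between requests, so any replica can serve any call, while all state lives in PostgreSQL. The "orders" table and connection settings are made up.

```python
# Toy illustration of the stateless-service / stateful-database split.
# The "orders" table and the connection string are hypothetical placeholders.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

import psycopg2

DSN = os.environ.get("APP_DSN",
                     "host=postgres dbname=appdb user=app password=secret")

class OrderCountHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No per-instance state: open a connection, read, respond, forget.
        conn = psycopg2.connect(DSN)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM orders")
                (count,) = cur.fetchone()
        finally:
            conn.close()
        body = json.dumps({"orders": count}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any number of identical replicas of this process can run behind a load
    # balancer or as Kubernetes pods; the database remains the stateful part.
    HTTPServer(("0.0.0.0", 8080), OrderCountHandler).serve_forever()
```

In practice each replica would use a connection pool rather than a connection per request, and the database itself, as Carl notes, stays a stateful service that merely happens to run in a container.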
>> So we talked about the cloud earlier and the whole trend toward managed services. How does that specifically apply to Postgres? You can kind of imagine, like, a sidecar, a little bit of Postgres mixed in with, you know, other services. What does your telescope say in terms of Postgres adoption in the cloud? How do you see that progressing? >> I think there's a lot of potential there, and I think we are nowhere near the adoption that it should be able to achieve. I say that because, for one thing, even though we analyze the future at IDC, that doesn't mean we actually know the future, so I can't say what its adoption will be, but I can say that there's a lot of potential there. There's a tremendous number of Postgres developers out there, so there's a huge potential for adoption. And especially for cloud adoption, the main thing that would help is an independent managed cloud service, and I know that EnterpriseDB has one, an independent managed database service. I think they do. >> Yeah, I think so. >> But, you know, why do I say that? I say that because of the alternatives these days: there are some small companies, and maybe they'll survive and maybe they won't, but do you want to get involved with them, or the cloud platform providers? If you use their Postgres, you're locked into that cloud platform. You know, if you use Amazon's Postgres on RDS, right, you quickly become locked in, because you start using all the AWS tools that surround it to build and manage your application, and then you can't move, if you see what I mean. They also have an RDS flavor, Aurora, and that is actually one of those cases where it's really just a thin layer of Postgres interaction code; underneath, Aurora is their own product, so that's an even deeper level of commitment. >> So what has to happen? Obviously cloud is, you know, a big trend, so the Postgres community then adapts the code base for the cloud, and obviously EDB has, you know, hundreds of developers contributing to that. But what does it mean to be able to run in the cloud? Is that making it cloud native? Is that extensions? What technically has to occur, what has occurred, and how mature is it? >> Well, smaller user organizations are able to migrate fairly quickly to the cloud, because most of their applications are, you know, commercially purchased; they're packaged applications. When they move to the cloud, they get the SaaS version, and often the SaaS equivalent runs on Postgres, so that's just fine. Larger enterprises are a real mess. If you've ever been in a large enterprise data center, you know what I'm talking about: there are just servers and storage everywhere, all these applications, databases, connections. They are not moving to the cloud anytime soon. But what they are doing is setting up things like private cloud environments and deploying in there. And this is a place where, if you're thinking about moving to something like Postgres, you know, most of these enterprises use the big commercial databases: Oracle, SQL Server, Db2, and so forth. If you're thinking of moving from that to, say, a PostgreSQL deployment, then the smart thing to do would be first to do all your work in the private cloud, where you have complete control over the environment. It also makes sense to have a commercial support contract from a vendor that you trust, because, and I've said this again and again, unless you are, you know, Cisco or somebody, some super tech company that's got all the technicians you need to do the work, you really don't want to take on that level of risk, if you see what I mean. Another advantage to working with a support supplier, especially if you have a close, intimate relationship, is they will ship you security patches on a regular basis, which is really important these days, because data security is, as you know, a growing concern all over the place. >> So let's stay on the skillsets for a minute. Where do you see the gaps within enterprises? What kind of expertise, you mentioned, you know, support contracts, what are the types of things that a customer should look for in terms of the expertise to apply to supporting Postgres databases? >> Well, obviously you want them to do the basics that any software company does, right? You want them to provide you with regular updates in binary form that you can load and, you know, test and run. You want to have the 24-hour hotline, telephone support, all that kind of thing. I think it's also important that the vendor you're working with has a solid ability to provide you with advice and counseling, especially if you're migrating from another technology, to help your people convert from what they were using to what they're going to be using. So those are all aspects that I would look for in a vendor supporting a product like PostgreSQL. >> When you think about the migration to the cloud: of course Amazon talks a lot about cloud migration, and they have a lot of tooling associated with that. >> Carl: Right. >> But when you step back and look at it, to your point earlier, a lot of the hardcore mission-critical stuff isn't going to move, or hasn't moved, but a lot of the fat middle, you know, are good candidates for it. >> Carl: Right. >> How do you think about that? How do you look at that? Obviously Oracle is trying to shove everything into OCI, and they're all in because they realize they can make a lot of money doing that. But what are the sort of parameters that we should think about when considering that kind of migration, moving a legacy database into the cloud? >> Well, it has to be done piecemeal. You're not going to be able to do it all at once. If you have hundreds of applications, you don't even want to. You know, it's a good time to take inventory of what you've got running and ask yourself: are these applications really serving the business interests today, and will they in the future, or is this a good time to maybe consider something else? Even if you have a packaged application, there might be one that is more aligned with your future goals, so it's important to do that. Look at your data integration and try to simplify it. Most data integration at most companies has been done piecemeal, project by project; the projects don't reference each other, so you have this chaos of ETL jobs and transformation rules and things like that that are just, you know, difficult to manage. Forget about any kind of migration or transformation considerations; just trying to run it now is becoming increasingly difficult. So maybe you want to change your strategy for doing data integration. Maybe you want to consolidate, to put more data in one database. I'm not an advocate of the idea that you can put all application data in one database, by the way; we know from bitter experience that doesn't work. But we can be rational about the kinds of databases that we use and how they sit together. >> Well, you've been following this for a long time, and you saw the sort of rise and fall of the big data meme, this idea that you can shove everything into a single place and have a single version of the truth. It just never seemed to happen. >> Carl: Right. >> So, you know, Postgres has been around a long time, and it's evolved. I remember, during VMware's ascendancy, people were asking, okay, should I virtualize my Postgres database, similar conversations to the ones we were having earlier about Kubernetes. You've seen the move to the cloud. We're going to have this conversation about the edge at some point in time. So what's your outlook for Postgres, the Postgres community, and, you know, the database market overall? >> Well, I really think the future for database growth is in the cloud. That's what all the data we're looking at, and that's what our recent surveys, indicate. As I said before, the rate of change depends on the size of the enterprise. Smaller enterprises are moving rapidly, large enterprises much more slowly and cautiously, for the very simple reason that it's a very complex proposition. And also, in some cases, they're wondering if they can move certain data, or whether they would be violating some sort of regulatory constraint or contractual issue, so they need to deal with those things too. That's why the private cloud is the perfect place to get started and get the technology all lined up: it's still in your data center, still under your control, no legal issues there, but you can start converting your applications to microservice-architected applications running in containers. You can start replacing your database servers with ones that can run in a container environment, and maybe hope that in the future some of those will also be able to run as microservices. I don't think it's impossible, but it involves programming the database server in a very different way than we've done in the past. You can do those things under your own control, over time, in your own data center. And then you reach a point where you take the elements of your application environment and ask: what pieces of this can I move to the cloud without creating disruption and issues regarding things like data egress and latency from cloud to data center, and that kind of thing? You prepare for that, and then you start converting in a stepwise manner. I think ultimately it just makes so much sense to be in the cloud. The cloud vendors have economies of scale: they can deploy large numbers of servers and storage systems to satisfy the needs of large numbers of customers and create considerable savings, some of which, of course, becomes their profit, which is what's due to them, and some of which comes back to the users. So that's what I expect we're going to see. And, oh gosh, I would say that starting about three years from now the larger enterprises will start making their move, and then you'll really start to see changes in the numbers in terms of cloud and cloud revenue. >> Great stuff, Carl, thank you for that. So any cool research you're working on lately? How are you spending your work time, anything you want to plug? >> Well, I'm working a lot on just these questions; you know, cloud migration is a hot topic. Another one, which is really sort of off the subject of what we've been talking about, is graph database, which I've been doing a fair amount of research into. I think that's going to be really important in the coming years. And I'm working with my colleagues on a project called the Future of Intelligence, which looks at all the different related elements, not just database and data integration but artificial intelligence, data communications, and so on and so forth, and how they come together to create a more intelligent enterprise. That's a major initiative; it's one of what we call the Future Of initiatives. >> Great. Carl, thanks so much for coming back to theCUBE. It's great to have you, man. I appreciate it. >> Well, I enjoyed it. We'll have to do it again sometime. >> All right, you got it. All right, thank you everybody for watching theCUBE's continuous coverage of Postgres Vision 21. This is Dave Vellante, keep it right there. (upbeat music)
Uli Homann, Microsoft | IBM Think 2021
(upbeat music) >> Narrator: From around the globe. It's theCUBE with digital coverage of IBM Think 2021. Brought to you by IBM. >> Welcome back to theCUBE coverage of IBM. Think 2021 virtual. I'm John Furrier, host of theCUBE. And this is theCUBE virtual and Uli Homann who's here Corporate Vice President, of cloud and AI at Microsoft. Thanks for coming on. I love this session, obviously, Microsoft one of the big clouds. Awesome. You guys partnering with IBM here at IBM Think. First of all, congratulations on all the success with Azure and just the transformation of IBM. I mean, Microsoft's Cloud has been phenomenal and hybrid is spinning perfectly into the vision of what enterprises want. And this has certainly been a great tailwind for everybody. So congratulations. So for first question, thanks for coming on and tell us the vision for hybrid cloud for Microsoft. It's almost like a perfect storm. >> Yeah. Thank you, John. I really appreciate you hosting me here and asking some great questions. We certainly appreciate it being part of IBM Think 2021 virtual. Although I do wish to see some people again, at some point. From our perspective, hybrid computing has always been part of the strategy that Microsoft as policed. We didn't think that public cloud was the answer to all questions. We always believed that there is multiple scenarios where either safety latency or other key capabilities impeded the usage of public cloud. Although we will see more public cloud scenarios with 5G and other capabilities coming along. Hybrid computing will still be something that is important. And Microsoft has been building capabilities on our own as a first party solution like Azure Stack and other capabilities. But we also partnering with VMware and others to effectively enable investment usage of capabilities that our clients have invested in to bring them forward into a cloud native application and compute model. So Microsoft is continuing investing in hybrid computing and we're taking more and more Azure capabilities and making them available in a hybrid scenario. For example, we took our entire database Stack SQL Server PostgreSQL and recently our Azure machine learning capabilities and make them available on a platform so that clients can run them where they need them in a factory in on-premise environment or in another cloud for example, because they trust the Microsoft investments in relational technology or machine learning. And we're also extending our management capabilities that Azure provides and make them available for Kubernetes virtual machine and other environments wherever they might run. So we believe that bringing Azure capabilities into our clients is important and taking also the capabilities that our clients are using into Azure and make it available so that they can manage them end to end is a key element of our strategy. >> Yeah. Thanks Uli for sharing that, I really appreciate that. You and I have been in this industry for a while. And you guys have a good view on this how Microsoft's got perspective riding the wave from the original computer industry. I remember during the client server days in the 80s, late 80s to early 90s the open systems interconnect was a big part of opening up the computer industry that was networking, internetworking and really created more lans and more connections for PCs, et cetera. And the world just went on from there. Similar now with hybrid cloud you're seeing that same kind of vibe. 
You seeing the same kind of alignment with distributed computing architectures for businesses where now you have, it's not just networking and plumbing and connecting lans and PCs and printers. It's connecting everything. It's almost kind of a whole another world but similar movie, if you will. So this is really going to be good for people who understand that market. IBM does, you guys do. Talk about the alignment between IBM and Microsoft in this new hybrid cloud space? It's really kind of now standardized but yet it's just now coming. >> Yeah. So again, fantastic question. So the way I think about this is first of all, Microsoft and IBM are philosophically very much aligned. We're both investing in key open source initiatives like the Cloud Native Computing Foundation, CNCF something that we both believe in. We are both partnering with the Red Hat organizations. So Red Hat forms a common bond if you still want to between Microsoft and IBM. And again, part of this is how can we establish a system of capabilities that every client has access to and then build on top of that stack. And again, IBM does this very well with their cloud packs which are coming out now with data and AI and others. And again, as I mentioned before we're investing in similar capabilities to make sure that core Azure functions are available on that CNCF cloud environment. So open source, open standards are key elements. And then you mentioned something critical which I believe is misunderstood but certainly not appreciated enough is, this is about connectivity between businesses. And so part of the power of the IBM perspective together with Microsoft is bringing together key business applications for healthcare, for retail, for manufacturing and really make them work together so that our clients that are critical scenarios get the support they need from both IBM as well as Microsoft on top of this common foundation of the CNCF and other open standards. >> It's interesting. I love that point. I'm going to double down and amplify that late and continue to bring it up. Connecting between businesses is one thread. But now people, because you have an edge, that's also industrial business but also people. People are participating in open source. People have wearables, people are connected. And also they're connecting with collaboration. So this kind of brings a whole 'nother architecture which I want to get into the solutions with you on on how you see that playing out. But first I know, you're a veteran with Microsoft for many, many years of decades. Microsoft's core competency has been ecosystems developer ecosystems, customer ecosystems. Today, that the services motion is built around ecosystems. You guys have that playbook IBM's well versed in it as well. How does that impact your partnerships, your solutions and how you deal with down this open marketplace? >> Well, let's start with the obvious. Obviously Microsoft and IBM will work together in common ecosystem. Again, I'm going to reference the CNCF again as the foundation for a lot of these initiatives. But then we're also working together in the ed hat ecosystem because Red Hat has built an ecosystem and Microsoft and IBM are players in that ecosystem. However, we also are looking a higher level there's a lot of times when people think ecosystems it's fairly low level technology. But Microsoft and IBM are talking about partnerships that are focused on industry scenarios. 
Again, retail, for example, or healthcare and others, where we're building on top of these lower-level ecosystem capabilities and then bringing together the solution scenarios where the strength of IBM capabilities is coupled with Microsoft capabilities to drive this very famous one plus one equals three. And then the other piece that I think we both agree on is the open source ecosystem for software development and software development collaboration, and GitHub is a common anchor that we both believe can feed the world's economy with respect to the software solutions that are needed to really bring the capabilities forward, help improve the world economy and so forth, by effectively bringing together brilliant minds across the ecosystem. And again, it's not just Microsoft and IBM bringing some people — the rest of the world is obviously participating in that as well. So thinking again open source, open standards, and then industry-specific collaboration and capabilities being a key part. You mentioned people. We certainly believe that people play a key role, with software developers and the GitHub notion being a key one. But there are others where, again, Microsoft with Microsoft 365 has a lot of capabilities in connecting people within the organization and across organizations. And while we're using Zoom here, a lot of people are utilizing Teams, because Teams is on the one side a collaboration platform, but on the other side it's also an application host. And so bringing together people collaboration, supported and powered by applications from IBM, from Microsoft and others, is going to be, I think, a huge differentiation in terms of how people interact with software in the future. >> Yeah, and I think that whole joint development is a big part of this new people equation, where it's not just partnering in market, it's also at the tech level, and you've got open source and just phenomenal innovation — a formula there. So let's get into some solutions here. I want to get into some of the top solutions you're doing with Microsoft and maybe with IBM. But your title is Corporate Vice President of cloud and AI — come on, could you get a better department? I mean, it doesn't get more relevant than that. It's exciting. Your cloud scale is driving tons of innovation. AI is eating software, changing the software paradigm. We can see that playing out. I've done dozens of interviews just in this past month on how AI — certainly with machine learning and having a control plane with data — is changing the game. So tell us, what are the hot solutions for hybrid cloud? And why is this a different ball game than, say, public cloud? >> Well, so first of all, let's talk a little bit about the AI capabilities and data, because I think there are two categories. You're seeing an evolution of AI capabilities that are coming out. And again, I just read IBM's announcement about integrating the Cloud Pak with IBM Cloud Satellite. I think that's a key capability that IBM is putting out there, and we're partnering with IBM in two directions there: making it run very well on Azure with our Red Hat partners, but on the other side, also thinking through how we can optimize the experience for clients that choose Azure as their platform and IBM Cloud Pak for Data and AI as their technology. But that's a technology play. And then the next layer up is, again, IBM has done a fantastic job building AI capabilities that are relevant for industries. Healthcare being a very good example. Again, retail being another one.
And I believe Microsoft and IBM will work on both partnerships, on the technology side as well as the AI usage in specific verticals. Microsoft is doing similar things within our Dynamics product line. We're using AI for business applications — for planning, scheduling, optimizations, risk assessments, those kinds of scenarios. And of course we're using those in the Microsoft 365 environment as well. I always joke that despite my 30 years at Microsoft, I still don't know how to read or use PowerPoint, and I can't do a PowerPoint slide for the life of me, but with the new Designer, I can actually get help from the system to make beautiful PowerPoint happen. So bringing AI into real-life usage, I think, is the key part. The hybrid scenario is critical here as well, especially when you start to think about real-life scenarios like safety — worker safety in a critical environment — or freshness of product. We're seeing retailers deploying cameras and AI inside the retail stores to effectively make sure that the shelves are stocked, that the quality of the vegetables, for example, continues to be high and monitored. And previously people would do this on an occasional basis, running around in the store. Now the store is monitored 24/7 and people get notified when things need fixing. Another really cool scenario set is quality. We're working with a finished-steel producer that effectively is looking at the stainless steel as it's being produced. They have cameras on this steel that look at specific marks, and if these marks show up, then they know that the stainless steel will be bad. And I don't know if you've looked at a manufacturing process, but the earlier they can detect a failure the better it is, because they can most likely, or more often than not, return the product back to the beginning of the funnel and start over. And that's what they're using. So you can see molten steel, logically speaking, with a camera and AI. And previously humans did this, which is obviously less reliable and also dangerous, because this is very, very hot, glowing steel. So increasing safety while at the same time improving quality is something that we see in hybrid scenarios. Again, autonomous driving is another great scenario where perception AI is going to be utilized. So there's a bunch of capabilities out there that really are hybrid in nature and will help us move forward with key scenarios: safety, quality and autonomous behaviors like driving and so forth. >> Uli, great insight, great product vision, great alignment with IBM's hybrid cloud space, with what all customers are looking for now, and certainly multi-cloud around the horizon. So great to have you on, and congratulations for your continued success. You've got a great area, cloud and AI, and we'll be keeping in touch. I'd love to do a deep dive sometime. Thanks for coming on. >> John, thank you very much for the invitation and great questions. Great interview. Love it. Appreciate it. >> Okay, CUBE coverage here at IBM Think 2021 virtual. I'm John Furrier, your host. Thanks for watching. (upbeat music)
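The early-detection pattern Uli describes — score every camera frame and flag a defective piece before it travels further down the line — reduces to a small control loop. Below is a minimal Python sketch of that idea; the Frame fields, the upstream vision model producing the defect score, and the 0.8 threshold are assumptions made for illustration, not details of Microsoft's or the steel producer's actual system.

```python
# Hypothetical sketch of the inspection loop described above: score every
# camera frame for defect marks and flag the piece early, before it moves
# further down the production line. The Frame fields, the upstream vision
# model producing defect_score, and the 0.8 threshold are stand-ins.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Frame:
    piece_id: str        # which slab or coil the frame belongs to (assumed)
    defect_score: float  # 0.0-1.0 score from an upstream vision model (assumed)


def flag_defective_pieces(frames: Iterable[Frame], threshold: float = 0.8) -> List[str]:
    """Return the IDs of pieces where any frame crosses the defect threshold."""
    flagged = set()
    for frame in frames:
        if frame.defect_score >= threshold:
            flagged.add(frame.piece_id)
    return sorted(flagged)


if __name__ == "__main__":
    stream = [
        Frame("slab-001", 0.12),
        Frame("slab-002", 0.91),  # a defect mark detected on this piece
        Frame("slab-001", 0.07),
    ]
    print(flag_defective_pieces(stream))  # ['slab-002']
```

The same shape of loop applies to the shelf-stocking and produce-freshness examples: a model scores a stream of observations at the edge, and only the exceptions are surfaced to people.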
JG Chirapurath, Microsoft | theCUBE on Cloud
>> Okay, we're now going to explore the vision of the future of cloud computing from the perspective of one of the leaders in the field. JG Chirapurath is the Vice President of Azure Data, AI and Edge at Microsoft. JG, welcome to theCUBE on Cloud, thanks so much for participating. >> Well, thank you, Dave. And it's a real pleasure to be here with you, and I just want to welcome the audience as well. >> Well, JG, judging from your title, we have a lot of ground to cover, and our audience is definitely interested in all the topics that are implied there. So let's get right into it. We've said many times in theCUBE that the new innovation cocktail comprises machine intelligence, or AI, applied to troves of data with the scale of the cloud. It's no longer that we're driven by Moore's Law. It's really those three factors, and those ingredients are going to power the next wave of value creation in the economy. So first, do you buy into that premise? >> Yes, absolutely. We do buy into it, and I think one of the reasons why we put data, analytics and AI together is because all of that really begins with the collection of data and managing it and governing it, unlocking analytics in it. And we tend to see things like AI — the value creation that comes from AI — as being on that continuum of having started off with really things like analytics and proceeding to machine learning and the use of data in interesting ways. >> Yes, I'd like to get some more thoughts around data and how you see the future of data and the role of cloud, and maybe how Microsoft's strategy fits in there. I mean, your portfolio — you've got SQL Server, Azure SQL, you've got Arc, which is kind of Azure everywhere for people that aren't familiar with that, you've got Synapse, which of course does all the integration and the data warehouse and gets things ready for BI and consumption by the business, and the whole data pipeline. And then all the other services: Azure Databricks, you've got Cosmos in there, you've got Blockchain, you've got open source services like PostgreSQL and MySQL. So lots of choices there. And I'm wondering, how do you think about the future of cloud data platforms? It looks like your strategy is right tool for the right job. Is that fair? >> It is fair, but also, just to step back and look at it, it's fundamentally what we see in this market today: customers seek really a comprehensive proposition. And when I say a comprehensive proposition, it is sometimes not just about saying, "Hey, listen, we know you're a SQL Server company, we absolutely trust that you have the best Azure SQL database in the cloud. But tell us more. We've got data that is sitting in Hadoop systems. We've got data that is sitting in PostgreSQL, in things like MongoDB." So that open source proposition today, in data and data management and database management, has become front and center. So our real sort of push there is, when it comes to migration, management, modernization of data, to present the broadest possible choice to our customers, so we can meet them where they are. However, when it comes to analytics, one of the things they ask for is a lot more convergence. It really isn't about having 50 different services. It's really about having that one comprehensive service that is converged. That's where things like Synapse fit in, where you can just land any kind of data in the lake and then use any compute engine on top of it to drive insights from it.
So fundamentally, it is that flexibility that we really focus on, to meet our customers where they are — really not pushing our dogma and our beliefs on it, but meeting our customers according to the way they've deployed stuff like this. >> So that's great. I want to stick on this for a minute, because when I have guests on like yourself they never want to talk about the competition, but that's all we ever talk about. And that's all your customers ever talk about. Because the counter to that right tool for the right job — and that I would say is really kind of Amazon's approach — is that you've got the single unified data platform, the mega database that does it all. And that's kind of Oracle's approach. It sounds like you want to have your cake and eat it too. So you've got the right tool for the right job approach, but you've got an integration layer that allows you to have that converged database. I wonder if you could add color to that and confirm or deny what I just said. >> No, that's a very fair observation, but I'd say there's a nuance in what I sort of described. When it comes to data management, when it comes to apps, we give customers the broadest choice. Even in that perspective, we also offer convergence. So case in point: when you think about Cosmos DB, under that one service you get multiple engines, but with the same properties — global distribution, the five nines availability. It gives customers the ability, when they have to build that new cloud native app, to adopt Cosmos DB and choose the engine that is most flexible for them. However, when it comes to, say, a SQL Server, for example, if you're modernizing it, sometimes you just want to lift and shift it into things like IaaS. In other cases, you want to completely rewrite it. So you need to have the flexibility of choice there that is presented by the legacy of what sits on premises. When you move into things like analytics, we absolutely believe in convergence. So we don't believe that you need to have a relational data warehouse that is separate from a Hadoop system, that is separate from, say, a BI system that is just a bolt-on. For us, we love the proposition of really building things that are so integrated that once you land data, once you prep it inside the lake, you can use it for analytics, you can use it for BI, you can use it for machine learning. So I think our sort of differentiated approach speaks for itself there. >> Well, that's interesting, because essentially again you're not saying it's an either/or, and you see a lot of that in the marketplace. Some companies say, "No, it's the data lake," and others say, "No, no, put it in the data warehouse." And that causes confusion and complexity around the data pipeline and a lot of copying. And I'd love to get your thoughts on this. A lot of customers struggle to get value out of data, and specifically data product builders are frustrated that it takes them too long to go from this idea of, hey, I have an idea for a data service and it can drive monetization, but to get there you've got to go through this complex data life cycle and pipeline and beg people to add new data sources. Do you feel like we have to rethink the way that we approach data architecture? >> Look, I think we do in the cloud.
And I think what's happening today, and the place where I see the most amount of rethink and the most amount of push from our customers to really rethink, is the area of analytics and AI. It's almost as if what worked in the past will not work going forward. So when you think about analytics in the enterprise today, you have relational systems, you have Hadoop systems, you've got data marts, you've got data warehouses, you've got the enterprise data warehouse — those large honking databases that you use to close your books with. But when you start to modernize it, what people are saying is that we don't want to simply take all of that complexity that we've built over, say, three or four decades and simply migrate it en masse exactly as it is into the cloud. What they really want is a completely different way of looking at things. And I think this is where services like Synapse provide a completely differentiated proposition to our customers. What we say there is: land the data, in any way, shape or form, inside the lake. Once you've landed it inside the lake, you can essentially use Synapse Studio to prep it in the way that you like, use any compute engine of your choice, and operate on this data in any way that you see fit. So case in point: if you want to hydrate a relational data warehouse, you can do so. If you want to do ad hoc analytics using something like Spark, you can do so. If you want to invoke Power BI on that data, you can do so. If you want to bring a machine learning model to this prepped data, you can do so. So inherently, when customers buy into this proposition, what it solves for them and what it gives them is complete simplicity: one way to land the data, multiple ways to use it. And it's all integrated. >> So should we think of Synapse as an abstraction layer that abstracts away the complexity of the underlying technology? Is that a fair way to think about it? >> Yeah, you can think of it that way. It abstracts away, Dave, a couple of things. It takes away the complexity related to the type of data, the complexity related to the size of data, the complexity related to creating pipelines around all these different types of data, and fundamentally puts it in a place where it can now be consumed by any sort of entity inside the Azure proposition. And by that token, even Databricks — you can in fact use Databricks in an integrated way with Azure Synapse. >> Right, well, so that leads me to this notion of — and I wonder if you buy into it — my inference is that a data warehouse or a data lake could just be a node inside of a global data mesh, and then Synapse is sort of managing that technology on top. Do you buy into that global data mesh concept? >> We do, and we actually do see our customers using Synapse, and the value proposition that it brings together, in that way. Now, it's not where they start. Oftentimes a customer comes and says, "Look, I've got an enterprise data warehouse, I want to migrate it," or "I have a Hadoop system, I want to migrate it," but from there, the evolution is absolutely interesting to see. I'll give you an example. One of the customers that we're very proud of is FedEx. And what FedEx is doing is completely re-imagining its logistics system — basically the system that delivers, what is it, the 3 million packages a day. And in doing so, in these COVID times, with the view of basically delivering on COVID vaccines.
One of the ways they're doing it is basically using Synapse. Synapse is essentially that analytic hub where they can get a complete view into the logistics processes, the way things are moving, understand things like delays, and really put all of that together in a way that they can essentially get the packages and these vaccines delivered as quickly as possible. Another example — it's one of my favorites — we see that once customers buy into it, they can essentially do other things with it. So an example of this, really my favorite story, is the Peace Parks initiative. It is the premier white rhino conservancy in the world. They essentially are using data that has landed in Azure — images in particular — to basically use drones over the vast area that they patrol, and use machine learning on this data to really figure out where there is an issue and where there isn't an issue, so that this park, with about 200 radios, can scramble surgically versus having to range across the vast area that they cover. So what you see here is, the importance is really getting your data in order, landing it consistently, whatever the kind of data it is, building the right pipelines, and then the possibilities of transformation are just endless. >> Yeah, that's very nice how you worked in some of the customer examples, and I appreciate that. I want to ask you, though: some people might say that putting in that layer — while you clearly add simplification, and I think that's a great thing — means there begins over time to be a gap, if you will, between the ability of that layer to integrate all the primitives and all the piece parts, and that you lose some of that fine-grained control and it slows you down. What would you say to that? >> Look, I think that's what we excel at and that's what we completely buy into. And it's our job to basically provide that level of integration and that granularity. It's an art — I absolutely admit it's an art. There are areas where people crave simplicity and not a lot of knobs and dials and things like that. But there are areas where customers want flexibility. And so, just to give you an example of both of them: in landing the data, in consistency, in building pipelines, they want simplicity. They don't want complexity. They don't want 50 different places to do this — there's one way to do it. When it comes to computing and reducing this data, analyzing this data, they want flexibility. This is one of the reasons why we say, "Hey, listen, you want to use Databricks — if you're buying into that proposition and you're absolutely happy with them, you can plug it in." You want to use BI and essentially build a small data model, you can use BI. If you say, "Look, I've landed into the lake, I really only want to use ML," bring in your ML models and party on. So that's where the flexibility comes in. That's sort of how we think about it. >> Well, I like the strategy, because one of our guests, Zhamak Dehghani, is I think one of the foremost thinkers on this notion of the data mesh. And her premise is that the data builders, the data product and service builders, are frustrated because the big data system is generic to context — there's no context in there. But by having context in the big data architecture and system, you can get products to market much, much, much faster. So, and that seems to be your philosophy, but I'm going to jump ahead to my ecosystem question. You've mentioned Databricks a couple of times. There's another partner that you have, which is Snowflake.
They're kind of trying to build out their own Data Cloud, if you will, and global mesh, and on the one hand they're a partner, on the other hand they're a competitor. How do you sort of balance and square that circle? >> Look, when I see Snowflake, I actually see a partner. This is where I step back and look at Azure as a whole, and in Azure as a whole, companies like Snowflake are vital in our ecosystem. I mean, there are places we compete, but effectively, by helping them build the best Snowflake service on Azure, we essentially are able to differentiate and offer a differentiated value proposition compared to, say, a Google or an AWS. In fact, that's been our approach with Databricks as well, where they are effectively on multiple clouds, and our opportunity with Databricks is to integrate them in a way where we offer the best experience, the best integrations, on Azure. That's always been our focus. >> Yeah, it's hard to argue with the strategy — our data, with our data partner ETR, shows Microsoft is both pervasive and impressively has a lot of momentum and spending velocity within the budget cycles. I want to come back to AI a little bit. It's obviously one of the fastest growing areas in our survey data. As I said, clearly Microsoft is a leader in this space. What's your vision of the future of machine intelligence, and how will Microsoft participate in that opportunity? >> Yeah, so fundamentally, we've built on decades of research around essentially vision, speech and language. Those have been the three core building blocks, and for a really focused period of time we focused on essentially ensuring human parity. So if you ever wonder what the keys to the kingdom are, it's the work we've put in ensuring that — the research posture that we've taken there. What we've then done is essentially a couple of things. We've focused on looking at the spectrum that is AI, from saying, "Hey, listen, it's got to work for data analysts" who are looking to basically use machine learning techniques, to developers who are essentially coding and building machine learning models from scratch. So that proposition manifests for us as really AI focused on all skill levels. The other core thing we've done is that we've also said, "Look, it'll only work as long as people trust their data and they can trust their AI models." So there's a tremendous body of work and research we do in things like responsible AI. So if you ask me where we push, it is fundamentally to make sure that we never lose sight of the fact that the spectrum of AI can come together for any skill level, and we keep that responsible AI proposition absolutely strong. Now, against that canvas, Dave, I'll also tell you that as Edge devices get way more capable — where they can infer on the Edge, say with a camera or a mic or something like that — you will see us pushing a lot more of that capability onto the edge as well. But to me, that's sort of a modality; the core really is all skill levels and that responsibility in AI. >> Yeah, so that brings me to this notion of — I want to bring in Edge and hybrid cloud, understand how you're thinking about hybrid cloud, multicloud. Obviously one of your competitors, Amazon, won't even say the word multicloud. You guys have a different approach there, but what's the strategy with regard to hybrid?
Do you see the cloud — you're bringing Azure to the edge — maybe you could talk about that, and talk about how you're different from the competition. >> Yeah, I think on the Edge — and I'll be the first one to say that the word Edge itself is conflated a little bit — but I will tell you, just focusing on hybrid: this is one of the places where, I would say, 2020, if I were to look back, from a COVID perspective in particular, has been the most informative. Because we absolutely saw customers digitizing, moving to the cloud, and we really saw hybrid in action. 2020 was the year that hybrid really became real from a cloud computing perspective. And an example of this is we understood that it's not all or nothing. So sometimes customers want Azure consistency in their data centers — this is where things like Azure Stack come in. Sometimes they basically come to us and say, "We want the flexibility of adopting a flexible bunch of platforms, let's say containers, orchestrating Kubernetes, so that we can essentially deploy it wherever we want." And so when we designed things like Arc, it was built with that flexibility in mind. So here's the beauty of what something like Arc can do for you: if you have a Kubernetes endpoint anywhere, we can deploy an Azure service onto it. That is the promise. Which means, if for some reason the customer says, "Hey, I've got this Kubernetes endpoint in AWS, and I love Azure SQL," you will be able to run Azure SQL inside AWS. There's nothing that stops you from doing it. So inherently, remember, our first principle is always to meet our customers where they are. So from that perspective, multicloud is here to stay. We are never going to be the people that say, "I'm sorry," we will never say (speaks indistinctly) multicloud, but it is a reality for our customers. >> So I wonder if we could close — thank you for that — by looking back and then ahead. And I want to put forth, maybe it's a criticism, but maybe not — maybe it's the art of Microsoft. But first, Microsoft did an incredible job at transitioning its business. Azure is omnipresent; as we said, our data shows that. So, a two-part question: first, Microsoft got there by investing in the cloud, really changing its mindset, I think, and leveraging its huge software estate and customer base to put Azure at the center of its strategy. And many have said, me included, that you got there by creating products that are good enough. We do a 1.0, it's still not that great, then a 2.0, and maybe not the best, but acceptable for your customers. And that's allowed you to grow very rapidly and expand your market. How do you respond to that? Is that a fair comment? Are you more than good enough? I wonder if you could share your thoughts. >> Dave, you hurt my feelings with that question. >> Don't hate me, JG. (both laugh) We're getting it out there all right, so. >> First of all, thank you for asking me that. I am absolutely the biggest cheerleader you'll find at Microsoft. I absolutely believe that I represent the work of almost 9,000 engineers, and we wake up every day worrying about our customer and worrying about the customer condition, and to absolutely make sure we deliver the best in the first attempt that we do. So when you take the plethora of products we deliver in Azure — be it Azure SQL, be it Azure Cosmos DB, Synapse, Azure Databricks, which we did in partnership with Databricks, Azure Machine Learning.
And recently, when we premiered Azure Purview, we sort of offered the world's first comprehensive data governance solution. I would humbly submit to you that we are leading the way, and we're essentially showing how the future of data, AI and the Edge should work in the cloud. >> Yeah, I'd be disappointed if you capitulated in any way, JG. So, thank you for that. And the kind of last question is looking forward, how you're thinking about the future of cloud. The last decade was a lot about cloud migration, simplifying infrastructure management and deployment, SaaSifying the enterprise, a lot of simplification and cost savings, and of course redeployment of resources toward digital transformation and other valuable activities. How do you think this coming decade will be defined? Will it be sort of more of the same, or is there something else out there? >> I think that the coming decade will be one where customers start to unlock outsize value out of this. Compared to the last decade, where people laid the foundation — people essentially looked at the world and said, "Look, we've got to make a move. We're largely hybrid, but we're going to start making steps to basically digitize and modernize our platforms" — I will tell you that with the amount of data that people are moving to the cloud, just as an example, you're going to see the use of analytics and AI for business outcomes explode. You're also going to see a huge focus on things like governance. People need to know where the data is, what the data catalog contains, how to govern it, how to trust this data, and, given all of the privacy and compliance regulations out there, essentially their compliance posture. So I think the unlocking of outcomes, versus simply "Hey, I've saved money"; second, really putting this comprehensive governance regime in place; and then finally security and trust — it's going to be more paramount than ever before. >> Yeah, nobody's going to use the data if they don't trust it. I'm glad you brought up security. It's a topic that is number one on the CIO list. JG, great conversation. Obviously the strategy is working, and thanks so much for participating in CUBE on Cloud. >> Thank you, thank you, Dave, I appreciate it, and thank you to everybody who's tuning in today. >> All right, then keep it right there, I'll be back with our next guest right after this short break.
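As a concrete illustration of the "land once, use any engine" pattern JG describes for Synapse, here is a minimal PySpark sketch. The storage path, column names and aggregation are placeholder assumptions, and in an actual Azure Synapse notebook the Spark session is already provided; this only shows the shape of the workflow, not a reference implementation.

```python
# A minimal PySpark sketch of the "land once, use any engine" pattern
# described above. The abfss:// path and the order_date/amount columns are
# placeholder assumptions; in an Azure Synapse notebook the Spark session is
# already provided, so building one here is only for self-containment.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-sketch").getOrCreate()

# Read the raw files that were landed in the lake (path is an assumption).
orders = spark.read.parquet("abfss://lake@example.dfs.core.windows.net/raw/orders/")

# One engine, one pass over the same landed data.
daily = (
    orders
    .groupBy("order_date")                      # assumed column
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("revenue"),       # assumed column
    )
)

# Write a curated copy back to the lake; the same DataFrame could instead
# hydrate a warehouse table, back a BI model, or feed an ML pipeline.
daily.write.mode("overwrite").parquet(
    "abfss://lake@example.dfs.core.windows.net/curated/daily_orders/"
)
```

The point of the pattern is that this curated output could just as easily hydrate a warehouse table, back a BI model, or feed a machine learning pipeline without re-landing the data.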
Monica Kumar & Bala Kuchibhotla, Nutanix | Introducing a New Era in Database Management
>> Narrator: From around the globe. It's theCUBE with digital coverage of A New Era In Database Management. Brought to you by Nutanix. >> Hi, I'm Stu Miniman. And welcome to this special presentation with Nutanix. We're talking about A New Era In Database Management. To help us dig into it, first of all, I have the Senior Vice President and General Manager of Nutanix Era, Databases and Business Critical Applications, that is Bala Kuchibhotla. And one of our other CUBE alums, Monica Kumar, who's an SVP also with Nutanix. Bala, Monica, thank you so much for joining us. >> Thank you, thank you so... >> Great to be here. >> All right, so first of all, Bala, a new Era. We have a little bit of a pun there — you've got me with some puns. Of course we know that the database solution for Nutanix is Era. So, we always like to bring out the news first. Why don't you tell us, what does this mean? What is Nutanix announcing today? >> Awesome. Thank you, Stu. Yeah, so today's a very big day for us. I'm super excited to inform all of us and our audience that we are announcing the Era 2.0 GA bits, for customers to enjoy. Customers can download them and start playing with them. So what's new with Nutanix Era 2.0? As you know, 1.0 is a single-cluster solution, meaning the customers have to have a Nutanix cluster and then run Era on that same cluster to enjoy the databases. But with Era 2.0, it becomes a multi-cluster solution. And it's not just a multi-cluster solution — customers can enjoy databases across clusters. That means that they can have their Always On Availability Groups SQL Servers, their Postgres servers, across Nutanix clusters. That means that they can spread across availability zones. Now, the most interesting point of this is, it's not just across clusters: customers can place these clusters in the cloud — that is, AWS. You can have a Nutanix cluster in AWS, and the primary production clusters may be on the Nutanix enterprise cloud kind of stuff. That's number one. Number two, we have extended our data management capabilities, our data management platform capabilities, in what we call the global time machine — the global time machine with data access management. Like a raging river that you need to harness by constructing a dam, and then harness for multiple purposes — either irrigation projects or hydroelectric projects kind of stuff — you need to do similar things for your data in an enterprise company. You need to make sure that the right persons get the right amount of data, so that you don't give all production data to everyone in the company. At the same time, they also need it accessible — with one click they can get the database, the data they want. So that's the data access management. Imagine a QA person only gets the sanitized snapshots or sanitized database backups for them to create their copies. And then we are extending our database engine portfolio, too, to introduce SAP HANA. As you know, we support Oracle today, Postgres, MySQL, MariaDB, SQL Server — and I'm excited to inform that we are introducing SAP HANA. Our customers can do one-click sandbox creation into an environment for SAP HANA on the platform. And lastly, I'm super excited to inform that we are becoming a Postgres vendor. We are willing to give 24 by 7, 365-day support for the Postgres database engine that's provisioned through the Nutanix Era platform. So this way the customers can enjoy the engine, the platform and the service all together in one single shot, with a single company — a single 1-800 number — that they can call and get the support they want. I'm super duper excited that this is going to make it a truly multicloud, multi-cluster data management platform for customers. Thank you.
So this way the customers can enjoy the engine, platform, service all together in one single shot with a single 180 company that they can call and get the support they want. I'm super duper excited that this is going to make the customers a truly multicloud multi cluster data management platform. Thank you. >> Yeah. And I'll just add to that too. It's fantastic that we are now offering this new capability. I just want to kind of remind our audience that Nutanix for many years has been providing the foundation the infrastructure software, where you can run all these multiple workloads including databases today. And what we're doing with Era is fantastic because now they are giving our customers the ability to take that database that they run on top of Nutanix to provide that as a service now. So now are talking to a whole different organization here. It's database administrations, it's administrators, it's teams that run databases, it teams that care about data and providing access to data and organizations. >> Well, first of all, congratulations, I've taught for a couple of years to the teams at Nutanix especially some of the people working on PostgreSQL really exciting stuff and you've both seen really the unlocking of database. It used to be ,we talked about, I have one database it's kind of the one that everything runs on. Now, customers they have more databases. You talked about that flexibility is then, where we run it. We'd love to hear, maybe Monica we start with you. You talk about the customers, what does this really mean for them? Because one of our most mission critical applications we talk about, we're not just throwing our databases or what. I don't wake up in the morning and say, Oh let me move it to this cloud and put it in this data center. This needs to be reliable. I need to have access to the data. I need to be able to work with it. So, what does this really mean? And what does it unlock for your customers? >> Yes absolutely, I love to talk about this topic. I mean, if you think about databases, they are means to an end. And in this case, the end is being able to mine insights from the data and then make meaningful decisions based on that. So when we talk to customers, it's really clear that data has not become one of the most valuable assets that an organization owns. Well, of course, in addition to the employees that are part of the organization and our customers. Data is one of the most important assets. But most organizations, the challenges they face is a lot of data gets collected. And in fact, we've heard numbers thrown around for many years like, almost 80% of world's data has been created in the last like three or four years. And data is doubling every two years in terms of volume. Well guess what? Data gets collected. It sits there and organizations are struggling to get access to it with the right performance, the right security and regulation compliance, the reliability, availability, by persona, developers need certain access, analysts needs different access line of businesses need different access. So what we see is organizations are struggling in getting access to data at the right time by the right person on the team and when they need it. And I think that's where database as a service is critical. It's not just about having the database software which is of course important but how you know not make that service available to your stakeholders, to developers to lines of business within the SLAs that they demand. So is it instantly? 
How quickly can you make it available? How quickly can you use have access to data and do something meaningful with it? And mind the insights for smarter business? And then the one thing I'd like to add is that's where IT and business really come together. That's the glue. If you think about it today, what is the blue between an IT Organization and a business organization? It's the data. And that's where they're really coming together to say how can we together deliver the right service? So you, the business owner can deliver the right outcome for our business. >> That's very true. Maybe I'll just add a couple of comments there. What we're trying to do is we are trying to bring the cloud experience, the RDS-like experience to the enterprise cloud and then hybrid cloud. So the customers will now have a choice of cloud. They don't need to be locked in a particular cloud, at the same time enjoy the true cloud utility experience. We help customers create clouds, database clouds either by themselves if that's big enough to manage the cloud themselves or they can partner with a GSIs like Wipro, WorkHCL and then create a completely managed database service kind of stuff. So, this brings this cloud neutrality, portability for customers and give them the choice and their terms, Stu. >> Well Bala, absolutely we've seen a huge growth in managed services as you've said, maybe bring us inside a little bit. What is free up customers? What we've said for so long that back when HCI first started, it was some of the storage administrators might bristle because you were taking things away from them. It was like, no, we're going to free you up to do other things that as Monica said, deliver more business value not mapping LUNs and doing that. How about from the DBA standpoint? What are some of those repetitive, undifferentiated heavy lifting that we're going to take away from them so that they can focus on the business value. >> Yep. Thank you Stu. So think about this. We all do copy paste operations in laptops. Something of that sort happens in data center at a much larger scale. Meaning that the same kind of copy paste operation happens to databases and petabytes and terabytes of scale. Hundreds of petabytes. It has become the most dreaded complex, long running error prone operation. Why should it be that way? Why should the DBS spend all this mundane tasks and then get busy for every cloning operation? It's a two day job for me, every backup job. It's like a hobby job for provisioning takes like three days. We can take this undifferentiated heavy lifting by this and then let the DBS focus on designing the cloud for them. Looking for the database tuning, design data modeling, ML aspects of the data kind of stuff. So we are freeing up the database Ops people, in a way that they can design the database cloud, and make sure that they are energy focused on high valid things and more towards the business center kind of stuff. >> Yeah. And you know automation is really important. You were talking about is automating mundane grunt work. Like IT spends 80% of its time in maintaining systems. So then where is the time for innovation. So if we can automate stuff that's repetitive, stuff that the machine can do, the software can do, why not? And I think that's what our database as a service often does. And I would add this, the big thing our database as a service does really is provide IT organizations and DV organizations a way to manage heterogeneous databases too. It's not like, here's my environment for Postgres. 
Here's my environment for MySQL. Here's my environment for Oracle. Here's my environment for SQL Server. Now, with a single offering, a single tool, you can manage your heterogeneous environment across different clouds — on premises or in a public cloud environment. So I think that's the beauty we're talking about with Nutanix Era. It truly, truly gives organizations that single environment to manage heterogeneous databases, and to apply the same automation and the same ease of management across all these different environments. >> Yeah, I'll just add one comment to that. In a true managed PaaS, obviously, customers in like a single shop go to the public cloud, just click through, and they get the database endpoint, and then someone is managing the database for them. But if you look at enterprise data centers, they need to bring that enterprise governance and structure to these databases. It's not like anyone can do anything to any of these databases. So we are getting the best of both: the needed enterprise governance by these enterprise people, and at the same time the convenience for the application teams and developers — they want to consume these databases like a utility. So bringing the cloud experience, bringing the enterprise governance, and at the same time, I'm super confident we can cut down the cost. That is what Nutanix Era is all about, across all the clouds, including the enterprise cloud. >> Well, Bala, being simpler and being less expensive are some of the original promises of the cloud that don't necessarily always come out there. So that's super important. One of the other things — you talk about these hybrid environments, and I want to understand them: if I'm in the public cloud, can I still leverage some of the services that are in the public cloud? So, if I want to run some analytics, if I want to use some of the phenomenal services that are coming out every day, is that something that can be done in this environment? >> Yeah, beautiful. Thank you, Stu. So we are seeing two categories of customers. There is the public cloud customer, completely born in the public cloud, on cloud-native services. They realize that for every database, maintaining five or seven different copies, and the management of these copies, is prohibitive, just because every copy is a full copy in the public cloud. Meaning you take a backup snapshot and restore it, and your meter — like a New York taxi — starts running for your EBS and whatever you are looking at, kind of stuff. So they can leverage Nutanix clusters and then have a highly efficient cloning capability, so that they can cut down some of these costs for the secondary environments that I talked about. What we call copy data management — that's one kind of use case. The other kind of customer that we are seeing is where cloud is a phenomenon and there's no way around it — people have to move to cloud. That's sometimes a C-level mandate that happens. These customers are enjoying their database experience on our enterprise cloud, but when they try to go to these big hyperscalers, they are seeing the disconnect — they're not able to enjoy some of the things that they are seeing on the enterprise cloud with us. So in this transition, they are talking to us: can you get this kind of functionality with the Nutanix platform onto some of these big hyperscalers?
So there are customers moving in both directions: some customers that are in the public cloud trying to enjoy our capabilities, like copy data management, on Nutanix, and customers that are on-prem but have a mandate to go to the public cloud, with our hybrid cloud strategy. They get to enjoy the same kind of convenience that they are seeing on the enterprise side and bring the same kind of governance that they're used to. So those are the customers we see. Yeah. >> Yeah. Monica, I want to go back to something you talked about — customers dealing with that heterogeneous environment that they have. It reminds me of a lot of the themes that we talked about at Nutanix .NEXT, because customers have multiple clouds they're using, which requires different skillsets and different tooling. It's that simplicity layer that Nutanix has been working to deliver since day one. What are you hearing from your customers? How are they doing with this? And especially in the database world, what are some of those challenges that they're really facing that we're looking to help solve with the solution today? >> Yeah. I mean, if you think about it, what customers, at least in our experience, want — what they're looking for — is this modern cloud platform that can really work across multiple cloud environments. Because people don't want to change: running, let's say, an Oracle database on-prem on a certain stack, and then using a whole different stack to run the Oracle database in the cloud. What they want is the same exact foundation, so they can, for sure, have the right performance, availability, reliability; the applications don't have to be rewritten on top of the Oracle database. They want to preserve all of that, but they want the flexibility to be able to run that cloud platform wherever they choose to. So that's one: modernizing and choosing the right cloud platform is definitely very important to our customers. But you nailed it on the head, Stu — it's really about how do you manage it? How do you operate it on a daily basis? And that's where our customers are struggling, with multiple types of tools out there, a custom tool for every single environment. And that's what they don't want. They want to be able to manage simply, across multiple environments, using the same tools and skillsets. And again — and I'm going to beat the same drum — but that's where Nutanix shines. The design principle is: it's the exact same technology foundation that we provide to customers to run any application — in this case, it happens to be databases. The exact same foundation you can use to run databases on-prem or in the cloud. And then, on top of that, using Era — boom! Simple management, simple operations, simple provisioning, simple copy data management, simple patching; all of that becomes easy using just a single framework to manage and operate. And I will tell you this: when we talk to customers, what is it that DBAs and database teams are struggling with? They're struggling with SLAs and performance and scalability — that's one. Number two, they're struggling with keeping it up and running and fulfilling the demands of the stakeholders, because they cannot keep up with how many databases they need to keep provisioning and patching and updating. So at Nutanix, we are now actually solving both those problems with the platform. We are solving the problem of a very specific SLA that we can deliver in any cloud, and with Era, we're solving the issue of that operational complexity. We're making it really easy.
So again, IT stakeholders and DBAs can fulfill the demands of the business stakeholders and really help them monetize the data. >> Yeah, I'll just add on one concrete example too. We have a big financial customer — they want to run Postgres. They are looking at the public cloud: can we do a managed services kind of thing? But you look at this, the cost difference between Postgres on your own infrastructure versus managed services is almost like 3X to 4X dollars. Now, with the Nutanix platform and Era, we were able to show that they can do it at a much-reduced cost and deliver a managed database service experience, including their DBA cost and including the cloud administration cost. We added the infrastructure picture, we added the people who are going to manage the cloud — the internal cloud — and the experience ends up being plus-plus of what they can see in the public cloud. That's what makes the big difference. And this is where data sovereignty, data control, compliance and infrastructure governance — all these things coupled with the cloud experience — is what customers really see as the value of Era and the enterprise cloud, and with an extension to the public cloud, with our hybrid cloud strategy. If they want to move this workload to the public cloud, they can do it: today with AWS clusters, and tomorrow with our Azure clusters. So that gives them that kind of insurance — not getting locked in by a big hyperscaler, but at the same time enjoying the cloud experience. That's what big customers are looking for. >> Alright, Bala, with all the things you laid out here, what's the availability of Era 2.0? >> Era 2.0 is actually available today. Customers can download the bits and enjoy them. We already have bunches of beta customers who are trying it out — big telco companies, financial companies, and even big companies that manage big pensions, that kind of stuff. People are looking to us. In fact, there are customers who are asking, when is this available for Azure clusters, so that we can move some of our workloads there and manage the databases in Azure clusters. So it is available, and I'm looking forward to great feedback from our customers. And I'm hoping that it will solve some of their major critical problems, and in the process they get the best of Nutanix.
We also offer DevOps services with application provisioning automation, application orchestration and then of course, database services that we talking about today and we offer desktop services. So Nutanix has really evolved in the last few years to a complete cloud platform really focusing on the application and workloads that run on top of the infrastructure stack. So not just the infrastructure layer but how can we be the best platform to run your databases? Your end is the computing workloads, your analytics applications your enterprise applications, cloud native applications. So that's what this is. And databases is one of our most successful workloads that's that runs a Nutanix very well because of the way the infrastructure software is architected. Because it's really great to scale high performance because again our superior architecture. And now with Era, it's a tool, it's all in one. Now it's also about really simplifying the management of databases and delivering them speedily and with agility to drive innovation in the organizations. >> Yep. Thank you Monica. Thank you. I I'll just add a couple of lines of comments into that. DTM for databases as erotically dots two, is going to be a challenge. And historically we are seen as an infrastructure company but the beauty of databases is so and to send to the infrastructure, the storage. So the language slightly becomes easy. And in fact, this holistic way of looking at solving the problem at the solution level rather than infrastructure helps us to go to a different kind of buyer, different kinds of decision maker, and we are learning. And I can tell you confidently the kind of progress that we have seen for in one enough year, the kind of customers that we are winning. And we are proving that we can bring a big difference to them. Though there is a challenge of DTM speaking the language of database, but the sheer nature of cloud platform the way they are a hundred hyperscale work. That's the kind of language that we take. You can run your solution. And here is how you can cut down your database backup time from hours to less than minute. Here's how you can cut down your patching from 16 hours to less than one hour. It is how you can cut down your provisioning time from multiple weeks to let them like matter of minutes. That holistic way of approaching it coupled with the power of the platform, really making the big difference for us. And I usually tell every time I meet, can you give us an opportunity to cut down your database cost, the PC vote, total cost of operations by close to 50%? That gets them excited that lets then move lean in and say, how do you plan to do it? And then we go about how do we do it? And we do a deep dive and PC people and all of it. So I'm excited. I think this is going to be a big play for Nutanix. We're going to make big difference. >> Absolutely well, Bala, congratulations to the team. Monica, both of you thank you so much for joining, really excited for all the announcements. >> Thank you so much. >> Thank you >> Stay with us. We're going to dig in a little bit more with one more interview for this product launch of the New Era and Database Management from Nutanix. I'm Stu Minimam as always, thank you for watching theCUBE. (cool music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Monica | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Monica Kumar | PERSON | 0.99+ |
five | QUANTITY | 0.99+ |
$3X | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
DBS | ORGANIZATION | 0.99+ |
two day | QUANTITY | 0.99+ |
$4X | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Postgres | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
24 | QUANTITY | 0.99+ |
16 hours | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
three days | QUANTITY | 0.99+ |
Bala Kuchibhotla | PERSON | 0.99+ |
less than one hour | QUANTITY | 0.99+ |
Bala | PERSON | 0.99+ |
both | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
SAP HANA | TITLE | 0.99+ |
365 day | QUANTITY | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Hundreds of petabytes | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.98+ |
Azure | TITLE | 0.98+ |
a month ago | DATE | 0.98+ |
today | DATE | 0.98+ |
SQL | TITLE | 0.98+ |
two categories | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
SAP HANA | TITLE | 0.98+ |
HCI | ORGANIZATION | 0.98+ |
single cluster | QUANTITY | 0.97+ |
seven different copies | QUANTITY | 0.96+ |
single shop | QUANTITY | 0.96+ |
almost 80% | QUANTITY | 0.96+ |
nutanix.next | ORGANIZATION | 0.96+ |
seven | QUANTITY | 0.96+ |
single framework | QUANTITY | 0.95+ |
one kind | QUANTITY | 0.94+ |
Era 2.0 | TITLE | 0.94+ |
every two years | QUANTITY | 0.93+ |
Jeff Mathis, Scalyr & Steve Newman, Scalyr | Scalyr Innovation Day 2019
from San Mateo its the cube covering scalar innovation day brought to you by scaler but I'm John four with the cube we are here in San Mateo California official innovation day at Skylar's headquarters with Steve Neumann the founder of scalar and Jeff Mathis a software engineer guys thanks for joining me today thanks for having us thanks great to have you here so you guys introduced power queries what is all this about yes so the vision for scalar is to become the platform users trust when they want to observe their systems and power queries is a really important step along that journey power queries provide new insights into data with a powerful and expressive query language that's still easy to use so why is this important so we like to scaler we like to think that we're all about speed and a lot of what we're known for is the kind of the raw performance of the query engine that we've built that's sitting underneath this product which is one measure of speed but really we like to think of speed as the time from a question in someone's head to an answer on their screen and so the whole kind of user journey is part of that and you know kind of traditionally in our product we've we provided a set of basic capabilities for searching and counting and graphing that are kind of very easy for people to access and so you can get in quickly pose your question get an answer without even having to learn a query language and and that's been great but there are sometimes the need goes a little bit beyond that the question that some wants to ask is a little bit more complicated or the data needs a little bit of massaging and it just goes beyond the boundaries what you can do in kind of those basic you know sort of basic set of predefined abilities and so that's where we wanted to take a step forward and you know kind of create this more advanced language for for those more advanced cases you know I love the name power query so they want power and it's got to be fast and good so that aside you know queries been around people know search engines search technology discovery finding stuff but as ai/an comes around and more scales and that the system this seems to be a lot more focus on like inference into intuiting what's happening this has been a big trend what do you what's your opinion on that because this has become a big opportunity using data we've seen you know file companies go public we know who they are and they're out there but there's more data coming I mean it's not like it's stopping anytime soon so what's the what's the innovation that that just gonna take power queries to the next level yes so one of the features that I'm really excited about in the future of power queries is our autocomplete feature we've taken a lot of inspiration from just what your navbar does in the browser so the idea is to have a context-sensitive predictive autocomplete feature that's going to take into account a number of individual the syntactic context of where you are in the query what fields you have available to you what fields you've searched recently those kinds of factors Steve what's your take before we get to the customer impact what's the what's the difference it different what's weird whereas power queries gonna shine today and tomorrow so it's some it was a kind of both an interesting and fun challenge for us to design and build this because you're you know we're trying to you know by definition this is for the you know the more advanced use cases the more you know when you need something more powerful 
and so a big part of the design question for us is how do we how do we let people you know do more sophisticated things with their logs when the when they have that that use case while still making it some you know kind of preserving that that's speed and ease of use that that we like to think we're known for and and in particular you know they've been you know something where you know step one is go you know read this 300 page reference manual and you know learn this complicated query language you know if that was the approach then you know then we would have failed before we started and we had we have the benefit of a lot of hindsight you know there a lot of different sister e of people manipulating data you know working with these sophisticated different and different kinds of systems so there are you know we have users coming to us who are used to working with other other log management tools we have users or more comfortable than SQL we have users who really you know their focus is just a more conventional programming languages especially because you know one of the constituencies we serve our you know it's a trend nowadays that development engineers are responsible also for keeping their code working well in production so they're not experts in this stuff they're not log management experts they're not you know uh telemetry experts and we want them to be able to come in and kind of casual you know coming casually to this tool and get something done but we had all that context of drawn with these different history of languages that people are used to so we came up with about a dozen use cases that we thought kind of covered the spectrum of you know what would people bring bring people into a scenario like this and we actually game to those out well how would you solve this particular question if we were using an SQL like approach or an approach based on this tool or which based on that tool and so we we did this like big exploration and we were able to boil down boil everything down to about ten fairly simple commands that they're pretty much covered the gamut by comparison you know there are there other solutions that have over a hundred commands and it obviously if it's just a lot to learn there at the other end of the spectrum um SQL really does all this with one command select and it's incredibly powerful but you also really have to be a wizard sometimes to kind of shoehorn that into yeah even though sequels out there people know that but people want it easier ultimately machines are gonna be taking over you get the ten commands you almost couldn't get to the efficiency level simplifying the use cases what's the customer scenario looked like what's that why is design important what's what's in it for the customer yeah absolutely so the user experience was a really important focus for us when designing power queries we knew from the start that if tool took you ten minutes to relearn every time you wanted to use it then the query takes ten minutes to execute it doesn't take seconds to execute so one of the ways we approached this problem was to make sure we're constantly giving the user feedback that starts as soon you load the page you've immediately got access to some of the documentation you need you use the feature if you have type in correct syntax you'll get feedback from the system about how to fix that problem and so really focusing on the user experience was a big part of the yeah people gonna factor in the time it takes to actually do the query write it up if you have to 
code it up and figure it out that's time lag right there you want be as fast as possible interesting design point radical right absolutely so Steve how does it go fast Jeff how does it go fast what are you guys looking at here what's the magic so let me I'm going to step over to the whiteboard shock board here and we'll so chog in one hand Mike in the other will will evaluate my juggling skills but I wanted to start by showing an example of what one of these queries looks like you know I talked about how we kind of boil everything down to about 10 commands so so let's talk through a simple scenario let's say I'm running a tax site you know people come to our web site and they're you know they're putting their taxes together and they're downloading forms and tax laws are different in every state so I have different code that's running for you know you know people in California versus people in Michigan or whatever and I can you know it's easy to do things like graph the overall performance and error rate for my site but I might have a problem with the code for one specific state and it might not show up in those overall statistics very clearly so I don't know I want to get a sense of how well I'm how I am performing for each of the 50 states so I'm gonna and I'm gonna simplify this a little bit but you know I might have an access log for this system where we'll see entries like you know we're loading the tax form and it's for the state of California and the status code was 200 which means that was successful and then we load the tax form and the state is Texas and again that was a success and then we load the tax form for Michigan and the status was a 502 which is a server error and then you know and millions of these mixing with other kinds of logs from other parts of my system and so I want to pull up a report what percentage of requests are succeeding or failing by state and so let me sketch for it first with the query would look like for that and then I'll talk about how how we execute this at speed so so first of all I have to say what which you know of all my other you know I've drawn just the relevant logs but this is gonna be mixed in with all the other logs for my system I need to say which which logs I care about well maybe as simple as just calling out they all have the this page name in them tax form so that that's the first step of my query I'm searching for tax form and now I want to count these count how many of these there are how many of them succeeded or failed and I want to cluster that by state so I'm gonna clustering is with the group command so I'm gonna say I want to count the total number of requests which is just the count so count is a part of the language total is what I'm choosing to name that and I want to count the errors which is also going to be the count command but now I'm going to give it a condition I want to only count where the status is at least 500 and I rather you can see that but behind the plant is a 500 and I'm gonna group that by state so we're we're counting up how many of these values were above 500 and we're grouping it by this field and what's gonna come out of that is a table that'll say for each state the total number of requests the number of errors oh and sorry I actually left out a couple of steps but so it's but actually let's draw what this would give us so far so it's gonna show me for California maybe I had nine thousand one hundred and fifty two requests thirteen of them were errors for Texas I had and so on but I'm still not really 
there you know that might show me that California had you know maybe California had thirteen errors and Rodi had 12 errors but only there were only 12 requests for Rhode Island Rhode Island is broke you know I've broken my code for Rhode Island but it's only 12 errors because it's a smaller population so that's you know this analysis is still not quite gonna get me where I need to go so I can now add another command I've done this group now I'm gonna say I'm gonna say let which triggers a calculation let error rate equal errors divided by total and so that's going to give me the fraction and so for California you know that might be 0.01 or whatever but for Rhode Island it's gonna be one 100% of the requests are failing and then I can add another command to sort by the error rate and now my problem states are gonna pop to the top so real easy to use language it's great for the data scientists digging in their practitioners you don't need to be hard core coder to get into this exactly that's the idea you know groups or you know very simple commands that just directly you know kind of match the English description of what you're trying to do so then but you know yeah asked a great question then which is how do we take this whole thing and execute it quickly so I'm gonna erase here you're getting into speed now right so yeah bit like that how you get the speed exactly speed is good so simplicity to use I get that it's now speed becomes the next challenge exactly and the speed feeds into the simplicity also because you know step one for anything any tool like this is learning the tool yeah and that involves a lot of trial and error and if the trial and error involves waiting and then at the end of the wait for a query to run you learn that oh you did the query wrong that's very discouraging to people and so we actually think of speed really then becomes some ease of use but all right so how do we actually do this so you've got you know you'll have your whole mass of log data tax forms other forms internal services database logs that are you got your whole you know maybe terabytes of log data somewhere in there are the the really important stuff the tax form errors as well as all the other tax form logs mixed in with a bigger pile of everything else so step one is to filter from that huge pile of all your logs down to just the tax form logs and for that we were able to leverage our existing query engine and one of the main things that makes that engine there's kind of two things that make that that engine as fast it is as it is it's massively parallel so we we segment the data across hundreds of servers our servers so all this data is already distributed across all these servers and once your databases you guys build your own in-house ok got it exactly so this is on our system so we've already collected we're collecting the logs in real time so by the time the user comes and types in that query we already have the data and it's already spread out across all these service then the you know the first step of that query was just a search for tax form and so that's our existing query engine that's not the new thing we've built for power queries so that existing very highly optimized engine this server scans through these logs this service insula these logs each server does its share and they collectively produce a smaller set of data which is just the tax form logs and that's still distributed by the way so really each server is doing this independently and and is gonna continue locally doing the 
next step so so we're harnessing the horsepower of all these servers each page I only have to work with a small fraction of the data then the next step was that group command we were counting the requests counting the errors and rolling that up by state so that's the new engine we've built but again it each server can do just its little share so this server is gonna take whichever tax form logs it found and produce a little table of counts in it by state this server is gonna do the same thing so at each produce they're a little grouping table with just their share of the logs and then all of that funnels down to one central server where we do the later steps we do the division divide number of errors by total count and and then sort it but by now you know here we might have you might have trillions of log messages down to millions or billions of messages that are relevant to your query now we here we have 50 records you know just one for each state so suddenly the amount of data is very small and so the you know the later steps may be kind of interesting from a processing perspective but they're easy from a speed perspective so you solve a lot of database challenges by understanding kind of how things flow once you've got everything with the columnar database is there just give up perspective of like what if the alternative would be if we this is like I just drew this to a database and I'm running sequel trillions of log files I mean it's not trivial I mean it's a database problem then it's a user problem kind of combine what's order of magnitude difference if I was gonna do the old way yeah so I mean I mean the truth is there's a hundred old ways know how much pain yes they're healthy you know if you're gonna you know if you try to just throw this all into one you know SQL sir you know MySQL or PostgreSQL bytes of data and and by the way we're glossing over the data has to exist but also has to get into the system so you know in you know when you're checking you know am i letting everyone in Rhode Island down on the night before you know the 15th you need up to the moment information but the date you know your database is not necessarily even if it could hold the data it's not necessarily designed to be pulling that in in real time so you know just sort of a simple approach like let me spin up my SQL and throw all the data in it's it's just not even gonna happen I'm gonna have so now you're sharding the data or you're looking at some you know other database solution or ever in it it's a heavy lift either way it's a lot of extra effort taxing on the developers yeah you guys do the heavy lifting yeah okay what's next where's the scale features come in what do you see this evolving for the customers so you know so Jeff talked about Auto complete which you were really excited about because it's gonna again you know a lot of this is for the casual user you know they're you know they're a power user of you know JavaScript or Java or something you're they're building the code and then they've got to come in and solve the problem and get back to what they think of as their real job and so you know we think autocomplete and the way we're doing it we're we're really leveraging both the context of what you're typing as well as the history of what you and your team have done in queried in the past as well as the content of your data every think of it a little bit like the the browser location bar which somehow you type about two letters and it knows exactly which page you're looking for because it's 
relying on all those different kinds of cues yeah it seems like that this is foundational heavy-lift you myself minimize all that pain then you get the autocomplete start to get in a much more AI machine learning kicks in more intelligent reasoning you start to get a feel for the data it seems like yeah Steve thanks for sharing that there it is on the whiteboard I'm trying for a year thanks for watching this cube conversation
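The query Steve builds up on the whiteboard reduces to a short pipeline: filter to the tax-form logs, group by state counting total requests and 5xx errors, derive an error rate, and sort so the broken states rise to the top. The sketch below restates that logic in Python over a handful of in-memory records, with an approximation of the power-query pipeline in the leading comment; the field names and the exact Scalyr syntax are assumptions for illustration, not the product's documented grammar.

```python
# A minimal sketch of the error-rate-by-state analysis described in the interview.
# The field names (state, status) and the pipe syntax in the comment are illustrative;
# the exact Scalyr power-query syntax may differ.
#
# Approximate query, as sketched on the whiteboard:
#   "tax form"
#   | group total = count(), errors = count(status >= 500) by state
#   | let error_rate = errors / total
#   | sort -error_rate

from collections import defaultdict

# Pretend these are the access-log events that matched the "tax form" search.
logs = [
    {"page": "tax form", "state": "CA", "status": 200},
    {"page": "tax form", "state": "TX", "status": 200},
    {"page": "tax form", "state": "MI", "status": 502},
    {"page": "tax form", "state": "RI", "status": 503},
    {"page": "tax form", "state": "RI", "status": 500},
]

# group total = count(), errors = count(status >= 500) by state
totals = defaultdict(lambda: {"total": 0, "errors": 0})
for event in logs:
    bucket = totals[event["state"]]
    bucket["total"] += 1
    if event["status"] >= 500:
        bucket["errors"] += 1

# let error_rate = errors / total, then sort descending so broken states rise to the top
report = sorted(
    ((state, b["total"], b["errors"], b["errors"] / b["total"]) for state, b in totals.items()),
    key=lambda row: row[3],
    reverse=True,
)

for state, total, errors, rate in report:
    print(f"{state}: total={total} errors={errors} error_rate={rate:.2%}")
```

Run against this toy data, Rhode Island comes out on top at a 100% error rate even though it has far fewer requests than California, which is exactly the "small state, broken code" case the error-rate step is there to surface.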
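Steve's description of how the query executes is a classic scatter-gather: every server scans only its own shard of the filtered logs, emits a small per-state table, and a coordinator merges those partial tables before the cheap final steps (the division and the sort). The toy illustration below shows that general pattern; it is a sketch under those assumptions, not Scalyr's actual engine.

```python
# Toy illustration of scatter-gather aggregation: each server aggregates only its own
# shard of the filtered logs, and a single coordinator merges the small partial tables
# before computing error_rate and sorting.

from collections import Counter

def partial_group(shard):
    """Runs on each server: count requests and errors per state for its shard only."""
    total, errors = Counter(), Counter()
    for event in shard:
        total[event["state"]] += 1
        if event["status"] >= 500:
            errors[event["state"]] += 1
    return total, errors

def merge(partials):
    """Runs on the coordinator: fold the per-server tables into one small result."""
    total, errors = Counter(), Counter()
    for t, e in partials:
        total.update(t)
        errors.update(e)
    rows = [(s, total[s], errors[s], errors[s] / total[s]) for s in total]
    return sorted(rows, key=lambda r: r[3], reverse=True)

shards = [
    [{"state": "CA", "status": 200}, {"state": "RI", "status": 500}],  # server 1's slice
    [{"state": "CA", "status": 200}, {"state": "RI", "status": 503}],  # server 2's slice
]
print(merge(partial_group(s) for s in shards))
```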
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Mathis | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
ten minutes | QUANTITY | 0.99+ |
Michigan | LOCATION | 0.99+ |
Steve Newman | PERSON | 0.99+ |
50 records | QUANTITY | 0.99+ |
Rhode Island | LOCATION | 0.99+ |
12 errors | QUANTITY | 0.99+ |
thirteen errors | QUANTITY | 0.99+ |
Steve | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
nine thousand | QUANTITY | 0.99+ |
San Mateo | LOCATION | 0.99+ |
millions | QUANTITY | 0.99+ |
Steve Newman | PERSON | 0.99+ |
thirteen | QUANTITY | 0.99+ |
Java | TITLE | 0.99+ |
MySQL | TITLE | 0.99+ |
two things | QUANTITY | 0.99+ |
ten commands | QUANTITY | 0.99+ |
50 states | QUANTITY | 0.99+ |
each page | QUANTITY | 0.99+ |
0.01 | QUANTITY | 0.99+ |
300 page | QUANTITY | 0.99+ |
Rhode Island | LOCATION | 0.99+ |
today | DATE | 0.99+ |
each server | QUANTITY | 0.99+ |
each server | QUANTITY | 0.99+ |
hundreds of servers | QUANTITY | 0.98+ |
500 | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
over a hundred commands | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
JavaScript | TITLE | 0.98+ |
Rhode Island | LOCATION | 0.98+ |
502 | OTHER | 0.98+ |
one | QUANTITY | 0.98+ |
step one | QUANTITY | 0.97+ |
Mike | PERSON | 0.97+ |
PostgreSQL | TITLE | 0.97+ |
billions of messages | QUANTITY | 0.97+ |
12 requests | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
100% | QUANTITY | 0.96+ |
each state | QUANTITY | 0.96+ |
200 | OTHER | 0.95+ |
a year | QUANTITY | 0.95+ |
one command | QUANTITY | 0.95+ |
John | PERSON | 0.95+ |
first | QUANTITY | 0.95+ |
about a dozen use cases | QUANTITY | 0.95+ |
about ten fairly simple commands | QUANTITY | 0.95+ |
trillions of log messages | QUANTITY | 0.95+ |
SQL | TITLE | 0.95+ |
English | OTHER | 0.93+ |
about 10 commands | QUANTITY | 0.93+ |
one central server | QUANTITY | 0.92+ |
one measure | QUANTITY | 0.92+ |
above 500 | QUANTITY | 0.9+ |
one of the main things | QUANTITY | 0.89+ |
each | QUANTITY | 0.89+ |
one specific state | QUANTITY | 0.89+ |
15th | QUANTITY | 0.89+ |
scalar innovation day | EVENT | 0.88+ |
Skylar | ORGANIZATION | 0.88+ |
Scalyr | PERSON | 0.84+ |
at least 500 | QUANTITY | 0.84+ |
Markus Strauss, McAfee | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hi everybody, welcome back to Las Vegas. I'm Dave Vellante with theCUBE, the leader in live tech coverages. This is day three from AWS re:Invent, #reInvent18, amazing. We have four sets here this week, two sets on the main stage. This is day three for us, our sixth year at AWS re:Invent, covering all the innovations. Markus Strauss is here as a Product Manager for database security at McAfee. Markus, welcome. >> Hi Dave, thanks very much for having me. >> You're very welcome. Topic near and dear to my heart, just generally, database security, privacy, compliance, governance, super important topics. But I wonder if we can start with some of the things that you see as an organization, just general challenges in securing database. Why is it important, why is it hard, what are some of the critical factors? >> Most of our customers, one of the biggest challenges they have is the fact that whenever you start migrating databases into the cloud, you inadvertently lose some of the controls that you might have on premise. Things like monitoring the data, things like being able to do real time access monitoring and real time data monitoring, which is very, very important, regardless of where you are, whether you are in the cloud or on premise. So these are probably really the biggest challenges that we see for customers, and also a point that holds them back a little, in terms of being able to move database workloads into the cloud. >> I want to make sure I understand that. So you're saying, if I can rephrase or reinterpret, and tell me if I'm wrong. You're saying, you got great visibility on prem and you're trying to replicate that degree of visibility in the cloud. >> Correct. >> It's almost the opposite of what you hear oftentimes, how people want to bring the cloud while on premise. >> Exactly. >> It's the opposite here. >> It's the opposite, yeah. 'Cause traditionally, we're very used to monitoring databases on prem, whether that's native auditing, whether that is in memory monitoring, network monitoring, all of these things. But once you take that database workload, and push it into the cloud, all of those monitoring capabilities essentially disappear, 'cause none of that technology was essentially moved over into the cloud, which is a really, really big point for customers, 'cause they cannot take that and just have a gap in their compliance. >> So database discovery is obviously a key step in that process. >> Correct, correct. >> What is database discovery? Why is it important and where does it fit? >> One of the main challenges most customers have is the ability to know where the data sits, and that begins with knowing where the database and how many databases customers have. Whenever we talk to customers and we ask how many databases are within an organization, generally speaking, the answer is 100, 200, 500, and when the actual scanning happens, very often the surprise is it's a lot more than what the customer initially thought, and that's because it's so easy to just spin off a database, work with it, and then forget about it, but from a compliance point of view, that means you're now sitting there, having data, and you're not monitoring it, you're not compliant. You don't even know it exists. So data discovery in terms of database discovery means you got to be able to find where your database workload is and be able to start monitoring that. 
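As a rough illustration of the discovery step Markus describes, the sketch below sweeps a set of hosts for well-known database listener ports. The host addresses and the port-to-engine mapping are assumptions for the example, and a real discovery product (McAfee's included) relies on far more than a TCP connect, such as service fingerprinting and cloud API inventories, so treat this as a minimal outline of the idea rather than how the product works.

```python
# Minimal sketch of "find the databases first": sweep a set of hosts for well-known
# database listener ports. Host list is hypothetical; a real discovery tool does much
# more than a plain TCP connect.

import socket

DB_PORTS = {
    5432: "PostgreSQL",
    3306: "MySQL/MariaDB",
    1433: "SQL Server",
    1521: "Oracle",
    27017: "MongoDB",
}

def discover(hosts, timeout=0.5):
    found = []
    for host in hosts:
        for port, engine in DB_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append((host, port, engine))
            except OSError:
                pass  # closed, filtered, or unreachable: nothing listening here
    return found

if __name__ == "__main__":
    for host, port, engine in discover(["10.0.0.12", "10.0.0.13"]):
        print(f"{host}:{port} looks like {engine}")
```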
>> You know, it's interesting. 10 years ago, database was kind of boring. I mean it was like Oracle, SQL Server, maybe DB2, maybe a couple of others, then all of a sudden, the NoSQL explosion occurred. So when we talk about moving databases into the cloud, what are you seeing there? Obviously Oracle is the commercial database market share leader. Maybe there's some smaller players. Well, Microsoft SQL Server obviously a very big... Those are the two big ones. Are we talking about moving those into the cloud? Kind of a lift and shift. Are we talking about conversion? Maybe you could give us some color on that. >> I think there's a bit of both, right? A lot of organizations who have proprietary applications that run since many, many years, there's a certain amount of lift and shift, right, because they don't want to rewrite the applications that run on these databases. But wherever there is a chance for organizations to move into some of their, let's say, more newer database systems, most organizations would take that opportunity, because it's easier to scale, it's quicker, it's faster, they get a lot more out of it, and it's obviously commercially more valuable as well, right? So, we see quite a big shift around NoSQL, but also some of the open source engines, like MySQL, ProsgreSQL, Percona, MariaDB, a lot of the other databases that, traditionally within the enterprise space, we probably wouldn't have seen that much in the past, right? >> And are you seeing that in a lot of those sort of emerging databases, that the attention to security detail is perhaps not as great as it has been in the traditional transaction environment, whether it's Oracle, DB2, even certainly, SQL Server. So, talk about that potential issue and how you guys are helping solve that. >> Yeah, I mean, one of the big things, and I think it was two years ago, when one of the open source databases got discovered essentially online via some, and I'm not going to name names, but the initial default installation had admin as username and no password, right? And it's very easy to install it that way, but unfortunately it means you potentially leave a very, very big gaping hole open, right? And that's one of the challenges with having open source and easily deployable solutions, because Oracle, SQLServer, they don't let you do that that quickly, right? But it might happen with other not as large database instances. One of the things that McAfee for instance does is helps customers making sure that configuration scans are done, so that once you have set up a database instance, that as an organization, you can go in and can say, okay, I need to know whether it's up to patch level, whether we have any sort of standard users with standard passwords, whether we have any sort of very weak passwords that are within the database environment, just to make sure that you cover all of those points, but because it's also important from a compliance point of view, right? It brings me always back to the compliance point of view of the organization being the data steward, the owner of the data, and it has to be our, I suppose, biggest point to protect the data that sits on those databases, right? >> Yeah, well there's kind of two sides of the same coin. The security and then compliance, governance, privacy, it flips. For those edicts, those compliance and governance edicts, I presume your objective is to make sure that those carry over when you move to the cloud. How do you ensure that? 
>> So, I suppose the biggest point to make that happen is ensure that you have one set of controls that applies to both environments. It brings us back to the hybrid point, right? Because you got to be able to reuse and use the same policies, and measures, and controls that you have on prem and be able to shift these into the cloud and apply them to the same rigor into the cloud databases as you would have been used to on prem, right? So that means being able to use the same set of policies, the same set of access control whether you're on prem or in the cloud. >> Yeah, so I don't know if our folks in our audience saw it today, but Werner Vogels gave a really, really detailed overview of Aurora. He went back to 2004, when their Oracle database went down because they were trying to do things that were unnatural. They were scaling up, and the global distribution. But anyway, he talked about how they re-architected their systems and gave inside baseball on Aurora. Huge emphasis on recovery. So you know, being very important to them, data accessibility, obviously security is a big piece of that. You're working with AWS on Aurora, and RDS as well. Can you talk specifically about what you're doing there as a partnership? >> So, AWS has, I think it was two days ago, essentially put the Aurora database activity stream into private preview, which is essentially a way for third party vendors to be able to read a activity stream off Aurora, enabling McAfee, for instance, to consume that data and bring customers the same level of real-time monitoring to the database as the servers were, as were used to on prem or even in a EC2 environment, where it's a lot easier because customers have access to the infrastructure, install things. That's always been a challenge within the database as the servers were because that access is not there, right? So, customers need to have an ability to get the same level of detail, and with the database activity stream and the ability for McAfee to read that, we give customers the same ability with Aurora PostgreSQL at the moment as customers have on premise with any of the other databases that we support. >> So you're bringing your expertise, some of which is really being able to identify anomalies, and scribbling through all this noise, and identifying the signal that's dangerous, and then obviously helping people respond to that. That's what you're enabling through that connection point. >> Correct, 'cause for organizations, using something like Aurora is a big saving, and the scalability that comes with it is fantastic. But if I can't have the same level of data control that I have on premise, it's going to stop me as an organization, moving critical data into that, 'cause I can't protect it, and I have to be able to. So, with this step, it's a great first step into being able to provide that same level of activity monitoring in real time as we're used to on prem. >> Same for RDS, is that pretty much what you're doing there? >> It's the same for RDS, yes. There is a certain set level of, obviously, you know, we go through before things go into GA but RDS is part of that program as well, yes. >> So, I wonder if we can step back a little bit and talk about some of the big picture trends in security. You know, we've gone from a world of hacktivists to organized crime, which is very lucrative. There are even state sponsored terrorism. I think Stuxnet is interesting. You probably can't talk about Stuxnet. Anyway-- >> No, not really. 
>> But, conceptually, now the bar is raised and the sophistication goes up. It's an arms race. How are you keeping pace? What role does data have? What's the state of security technology? >> It's very interesting, because traditionally, databases, nobody wanted to touch the areas. We were all very, very good at building walls around and being very perimeter-oriented when it comes to data center and all of that. I think that has changed little bit with the, I suppose the increased focus on the actual data. Since a lot of the legislations have changed since the threat of what if GDPR came in, a lot of companies had to rethink their take on protecting data at source. 'Cause when we start looking at the exfiltration path of data breaches, almost all the exfiltration happens essentially out of the database. Of course, it makes sense, right? I mean I get into the environment through various different other ways, but essentially, my main goal is not to see the network traffic. My main goal as any sort of hacker is essentially get onto the data, get that out, 'cause that's where the money sits. That's what essentially brings the most money in the open market. So being able to protect that data at source is going to help a lot of companies make sure that that doesn't happen, right? >> Now, the other big topic I want to touch on in the minute we have remaining is ransomware. It's a hot topic. People are talking about creating air gaps, but even air gaps, you can get through an air gap with a stick. Yeah, people get through. Your thoughts on ransomware, how are you guys combating that? >> There is very specific strains, actually, developed for databases. It's a hugely interesting topic. But essentially what it does is it doesn't encrypt the whole database, it encrypts very specific key fields, leaves the public key present for a longer period of time than what we're used to see on the endpoint board, where it's a lot more like a shotgun approach and you know somebody is going to pick it up, and going to pay the $200, $300, $400, whatever it is. On the database side, it's a lot more targeted, but generally it's a lot more expensive, right? So, that essentially runs for six months, eight months, make sure that all of the backups are encrypted as well, and then the public key gets removed, and essentially, you have lost access to all of your data, 'cause even the application that access the data can't talk to the database anymore. So, we have put specific controls in place that monitor for changes in the encryption level, so even if only one or two key fields starting to get encrypted with a different encryption key, we're able to pick that up, and alert you on it, and say hey, hang on, there is something different to what you usually do in terms of your encryption. And that's a first step to stopping that, and being able to roll back and bring in a backup, and change, and start looking where the attacker essentially gained access into the environment. >> Markus, are organizations at the point where they are automating that process, or is it still too dangerous? >> A lot of it is still too dangerous, although, having said that, we would like to go more into the automation space, and I think it's something as an industry we have to, because there is so much pressure on any security personnel to follow through and do all of the rules, and sift through, and find the needle in the haystack. 
But especially on a database, the risk of automating some of those points is very great, because if you make a mistake, you might break a connection, or you might break something that's essentially very, very valuable, and that's the crown jewels, the data within the company. >> Right. All right, we got to go. Thanks so much. This is a really super important topic. >> Appreciate all the good work you're doing. >> Thanks for having me. >> You're very welcome. All right, keep it right there, everybody. You're watching theCUBE. We'll be right back, right after this short break from AWS re:Invent 2018, from Las Vegas. We'll be right back. (techno music)
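For reference, the Aurora database activity stream Markus mentions was still in private preview at the time of this conversation; it later became a generally available Aurora feature that pushes audit events into an encrypted Kinesis stream for third-party tools to consume. A minimal sketch of turning it on with boto3 follows; the cluster ARN and KMS key are placeholders, and the call shown reflects current SDKs rather than anything demonstrated in the interview, so check the AWS documentation before relying on it.

```python
# Sketch of enabling a database activity stream on an Aurora cluster so a third-party
# monitoring tool can consume the audit events. ARN and KMS key are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    Mode="async",                 # async favors performance over guaranteed capture
    KmsKeyId="alias/my-activity-stream-key",
    ApplyImmediately=True,
)

# The events land in a Kinesis data stream, encrypted with the KMS key; a monitoring
# product reads and decrypts that stream to get real-time visibility into the database.
print(response["KinesisStreamName"])
```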
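On the database-ransomware point, where the attacker encrypts a handful of key fields rather than the whole database, one crude way to approximate the monitoring Markus describes is to watch the statistical character of a column's values: encrypted bytes have much higher Shannon entropy than names or addresses, so a sudden jump versus the column's baseline is a signal worth alerting on. The sketch below is an invented illustration of that signal, with made-up sample data and an arbitrary threshold; it is not McAfee's actual detection logic.

```python
# Crude sketch: alert when values in a key column suddenly start to look encrypted,
# using Shannon entropy as the signal. Sample data and threshold are invented.

import math
from collections import Counter

def shannon_entropy(value: str) -> float:
    counts = Counter(value)
    n = len(value)
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) if n else 0.0

def column_entropy(samples):
    return sum(shannon_entropy(v) for v in samples) / len(samples)

baseline = column_entropy(["Alice Smith", "Bob Jones", "Carol Diaz"])  # plaintext names
current = column_entropy([
    "aES3kP0q9ZrT7uWx2LmN8vBc1dYf5gHj",   # ciphertext-looking values appearing in the
    "Qx8vN2mL7kT0pZ3rW9uB5cD1fG4hJ6sE",   # same column after a suspected attack
    "M1nB6vC3xZ8lK0jH5gF2dS9aP7qW4eR+",
])

# Alert if the column's entropy jumps well above its historical baseline.
if current - baseline > 1.0:
    print(f"possible field-level encryption: baseline={baseline:.2f}, now={current:.2f}")
```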
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon Web Services | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
eight months | QUANTITY | 0.99+ |
Markus Strauss | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Markus | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
$200 | QUANTITY | 0.99+ |
2004 | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
McAfee | ORGANIZATION | 0.99+ |
MySQL | TITLE | 0.99+ |
$300 | QUANTITY | 0.99+ |
$400 | QUANTITY | 0.99+ |
100 | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
sixth year | QUANTITY | 0.99+ |
NoSQL | TITLE | 0.99+ |
two sides | QUANTITY | 0.99+ |
two years ago | DATE | 0.98+ |
both environments | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
Werner Vogels | PERSON | 0.98+ |
two days ago | DATE | 0.98+ |
ProsgreSQL | TITLE | 0.98+ |
two sets | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
today | DATE | 0.98+ |
MariaDB | TITLE | 0.98+ |
SQL Server | TITLE | 0.97+ |
Aurora | TITLE | 0.97+ |
#reInvent18 | EVENT | 0.96+ |
GDPR | TITLE | 0.96+ |
One | QUANTITY | 0.96+ |
500 | QUANTITY | 0.96+ |
four sets | QUANTITY | 0.95+ |
200 | QUANTITY | 0.95+ |
DB2 | TITLE | 0.95+ |
SQL | TITLE | 0.94+ |
day three | QUANTITY | 0.94+ |
this week | DATE | 0.93+ |
Aurora PostgreSQL | TITLE | 0.89+ |
two key fields | QUANTITY | 0.89+ |
Percona | TITLE | 0.88+ |
one set | QUANTITY | 0.87+ |
re:Invent | EVENT | 0.86+ |
prem | ORGANIZATION | 0.84+ |
AWS re:Invent | EVENT | 0.83+ |
two big ones | QUANTITY | 0.79+ |
AWS re:Invent 2018 | EVENT | 0.77+ |
RDS | TITLE | 0.76+ |
EC2 | TITLE | 0.73+ |
Invent 2018 | TITLE | 0.7+ |
Invent 2018 | EVENT | 0.68+ |
Stuxnet | ORGANIZATION | 0.63+ |
theCUBE | ORGANIZATION | 0.59+ |
Stuxnet | PERSON | 0.57+ |
ttacker | TITLE | 0.52+ |
SQLServer | ORGANIZATION | 0.5+ |
challenges | QUANTITY | 0.49+ |
VMworld Day 1 General Session | VMworld 2018
For Las Vegas, it's the cube covering vm world 2018, brought to you by vm ware and its ecosystem partners. Ladies and gentlemen, Vm ware would like to thank it's global diamond sponsors and it's platinum sponsors for vm world 2018 with over 125,000 members globally. The vm ware User Group connects via vmware customers, partners and employees to vm ware, information resources, knowledge sharing, and networking. To learn more, visit the [inaudible] booth in the solutions exchange or the hemoglobin gene vm village become a part of the community today. This presentation includes forward looking statements that are subject to risks and uncertainties. Actual results may differ materially as a result of various risk factors including those described in the 10 k's 10 q's and k's vm ware. Files with the SEC. Ladies and Gentlemen, please welcome Pat Gelsinger. Welcome to vm world. Good morning. Let's try that again. Good morning and I'll just say it is great to be here with you today. I'm excited about the sixth year of being CEO. When it was on this stage six years ago were Paul Maritz handed me the clicker and that's the last he was seen. We have 20,000 plus here on site in Vegas and uh, you know, on behalf of everyone at Vm ware, you know, we're just thrilled that you would be with us and it's a joy and a thrill to be able to lead such a community. We have a lot to share with you today and we really think about it as a community. You know, it's my 23,000 plus employees, the souls that I'm responsible for, but it's our partners, the thousands and we kicked off our partner day yesterday, but most importantly, the vm ware community is centered on you. You know, we're very aware of this event would be nothing without you and our community and the role that we play at vm wares to build these cool breakthrough innovations that enable you to do incredible things. You're the ones who take our stuff and do amazing things. You altogether. We have truly changed the world over the last two decades and it is two decades. You know, it's our anniversary in 1998, the five people that started a vm ware, right. You know, it was, it was exactly 20 years ago and we're just thrilled and I was thinking about this over the weekend and it struck me, you know, anniversary, that's like old people, you know, we're here, we're having our birthday and it's a party, right? We can't have a drink yet, but next year. Yeah. We're 20 years old. Right. We can do that now. And I'll just say the culture of this community is something that truly is amazing and in my 38 years, 38 years in tech, that sort of sounds like I'm getting old or something, but the passion, the loyalty, almost a cult like behavior that we see in this team of people to us is simply thrilling. And you know, we put together a little video to sort of summarize the 20 years and some of that history and some of the unique and quirky aspects of our culture. Let's watch that now. We knew we had something unique and then we demonstrated that what was unique was also some reasons that we love vm ware, you know, like the community out there. So great. The technology I love it. Ware is solid and much needed. Literally. I do love Vmr. It's awesome. Super Awesome. Pardon? There's always someone that wants to listen and learn from us and we've learned so much from them as well. And we reached out to vm ware to help us start building. What's that future world look like? 
Since we're doing really cutting edge stuff, there's really no better people to call and Bmr has been known for continuous innovation. There's no better way to learn how to do new things in it than being with a company that's at the forefront of technology. What do you think? Don't you love that commitment? Hey Ashley, you know, but in the prep sessions for this, I thought, boy, what can I do to take my commitment to the next level? And uh, so, uh, you know, coming in a couple days early, I went to down the street to bad ass tattoo. So it's time for all of us to take our commitment up level and sometimes what happens in Vegas, you take home. Thank you. Vm Ware has had this unique role in the industry over these 20 years, you know, and for that we've seen just incredible things that have happened over this period of time and it's truly extraordinary what we've accomplished together. And you know, as we think back, you know, what vm ware has uniquely been able to do is I'll say bridge across know and we've seen time and again that we see these areas of innovation emerging and rapidly move forward. But then as they become utilized by our customers, they create this natural tension of what business wants us flexibility to use across these silos of innovation. And from the start of our history, we have collectively had this uncanny ability to bridge across these cycles of innovation. You know, an act one was clearly the server generation. You know, it may seem a little bit, uh, ancient memory now, but you remember you used to walk into your data center and it looked like the loove the museum of it passed right? You know, and you had your old p series and your z series in your sparks and your pas and your x86 cluster and Yo, it had to decide, well, which architecture or am I going to deploy and run this on? And we bridged across and that was the magic of Esx. You don't want to just changed the industry when that occurred. And I sort of called the early days of Esx and vsphere. It was like the intelligence test. If you weren't using it, you fail because Yup. Servers, 10 servers become one months, become minutes. I still have people today who come up to me and they reflect on their first experience of vsphere or be motion and it was like a holy moment in their life and in their careers. Amazing and act to the Byo d, You know, can we bridge across these devices and users wanted to be able to come in and say, I have my device and I'm productive on it. I don't want to be forced to use the corporate standard. And maybe more than anything was the power of the iphone that was introduced, the two, seven, and suddenly every employee said this is exciting and compelling. I want to use it so I can be more productive when I'm here. Bye. Jody was the rage and again it was a tough challenge and once again vm ware helped to bridge across the surmountable challenge. And clearly our workspace one community today is clearly bridging across these silos and not just about managing devices but truly enabling employee engagement and productivity. Maybe act three was the network and you know, we think about the network, you know, for 30 years we were bound to this physical view of what the network would be an in that network. We are bound to specific protocols. We had to wait months for network upgrades and firewall rules. Once every two weeks we'd upgrade them. 
If you had a new application that needed a firewall rule, sorry, you know, come back next month we'll put, you know, deep frustration among developers and ceos. Everyone was ready to break the chains. And that's exactly what we did. An NSX and Nice Sierra. The day we acquired it, Cisco stock drops and the industry realizes the networking has changed in a fundamental way. It will never be the same again. Maybe act for was this idea of cloud migration. And if we were here three years ago, it was student body, right to the public cloud. Everything is going there. And I remember I was meeting with a cio of federal cio and he comes up to me and he says, I tried for the last two years to replatform my 200 applications I got to done, you know, and all of a sudden that was this. How do I do cloud migration and the effective and powerful way. Once again, we bridged across, we brought these two worlds together and eliminated this, uh, you know, this gap between private and public cloud. And we'll talk a lot more about that today. You know, maybe our next act is what we'll call the multicloud era. You know, because today in a recent survey by Deloitte said that the average business today is using eight public clouds and expected to become 10 plus public clouds. And you know, as you're managing different tools, different teams, different architectures, those solution, how do you, again bridge across, and this is what we will do in the multicloud era, we will help our community to bridge across and take advantage of these powerful cycles of innovation that are going on, but be able to use them across a consistent infrastructure and operational environment. And we'll have a lot more to talk about on this topic today. You know, and maybe the last item to bridge across maybe the most important, you know, people who are profit. You know, too often we think about this as an either or question. And as a business leader, I'm are worried about the people or the And Milton Friedman probably set us up for this issue decades ago when he said, planet, right? the sole purpose of a business is to make profits. You want to create a multi-decade dilemma, right? For business leaders, could I have both people and profits? Could I do well and do good? And particularly for technology, I think we don't have a choice to think about these separately. We are permeating every aspect of business. And Society, we have the responsibility to do both and have all the things that vm ware has accomplished. I think this might be the one that I'm most proud of over, you know, w we have demonstrated by vsphere and the hypervisor alone that we have saved over 540 million tons of co two emissions. That is what you have done. Can you believe that? Five hundred 40 million tons is enough to have 68 percent of all households for a year. Wow. Thank you for what you have done. Thank you. Or another translation of that. Is that safe enough to drive a trillion miles and the average car or you could go to and from Jupiter just in case that was in your itinerary a thousand times. Right? He was just incredible. What we have done and as a result of that, and I'll say we were thrilled to accept this recognition on behalf of you and what you have done. You know, vm were recognized as number 17 in the fortune. Change the world list last week. And we really view it as accepting this honor on behalf of what you have done with our products and technology tech as a force for good. 
We believe that fundamentally that is our opportunity, if not our obligation, you know, fundamentally tech is neutral, you know, we together must shape it for good. You know, the printing press by Gutenberg in 1440, right? It was used to create mass education and learning materials also can be used for extremist propaganda. The technology itself is neutral. Our ecosystem has a critical role to play in shaping technology as a force for good. You know, and as we think about that tomorrow, we'll have a opportunity to have a very special guest and I really encourage you to be here, be on time tomorrow morning on the stage and you know, Sanjay's a session, we'll have Malala, Nobel Peace Prize winner and fourth will be a bit of extra security as you come in and you understand that. And I just encourage you not to be late because we see this tech being a force for good in everything that we do at vm ware. And I hope you'll enjoy, I'm quite looking forward to the session tomorrow. Now as we think about the future. I like to put it in this context, the superpowers of tech know and you know, 38 years in the industry, you know, I am so excited because I think everything that we've done over the last four decades is creating a foundation that allows us to do more and go faster together. We're unlocking game, changing opportunities that have not been available to any people in the history of humanity. And we have these opportunities now and I, and I think about these four cloud, you have unimaginable scale. You'll literally with your Amex card, you can go rent, you know, 10,000 cores for $100 per hour. Or if you have Michael's am ex card, we can rent a million cores for $10,000 an hour. Thanks Michael. But we also know that we're in many ways just getting started and we have tremendous issues to bridge across and compatible clouds, mobile unprecedented scale. Literally, your application can reach half the humans on the planet today. But we also know that five percent, the lowest five percent of humanity or the other half of humanity, they're still in the lower income brackets, less than five percent penetrated. And we know that we have customer examples that are using mobile phones to raise impoverished farmers in Africa, out of poverty just by having a smart phone with proper crop, the information field and whether a guidance that one tool alone lifting them out of poverty. Ai knows, you know, I really love the topic of ai in 1986. I'm the chief architect of the 80 46. Some of you remember what that was. Yeah, I, you know, you're, you're my folk, right? Right. And for those of you who don't, it was a real important chip at the time. And my marketing manager comes running into my office and he says, Pat, pat, we must make the 46 a great ai chip. This is 1986. What happened? Nothing an AI is today, a 30 year overnight success because the algorithms, the data have gotten so much bigger that we can produce results, that we can bring intelligence to everything. And we're seeing dramatic breakthroughs in areas like healthcare, radiology, you know, new drugs, diagnosis tools, and designer treatments. We're just scratching the surface, but ai has so many gaps, yet we don't even in many cases know why it works. Right? And we'll call that explainable ai and edge and Iot. We're connecting the physical and the digital worlds was never before possible. We're bridging technology into every dimension of human progress. And today we're largely hooking up things, right? 
We have so much to do yet to make them intelligent. Network secured, automated, the patch, bringing world class it to Iot, but it's not just that these are super powers. We really see that each and each one of them is a super power in and have their own right, but they're making each other more powerful as well. Cloud enables mobile conductivity. Mobile creates more data, more data makes the AI better. Ai Enables more edge use cases and more edge requires more cloud to store the data and do the computing right? They're reinforcing each other. And with that, we know that we are speeding up and these superpowers are reshaping every aspect of society from healthcare to education, the transportation, financial institutions. This is how it all comes together. Now, just a simple example, how many of you have ever worn a hardhat? Yeah, Yo. Pretty boring thing. And it has one purpose, right? You know, keep things from smacking me in the here's the modern hardhat. It's a complete heads up display with ar head. Well, vr capabilities that give the worker safety or workers or factory workers or supply people the ability to see through walls to understand what's going on inside of the equipment. I always wondered when I was a kid to have x Ray Vision, you know, some of my thoughts weren't good about why I wanted it, but you know, I wanted to. Well now you can have it, you know, but imagine in this environment, the complex application that sits behind it. You know, you're accessing maybe 50 year old building plants, right? You're accessing HVAC systems, but modern ar and vr capabilities and new containerized displays. You'll think about that application. You know, John Gage famously said the network is the computer pat today says the application is now a network and pretty typically a complicated one, you know, and this is the vm ware vision is to make that kind of environment realizable in every aspect of our business and community and we simply have been on this journey, any device, any application, any cloud with intrinsic security. And this vision has been consistent for those of you who have been joining us for a number of years. You've seen this picture, but it's been slowly evolving as we've worked in piece by piece to refine and extend this vision, you know, and for it, we're going to walk through and use this as the compass for our discussion today as we walk through our conversation. And you know, we're going to start by a focus on any cloud. And as we think about this cloud topic, you know, we see it as a multicloud world hybrid cloud, public cloud, but increasingly seeing edge and telco becoming clouds in and have their own right. And we're not gonna spend time on it today, but this area of Telco to the is an enormous opportunity for us in our community. You know, data centers and cloud today are over 80 percent virtualized. The Telco network is less than 10 percent virtualized. Wow. An industry that's almost as big as our industry entirely unvirtualized, although the technologies we've created here can be applied over here and Telco and we have an enormous buildout coming with five g and environments emerging. What an opportunity for us, a virgin market right next to us and we're getting some early mega winds in this area using the technologies that you have helped us cure rate than the So we're quite excited about this topic area as well. market. So let's look at this full view of the multicloud. Any cloud journey. 
And we see that businesses are on a multicloud journey, and today we see this fundamentally in two paths, a hybrid cloud and a public cloud. These paths are complementary and coexisting, but today each is being driven by unique requirements and unique teams. Largely the hybrid cloud is being driven by IT and operations, the public cloud more by developers and line-of-business requirements, and together they add up to a multicloud environment. So how do we deliver upon that? For that, let's start by digging in on the hybrid cloud aspect, and as we think about the hybrid cloud, we've been talking about this subject for a number of years, and I want to give a very specific and crisp definition. The hybrid cloud is the public cloud and the private cloud cooperating with consistent infrastructure and consistent operations. Simply put, a seamless path to and from the cloud, so that my workloads don't care if they're here or there. I'm able to run them in an agile, scalable, flexible, efficient manner across those two environments, whether it's my data center or someone else's, and bringing them together to make that work is the magic of VMware Cloud Foundation. VMware Cloud Foundation brings together compute, vSphere and the core of why we are here, and combines with that networking and storage, delivered through a layer of management and automation. The rule of the cloud is ruthlessly automate everything. We laid out this vision of the software-defined data center seven years ago, and we've been steadfastly working on this vision. VMware Cloud Foundation provides this consistent infrastructure and operations with integrated lifecycle management, automation and patching. VMware Cloud Foundation is the simplest path to the hybrid cloud, and the fastest way to get VMware Cloud Foundation is hyperconverged infrastructure. With this we've combined integrated and validated hardware as a building block, and inside of this we have validated hardware, the vSAN Ready Node environments; we have integrated appliances; and we have cloud-delivered infrastructure: three ways that we deliver that integrated hyperconverged infrastructure solution. And we have by far the broadest ecosystem of partners to do it: a broad set of vSAN Ready Nodes from essentially everybody in the industry. Secondly, we have integrated appliances, the VxRail that we have co-engineered with our partners at Dell Technologies, and today in fact Dell is releasing the new PowerEdge servers, a major step in blade servers, that again are going to be powering VxRail and VxRack systems. And we deliver hyperconverged infrastructure through a broader set of VMware cloud partners as well. At the heart of the hyperconverged infrastructure is vSAN, and simply put, vSAN has been the engine that's been moving rapidly to take over the entire integration of compute and storage and expand to more and more areas. We have incredible momentum: over 15,000 customers for vSAN today, and for those of you who have joined us, we say thank you for what you have done with this product. Really amazing, with 50 percent of the global 2000 using it. VMware vSAN and VxRail are clearly becoming the standard for how hyperconverged is done in the industry. In our cloud partner programs, over 500 cloud partners are using vSAN in their solutions, and finally, it is the largest in HCI software revenue.
Simply put, vSAN is the software-defined storage technology of choice for the industry, and we're seeing that customers are putting it to work in amazing ways. VMware and Dell Technologies believe in tech as a force for good, and that it can have a major impact on the quality of life for every human on the planet, and particularly for the most underdeveloped parts of the world, those that live on less than $2 per day. In fact, at this moment 5 billion people worldwide do not have access to modern, affordable surgery. Mercy Ships is working hard to change the global surgery crisis. With greater than 400 volunteers, Mercy Ships operates the largest NGO hospital ship, delivering free medical care to the poorest of the poor in Africa. Let's hear from them now. When the ship shows up to port, literally people line up for days to receive state-of-the-art, life-changing, life-saving surgeries: tumors, cleft lips, blindness, birth defects. But not only that, the personnel are educating and training the local healthcare providers with new skills and infrastructure so they can care for their own after the ship has left. Mercy Ships runs on VMware and Dell Technologies, with VxRail, Dell Isilon and data protection; we are the IT platform for Mercy Ships. Mercy Ships is now building their next-generation ship, called Global Mercy, which will more than double its lifesaving capacity. It's the largest charity hospital ever. It will go live in 2020, serving Africa, and I personally plan on being there for its launch. It is truly amazing what they are doing with our technology. Thanks. So we see this picture of the hybrid cloud, and we've talked about how we do that for the private cloud. So let's look over at the public cloud and dig into this a little bit more deeply. We're taking this incredible power of the VMware Cloud Foundation and making it available for the leading cloud providers in the world, and with that, the partnership that we announced almost two years ago with Amazon; on this stage last year we announced the first generation of products, and there's no better example of the hybrid cloud. And for that it's my pleasure to bring to the stage my friend, my partner, the CEO of AWS. Please welcome Andy Jassy. Thank you, Andy. You honor us with your presence, and it really is a pleasure to be able to come in front of this audience and talk about what our teams have accomplished together over the last year. Can you give us some perspective on that, Andy, and what customers are doing with it? Well, first of all, thanks for having me. I really appreciate it. It's great to be here with all of you. You know, the offering that we have together, VMware Cloud on AWS, is very appealing to customers because it allows them to use the same software they've been using to manage their infrastructure for years and be able to deploy it on AWS, and we see a lot of customer momentum and a lot of customers using it. You see it in every imaginable vertical business segment: in transportation you see it with Stagecoach, in media and entertainment you see it with Discovery Communications, in education MIT and Caltech, in consulting Accenture and Cognizant and DXC. You see it in every imaginable vertical business segment, and the number of customers using the offering is doubling every quarter.
So people were really excited about it, and I think that probably the number one use case we see so far, although there are a lot of them, is customers who are looking to migrate on-premises applications to the cloud. A good example of that is MIT. They're right now in the process of migrating; in fact, they just migrated 3,000 VMs from their data centers to VMware Cloud on AWS, and this would have taken years to do in the past, but they did it in just three months. It was really spectacular, and they're just a fun organization to work with, and the team there. But we're also seeing other use cases as well, and probably the second most common example is what we'd call on-demand capabilities, for things like disaster recovery. We have great examples of customers there; one in particular is Brink's, right? The Brink's security trucks, the armored trucks you see coming by. They had a critical need to retire a secondary data center that they were using for DR, so they quickly built a DR protection environment for 600 VMs, migrated their mission-critical workloads, and voila, stable and consistent DR, and now they're eliminating that site and looking at other migrations as well, at 10 to 15 percent of the cost. It was just a great deal. One of the things I believe, Andy, is that customers should never spend capital on DR ever again with this kind of capability in place. That is just game changing. And obviously we've been working on expanding our reach: we promised to make the service available a year ago with the global footprint of Amazon, and now we've delivered on that promise. In fact today, or yesterday if you're an Aussie down under, we announced Sydney as well, and now we're in the US, Europe and APJ. Yeah, it's really exciting. Of course Australia is one of the most virtualized places in the world, and it's pretty remarkable how fast European customers have started using the offering too, in just the quarter it's been out there. And of the many requests customers have had, probably the number one request has been that we make the offering available in all the regions that AWS has, and I can tell you that by the end of 2019 we'll largely be there, including GovCloud. GovCloud, that's been huge for you guys. Yeah, it's a government-only region that we have, that a lot of federal government workloads live in, and we are pretty close to having the offering get FedRAMP authority to operate, which is a big deal and a game changer for governments, because then they'll be able to use the familiar tools they use with VMware not just to run their workloads on premises but also in the cloud, with the data privacy requirements and security requirements they need. So it's a real game changer for government too. Yeah. And as you can see by the picture here, basically before the end of next year, everywhere that you are and have an availability zone, we're going to be there running. Yup. Let's get with it. Okay, we're a team, go faster. Okay. And it's not just making it available, but this pace of innovation, and you guys have really taught us a few things in this respect. Since we went live in the Oregon region, we've been on a quarterly cadence of major releases. M2 was really about mission critical at scale, and we added our second region.
We added our Hybrid Cloud Extension with M3. With M4 we moved to the global rollout and launched in Europe, really added a lot of these mission-critical governance aspects and started to attack all of the industry certifications. And today we're announcing M5, right. And with that, I think we have this little cool thing to talk about that we're doing with EBS and storage. Yeah. Two of the most important priorities for customers are cost and performance, and so we have a couple of things to talk about today that we're bringing to you that I think hit both of those. On the storage side, we've combined the elasticity of Amazon Elastic Block Store, or EBS, with VMware's vSAN, and we've provided a storage option that you'll be able to use that is very high capacity and much more cost effective. You'll start to see this initially on the VMware Cloud on AWS R5 instances, which are memory-optimized compute instances, and so this will change the cost equation: you'll be able to use EBS by default, and it'll be much more cost effective for storage- or memory-intensive workloads. It's something that you guys have asked for, it's been very frequently requested, and it hits preview today. And then the other thing is that we've worked really hard together to integrate VMware's NSX along with AWS Direct Connect to have private, even higher performance connectivity between on premises and the cloud. So very, very exciting new capabilities that show deep integration between the companies. Yeah, and in that aspect of the deep integration, it's really been the thing that we committed to. We have large engineering teams that are working literally every day on bringing these platforms together and fusing them in a deep and intimate way so that we can deliver new services, just like Elastic DRS and EBS, really powerful capabilities, and that pace of innovation continues. So next, maybe M6? I don't know, we'll see. But we're continuing this torrid pace of innovation: completing all of the capabilities of NSX, full integration for all of the Direct Connect capabilities, really expanding that, improving licensing capabilities on the platform, and we'll be adding PKS on top for expanded developer capabilities. So we're continuing this pace of innovation going forward, but I think we also have a few other things to talk about today, Andy. Yeah, I think we have some news that hopefully people here will be pretty excited about. We have a pretty big database business at AWS, both on the relational and on the nonrelational side, and the business is billions of dollars in revenue for us. On the relational side, we have a service called Amazon Relational Database Service, or Amazon RDS, that we have hundreds of thousands of customers using because it makes it much easier for them to set up, operate and scale their databases. So many companies now are operating in hybrid mode and will be for a while, and a lot of those customers have asked us, can you give us the ease of manageability of those databases but on premises?
And so we talked about it, and we thought about it, and we worked with our partners at VMware, and I'm excited to announce today, right now, Amazon RDS on VMware, and that will bring all the capabilities of Amazon RDS to VMware's customers for their on-premises environments. And so what you'll be able to do is provision databases. You'll be able to scale the compute or the memory or the storage for those database instances. You'll be able to patch the operating system or database engines. You'll be able to create read replicas to scale your database reads, and you can deploy those replicas either on premises or in AWS. You'll be able to deploy in a high-availability configuration by replicating the data to different VMware clusters. You'll be able to create online backups that live either on premises or in AWS, and then you'll be able to take all those databases, and if you eventually want to move them to AWS, you'll be able to do so rather easily; you have a pretty smooth path. This is going to be available in a few months. It will be available for Oracle, SQL Server, MySQL, PostgreSQL and MariaDB. I think it's very exciting for our customers, and I think it's also a good example of where we're continuing to deepen the partnership, listen to what customers want and then innovate on their behalf. Absolutely. Thank you, Andy. It is thrilling to see this, and as we said when we began the partnership, it was a deep integration of our offerings and our go-to-market, but also building this bi-directional hybrid highway to give customers the capabilities where they want them: cloud, on premises, on premises to the cloud. It really is a unique partnership that we've built, the momentum we're feeling with our customer base, and the cool innovations that we're doing. Andy, thank you so much for joining us here at VMworld. You guys, appreciate it. Yeah, we really have just seen incredible momentum, and as you might have heard from our earnings call that we just finished for the last quarter, we really saw customer momentum here accelerating. Really exciting to see how customers are starting to do the hybrid cloud at scale, and with this we're just seeing that VMware Cloud Foundation, available on Amazon, available on premises, is very powerful. But it's not just the partnership with Amazon. We are thrilled to see the momentum of our VMware Cloud Provider Program, and this idea of the VMware cloud providers has continued to gain momentum in the industry. Over five years, this program has now accumulated more than 4,200 cloud partners in over 120 countries around the globe. It gives you choice: your local provider, specialty offerings, some of your local trusted partners, giving you the greatest flexibility to choose from cloud providers that meet your unique business requirements. And we launched last year a program called VMware Cloud Verified, the most complete embodiment of the VMware Cloud Foundation offering by our cloud partners in this program, and this logo lets you know that a provider has achieved the highest standard for cloud infrastructure, and that you can scale and deliver your hybrid cloud partnering with them.
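Circling back to the Amazon RDS on VMware workflow Andy outlined a moment ago, here is a minimal sketch of the kind of provisioning and read-replica calls involved, written against the standard boto3 RDS client. The instance identifiers, sizes and region are placeholders, and treating these exact calls as the interface to an on-premises custom availability zone is an assumption for illustration, not a statement of the announced product's API.

    import boto3

    # Minimal sketch (illustrative only): provision a MySQL instance and a read
    # replica with the standard RDS API. Identifiers, sizes and region are
    # placeholders; the on-premises offering may expose this differently.
    rds = boto3.client("rds", region_name="us-west-2")

    rds.create_db_instance(
        DBInstanceIdentifier="warehouse-db",
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,            # GiB
        MasterUsername="admin",
        MasterUserPassword="change-me",
        MultiAZ=True,                    # high availability via replication
    )

    # Scale database reads by adding a read replica of the primary instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="warehouse-db-replica",
        SourceDBInstanceIdentifier="warehouse-db",
    )

The point of the sketch is simply that the management surface stays the same whether the instance lands in an AWS region or, per the announcement, in a vSphere environment on premises.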
In particular, we've been thrilled to see the momentum that we've had with IBM as a huge partner, and our business with them has grown extraordinarily rapidly, in triple digits, not just in the customer count, which is now over 1,700, but also in the depth of customers moving large portions of their workloads. And as you see by the picture, we're very proud of the scope of our partnerships on a global basis: the highest standard of hybrid cloud for you, the VMware Cloud Verified partners. Now, when we come back to this picture, we're growing in our definition of what the hybrid cloud means. Through VMware Cloud Foundation, we've been able to unify the private and the public cloud together as never before, but we're also seeing that many of you are interested in how to extend that infrastructure further and farther, and we'll simply call that the edge, right? How do we move data center resources and capacity closer to where the data is being generated and the operations need to be performed? Simply, the edge. We'll dig into that a little bit more, but as we do that, one of the things that we offer today, with what we just talked about with Amazon and our VCPP partners, is that they can consume this full VMware Cloud Foundation as a service, but today we're only offering that in the public cloud. Project Dimension allows us to extend that, delivered as a service: private, public and to the edge. Today we're announcing the tech preview of Project Dimension, VMware Cloud Foundation in a hyperconverged appliance. We've partnered deeply with Dell EMC and Lenovo as the first partners to bring this to the marketplace, built on that same proven infrastructure, with a hybrid cloud control plane, so literally just like we're managing VMware Cloud today, we're able to do that for your on-premises, your small or remote office, or your edge infrastructure, through that exact same as-a-service management and control plane: a complete VMware-operated, end-to-end environment. This is Project Dimension: taking the VCF stack, the full VMware Cloud Foundation stack, and making it available in the cloud, at the edge and on premises as well, a powerful solution operated by VMware. Project Dimension gives us a fundamental building block in our approach to making customers even more agile, flexible and scalable, and it is a key component of our strategy as well. So let's click into that edge a little bit more. We think about the edge in the following layers: the compute edge, how do we get the data and operations and applications closer to where they need to be. If you remember, last year I talked about this pendulum swinging between centralization and decentralization; edge is a decentralization force. We're also excited that we're moving to the edge with devices as well, and we're doing that in two ways: one with Workspace ONE for human-optimized devices, and the second is Project Pulse, or VMware Pulse. Today we're announcing Pulse 2.0, where you can consume it now as a service, as well as with integrated security, and we've now scaled Pulse to support 500 million devices. Isn't that incredible? I mean, this is getting to scale, billions and billions. And finally, networking is a key component, and we're stretching the networking platform, right?
And evolving how that edge operates in a more cloud-like, as-a-service way, and this is where NSX SD-WAN with VeloCloud is such a key component of delivering the edge network services as well. Taken together, the device side, the compute edge, and rethinking and evolving the networking layer: that is the VMware edge strategy. In summary, we see businesses are on this multicloud journey, right? How do we do that for their private and public coming together, the hybrid cloud? But they're also on a journey for how they work and operate across the public clouds. In the public cloud we have this torrid pace of innovation; you heard Andy here, he's announcing 1,500 new services a year, extraordinary innovation, and the same for Azure or Google or IBM Cloud. But it also creates complexity, as we said. Businesses are using multiple public clouds, and how do I operate them? How do I make them work? How do I keep track of my accounts and users? That creates a set of cloud operations problems as well, in the complexity of doing that. How do you make it work, right? And for that, we see these ideas of cloud cost, compliance and analytics as common themes that keep coming up, and we're seeing in our customers that a new role is emerging: the cloud operations role, the person who's figuring out how to make these multicloud environments work and keep track of who's using what and which data is landing where. Today I'm thrilled to tell you that VMware is acquiring the leader in this space: CloudHealth Technologies. Thank you. CloudHealth Technologies today supports Amazon, Azure and Google. They have some 3,500 customers, some of the largest and most respected brands, a SaaS business with rapidly expanding feature sets. We will take CloudHealth and make it a fundamental platform and branded offering from VMware. We will add many of the other VMware components into this platform, such as our Wavefront analytics and our CloudCoreo compliance, and many of the other VMware products will become part of the CloudHealth suite of services. We will be enabling that through our enterprise channels as well as through our MSP and VCPP partners. Simply put, we will make CloudHealth the cloud operations platform of choice for the industry. I'm thrilled today to have Joe Kinsella, the CTO and founder. Joe, please stand up. Thank you, Joe. To you and your team of a couple hundred, mostly in Boston: welcome to the VMware family, the VMware community. It is a thrill to have you part of our team. Thank you, Joe. We're also announcing today, and you can think of this much like we had vRealize Operations and vRealize Automation, the complement to the CloudHealth operations: VMware Cloud Automation. Some of you might have heard of this in the past as Project Tango. Well, today we're announcing the initial availability of VMware Cloud Automation services: assemble and manage complex applications, automate their provisioning and cloud services, and manage them through a brokerage service.
Today, with the acquisition of CloudHealth as a platform, VMware has the most complete set of multicloud management tools in the industry, and we're going to do so much more. So we've seen this picture of the multicloud journey that our customers are on, and we're working hard to say we are going to bridge across these worlds of innovation, the multicloud world. We're doing many other things, and you're going to hear a lot at the show about this. We're also giving the tech preview of the VMware Cloud Marketplace for our partners and customers, and also today Dell Technologies is announcing their cloud marketplace, to provide a self-service portfolio of Dell EMC technologies. We're fundamentally in a unique position to accelerate your multicloud journey. So we've built out this any cloud piece, but right in the middle of that any cloud is the network. And when we think about the network, we're just so excited about what we have done and what we're seeing in the industry, so let's click into this a little bit further. We've gotten a lot done over the last five years in networking. Look at these numbers: 80 million switch ports have been shipped. We are now 10x larger than number two in software-defined networking. We have over 7,500 customers running on NSX, and maybe the stat that I'm most proud of is that 82 percent of the Fortune 100 has now adopted NSX. You have made NSX the standard in software-defined networking. Thank you very much. Thank you. When we think about this journey that we're on, we started by saying, hey, we've got to break the chains inside of the data center, as we said, and then NSX became the software-defined networking platform. We started to deliver it through our cloud provider partners; IBM made a huge commitment to partner with us and deliver this to their customers. We then said, boy, we're going to make it fundamental to all of our cloud services, including AWS. We built this bridge called the Hybrid Cloud Extension. We said we're going to build it natively into what we're doing with telcos, with Azure and Amazon as a service. We acquired the SD-WAN leader, VeloCloud, now the hottest product in VMware's portfolio today: the opportunity to fundamentally transform branch and wide area networking, and we're extending it to the edge. Literally, the world has become this complex network. We have seen the world go from the old world defined by rigid boundaries; simply put, in a distributed world, hardware-defined networking cannot possibly work. We're empowering customers to secure their applications and data regardless of where they sit, and when we think of the virtual cloud network, we say it's these three fundamental things: a cloud-centric networking fabric, with intrinsic security, and all of it delivered in software. The world is moving from data centers to centers of data, and they need to be connected, and NSX is the way that we will do that. So, VMware is well known for not just talking but also showing, and no VMworld keynote is complete without great demonstrations, because you shouldn't believe me, only what we can actually show. And to do that, I'm going to have our CTO come on stage. You all know I used to be a CTO, and the CTO is the certified smart guy; he's also known as the chief talking officer, and today he's my demo partner. Please welcome VMware CTO Ray O'Farrell to the stage. Morning, Pat, how are you doing? Oh, it's great, Ray, and thanks so much for joining us. Now, I promised that we're going to show off some pretty cool stuff here.
We've covered a lot already, but are you up to the task? We're going to try and run through a lot of demos. We're going to do it fast, and you're going to have to keep me on time. Ask an awkward question, slow me down. Okay, and it's my fault if you run long. Okay, I got it, I got it. Let's jump right in here. So as a CTO, I get to meet lots of customers. A few weeks ago I met the CIO of a large distribution company, and she described her IT infrastructure as consisting of a number of central data centers, but she also spoke of a large number of warehouses globally, and each of these had local hyperconverged compute and storage, primarily running surveillance and warehouse management applications. And she posed me four questions. The first question she asked me, she says, how do I migrate one of these data centers to VMware Cloud on AWS? I want to get out of one of these data centers. Okay, sounds like something Andy and I were just talking about. Exactly, exactly what you just spoke to a few moments ago. She also wanted to simplify the management of the infrastructure in the warehouses themselves. Okay, these are edge and smaller data centers that you have out there. Her applications at the warehouses needed to run locally, but her developers wanted to develop using cloud infrastructure and cloud APIs, a little bit like the RDS we just spoke about. And her final question was looking to the future: make all this complicated management go away. I want to be able to focus on my applications, because that's what my business is about. So give me some new ways to automate all of this infrastructure from the edge to the cloud. Sounds pretty clear. Can we do it? Yes we can. So we're going to dive right in right now into one of these demos, and the first demo we're going to look at is VMware Cloud on AWS. This is the best solution for accelerating this public cloud journey. So can we start the demo, please? What you're looking at here is one of those data centers, and you should be familiar with this product; it's the familiar vSphere Client. You see it's got a bunch of virtual machines running in there. These are the virtual machines that we now want to be able to migrate and move to VMC on AWS. So we're going to go through that migration right now, and to do that we use a product that you've seen already, HCX. However, HCX has got some new cool features since the last time we demoed it, probably on this stage here last year. One of those in particular is how we do bulk migration, and there's a new cool thing, right? We want to move the data center en masse, and the concept here is cloud motion with vSphere Replication. What this does is replicate the underlying storage of the virtual machines using vSphere Replication, so if and when you want to do the final migration, it actually becomes a vMotion. That's what you see going on right here: the replication is in place. Now, when you want to actually move those virtual machines, what you'll do is a vMotion, and the key thing to think about here is that this is an actual vMotion. Those VMs, as they're migrating, remain live, just as they would in a vMotion across one particular infrastructure. So you can do a complete application or data center migration with no downtime? It's a standard vMotion kind of experience. Wow, that is really impressive. That's correct. Wow.
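A quick grounding note on that step: the bulk replication and switchover in the demo are driven by HCX itself as a managed service, but the live-migration primitive underneath is an ordinary vSphere relocate (vMotion) call. The sketch below shows roughly what that primitive looks like through the open source pyVmomi SDK; the vCenter address, credentials, VM and host names are placeholders, and this is not the HCX API.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Rough sketch of the vMotion primitive the HCX cloud motion flow builds on.
    # Hostnames, credentials and object names below are placeholders.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vim_type, name):
        # Walk the inventory and return the first managed object with this name.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "warehouse-app-01")
    dest_host = find_by_name(vim.HostSystem, "esxi-backup-01.example.com")

    # Live-migrate the running VM to the destination host; the guest stays up.
    task = vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=dest_host))
    # ...wait on the task to complete, then Disconnect(si)

The takeaway is the same as in the demo: because the final cutover is a vMotion, the VMs stay live for the entire migration.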
So one of the other things we want to talk about here is that as we are moving these virtual machines from the on-prem infrastructure to the VMC on AWS infrastructure, unfortunately, when we set up the cloud on VMC on AWS, we only set up four hosts, and that might not be enough, because she is going to move the whole infrastructure of that data center. Now, earlier you and Andy referred briefly to this concept of Elastic DRS. What Elastic DRS does is allow VMC on AWS to react to the workloads as they're being created and pulled onto that infrastructure, and automatically pull new hosts into the VMC infrastructure along the way. So what you're seeing here is essentially VMC growing the infrastructure to meet the needs of the workloads themselves. Very cool. As well as seeing that Elastic DRS, we also see the EBS capabilities; again, you guys spoke about this too. This is the ability to take the huge amount of storage that Amazon has in EBS and front that with vSAN, so you get the same experience of vSAN but with this enormous amount of storage capability behind it. Wow, that's incredible. I'm excited about this; this is going to enable customers to migrate faster and larger than ever before. Correct. Now, she had a series of other questions. Okay. The second question was around: what about all those data centers and those edge applications that I did not move? And this is where we introduce the project which you've heard of already today, called Project Dimension. What this does is give you the simplicity of VMware Cloud, but bring that out to the edge. What's basically going on here is that VMC on AWS is a service which manages your infrastructure in AWS; we now stretch that service out into your infrastructure, in your data center and at the edge, allowing us to manage that infrastructure in the same way. Once again, let's dive down into a demo and take a look at what this looks like. What you've got here is a familiar series of services available to you, one of them being Project Dimension. When you enter Project Dimension, you first get a view of all of the different infrastructure that you have available to you, your data centers, your edge locations. You can then dive deeply into one of these to get a closer look at what's going on. Here we're diving into one of these warehouses, and we see there's a problem: there's a networking problem going on in this warehouse. How do we know? We know because VMware is running this as a managed service. We are directly managing, or sorry, monitoring your infrastructure, and we discover there's something going wrong here. We automatically create the SR, so somebody is dealing with this; you have visibility into what's going on, but the VMware managed service is already chasing the problem for you. Oh, very good. So now we're seeing this dispersed infrastructure with Project Dimension, but what's running on it? Well, before we get to what's running on it, you've got another problem, and the problem is of course that if you're managing a lot of infrastructure like this, you need to keep it up to date. And so once again, this is where the VMware managed service kicks in: we manage that infrastructure in terms of patching it and updating it for you.
And as an example, when we release a security patch, and here's one for the recent L1 Terminal Fault, the VMware managed service is already on that, making sure that your on-prem and edge infrastructure is up to date. Very good. Now, what's running? Okay, so what's running: we mentioned this case of software running at the edge infrastructure itself, and these are workloads which are running locally in those edge locations. This is a surveillance application; you can see it here at the bottom, it says warehouse safety monitor. So this is an application which gathers images and then stores those images in a database, and you see the MySQL database on top there. Now, this is where we leverage the technology you just learned about when Andy and Pat spoke about the ability to take RDS and run it on your on-prem infrastructure. That block of virtual machines at the bottom are the RDS components from Amazon running in your infrastructure or in your edge location, and this gives you the ability to allow your developers to leverage and operate against those APIs, but now the actual database infrastructure is running on prem. You might be doing that just for performance reasons, because of latency, or you might be doing it simply because this data center is not always connected to the cloud. When you take a look under the hood and see what's going on here, what you actually see is vSphere, a modified version of vSphere. You see this new concept of my custom availability zone; that is the availability zone running on your infrastructure which supports RDS. What's more interesting is when you flip back to the Amazon portal, which is typically what your developers are going to do: once again, you see an availability zone in your Amazon portal. This is the availability zone running on your equipment in your data center. So we've truly taken that RDS infrastructure and moved it to the edge, so the developer sees what they're comfortable with and the infrastructure team sees what they're comfortable with, bridging those two worlds. Fabulous. Right. So the final question, of course, that we got here was: what's next? How do I begin to look to the future and say I want to be able to see all of my infrastructure just handled in an automated fashion? And when you think about that, one of the questions is how we leverage new technologies such as AI and ML to do that. So what you've got here, and sorry, we're running a little bit late, what you've got here is: how do I blend AI and ML with the power of what's in the data center itself? We're bringing AI and ML, right, and fusing them together as never before to truly change how the data center operates. Correct. And it is this merging of these things together which is extremely powerful in my mind. This is a little bit like a self-driving vehicle. Think about a car driving down the street as a self-driving vehicle: it is consuming information from all of the environment around it, other vehicles, what's happening, everything from the weather, but it also has a lot of built-in knowledge which is built up through self-learning and training along the way. And we've been collecting lots of that data for decades. Exactly, and we've got all that from all the infrastructure that we have; we can now bring that to bear. So what we're focusing on here is a project called Project Magna.
Project Magna leverages all of this infrastructure. What it does is help connect the dots across huge datasets and gain deep insight across the stack, all the way from the application to the hardware, the infrastructure, the public cloud, and even the edge, and it leverages hundreds of control points to optimize your infrastructure on KPIs of cost, performance, even user-specified policies. This is the use of machine language, I'm sorry, machine learning, I'm going back to my very early days here, right? This is the use of machine learning and AI to fundamentally transform how you actually automate these data centers. The goal is true automation of your infrastructure, so you get to focus on the applications which really serve the needs of your business. Yeah, and maybe you could think about it this way: in the past we would have described the software-defined data center, but in the future we're calling it the self-driving data center. Here we are taking that same acronym and redefining it, right? Because the self-driving data center, the deep infusion of AI and machine learning into the management and automation, into the storage, into the networking, into vSphere, redefines the self-driving data center, and with that, we believe, is fundamentally an enormous advance in how customers can take advantage of new capabilities from VMware. Correct. And you're already seeing some of this in pieces, in projects such as some of the stuff we do in Wavefront, and this is how we take that to a new level; that's what Project Magna will do. So let's summarize what we've seen in the few demos here as we worked through each of these very quickly. First of all, you saw VMware Cloud on AWS: how do I migrate an entire data center to the cloud with no downtime? Check. We saw Project Dimension: get the simplicity of VMware Cloud in the data center and manage it at the edge as a managed service. Check. Amazon RDS on VMware, a cool demo: seamlessly deploy a cloud service to an on-premises environment, in this case RDS. Yes, and we've got that one coming in M5. And then finally Project Magna: what happens when you're looking to the future, and how do we leverage AI and ML to self-optimize the virtual infrastructure? Well, how did Ray do as our demo guy? Thank you. Thanks, Ray. Thank you. So coming back to this picture, our GPS for the day: we've covered any cloud, so let's click now into any application. As we think about any application, we really view it as this breadth of the traditional, the cloud native and SaaS. Kubernetes is quickly, maybe spectacularly, becoming seen as the consensus way that containers will be managed and automated, as the framework for how modern app teams are looking at their next-generation environment, quickly emerging as key to how enterprises build and deploy their applications today. And containers are efficient, lightweight and portable; they have lots of value for developers, but they also need to be run and operated, and they have many infrastructure challenges as well. Managing automation, patch and lifecycle updates, the efficient rollout of new application services can be accelerated with containers, but we also have these infrastructure problems, and one thing we want to make clear is that the best way to run a container environment is on a virtual machine. In fact, every leader in public cloud runs their containers in virtual machines.
Google, the creator and arguably the world leader in containers, runs them all in virtual machines, both their internal IT and what they run as GKE for external users as well. They just announced GKE On-Prem on VMware for their container environments. Google and all major clouds run their containers in VMs, and simply put, it's the best way to run containers. And we have solved, through what we have done collectively, the infrastructure problems. As we saw earlier, cool new container apps are also typically some ugly combination of cool new, legacy and existing environments as well. How do we bridge those two worlds? As people are rapidly moving forward with containers and Kubernetes, we're seeing a certain set of problems emerge, and Dan Kohn, right, the director of the CNCF, the Cloud Native Computing Foundation, the body for Kubernetes collaboration and the group that stewards the standardization of this capability, points out these four challenges: how do you secure them, how do you network them, how do you monitor them, and what do you do for the storage underneath them? Simply put, VMware is out to be, is working to be, is on our way to be the dial tone for Kubernetes. Now, some of you who are in your twenties might not know what that means, so lean over to a gray hair or come and see me afterward and we'll explain what dial tone means. Or, maybe stated differently, the enterprise-grade standard for Kubernetes. And for that we are working together with our partners at Google as well as Pivotal to deliver VMware PKS, Kubernetes as an enterprise capability. It builds on BOSH, the lifecycle engine that's foundational to the Pivotal offerings today; it builds on and is committed to stay current with the latest Kubernetes releases; it builds on NSX, the SDN for container networking; and there are additional contributions that we're making, like Harbor, the VMware open source contribution for the container registry. It packages those together and makes them available on hybrid cloud as well as public cloud environments. With PKS, operators can efficiently deploy, run and upgrade their Kubernetes environments on the SDDC or on all public clouds, while developers have the freedom to embrace and run their applications rapidly and efficiently. Simply put, PKS: the standard for Kubernetes in the enterprise. And underneath that, NSX is emerging as the standard for software-defined networking. When we think about that quote on the challenges of Kubernetes today, we see that networking is one of the huge challenges underneath it, and in a containerized world things are changing even more rapidly; my network environment is moving more quickly. NSX provides the ability to easily automate networking and security for rapid deployment of containerized environments. It fully supports VMware PKS, fully supports Pivotal Application Service, and we're also committed to fully support all of the major Kubernetes distributions such as Red Hat, Heptio and Docker as well. NSX: the only platform on the planet that can address the complexity and scale of container deployments. Taken together, VMware PKS: the production-grade Kubernetes for the enterprise, available on hybrid cloud, available on major public clouds. Now, let's not just talk about it again, let's see it in action. Please welcome to the stage, with Ray, Wendy Cartee, the senior director of cloud native marketing for VMware. Thank you.
Hi, everybody. So we're going to talk about PKS, because more and more new applications are built using Kubernetes and containers. With VMware PKS we get to simplify the deployment and operation of Kubernetes at scale. Wendy, you're the expert on all of this, right? So can you take us through the scenario of how VMware PKS can really help a developer operating in a Kubernetes environment develop great applications, but also, from an administrator point of view, how I can really handle things like networking, security and those configurations? Sounds great. I'd love to dive into the demo here. Okay. Our demo is VMware PKS running Kubernetes on vSphere. Now, PKS has a lot of cool functions built in, one of which is NSX, and today what I'm going to show you is how NSX will automatically bring up network objects as Kubernetes namespaces are spun up. So we're going to start with the vSphere Client, which has been extended to run PKS-deployed Kubernetes clusters. We're going to go into PKS instance one, and we see that there are five clusters running. We're going to select one of those clusters, called application production, and we see that it is running NSX. Now, a cluster typically has multiple users, and users are assigned namespaces; these namespaces are essentially a way to provide isolation and dedicated resources to the users in that cluster. So we're going to check how many namespaces are running in this cluster, and we've brought up the Kubernetes UI. We're going to click on namespaces, and we see that this cluster currently has four namespaces running. What we're going to do next is bring up a new namespace and show that NSX will automatically bring up the network objects required for that namespace. To do that, we're going to upload a YAML file; your developer may actually use a kubectl command to do this as well. We're going to check the namespaces, and there it is: we have a new namespace called pks-rocks. Yeah. Okay, and why is that great? We have a new namespace, and now we want to make sure it has the network elements assigned to it, so we're going to go to the NSX manager and hit refresh, and there it is: pks-rocks has a logical router and a logical switch automatically assigned to it, and it's up and running. So I want to interrupt here, because you made this look so easy, right? I'm not sure people realize the power of what happened here. The developer, using the Kubernetes API infrastructure they're familiar with, added a new namespace, and behind the scenes PKS and NSX-T took care of the networking, a combination of NSX and what we do in PKS to truly automate this function. Absolutely. So this means that if you are in infrastructure operations, you don't need to worry about your developers spinning up namespaces, because NSX will take care of bringing the networking up, and then bringing it back down when the namespace is no longer used. And Ray, that's not all. Now, I was in operations before, and I know how hard it is for enterprises to roll out a new product without visibility, so PKS takes care of those day-two operational needs as well: while it's running your clusters, it's also exporting metadata so that your developers and operators can use Wavefront to gain deep visibility into the health of the cluster as well as the resources consumed by the cluster. So here you see the Wavefront UI, and it's showing you the number of nodes running, active pods, inactive pods, et cetera.
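For anyone who wants to see the developer-facing side of that namespace step in code rather than a YAML upload, here is a minimal sketch using the official Kubernetes Python client. It assumes a kubeconfig already pointing at the PKS-provisioned cluster, and the namespace name simply mirrors the demo; nothing else about the environment is implied.

    from kubernetes import client, config

    # Minimal sketch: create the demo namespace programmatically instead of
    # uploading a YAML file. Assumes kubeconfig access to the cluster.
    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    namespace = client.V1Namespace(metadata=client.V1ObjectMeta(name="pks-rocks"))
    core_v1.create_namespace(namespace)

    # In the demo, NSX-T reacts to this event and provisions the logical router
    # and logical switch for the namespace; no extra networking call is needed.
    print([ns.metadata.name for ns in core_v1.list_namespace().items])

As the demo emphasizes, the developer only touches the Kubernetes API; the network plumbing happens behind the scenes.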
You can also dive deeper into the analytics and take a look at information by namespace, so you see pks-rocks there, and you see the number of active nodes running as well as the CPU utilization and memory consumption of that namespace. So now pks-rocks is ready to run containerized applications and microservices. So you've just given us the highlights of a demo here to see a little bit of what PKS is; where can we learn more? We'd love to show you more. Please come by the booth; we have more cool functions running on PKS, and we'd love to have you come by. Excellent. Thank you, Wendy. Thank you. Yeah, so when we look at these types of workloads now running on vSphere, containers, Kubernetes, we also see a new type of workload beginning to appear, and these are workloads which are basically machine learning and AI, and in many cases they leverage a new type of infrastructure: hardware accelerators, typically GPUs. What we're going to talk about here is how NVIDIA and VMware have worked together to give you the flexibility to run sophisticated VDI workloads, but also to leverage those same GPUs for deep learning inference workloads, also on vSphere. So let's dive right into a demo here. What you're looking at here is your standard vRealize Operations product, and you see we've got two sets of applications here, a VDI desktop workload and machine learning, and the graph is showing what's happening with the VDI desktops. These are office workers leveraging these desktops every day, so of course the infrastructure is super busy during the daytime when they're in the office, but the green area shows it is not being used very heavily outside of those times. So let's take a look at what happens with the machine learning application. In this case, this organization leverages those available GPUs to run the machine learning operations outside of normal working hours. Let's take a little bit of a deeper dive into what the application is before we see what we can do from an infrastructure and configuration point of view. This machine learning application processes a vast number of images and it classifies, or sorry, categorizes these images, and as it's doing so it is putting each of them in a database, and you can see it's operating here relatively fast, and it's leveraging some GPUs to do that. A typical image processing type of machine learning problem. Now let's dive in and look at the infrastructure which is making this happen. First of all, we're going to look only at the VDI infrastructure here. So I've got a bunch of these VDI applications running. What I want to do is move these so that I can make this image processing application run a lot faster. Now, normally you wouldn't do this, but Pat insisted that we do this demo at 10:30 in the morning when the office workers are in there, so we're going to move all the VDI workloads over to the other cluster, and that's what you see going on right now. As they move over to this other cluster, what we are doing is freeing up all of the infrastructure, the GPUs that the VDI workload was using. We see them moving across, and now you've freed up that infrastructure. So now we want to take a look at the application itself, the machine learning application, and see how we can make use of that.
That newly freed-up infrastructure: what we've got here is the application running using one GPU in a vSphere cluster, but I've got three more GPUs available now because I've moved the VDI workloads. We simply modify the application, let it know that these are available, and you suddenly see an increase in processing capability because of the flexibility we've created in accessing those GPUs. So what you see here is that the same GPUs that you use for VDI, which you probably have in your infrastructure today, can also be used to run sophisticated machine learning and AI types of applications on your vSphere infrastructure. So let's summarize what we've seen in the various demos in this section. First of all, we saw how VMware PKS simplifies the deployment and operation of Kubernetes at scale. What we've also seen is that, leveraging NVIDIA GPUs, we can now run the most demanding workloads on vSphere. When we think about all of these applications and these new types of workloads that people are running, I want to take one second to speak to another workload that we're seeing beginning to appear in the data center, and this is of course blockchain. We're seeing an increasing number of organizations evaluating blockchains for smart contract and digital consensus solutions, so this technology is really playing, or potentially playing, a critical role in how businesses will interact with each other, how they will work together. With Project Concord, which is an open source project that we're releasing today, you get the choice, performance and scale of verifiable trust, which you can then bring to bear and run in the enterprise. But this is not just another blockchain implementation. We have focused very squarely on making sure that this is good for enterprises: it focuses on performance, it focuses on scalability. We have seen examples where running consensus algorithms has taken over 80 days on some of the most common and widely used infrastructure in blockchain, and with Project Concord you can do that in two and a half hours. So I encourage you to check out this project on GitHub today; you'll also see lots of activity around the whole conference speaking about this. Now we're going to dive into another section, which is the any device section, and for that I need to welcome Pat back up here. Thank you, Pat. Thanks, Ray. So diving into the any device piece of the puzzle: as we think about the superpowers that we have, maybe there is no area where they are more visible than in the any device aspect of our picture. As we think about these superpowers, think about mobility, right, and how it's enabling new things like desktop as a service; in the mobile area, this breadth of smartphones and devices; AI and machine learning allowing us to manage them and secure them; and this expanding envelope of devices at the edge that need to be connected, wearables and 3D printers and so on. We've also seen increasing research that says engaged employees are at the center of business success. Engaged employees are the critical ingredient for digital transformation, and frankly, this is how I run VMware, right? I have my device and my work, all my applications; every one of my 23,000 employees is running on our transformed Workspace ONE environment. Research shows that companies that give employees ready, anytime access are nearly three times more likely to be leaders in digital transformation.
Employees spend 20 percent of their time today on manual processes that can be automated. Team collaboration and the speed of decisions increase by 16 percent with engaged employees on modern devices. Simply put, this is a critical aspect of enabling your business. But you remember this picture of the silos that we started with; each of these environments has its own tribal communities of management, security and automation associated with it, and the complexity associated with these is mind-boggling when we start to think about them. Remember the "I'm a PC" and "I'm a Mac"? Well, now you have "I'm an iOS," "I'm an Android," "I'm a VDI," and "I'm now a connected printer" and "I'm a connected watch." You remember Citrix management, and Good is now bad, and SCCM a failed model, and VPNs and XenApp. The chaos is now over. At the center of that is VMware Workspace ONE: get out of the business of managing devices, automate them from the cloud, but still have the enterprise-secure, cloud-based analytics that bring new capabilities to this critical topic. You focus your energy on creating employee and customer experiences, with new capabilities like AirLift, the new capability to help customers migrate from their SCCM environment to modern management, expanding the use of Workspace ONE Intelligence. Last year we announced Chromebook support and a partnership with HP, and today I'm happy to announce the next step in our partnership with Dell. Today we're announcing Dell Provisioning for VMware Workspace ONE as part of Dell's Ready to Work solutions. Dell is taking the next leap and bringing Workspace ONE into the core of their client offerings. The way you can think about this is literally a Dell drop-shipped laptop showing up to a new employee: day one productivity. You give them their credential and everything else is delivered by Workspace ONE: your image, your software, everything patched and upgraded, transforming your business, beginning at that device experience that you give to your customers. And again, we don't want to just talk about it, we want to show you how this works. Please welcome to the stage Renu, the head of our desktop products marketing. Thank you. So we just heard from Pat about how Workspace ONE, integrated with Dell laptops, is really set up to manage Windows devices. What we're broadly focused on here is how we get a truly modern management system for these devices, but one that has intelligence behind it to make sure that we keep a good understanding of how to keep these devices always up to date and secure. Can we start the demo, please? So what we're seeing here is the front screen that you see of Workspace ONE, and you see you've got multiple devices, a little bit like that picture Pat showed. I've got iOS, Android, and of course I've got Windows. Renu, can you please take us through how Workspace ONE really changes the ability of an IT administrator to update and manage Windows in our environment? Absolutely. With Windows 10, Microsoft has finally joined the modern management world, and we are really excited about that.
The good news about modern management is the frequency of OS updates and how quickly they come out, because you can address all those security issues that are hitting our radar on a daily basis. But the bad news about modern management is also the frequency of those updates, because all of us IT admins have to test each and every one of our applications with that latest version, because we don't want to roll out an update in case it causes any problems. With Workspace ONE, we solve that: we simply automate and provide you with the app compatibility information right out of the box, so you can now automate that update process. Let's take a quick look. Let's drill down further into the Windows devices. What we'll see is that only a small percentage of those devices are on the latest version of the operating system. Now, that's not a good thing, because that version might have an important security fix. Let's scroll down further and see what the issue is. We find that it's related to app compatibility. In fact, 38 percent of our devices are blocked from being upgraded, and the issue is app compatibility. Now, we were able to find that not by asking the admins to test each and every one of those apps, but by combining Windows analytics data with app intelligence out of the box, and we provided that information right here inside the console. Let's dig down further and see what those devices and apps look like. So, Renu, this is the part that I find most interesting. If I am a system administrator, at this point Workspace ONE is giving me a key piece of information: it says if you proceed with this update, it's going to fail 84, 85 percent of the time. So that's an important piece of information, but not only is it telling me that, it is telling me, roughly speaking, why it thinks it's going to fail. We've got a number of apps which are not ready to work with this new version, particularly the Mondo Card sales lead tracker app. So what we need to do is get engineering to tackle the problems with this app and make sure that it's updated. So let's get to fixing it. In order to fix it, what we'll do is create an automation, and we can do this right out of the box. This automation will open up a Jira ticket right from within the console to inform the engineers about the problem. Not just that, we can also flag and send a notification to the engineering manager so that it's top of mind and they can get working on this fix right away. Let's go ahead and save that automation. Right here, Ray, you see there's the automation that we just saved. So what's happening here is that this update is now scheduled, meaning we can go and update all those Windows devices, but Workspace ONE is holding the process of proceeding with that update, waiting for the engineers to update the app which was going to cause the problem. That's going to take them some time, right? So the engineers have been working on this, they have a fix, and let's go back and see what's happened to our devices. Going back into the OS updates, what we'll find is that we've now unblocked those devices from being upgraded. The 38 percent has drastically dropped, and we can rest easy that all of the devices are compliant and on the latest version of the operating system. And again, this is just a snapshot of the power of Workspace ONE. To learn more and see more, I invite you all to join our EUC showcase keynote later this evening. Okay.
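As an aside for readers who want to picture what an automation like the one in this demo does behind the scenes, here is a minimal sketch of filing such a ticket through Jira's standard REST API. This is not Workspace ONE's actual integration code; the Jira URL, project key, credentials, and app details are placeholder assumptions used only for illustration.

```python
import requests

# Hypothetical Jira instance and credentials; replace with real values.
JIRA_URL = "https://jira.example.com/rest/api/2/issue"
AUTH = ("automation-bot", "api-token")  # placeholder credentials

def open_app_compat_ticket(app_name, failure_rate, os_build):
    """File a bug asking engineering to fix an app that blocks an OS update."""
    payload = {
        "fields": {
            "project": {"key": "ENG"},          # assumed project key
            "issuetype": {"name": "Bug"},
            "summary": f"{app_name} blocks Windows update to build {os_build}",
            "description": (
                f"Device management analytics predict a {failure_rate}% update "
                f"failure rate caused by {app_name}. Please ship a compatible build."
            ),
        }
    }
    resp = requests.post(JIRA_URL, json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]   # e.g. "ENG-1234"

# Example of how a console automation rule might invoke it (hypothetical values):
# ticket = open_app_compat_ticket("Mondo Card sales lead tracker", 85, "1803")
```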
So we've spoken about the presence of these new devices that IT needs to be able to manage and operate across everything that they do. But what we're also seeing is the emergence of a whole new class of computing device. These are devices which we commonly speak of as being at the edge: embedded devices, or IoT. In many cases these will be in factories, they'll be in your automobiles, they'll be in the building, controlling the building itself, air conditioning, and so on. They're quite often in some form of industrial environment, something like this, where you've got a wind farm with compute embedded in each of these turbines. This is a new class of computing which needs to be managed and secured, and we think virtualization can do a pretty good job of that: a new virtualization frontier, right at the edge, for IoT and IoT gateways. And that's going to open up a whole new realm of innovation in that space. Let's dive down and take in the demo in this space. Well, let's do that. What we're seeing here is a wind turbine farm, very different from the data centers we're used to, and all the compute infrastructure is being managed by vCenter. We see two edge gateway hosts, and they're running a very mission-critical safety watchdog VM right on there. Now, the safety watchdog VM is in FT mode because it's collecting a lot of the important sensor data and running the mission-critical operations for the turbine. So FT mode, or fault tolerance mode: that's a pretty sophisticated virtualization feature allowing two instances to essentially run in lockstep, so if there's a failure, the other one gets to take over immediately. So this sophisticated virtualization feature can be brought out all the way to the edge? Exactly. So just like in the data center, we want to perform an update. As we perform that update, the first thing we'll do is suspend FT on that safety watchdog. Next, we'll put the 205 host into maintenance mode. Once that's done, we'll see the power of vMotion that we're all familiar with: we'll start to see all the virtual machines vMotion over to the second, backup host. Again, all the maintenance, all the updates, without skipping a heartbeat, without taking down any daily operations. So what we're seeing here is the basic power of virtualization being brought out to the edge: vMotion, maintenance mode, et cetera. Great. What's the big deal? We've been doing that for years. Come on, what's the big deal? So here's what's new: when you get to the edge, Pat, you're dealing with a whole new class of infrastructure. You're dealing with embedded systems and new types of CPUs and processors. This whole demo has been done on ARM64: virtualization brought to ARM64 for embedded devices. So we're doing this on ARM at the edge? Correct, specifically focused on embedded, for edge OEMs. Okay. Now that's good. Okay. Thank you, Ray. Actually, we've got a summary here. Pat, just a second before you disappear: a lot to rattle off of what we've just seen, right? We've seen Workspace ONE cross-platform management. What we've also seen, of course, is ESXi on ARM, bringing the power of ESXi to the edge on ARM64 platforms. Okay. Okay. Thank you. Thanks. Now let's take a look at a customer who is taking advantage of everything that we just saw; again, a story of a customer that is changing lives in a fundamental way. Let's see Make-A-Wish.
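For readers who automate vSphere rather than click through vCenter, the maintenance-mode step in the edge demo above can be approximated with the open source pyVmomi SDK. This is only an illustrative sketch, not the demo's own tooling: the vCenter address, credentials, and host name are made up, it assumes DRS or an operator handles the vMotion of running VMs, and it leaves out suspending FT on the watchdog VM.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def enter_maintenance(vcenter, user, pwd, host_name):
    """Put one edge gateway host into maintenance mode via vCenter."""
    ctx = ssl._create_unverified_context()   # lab-only; use proper certs in production
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == host_name)
        # With DRS in fully automated mode, running VMs are vMotioned off first.
        task = host.EnterMaintenanceMode_Task(timeout=0, evacuatePoweredOffVms=True)
        return task
    finally:
        Disconnect(si)

# Hypothetical usage against the edge cluster shown in the demo:
# enter_maintenance("vcenter.edge.example.com", "administrator@vsphere.local",
#                   "secret", "turbine-gw-205.example.com")
```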
So when a family gets the news that a child is sick and it's a critical illness, it could be a life-threatening illness, the whole family is turned upside down. Imagine somebody comes to you and they say: what's the one thing you want, that's in your heart? You tell us, and then we make that happen. So I was just calling to give you the good news that we're going to be able to grant Jackson a wish. Make-A-Wish is the largest wish-granting organization in the United States. Make-A-Wish was featured in a CBS 60 Minutes episode. Interestingly, it got a lot of hits, but unfortunately for the IT team, the whole website crashed. Make-A-Wish is going through a program right now where we're centralizing technology and putting certain security standards in place at our chapters. So what you're seeing here is that we're configuring certain cloud services to make sure that they're always able to deliver on the mission, whether they have a local problem or not. As we continue to grow the partnership and work with VMware, it's enabling us to become more efficient in our processes and allows us to grant more wishes. There was a little girl; she had a two-year-old brother. She just wanted a puppy, and she was forthright: she wanted to name the puppy after herself so her brother would always have her. To hear that from a five-year-old... We can't change their medical outcome, but we can change their spiritual outcome and we can transform their lives. Thank you. Working together with you, truly making wishes come true. The last topic I want to touch on today, and maybe the most important to me personally, is security. Fundamentally, when we think about this topic of security, I'll say it's broken today, and we would just say that the industry got it wrong: we're trying to bolt on, or chasing bad. And when we think about our security spend, we're spending more and we're losing more. Every day we're investing more in this aspect of our infrastructure, and we're falling further behind. We believe that we have to have far fewer security products and much more security. Fundamentally, if you think about the problem, we build infrastructure, generic infrastructure, we then deploy applications, all kinds of applications, and we're seeing all sorts of threats launched at us daily, tens of millions of them. Your simple virus scanner has tens of millions of rules, running and changing many times a day. We simply believe the security model needs to change. We need to move from bolted on and chasing bad to an environment that has intrinsic security and is built to ensure good. This is the idea of built-in security. We are taking every one of the core VMware products and we are building security directly into it. We believe that with this we can eliminate much of the complexity, many of the sensors and agents and boxes. Instead, they'll directly leverage the mechanisms in the infrastructure, and we're using that infrastructure to lock it down, to behave as we intended it to, to ensure good. On the user side with Workspace ONE, on the network side with NSX and micro-segmentation, on storage with native encryption, and on the compute side with AppDefense, we are building in security. We're not chasing threats or adding on, but radically reducing the attack surface. When we look at our applications in the data center, you see this collection of machines running inside of it, typically running on vSphere, and those machines are increasingly connected.
Through NSX. And last year we introduced the breakthrough security solution called AppDefense. AppDefense leverages the unique insight we get into the application, so that we can understand the application and map it into the infrastructure; you can then take that understanding, that manifest of its behavior, and lock those VMs down to that intended behavior. And we do that without the operational and performance burden of agents and other backward-looking approaches to attack detection. We're shrinking the attack surface, not chasing the latest attack vector. And this idea of bolt-on versus chasing bad, you sort of see it right in the network. Machines have lots of connectivity, lots of applications running, and when something bad happens, it basically has unfettered access to move horizontally through the data center. Most of our security is north-south; most of the attacks are east-west. We introduced this idea of micro-segmentation five years ago, and with it we're enabling organizations to segment networks and separate sensitive applications and services as never before. This idea isn't new, it just was never practical before NSX. But we're not standing still. Our teams are innovating to leap beyond, to what's next beyond micro-segmentation, and we see this in three simple words: learn, lock, and adapt. Imagine a system that can look into the applications and understand their behavior and how they should operate. We're using machine learning and AI, instead of chasing bad, to be able to ensure good, so that the system can then lock down the application's behavior and it consistently operates that way. But finally, we know we have a world of increasingly dynamic applications, and as we move to more containerized microservices, we know this world is changing, so we need to adapt: we need more automation to adapt to the current behavior. Today I'm very excited to have two major announcements that are delivering on this vision. The first of those is vSphere Platinum: our flagship VMware vSphere product now has AppDefense built right in. Yeah, go ahead, yeah, let's hear it. Platinum will enable virtualization teams to make an enormous contribution to the security profile of your enterprise. You can see what every VM is for, its purpose, its behavior, and tell the system: that's what it's allowed to do. Dramatically reducing the attack surface, without impact on operations or performance. The capability is so powerful, so profound, that we want you to be able to leverage it everywhere, and that's why we're building it directly into vSphere: vSphere Platinum. I call it the burger and fries. You know, nobody leaves the restaurant without the fries. Who would possibly run a VM in the future without turning security on? That's how we want this to work going forward: vSphere Platinum. And as powerful as micro-segmentation has been as an idea, we're taking the next step with what we call adaptive micro-segmentation. We are fusing together AppDefense and vSphere with NSX, to allow us to align the policies of the application through vSphere and the network. We can then lock down the network and the compute, and enable this automation of micro-segment formation. Taken together: adaptive micro-segmentation. But again, we don't want to just tell you about it, we want to show you. Please welcome to the stage Vijay, who heads our machine learning team for AppDefense. Vijay, very good. Thanks for joining us.
So, you know, I talked about this idea of being able to learn, lock, and adapt. Can you show it to us? Great. Yeah. Thank you. With vSphere Platinum, what we have done is put in everything you need to learn, lock, and adapt, right within the infrastructure. The next time you bring up your vSphere client, you'll actually see AppDefense right in there. Let's go to that demo. There you go. When you look at AppDefense there, what you see is that all your guest virtual machines and all your hosts, hundreds of them and thousands of virtual machines, are enabled for AppDefense. It's in there. And what that does is immediately get you visibility into the processes running on those virtual machines, and the risk. For the first time, think about it, for the first time you're looking at the infrastructure through the lens of an application. Here, for example, is the e-commerce application. You can see the components that make up that application and how they interact with each other: the specific process, a specific IP address, on a specific port. That's what you get. So we're learning the behavior? Yes. Yeah, that's very good. But how do you make sure you only learn good behavior? Exactly, how do we make sure that it's not bad? We actually verify and ensure it's all good. We ensure that every binary's reputation is verified. We ensure that the behavior is verified. Let's go to svchost, for example. This process can exhibit hundreds of behaviors across numerous hosts. What we do here is actually verify that behavior for you: machine learning models that have been trained on millions of instances of good and bad, as you said, automatically verify that for you. Okay, so we've learned. Simple. Now, lock: how does that work? Well, once you've learned the application, locking it is as simple as clicking on that verify-and-protect button, and then you can lock both the compute and the network, and it's done. So we've pushed those policies into NSX, micro-segmentation has been established, and we've actually locked down the compute. At the operating system level? Exactly. Let's first look at compute: we've protected the processes, and the behaviors are locked down to exactly what is allowed for that application. And we've taken those policies and programmed your firewall. This is NSX being configured automatically for you, with one single click. Very good. So we said learn, lock. Now, how does this adapt thing work? Well, change is the only constant: modern applications change on a continuous basis. What we do is actually pretty simple. We look at every change as it comes in and determine if it's good or bad. If it's good, we allow it and update the policies. If it's bad, we deny it. Let's look at an example with svchost. It's exhibiting a behavior that we've not seen during the learning period. Okay? So this machine has never behaved this way before. But again, our machine learning models have seen thousands of instances of this process. They know this is normal; it talks on port 389 all the time. So it's done a few things: it's lowered the criticality of the alarm. Okay, so false positives. Exactly, the bane of security operations, false positives. And it has gone and updated the locks on compute and network to allow for that behavior, and the application continues to work. Okay, so we can learn and adapt and act right through the compute and the network. What about the client?
Well, with Workspace ONE Intelligence we protect and manage the end-user endpoint, but Workspace ONE Intelligence and NSX actually work together to protect your entire data center infrastructure. But don't believe me; you can watch it for yourself tomorrow in Tom Corn's keynote. You want to be there, at 1:00 PM. Be there or be nowhere. I love it. Thank you, Vijay. Great job. Thank you so much. So this idea of intrinsic security and ensuring good, we believe, will fundamentally change how security is delivered in the enterprise in the future, and change the entire security industry. We've covered a lot today. I'm thrilled, as I stand on this stage, to stand before this community that truly has been at the center of changing the world of technology over the last couple of decades in IT. We've talked about this idea of the superpowers of technology, and how they accelerate the huge demand for what you do. In the same way that together we created this idea of the virtual infrastructure admin, think about all the jobs that we are spawning in the discussion that we had today, the new skills, the new opportunities for each one of us in this room today: quantum programmer, machine learning engineer, IoT and edge expert. We're on the cusp of so many new capabilities, and we need you and your skills to do that: the skills that you possess, the ability that you have to work across these silos of technology and enable tomorrow. I'll tell you, I am now 38 years in the industry and I've never been more excited, because together we have the opportunity to build on the things that collectively we have done over the last four decades and truly have a positive global impact. These are hard problems, but I believe together we can successfully extend the lifespan of every human being. I believe together we can eradicate the chronic diseases that have plagued mankind for centuries. I believe we can lift the remaining 10 percent of humanity out of extreme poverty. I believe that we can reskill every worker in the age of the superpowers. I believe that we can give a modern education to every child on the planet, even in the poorest of slums. I believe that together we can reverse the impact of climate change. I believe that together we have the opportunity to make these a reality. I believe this is only possible together with you. I ask you: please have a wonderful VMworld. Thanks for listening. Happy 20th birthday. Have a great time.
Kalyan Ramanathan, SumoLogic | AWS re:Invent
>> Narrator: Live from Las Vegas, it's the CUBE. Covering AWS re:Invent 2017, presented by AWS, Intel, and our ecosystem of partners. (the CUBE theme music) >> Hey, welcome back everyone. Here live in Las Vegas, the CUBE's coverage of Amazon re:Invent. It's 45,000 people, lots of action. Again, three days of wall-to-wall coverage. This is day two, trying not to lose my voice. I'm here with Justin Warren, my cohost this week, along with Stu Miniman, Keith Townsend, and a variety of other great hosts for the CUBE, doing our share to get that data to you. Our next guest is Kalyan Ramanathan, who's the vice president of product marketing at SumoLogic, and also an author, with a group of people from SumoLogic, of a great report they have out called Modern Applications in the Cloud, and he took some time away from his meetings to come on the CUBE to talk about it. Because we've been riffing on: what is a modern application? What is a modern cloud? Justin and I were talking about this renaissance in software development. Obviously, the cloud wars are happening. The water's being pulled out, that tsunami's coming. It's changing the face of startups, IT, and developers at the heart of the action, a new cultural renaissance. Welcome to the CUBE. >> Thank you very much. >> So, a little editorializing there, an opining. But we believe that we are seeing a sea change, a renaissance in software. Because of the things that are now possible, the creativity, the power of developers, the end-to-end visibility into services; it's just like putting a PowerPoint slide together, or LEGO blocks. It's just like, it's so easy... not. But I mean, it could be easy, it's easier. >> Kalyan: Absolutely. >> So modern applications are top of mind, and everyone wants to be modern. They wanna be hip, they wanna be cool. But there's some serious work getting done right now in the cloud. And there's a shift of greatness coming. What does your report show? Because we wanna dig into it. What the hell is a modern application? Is Oracle a modern application? Do I buy Watson at IBM? I see that on TV a lot. What is a modern application? >> Yeah, let me, thank you John. So let me start with a quick introduction to SumoLogic, so that I can set some context for this modern application report. SumoLogic is a cloud-native machine data analytics service, and what we do is help our customers manage the operations and security of their mission-critical applications. The end goal for our customers is that they can deliver an application with a very good security posture and with an exceptionally good customer experience. Now, we've been in AWS for about seven years. We have about 1,600 customers under management today. So what we've been able to do in this modern application report is fundamentally mine data from our customers, in a very anonymized way, and give insight into what typically makes up a modern application in the cloud. And when we talk about a modern application, we typically see three characteristics to these modern applications. First and foremost, many of these applications are indeed architected, or perhaps I should say even re-architected, in public cloud environments like AWS or Azure or Google Cloud Platform. Secondly, many of these applications are built using DevOps and Agile-style practices, so the rate and speed of change in these applications is completely off the charts.
The third thing that we are starting to see a lot more of is that many of these applications are built using microservices-style technology, so it's very easy to compose these applications. You can put them together very easily, and you can change them a lot. So that's our typical definition of a modern application. >> Okay, well, we heard Andrew Jassy, I think, one or two days ago, talking about how, if he started AWS again from scratch, today he would be using serverless. He wouldn't be deploying virtual machines, he wouldn't actually be using a lot of the AWS services that we have today. So what are you seeing in the momentum for how developers are using the different types of stack? We're seeing a lot of growth in NoSQL, we're seeing a lot of growth in serverless functions. If I were starting a modern application today, what would my stack look like? >> Yeah, I mean, that's at the heart of the report that we put together, right? The report actually provides an end-to-end application stack, starting all the way from the infrastructure layer up to the applications, and even perhaps the management and the security technologies that you may need to manage these modern applications well. So let's start off with the infrastructure layer. What SumoLogic has identified, again by anonymously mining our customer data, is that on the infrastructure side, Linux rules. As an operating system, it goes without saying, Linux is the dominant operating system in AWS, and that is to be understood. But here's the other interesting data point: Linux is also getting a significant foothold in the Azure world. And that is not common knowledge today, right? I mean, you would expect that Windows is ruling the Azure world, but we are actually starting to see dramatic year-over-year growth in Linux within the Azure world. Now, let's move up the stack. Let's go from the host and the operating system to the container world. What we are starting to see is dramatic growth in container adoption within AWS. Last year, when we put out the first version of this report, we saw that 18% of our customers were using Docker within AWS. This year, we are seeing that one in four customers is actually using Docker within their environment.
In terms of the entire application stack, again, Azure has, while they are behind AWS in terms of the number of services, the richness of the services, we are starting to see them catch up in a very significant way. >> All right, here's a Here's a pointed question for you, it's a tough question, okay? Maybe tough to answer, maybe you know the answer. A lot of people will try to fake it until they make it. And you've heard that term around. You really can't fake being a modern application, so what do you see as ones that aren't making it, in terms of architecture and stacks? Maybe it's Legacy trying to bolt on a little bit of glam front end, Javascript, or Node. Where's the failure, or having one relational database, maybe Oracle and trying to blend that in? Is there a formula that you see that's not working? >> You know, I think the act of just putting on a shim around a Legacy technology and calling that modern, I think what we are starting to see more and more of, is that that can take you so far, but only so far, right? The underlying infrastructure technologies of today, especially containers and you guys heard Andy Jassy talk about Kubernetes today at his keynote. There are such technology advances that are so core to the architecture of the modern app that if you choose not to implement them and if you just put, in some sense, a lipstick on a pig and a tiny little shim on top of a Legacy application, >> Sprinkle a little bit of glitter on things, yeah. >> You're, can you get away with it for a year or so? Absolutely, but then you're talking about, you know, dealing with extreme scalability, high elasticity, security of the kind that is needed for most enterprises. That's where the Legacy technology and just a sprinkling of dust, as you described it, is going to fall apart. >> I love the top two data, two of the three top datas are NoSQL. Interesting you got MySQL, Redis, Mongo and PostgreSQL, and then Cassandra and then Redshift. Redis, really kicking ass at number two. >> Kalyan: Absolutely. That's surprising. I always loved Redis but that's moving up. That's ahead of Mongo. >> Yeah, absolutely. I mean, Redis has a huge following. It's a in-memory database, as you know. It also has a lot of shades of NoSQL. >> John: It's flexible. >> It's very flexible, absolutely. So I mean, the interesting data point in the database analysis that we did was that in the cloud world, NoSQL and SQL are pretty much head-to-head, right? So, I mean the way we think about it is, when you are re-architecting your applications to the cloud, it really gives you the opportunity to step back and say, what do I do with my data store? Does it have to be the Oracle of the past? Can I re-architect it for something that's more optimized for what I'm trying to do now? And that's where, I think NoSQL has really caught on. >> We, you know Justin, we were talking yesterday, and then Andy's keynote. I had one-on-one with him a week ago. It's good, some of my content made it into his keynote, because one of the things I've been banging on we talked about yesterday was, these modern databases, modern apps, could have multiple databases. And you, look at Redis, there's different use cases. DynamoDB is slow on lookups, I might wanna have a queue there. I might wanna tie it with Redis and a little bit of architectural shape. It's a whole new normal, it's not a one trick pony. >> Yeah, and Redis is really popular in the Kubernetes community, I know. 
So as we see Kubernetes growing, then I expect that the Redis growth will also follow that. >> The question is, this is what I've put, and he put inside his keynote was, the new modern app can have multiple databases. This is gonna have a huge impact. How does that impact this report? What do you see, because now it kinda changes the game? It's not one, I can't just throw MySQL at it, or Mongo. Used to be the old days, LAMP stack and say, okay, Mongo's awesome, I'm gonna build my app, but now I gotta integrate it with another app. >> Yeah, no, absolutely. I mean, we're seeing heterogeneity across the board, right? And that is part of the goal of a report like this, too. Right, I mean, we put this report out mostly focused on cloud architects, DevOps engineers, SRE engineers who are rethinking what it takes to run an application in the cloud, may it be AWS, Azure, et cetera. And we wanted to provide them a roadmap of what are their peers doing in this world. >> Well, we really appreciate you and SumoLogic doing a report. New Relic has one. We love these kind of reports and when they're this good, we like to talk about them. I know you're being really nice and you don't wanna lose customers by pissing off other cloud guys, because you're in Switzerland, you play with all of them. But there's really some interesting data here that points to who's leading and who's not, and then the stacks do matter. The developers are influencing IT decisions now. So knowing the stack, knowing your stack, what works for developers, super important. We're gonna keep track of it. We'll certainly invite you into our powwow out at the studios to do some check-ins on the report. Maybe do a deeper dive, appreciate it. >> Yeah, and all I'll say is this report is available on our website. It's, you know, you don't have to register, you get it. >> John: Free. Yeah, it's free. >> They don't even ask for an email address, which is great. (laughter) So thanks so much for SumoLogic. Thanks for coming on the CUBE and breaking down the report. More live coverage here from Las Vegas, from Amazon re:Invent, I'm John Furrier with Justin Warren. We'll be right back with more after this short break. (the CUBE theme music)
Karsten Ronner, Swarm64 | Super Computing 2017
>> Announcer: In Denver, Colorado, it's theCUBE, covering SuperComputing '17, brought to you by Intel. >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the SuperComputing 2017 conference. I think there's 12,000 people. It's our first time being here, and it's pretty amazing. A lot of academics, a lot of conversations about space and genomes and, you know, heavy-lifting computing stuff. It's fun to be here, and we're really excited. Our next guest is Karsten Ronner. He's the CEO of Swarm64. So Karsten, great to see you. >> Yeah, thank you very much for this opportunity. >> Absolutely. So for people that aren't familiar with Swarm64, give us kind of the quick high-level. >> Yeah. Well, in a nutshell, Swarm64 is accelerating relational databases, and we allow them to ingest data so much faster, 50 times faster than a relational database alone. And we can also then query that data 10, 20 times faster than a relational database. And that is very important for many new applications in IoT and in netbanking and in finance, and so on. >> So you're in a good space. So beyond just general or better performance, faster, faster, faster, we're seeing all these movements now in real-time analytics and real-time applications, which is only going to get crazier with IoT and the Internet of Things. So how do you do this? Where do you do this? What are some of the examples you could share with us? >> Yeah, so our solution is a combination of a software wrapper that attaches our solution to existing databases, and inside there's an FPGA from Intel, the Arria 10. And we are combining both, such that they actually plug into standard interfaces of existing databases, like Foreign Data Wrappers in PostgreSQL, the storage engine in MySQL and MariaDB, and so on. And with that mechanism, we ensure that the database, the application, doesn't see us. To the application, there's just a fast database, but we're invisible, and the functionality of the database also remains what it was. That's the net of what we're doing. >> So that's so important, because we talked a little bit offline, and you said you had a banking customer that said they have every database that's ever been created. They've been buying them all along, so they've got embedded systems; you can't just rip and replace. You have to work with existing infrastructure. At the same time, they want to go faster. >> Yeah, absolutely right. Absolutely right. And there's a huge code base which has been verified, which has been debugged, and in banking it's also about compliance, so you can't just rip out your old code base and do something new, because then, again, you would have to go through compliance. Therefore, customers really, really, really want their existing databases to be faster. >> Right. Now the other interesting part, and we've talked to some of the other Intel execs, is this hybrid combination of the hardware-software solution in the FPGA, and you're really opening up an ecosystem for people to build more software-based solutions that leverage that combination of hardware and software power. Where do you see that kind of evolving? How's that going to help your company? >> Yeah. We are a little bit unique in that we are hiding that FPGA from the user, and we're not exposing it. Many people, actually many applications, expose it to the user, but apart from that, we are benefiting a lot from what Intel is doing.
Intel is providing the entire environment, including virtualization, all those things that help us then to be able to get into cloud service providers or into proprietary virtualized environments and things like that. So it is really a very close cooperation with Intel that helps us and enables us to do what we're doing. >> Okay. And I'm curious, because you spend a lot of time with customers, you said a lot of legacy customers. As they see the challenges of this new real-time environment, what are some of their concerns, and what are some of the things they're excited they can do now with real-time, versus batch and the data lake? And I think it's always funny, right? We used to make decisions based on stuff that happened in the past. And now we're really querying for the desire to take action on stuff that's happening now; it's a fundamentally different way to address a problem. >> Yeah, absolutely. And a very, very key element of our solution is that we can not only insert these very, very large amounts of data, which other solutions can do too, massively parallel solutions, streaming solutions, you know them all; they can do that too. However, the difference is that we can make that data available within less than 10 microseconds. >> Jeff: 10 microseconds? >> So a dataset arrives, and within less than 10 microseconds that dataset is part of the next query, and that is a game changer. That allows you to do control-loop processing of data in machine-to-machine environments, and for autonomous applications, and all those solutions where you just can't wait. If your car is driving down the street, you'd better know what has happened, right? And you can react to it. As an example, it could be a robot in a plant or things like that, where you really want to react immediately. >> I'm curious as to the kind of value unlocking that that provides to those old applications that were working with what they think is an old database. Now, you said you're accelerating it, and to the application it looks just the same as it looked before. How does that change the performance of those applications? I would imagine there's a whole other layer of value unlocking in those entrenched applications with this vast data. >> Yeah. That is actually true, and on a business level, the applications enable customers to do things they were not capable of doing before. Look, for example, at finance. If you can analyze the market data much quicker, if you can analyze past trades much quicker, then obviously you're generating value for the firm, because you can react to market trends more accurately, you can mirror them in a much tighter fashion, and if you can do that, then you can reduce the margin of error in estimating what's happening, and all of that is money. It's really pure money in the bank account of the customer, so to speak. >> Right. And the other big trend we talked about, besides faster, is sampling versus not sampling. In the old days, we sampled old data and made decisions. Now we don't want to sample, we want all of the data, we want to make decisions on all the data, so again that's opening up another level of application performance, because it's all the data, not a sample. >> For sure. Because before, you were aggregating. When you aggregate, you reduce the amount of information available. Now, of course, when you have the full set of information available, your decision-making is just so much smarter. And that's what we're enabling.
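The access pattern Karsten describes, where a freshly ingested row is immediately visible to the next query, looks roughly like the sketch below. This is only an illustration of the pattern against a plain PostgreSQL connection; the table, threshold, and control action are invented, and the microsecond-level latency he quotes comes from the accelerator, not from this code.

```python
import psycopg2

conn = psycopg2.connect("dbname=telemetry user=app host=localhost")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

def ingest_and_react(turbine_id, rpm, vibration):
    """Control-loop pattern: write the newest reading, then query including it."""
    cur.execute(
        "INSERT INTO turbine_readings (turbine_id, ts, rpm, vibration) "
        "VALUES (%s, now(), %s, %s)",
        (turbine_id, rpm, vibration),
    )
    # The freshly inserted row is already visible to this query.
    cur.execute(
        "SELECT avg(vibration) FROM turbine_readings "
        "WHERE turbine_id = %s AND ts > now() - interval '10 seconds'",
        (turbine_id,),
    )
    (avg_vibration,) = cur.fetchone()
    if avg_vibration is not None and avg_vibration > 0.8:   # invented threshold
        return "feather_blades"   # hypothetical control action
    return "ok"
```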
>> And it's funny, because in finance, which you mentioned a couple of times, they've been doing that forever, right? The value of a few units of time, however small, is tremendous. But now we're seeing it in other industries as well, industries that realize the value of real-time, aggregated, streaming data versus a sampling of old data. It really opens up new types of opportunities. >> Absolutely, yes, yes. Yeah, finance, as I mentioned, is an example, but then also IoT, machine-to-machine communication, everything which is real-time: logging, data logging, security and network monitoring. If you want to really understand what's flowing through your network, is there anything malicious, is there any actor on my network that should not be there? You want to react so quickly that you can prevent that bad actor from doing anything to your data; this is where we come in. >> Right. And security's so big, right? It's everywhere, especially with IoT and machine learning. >> Absolutely. >> All right, Karsten, I'm going to put you on the spot. So we're in November 2017, hard to believe. As you look forward to 2018, what are some of your priorities? If we're standing here next year, at SuperComputing 2018, what are we going to be talking about? >> Okay, what we're going to talk about really is this: right now we're accelerating single-server solutions, and we are working very, very hard on massively parallel systems, while retaining the real-time component. So we will not only accelerate a single server; by then, by allowing horizontal scaling, we will bring a completely new level of analytics performance to customers. So that's what I'm happy to talk to you about next year. >> All right, we'll see you next year, I think it's in Texas. >> Wonderful, yeah, great. >> So thanks for stopping by. >> Thank you. >> He's Karsten, I'm Jeff. You're watching theCUBE, from SuperComputing 2017. Thanks for watching.
Lenley Hensarling & Marc Linster, EnterpriseDB - #IBMEdge
>> Announcer: Live from Las Vegas! It's theCUBE. Covering Edge 2016. Brought to you by IBM. Here's your host, Dave Vellante. >> Welcome back to IBM Edge everybody. This is theCUBE's fifth year covering IBM Edge. We were at the inaugural Edge five years ago in Orlando. Marc Linster is here, and he's joined by Lenley Hensarling. Marc is the Senior Vice President of Product Development, and Lenley is the Senior Vice President of Product Management and Strategy at EDB, EnterpriseDB. Gentlemen, welcome to theCUBE. Thanks for coming on. >> Male Voice: Thank you. >> Okay, who wants to start? EnterpriseDB, tell us about the company and what you guys are all about. >> Well, the company has been around for a little over 10 years now. And our job is really to give companies the ability to use Postgres as the platform for their digital business. So think about this: Postgres is a great open source database, with great capabilities for transactional management of data, but also multi-model data management. So think about standard SQL data, but think also about document-oriented data, think about key-value pairs, think about GIS. So it's a great capability that is very, very robust, has been around for quite a few years, and is really ready to allow companies to build on it for the new digital business, but also to migrate off their existing commercial databases that are too expensive. >> What's the history of Postgres? Can you sort of educate me on that? >> It has sort of the same roots as System R, where DB2 came from and Oracle came from. Berkeley, that's where the whole thing started out. Postgres is really the successor to Ingres. >> Dave: Umhmm. >> And then it turned into PostgreSQL. And it has been licensed under an open source license, the Postgres license, since 1996. It's a very, very vibrant open source community that has been driving it forward for many years now. And in our view it's the best available relational and multi-model database today. >> It's the mainspring of relational database management systems, essentially, >> Marc: Yeah. >> is what you're saying. And Lenley, from a product standpoint, how do you productize that, open source? >> Well, with open source, think of companies that have a distribution of open source for a database or an operating system; the open source company most people are acquainted with is Red Hat, with Linux, right? And so we do the same thing that they do, but for the Postgres database. We take the distribution, we add testing, we add some other functionality around it so you can run Postgres responsibly, as Marc likes to say. So high availability capability, failover management, replication, a backup solution. And instead of leaving it as an exercise for the customer who wants to use open source, we test all this together. And then we validate it and we give them a complete package, with documentation and services that they can access, to help them be successful with it. >> So if Michael Stonebraker were sitting right here, I'd say, Michael, what do you think about Postgres? And he'd say, I had to start Vertica because we needed a new way. Yet PostgreSQL sort of remains the killer platform in the industry, doesn't it? >> Male Voice: Umhmm. >> Why is that? It's interesting when you talk to guys like Stonebraker, it's sort of dogma almost. But yet, customers talk with their wallets. >> And it is, >> He did a very, very nice job of architecting it. It is a database that is extensible.
The reason we had the first JSONB, or document-oriented, implementation in the relational database space is because it was designed to make it easy to add new capabilities, new datatypes, new indexes, et cetera, into the same transactional model. That's why we have JSONB. That's why we have PostGIS. That's why we have key-value pairs. So it was really well architected. And when you think about who else has taken this engine, not just Vertica, >> Dave: Yeah. >> it is in Netezza, it is in a bunch of others. >> Dave: Master Data. >> Lenley: Greenplum. >> Greenplum, yes. So it's a really robust architecture. Very, very nicely designed. It just does the job, and it does it really well. Which is what you want a database to do, right? It's not that exciting, but it's really stable. It really works. The data is still there tomorrow. That's what the requirements really are. >> And to translate a little bit, Marc mentioned PostGIS, which is the geospatial capability for the Postgres database. We distribute that along with Postgres and test it, so that you know it works. And he mentioned HStore, which is how you can actually store internet of things data really well in Postgres. And we talk about SQL and NoSQL databases, so there are document databases; the ability to have personalization at the same level you can in a document-oriented database, but in a structured SQL database, those are the kinds of things that have been added to Postgres over the years. Again, it's because of the basic architecture that Stonebraker put in place as an object-relational database. >> It's so interesting to look at the history of database. Talk about Stonebraker, he's been on a number of times. It's just fascinating to listen to one of the fathers of this industry. But 10 years ago, database was such a boring topic. And now it's exploded. Now you got Amazon going after Oracle, Oracle fighting the good fight, so many NoSQL databases coming in, SQL becoming the killer big data app, if you will. >> Male Voice: Umhmm. >> Why all of a sudden did database get so interesting? >> What happened was, application models changed, led by Facebook, led by Amazon and Google. They said, let's refactor the applications and let's refactor the way we handle storage. >> Dave: Umhmm. >> And that led to the rise of the polyglot of databases, as a lot of people are saying. You have fit-for-purpose solutions, and you may have three or four or five of them in your overall architecture. One thing about Postgres is, we're able to fit into that well, because of the datatype support that Marc mentioned. We don't try and do everything. So if somebody says, I'm going to use Mongo for data capture, or I'm going to use Cassandra for capturing my internet of things data, we have what we call foreign data wrappers in the Postgres world. We call them just EnterpriseDB adapters, but they go to Mongo, to Cassandra, to Hadoop, and they can move data bidirectionally and just keep that data at rest over there in the other world, but be able to project a relational schema onto it. We can push our data into those stores as well. We've got a great use case we've been talking about, with a customer who had over a petabyte of data. In the past what you'd do is go buy an expensive archiving solution and add that to it. Now, you just use the Hadoop distributed file system, push the data off there as it ages, and have a foreign data wrapper that allows you to still query that data when it's out of your basic operational dataset. And move forward. >> Can I call that a connector, or?
>> Lenley: Yeah, a connector, that's not a bad idea. >> And it's interesting, because if you guys remember Hadapt, probably. [Male Voices] Yeah. Yes. >> They came out, they were the connector killer. >> Male Voice: Umhmm. >> And it failed. >> Male Voice: Yeah. >> Seems like connectors are just fine. >> Male Voice: Yeah. >> And one of the really interesting things is, we call it data federation, right? The philosophy here is, leave the data where it is. There is some data that should live in Hadoop or Cassandra. If I'm doing an e-commerce site with transactions and click streams, well, the click streams really should live in Hadoop. That's the natural place for them. The transactions should be in a transactional database. With the foreign data wrapper, I can run queries, without moving the data, that allow me to say: well, before you bought the brown teddy bear, which pages did you look at? >> Dave: Yeah. >> And I can do that as an integrated system, and I can do a fit-for-purpose architecture. And that's what we think is really exciting. >> And that's fundamental to this new sort of programming, or application, model. >> Male Voice: That's right. >> The one that you were talking about is moving five megabytes of code to a petabyte of data, as opposed to moving data, which we know has gravity and speed-of-light issues and so forth. >> Thank you for that brief little education. Appreciate it. So let's get into your business now, your relationship with IBM, what customers are doing. You mentioned IoT data, so talk more about your business and your relationship with IBM, and what you guys are doing for customers. >> There are a couple of things. We mentioned Oracle. And there are all the new databases. And then there are your, dare we say, legacy, proprietary databases as well. And people are looking to become more efficient in how they spend. We've done another thing with Postgres: we've added Oracle compatibility in terms of datatypes. So we support all the datatypes that Oracle does. And we support PL/SQL, their sort of variant of a stored procedure language, and we've implemented a lot of the packages that they have as well. So we can migrate workloads from Oracle over to an open source based solution, and give a lot of cost-effectiveness options to customers. >> Dave: A steal. This is a way that I can sort of get Oracle database license and maintenance avoidance. >> Lenley: Yes. Yeah. >> Where possible, right. >> Where it makes sense. Where it makes sense. >> I keep coming back to this, but let's face it, the number one cost component of a TCO analysis for an Oracle customer is the database license and maintenance cost. >> Male Voice: That's right. >> It's not the people. One of the few examples I can think of where that's the case. There's always the people cost. [Male Voice] That's right, that's right. IT is very labor intensive. But for an Oracle customer, it's the database license, 'cause they license by core. >> Male Voice: Yup. Cores are going through the roof. >> Male Voice: That's right. It's been great for Oracle's business. Although, wouldn't you agree, Oracle sees the writing on the wall, that SaaS is really sort of the new control point for the industry. You see the acquisition of NetSuite and competition with Workday >> Male Voice: Yup. >> and the like. >> But the database remains the heart of the business. >> And really it's the movement to the cloud, both private cloud and public cloud. And so we've been doing work there.
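The foreign data wrapper, or adapter, pattern discussed above follows the standard SQL/MED DDL that Postgres exposes. Below is a hedged sketch of wiring up an external Hadoop store; hdfs_fdw is one of the EDB-maintained wrappers, but the server options, credentials, and remote table layout shown here are assumptions that will differ by wrapper and by cluster.

```python
import psycopg2

conn = psycopg2.connect("dbname=analytics user=app host=localhost")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

# Standard SQL/MED pattern: extension -> server -> user mapping -> foreign table.
cur.execute("CREATE EXTENSION IF NOT EXISTS hdfs_fdw")
cur.execute("""
    CREATE SERVER IF NOT EXISTS hadoop_archive
        FOREIGN DATA WRAPPER hdfs_fdw
        OPTIONS (host 'hive.example.com', port '10000')   -- assumed options
""")
cur.execute("""
    CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER SERVER hadoop_archive
        OPTIONS (username 'app')
""")
cur.execute("""
    CREATE FOREIGN TABLE IF NOT EXISTS clicks_archive (
        ts      timestamptz,
        user_id bigint,
        url     text
    ) SERVER hadoop_archive OPTIONS (dbname 'default', table_name 'clicks')
""")

# Aged click-stream data stays in HDFS but is still queryable with plain SQL,
# alongside the operational tables kept in Postgres.
cur.execute("SELECT count(*) FROM clicks_archive WHERE ts >= now() - interval '7 days'")
print(cur.fetchone())
```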
We've had a public cloud database-as-a-service solution on Amazon for, what, [Marc] Four years. >> Four years, Marc. And have gained a lot of experience with that. And we were running that sort of as a retail model: you can license the database and we'll provision it there. And so what we've done recently is change our perspective and said, let's put this into the hands of customers. And let them stand up their own database as a service. But also do it in a way that they can choose what workload should go to Amazon and what workload might go to their private cloud, built on OpenStack. And be able to arbitrage that, if you will. Because they now have a way to provision the databases and make a choice about where to put it. >> So that's a bring your own license model that you just talked about? >> Bring your own license model or >> Are you in the Marketplace and, >> We're in the Marketplace in Amazon, where we can supply it that way. But customers have shown a preference for bring your own license. They want to make the best enterprise deal they can with a vendor like us or whomever else. And then have control over it. >> Amazon obviously wants you to be in the Marketplace. I won't even mention who, but I talked to some CEOs of database companies and they say, you know, we're in the Marketplace, but we get in the Marketplace and next thing you know, Amazon is pushing them towards DynamoDB or, you know. >> Male Voice: That's right, that's right. >> Now Amazon's come out with Aurora and Oracle migration and, you know, the intent to go after that business. Amazon's moving up the stack and you've got to be careful. >> They are. But the thing about Amazon is that they're a pure-play cloud company. >> Dave: Yup. >> And all of the data shows that it's like a mix, it's going to be a hybrid cloud. Half the companies in this world [Dave] Not Andy Jassy's data >> Eighty percent of the people in the cloud are going to be on-prem, still continuing their journey through virtualization. >> Dave: Yeah, that's right. >> Let alone going to the cloud. But we want to be something that lets them put what they want in the public cloud and lets them manage on the private cloud in the same manner. So they can provision databases with a few clicks. Just like they do on Amazon. But do it in their data center. >> You doing that with SoftLayer as well or not yet? >> Lenley: Not yet. >> Marc: Not yet. >> We've built this provisioning capability ourselves. And it came out of the work we did putting up databases on Amazon. >> So what are you guys doing here at Edge? Edge is kind of an infrastructure show. Database is infrastructure. >> We're talking about our work with Power. >> Power is a big partner for us. Power is, I think, very, very interesting for our database customers. Because of the much higher clock speeds and the capabilities that the Power processor has. When I'm looking at Power, I get more oomph out of a single core, which really, for a database customer, is very, very interesting. Because all databases are licensed by core. >> Dave: Right. >> So it's a much better deal for the customer. And specifically for Postgres, Postgres scales very well with higher clock speeds. So by growing performance not by adding more cores but by making the individual cores faster, that plays very, very well to the Postgres capabilities. >> Okay, so you are a Power partner, part of that ecosystem that IBM is appealing to, to grow the OpenPOWER base.
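As a rough illustration of the per-core licensing point just made: if each core delivers more throughput, the same workload fits on fewer licensed cores, and the license bill shrinks accordingly. The throughput and price figures below are invented purely for illustration; they are not benchmarks, IBM numbers, or actual license prices.

```python
# Back-of-the-envelope illustration only: throughput and price figures are
# invented, not benchmarks or actual license prices.
import math

def cores_needed(target_tps: float, tps_per_core: float) -> int:
    """Whole number of cores required to hit a throughput target."""
    return math.ceil(target_tps / tps_per_core)

TARGET_TPS = 200_000        # assumed workload requirement
LICENSE_PER_CORE = 5_000    # assumed annual cost per licensed core

for label, tps_per_core in [("baseline cores", 10_000), ("faster cores", 16_000)]:
    cores = cores_needed(TARGET_TPS, tps_per_core)
    print(f"{label}: {cores} cores -> ${cores * LICENSE_PER_CORE:,}/year")
# baseline cores: 20 cores -> $100,000/year
# faster cores: 13 cores -> $65,000/year
```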
And what kind of workloads are you seeing your customers demand and where are you having success? >> Across the board. Database is mostly an infrastructure capability, so there's a lot of interest that we're seeing, for all kinds of applications really. >> What's the typical Power customer look like these days? You've got some Oracle, you've got some DB2, you guys are running on there, what's the mix? Paint the picture for us. >> I think the typical Power customer is the typical enterprise company. And, [Dave] Little bit of everything. >> It's a little bit of everything. But one of the key things is that people are also looking at what they've got and the skills they have in place. You were talking about people cost, right. [Dave] Yeah. >> And their understanding of management. Their understanding of how to manage the relationship with the vendor, even. And then saying, look, how can I move into the new world of digital transformation and start my own private cloud options and things like that in an efficient way. That makes efficient use of the hardware I have in place and has a growth curve, and new hardware that's coming out that fits my workloads. >> Dave: Umhmm. >> And the profiles that Marc was talking about. >> And also the resources. Which is very interesting when we look at these new digital applications with Postgres. Because you can do so much in Postgres, from geographic information systems to document-oriented to key-value. But you can do that with your existing developers, your existing DBAs. They don't need to go to school to learn a new database. And that's also a very, very interesting capability. So you can use your existing team to do new stuff. [Male Voice] Yup. >> What's happening in IoT, what problems are you solving there and where's the limit? >> Sensor data collection. >> Lenley: Yeah. Really interesting, because sensor data tends to come in all different forms. We have a customer who collects temperature sensor data. But the sensors are all sending different data packets. So because we can do document-oriented or key-value, we can easily accommodate that. In the old days with the relational model, I had to do all kinds of tricks to sort of stuff all that into a relational table. My table would be almost empty at the end, because I'd have to add columns for every vendor, et cetera. Here, now I can put all that into the same format and provide it for analysis. So that's a really interesting capability. >> And it's interesting too because we've got really strong geospatial data support. And the intersection of that with IoT is a big deal. They track your iPhone, they know where we are. They know what's going on. That's sensor data. They know which lights in which building, which, you know, louvers that are controlling HVAC are malfunctioning or not. They want to know specifically where it is, not just what the sensor is. And some of that stuff moves around. And it gets replaced in a new place in the building and such. So we're well set up to handle those types of workloads. >> What's interesting, when IBM bought The Weather Company, [Lenley] Yeah. >> And they thought okay great, they're getting all these data scientists and weather data, that's cool. They can monetize that, but it's an IoT play, isn't it? [Male Voice] Right. Right. >> Talk about sensors. >> It's reference data. It's reference data for other companies' specific IoT plays. To have a broader set of sensors out there in their region and understand what's happening with weather and things.
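A minimal sketch of the sensor pattern just described: heterogeneous vendor packets land in a single JSONB column instead of sparse per-vendor columns, and a PostGIS point records where each sensor sits, so location-aware queries work against the same table. It assumes Postgres with the PostGIS extension installed; the sensor names, packet shapes, and coordinates are illustrative.

```python
# A minimal sketch, assuming Postgres with the PostGIS extension installed;
# sensor names, packet shapes, and coordinates are illustrative.
import json
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            sensor_id text,
            reading   jsonb,                   -- each vendor's packet, as sent
            location  geometry(Point, 4326),   -- where the sensor sits
            read_at   timestamptz DEFAULT now()
        )
    """)
    # Two vendors, two different packet shapes -- same table, no empty columns.
    for sensor_id, packet, lon, lat in [
        ("hvac-louver-12", {"vendor": "A", "temp_f": 71.2, "damper_pct": 40},
         -73.9857, 40.7484),
        ("thermo-07", {"vendor": "B", "celsius": 21.8, "battery_mv": 2990},
         -73.9855, 40.7486),
    ]:
        cur.execute(
            "INSERT INTO sensor_readings (sensor_id, reading, location) "
            "VALUES (%s, %s::jsonb, ST_SetSRID(ST_MakePoint(%s, %s), 4326))",
            (sensor_id, json.dumps(packet), lon, lat),
        )
    # Which sensors within ~50 meters of a point reported in the last hour?
    cur.execute("""
        SELECT sensor_id, reading
        FROM sensor_readings
        WHERE read_at > now() - interval '1 hour'
          AND ST_DWithin(location::geography,
                         ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, 50)
    """, (-73.9856, 40.7485))
    print(cur.fetchall())
conn.close()
```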
And then play that against what their experience is, managing a new building or manufacturing processes, everything. >> So what's the engagement model? I'm a customer, I want to do business with you. How do I do it, how do I engage? >> Well, a lot of our business is direct with us. Others through partners. And then a lot of customers come to us because they want to get off legacy systems. But really, what they do is, once they understand the database and the capabilities, they say, okay, yeah, you can do the Oracle stuff. But what I'm really going to do with you is my new things. Because that's really exciting and it helps me kind of put a lid on the commercial license growth. So maybe I'm not going to get off it, but I will stop growing it. So I will start doing my new stuff on Postgres. Whenever I modernize something, Postgres is going to be my database of choice. If I'm already opening up an application and its whole stack, this is one of the changes I'm going to make. And then the database as a service is very, very interesting. So those are the four entry vectors, and what happens is, quite a few customers, a short time after they've started with a project or application, end up making Postgres one of their database standards. Not the only one. But they make it one of the database standards, so it gets into the catalog and every new project then has to consider Postgres. >> It's interesting, there's a space created as Microsoft sort of put all their wood behind the arrow of becoming a competitor to high-end Oracle. And with this last release, they probably are up there, arguably. But they've also raised their prices too. And they've made the solution more complex. So there's this space that was vacated for like a ton of workloads, and Postgres fits in there just about perfectly. We see enterprise after enterprise come to us with a sheet that says, now we're going to get some of this NoSQL stuff. We're going to keep Oracle or DB2 over here for these really high-end things. Run my financials, run my sales order processing, my manufacturing. And then we've got this space in here. We've got a slot for a relational database and we want to go open source. Because of the cost savings. Because of other factors. Its ability to grow and not be bound to, hey, what if the vendor decides they're going to go for a new, cooler thing and make me upgrade. >> Dave: Right. >> And I want to stay there and know that there's still an investment being made. And so there's a vibrant community around it. And it just fits that slot perfectly. >> You've got to pay for that digital transformation and all these IoT initiatives. You can't just keep pouring money [Male Voice] Somehow. >> into database licenses. [Male Voice] That's right. >> Alright, we have to leave it there. >> Thanks very much >> Male Voice: Alright. >> for coming to theCUBE. >> Thanks so much. >> We appreciate the time. You're welcome. [Male Voice] Enjoy it. Keep it right there, buddy. We'll be right back with our next guest. This is theCUBE. We're live from IBM Edge 2016, be right back. (upbeat music)