Juan Carlos Garcia, Telefónica & Ihab Tarazi, Dell Technologies | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) (logo background tingles) >> Hey everyone, it's so good to see you, welcome back to theCube's day two coverage of MWC 23. We are live in Barcelona, Lisa Martin with Dave Nicholson, Dave we have had no signs of people dropping out, this conference is absolutely jam packed. There's so much interest in the industry, you've had a lot of interviews this morning, before we introduce our guests and have a great conversation about the industry and challenges and how they're being solved, what are some of the things that stuck out to you in conversations today? >> Well, I think the interesting, kind of umbrella conversation, that seems to be overlapping you know, overlying everything is this question about Open RAN and open standards in radio access network technology and where the operators of networks and the providers of technology come together to chart a better path forward. A lot of discussion of private 5G networks, it's very interesting, I think I've said this a few times, from a consumer's perspective, we feel like 5G has been with us for a long time- >> We do. >> But it's very clear that this, that we're really at the beginning stages of this and I'm super excited for our guests that we have here because we're going to be able to talk to an actual operator- >> Yes. >> And hear what they have to say, we've heard a lot of people talking about the cool stuff they build, but we're going to get to hear from someone who actually works with this stuff, so- >> Who actually built it, absolutely. Please welcome our two guests, we have Ihab Tarazi, CTO and SVP at Dell Technologies, and Juan Carlos Garcia, SVP Technology Innovation and Ecosystems at Telefónica, it's great to have you guys on the program. >> So, thank you very much. >> So the buzz around this conference is incredible, 80,000 plus people, 2000 exhibitors, it's standing room only. Lot of opportunity in the industry, a lot of challenges though, Juan Carlos we'd love to get your perspective on, what are some of the industry challenges that Telefónica has faced that your peers are probably facing as well? >> Well we have two kinds of challenges, one is a business challenge, I would say that we may find in other industries, like profitability and growth and I will talk about it. And the second challenge is a technology challenge, we need the network to be ready to embrace a new wave of technologies and applications that are, you know, very demanding in terms of network characteristics and features. On the efficiency and profitability and growth, the solution comes as a challenge from changing the way networks are built and operated, from the traditional way to make them become software platforms. And this is not just a technology challenge, it's also changing the mindset of network operators from a network and service provider to a digital service provider, okay? And this means several things, your network needs to become software-based so that you can manage it digitally and on top of it, you need to be able to deliver digital services digitally, okay? So there are three aspects, making your network so (indistinct) and cloud and cloud native, and then be able to sell in a different way to our customers. >> So some pretty significant challenges, but to your point, Juan Carlos, you share some of those challenges with other industries so there's some commonality there. 
I wanted to bring Ihab into the conversation, from Dell's perspective, we're seeing, you know, the explosion of data. Every company has to be a data company, we expect to have access to data in real time, if it's a new app, whatever it is. What are some of the challenges that you're seeing from your seat at Dell? >> Yeah, I think Juan Carlos explained that really well, what all the operators are talking about here between new applications, think metaverse, think video streaming, going all the way to the edge, think all the automation of factories and everything that's happening. It's not only requiring a whole new model for delivery and for building networks, but it's throwing off an enormous amount of data and the data needs to be acted on to get the value of it. So the challenge is how do I collect the data? How do I catalog it? How do I make it usable? And then how do I make it persistent? So you know, it's high performance data storage and then after that, how do I move it to where I want to and be able to use it. And for many applications that has to happen in milliseconds for the value to come out. So now we've seen this before with enterprise but now I would say this digital transformation is happening at very large scale for all the telcos and starting to deal with very familiar themes we've seen before. >> So Juan Carlos, Telefónica, you hear from partners, vendors that they've done this before, don't worry, you're in good hands. >> Juan Carlos: Yeah, yeah. >> But as a practical matter, when you look at the challenges that you have and you think about the things you'll do to address them as you move forward, what are the immediate short term priorities? >> Okay. >> Versus the longer term priorities? What's realistic? You have a network to operate- >> Yeah. >> You're not just building something out of nothing, so you have to keep the lights on. >> Yeah. >> And you have to innovate, we call that by the way, in the CTO trade, ambidextrous, management using both hands, so what's your order of priorities? >> Well, the first thing, new technologies you are getting into the network need to come with a digital shape, so being cloud native, working by software. On the legacies that you need to keep alive, you need to go for a program to switch (indistinct) off progressively, okay? In fact, in Spain we are going to switch off the copper network in two years, so in 2024, Telefónica will celebrate 100 years and the celebration will be switching off the copper network and we'll have on the fixed access only fiber, okay. So more than likely, the network is necessary, all this digitalization may happen only on the new technologies because the new technologies are cloud-based, cloud native, already ready for this digitalization process. And not only that, so you need also to build new things, we need an abstraction layer on top of the physical infrastructure to be able to manage the network by software, okay. This is something that happened in the computing world, okay, where the servers, you know, were covered with a cloud stack layer and we are doing the same thing in the network. We are trying to abstract the network services and capabilities and be able to offer them digitally to our customers. 
And this is a process that we are ongoing with many initiatives in the market, so one was the CAMARA community that was opened in Linux Foundation and the other one was the announcement we made yesterday of the open gateway initiative here at Mobile World Congress where all telecom operators have agreed to launch in this year a set of service APIs that are common worldwide, okay. This is a similar thing to what we did with 2G 35 years ago, to agree on a standard way of delivering a service and in this case is digital services based on APIs. >> What's the net result of? What are the benefits of having those open standards? Is it a benefit that myself as a consumer would enjoy? It seems, I mean, I've been, I'm old enough to remember, you know, a time before cellular telephones and I remember a time when it was very, very difficult to travel from North America to Europe with a cell phone. Now I land and my provider says, "Hey, welcome-" >> Juan Carlos: Yes. >> "Welcome, we're going to charge you a little extra money." And I say, "Hallelujah, awesome." So is part of that interoperability a benefit to consumers or, how, what? >> Yeah, you touch the right point. So in the same way you travel anywhere and you want to still make a call and send an SMS and connect to the internet, you will like your applications in your smartphone to work being them edge applications, okay, and these applications, each application will have to work to be executed very close to where you are, in a way that if you travel abroad the visitor network is serving you, okay. So this means that we are somehow extending the current interconnection and roaming agreements between operators to be able also to deliver edge applications wherever you are, in whatever network, with whatever technology. >> We have that expectation on the consumer side, that it's just going to work no matter where we are, we want apps to be updated, whether I'm banking or I'm shopping for groceries, I want to make sure that they know who I am, the data's got to be there, it's got to be real time, it's got to be right, it's got to serve me personally, but it just has to work. You guys talked about some of the big challenges, but also the opportunities in terms of the future of networking, the data turning companies in the data companies. Walk us through the future of networking from Telephonica's lens, you talked about some of the big initiatives that you have by 2024. >> Yes. >> But if you had a crystal ball and you could look in there and go it looks like this for operators, what would you say? And I'd love to get your feedback too. >> Yeah, I liked how Juan Carlos talked about how the future is, I think I want to add one thing to it, to say, a lot of times the user is no longer a consumer, it's an automated thing, you know, AI think robots, so a lot of times, more and more the usage is happening by some autonomous thing and it needs to always connect. And more and more these things are extending to places where even cellular coverage doesn't exist today, so you have edge compute show up. So, and when you think about it, the things we have to solve as a community here and this is all the discussions is, number one, how you make it a fully open standard model, so everything plugs and play, more and more, there's so many pieces coming, software, hardware, from different components and the integration of all of that is probably one of the biggest challenges people want solved. 
You know, how it's no longer one box, you buy from one person and put it away, now you have a complex combination of hardware and software. Also the operational model is very important and that is one of the areas we're focused on at Dell, is that while the operational model works inside the data centers for certain application, for telcos, it looks different when you're out at the cell tower and you're going to have these extended temperature changes. And sometimes this may not be inside a cabinet, maybe outside and the person servicing it is not an IT technician. This is somebody that needs to know exactly how to plug it, to be able to place equipment quickly and add capacity, those are just two of the areas, the cloud, making it work like a cloud, where it's intuitive, automated and you can easily add capacity, you can, you know, get a lot of monitoring, a lot of metrics, those are some of the things that we're all solving in this community. >> Let's talk about exactly how you're achieving this, Telefónica and Dell have been working together for a couple of years, you said before we went live. Talk about, you're doing this, you talked about the challenges, the opportunities how are you solving them and why with Dell? >> Okay, well you need to go with the right partners into this kind of process of transforming your network into a digital platform. There are big challenges on creating the cloud infrastructure that you need to support the complex functionality the network requires. And I think you need to have with you, companies that know about the processors, that know about the hardware, the server, that know about how to make an abstraction of that hardware layer so that you can manage that digitally and this is not something any company can do, so you need companies that are very specialized. Telecom operators are changing the way we work, we worked in the past traditionally with network equipment vendors, now we need to start working with technology providers, hardware (indistinct) providers with cloud providers with an ecosystem that is probably wider than what we had in the past. >> Yes. >> So I come from a background, I call myself a "knuckle dragging hardware engineer" sort of guy, so I'm almost fascinated by the physical part of this. You have a network, part of that network includes towers that have transmitters, receivers, at the base of those towers and like you mentioned, they're not all necessarily in urban areas or easy to access. There's equipment there, let's say that, that tower has been there for 5 years, 10 years, in the traditional world of IT, we have this concept of the "refresh cycle" >> Juan Carlos: Yeah. >> Where a server may have a useful life of 36 months before it's consuming more power than it should based on the technology. How do you move from, kind of a legacy more proprietary, all-inclusive stack to an open system? I mean, is this a, "Okay, we're planning for an outage for the tower and you're wheeling out old equipment and wheeling in new equipment?" >> Juan Carlos: Yeah. >> I mean that's not, that's what we say as a non-trivial exercise, it's something that isn't, it's not something that's just easy to do, but is that what progress looks like? Sort of, methodically one site at a time? >> Yeah, well, I mean, you have touched an important point. In the technology renewal cycles, we were taking an appliance and replacing that by another one. 
Now with the current technology, you have decoupled the hardware from the software, and the hardware you need to replace only when you run out of processing capacity to do what you want, okay? So then we'll be there 2, 3, 4, 5 years, whatever, when you need additional capacity, you replace it, but on the software side you can make the replacement every hour, every week. And this is something that the new technologies are bringing, a flexibility for the telecom operator to introduce a new feature without having to be physically there in the place, okay, by software remotely and this is the kind of software network we want to build. >> Lisa Martin: You know- >> Yeah, I want to add to that if I can- >> Please. >> Yeah. >> I think this is one of the biggest benefits of the open model. If the stack is all integrated as one appliance, when a new technology, we all know how quickly silicon technology comes out and now we have GPUs coming out for AI more increasingly, in an appliance model it may take you two years to take advantage of some new silicon that just came out. In this new open model, as Juan Carlos was saying, you just swap out, you know, the time to market, new CPUs launch, it can be put out there at the cell tower and it could double capacity instantly and we're going to need that in that world that is easily going to be AI enabled- >> Lisa Martin: Right. >> So- >> So my last question to you, we only got a minute left or so, is given everything that we've talked about, the challenges, the opportunities, what you're doing together, how would you Juan Carlos summarize how the business is benefiting from the Dell partnership and the technologies that you're enabling with this new future network? >> Well, as I said before, we will need to be able to cover all the characteristics and performance of our network. We will need the right kind of processing capacity, the right kind of hardware solutions. We know that the functionality of the network is a very demanding one, we need hardware acceleration, we need synchronization, we need time-sensitive solutions and all these can only be done by hardware, so you need a good hardware partner, that ensures that you have the processing capacity you need to be able then to run your software, you know, with the confidence that it will work and with the performance that you need. >> That confidence is key. Well it sounds like what Telefónica and Dell have achieved together has been quite successful. Congratulations on the first couple of years, sounds like it's really helping Telefónica's business move in the strategic direction that it wants. We appreciate you joining us on the program today, describing all this, thank you both so much for your time. >> Thank you very much. >> Thank you, this was fun. >> A pleasure. >> Good, our pleasure. For our guests and for Dave Nicholson, I'm Lisa Martin, you're watching theCUBE live day two from Barcelona, MWC 23. Don't go anywhere, Dave and I will be right back with our next guests. (cheerful bouncy music)
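As context for the Open Gateway initiative Juan Carlos describes above — operators agreeing to expose a common, worldwide set of service APIs — the sketch below shows roughly what a developer-facing call to one such CAMARA-style API could look like. The gateway host, endpoint path, payload fields, and authentication flow are illustrative assumptions for this sketch, not the published specification.

```python
# Illustrative sketch only: the host, path, and payload fields below are assumptions
# meant to convey the shape of a CAMARA-style quality-on-demand request, not the
# actual Open Gateway specification.
import requests

GATEWAY = "https://api.example-operator.com"   # hypothetical operator API gateway
TOKEN = "REPLACE_WITH_OAUTH_TOKEN"             # obtained via the operator's auth flow

def request_edge_quality(device_ip: str, profile: str = "QOS_E", duration_s: int = 3600) -> dict:
    """Ask a (hypothetical) operator API for a low-latency session for one device."""
    payload = {
        "device": {"ipv4Address": device_ip},  # the subscriber device the session applies to
        "qosProfile": profile,                 # e.g. a low-latency profile for an edge app
        "duration": duration_s,                # seconds the session should stay active
    }
    resp = requests.post(
        f"{GATEWAY}/quality-on-demand/v0/sessions",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # would include a session id the application can later delete

if __name__ == "__main__":
    session = request_edge_quality("203.0.113.42")
    print("Session granted:", session)
```

The point of a common API is that the same call works whether the user is on their home network or roaming on a visited operator, which is the interconnection-and-roaming extension for edge applications described in the interview.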
SUMMARY :
At MWC Barcelona 2023, Telefónica's Juan Carlos Garcia and Dell Technologies' Ihab Tarazi discuss the telecom industry's shift from traditional, appliance-based networks to software-defined, cloud-native platforms. Garcia describes Telefónica's twin business and technology challenges, the plan to switch off the copper network by 2024, the abstraction layer needed to expose network capabilities digitally, and the CAMARA and Open Gateway initiatives for common operator service APIs. Tarazi adds that decoupling hardware from software lets operators adopt new silicon and add capacity at the cell tower far faster, and explains why Telefónica partners with Dell for the hardware acceleration, synchronization, and processing capacity its network functions require.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Juan Carlos | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Spain | LOCATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
5 years | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
10 years | QUANTITY | 0.99+ |
36 months | QUANTITY | 0.99+ |
Telephonica | ORGANIZATION | 0.99+ |
2 | QUANTITY | 0.99+ |
two guests | QUANTITY | 0.99+ |
North America | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
2024 | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
Juan Carlos Garcia | PERSON | 0.99+ |
2000 exhibitors | QUANTITY | 0.99+ |
Linux Foundation | ORGANIZATION | 0.99+ |
Telefónica | ORGANIZATION | 0.99+ |
second challenge | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
80,000 plus people | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
both hands | QUANTITY | 0.98+ |
two kinds | QUANTITY | 0.98+ |
100 years | QUANTITY | 0.98+ |
MWC 23 | EVENT | 0.98+ |
each application | QUANTITY | 0.98+ |
3 | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
one box | QUANTITY | 0.98+ |
35 years ago | DATE | 0.98+ |
couple | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
first thing | QUANTITY | 0.97+ |
three aspects | QUANTITY | 0.97+ |
Ihab Tarazi | PERSON | 0.96+ |
CAMARA | ORGANIZATION | 0.96+ |
today | DATE | 0.95+ |
one thing | QUANTITY | 0.95+ |
one person | QUANTITY | 0.95+ |
4 | QUANTITY | 0.95+ |
day two | QUANTITY | 0.93+ |
first couple of years | QUANTITY | 0.92+ |
this morning | DATE | 0.91+ |
MWC | EVENT | 0.9+ |
2G | ORGANIZATION | 0.9+ |
SVP | PERSON | 0.88+ |
Mobile World Congress | EVENT | 0.85+ |
one appliance | QUANTITY | 0.85+ |
one site | QUANTITY | 0.84+ |
a minute | QUANTITY | 0.83+ |
CTO | PERSON | 0.82+ |
Juan Loaiza, Oracle | Building the Mission Critical Supercloud
(upbeat music) >> Welcome back to Supercloud two where we're gathering a number of industry luminaries to discuss the future of cloud services. And we'll be focusing on various real world practitioners today, their challenges, their opportunities with an emphasis on data, self-service infrastructure and how organizations are evolving their data and cloud strategies to prepare for that next era of digital innovation. And we really believe that support for multiple cloud estates is a first step of any Supercloud. And in that regard Oracle surprised some folks with its Azure collaboration, the Oracle Database and Exadata database services. And to discuss the challenges of developing a mission critical Supercloud we welcome Juan Loaiza, who's the executive vice president of Mission Critical Database Technologies at Oracle. Juan, you're a many-time CUBE alum so welcome back to the show. Great to see you. >> Great to see you, and happy to be here with you. >> Yeah, thank you. So a lot of people felt that Oracle was resistant to multicloud strategies and preferred to really have everything run just on the Oracle cloud infrastructure, OCI, and maybe that was a misperception, maybe you guys were misunderstood or maybe you had a change of heart. Take us through the decision to support multiple cloud platforms. >> Now we've supported multiple cloud platforms for many years, so I think that was probably a misperception. Oracle database, we partnered up with Amazon very early on in their cloud when they had kind of the first cloud out there. And we had Oracle database running on their cloud. We have backup, we have a lot of stuff running. So, yeah, part of the philosophy of Oracle has always been we partner with every platform. We're very open, we started with SQL and APIs. As we develop new technologies we push them into the SQL standard. So that's always been part of the ecosystem at Oracle. That's how we think we get an advantage by being more open. I think if we try to create this isolated little world it actually hurts us and hurts customers. So for us it's a win-win to be open across the clouds. >> So Supercloud is this concept that we put forth to describe a platform or some people think it's an architecture if you have an opinion, and I'd love to hear it but it provides a programmatically consistent set of services that are hosted on heterogeneous cloud providers. And so we look at the Oracle database service for Azure as fitting within this definition. In your view, is this accurate? >> Yeah, I would broaden it. I'd see a little bit more than that. We just think that services should be available from everywhere, right? So, I mean, it's a little bit like if you go back to the pre-internet world, there was things like AOL and CompuServe and those were kind of islands. And if you were on AOL, you really didn't have access to anything on CompuServe and vice versa. And the cloud world has evolved a little bit like that. And we just think that's the wrong model. They shouldn't be, these clouds are part of the world and they need to be interconnected like all the rest of the world. It's been a long time with telephones, internet, everything, everything's interconnected. Everything should work seamlessly together. So that's how we believe, if you're running, let's say, an application in one cloud and you want to use a service from another cloud, it should be completely simple to do that. It shouldn't be, I can only use what's in AOL or CompuServe or whatever else. It should not be isolated. 
>> Well, we got a long way to go before that Nirvana exists but one example is the Oracle database service with Azure. So what exactly does that service provide? I'm interested in how consistent the service experience is across clouds. Did you create a purpose-built PaaS layer to achieve this common experience? Or is it off the shelf Terraform? Is there unique value in the PaaS layer? Let's dig into some of those questions. I know I just threw six at you. >> Yeah, I mean, so what this is, is what we're trying to do is very simple. Which is, for example, starting with the Oracle database we want to make that seamless to use from anywhere you're running. Whether it's on-prem, on some other cloud, anywhere else you should be able to seamlessly use the Oracle database and it should look like the internet. There's no friction. There's not a lot of hoops you got to jump just because you're trying to use a database that isn't local to you. So it's pretty straightforward. And in terms of things like Azure, it's not easy to do because all these clouds have a lot of kind of very unique technologies. So what we've done is at Oracle is we've said, "Okay we're going to make Oracle database look exactly like if it was running on Azure." That means we'll use the Azure security systems, the identity management systems, the networking, there's things like monitoring and management. So we'll push all these technologies. For example, when we have monitoring event or we have alerts we'll push those into the Azure console. So as a user, it looks to you exactly as if that Oracle database was running inside Azure. Also, the networking is a big challenge across these clouds. So we've basically made that whole thing seamless. So we create the super high bandwidth network between Azure and Oracle. We make sure that's extremely low latency, under two milliseconds round trip. It's all within the local metro region. So it's very fast, very high bandwidth, very low latency. And we take care establishing the links and making sure that it's secure and all that kind of stuff. So at a high level, it looks to you like the database is--even the look and feel of the screens. It's the Azure colors, it's the Azure buttons it's the Azure layout of the screens so it looks like you're running there and we take care of all the technical details underlying that which there's a lot which has taken a lot of work to make it work seamlessly. >> In the magic of that abstraction. Juan, does it happen at the PaaS layer? Could you take us inside that a little bit? Is there intelligence in there that helps you deal with latency or are there any kind of purpose-built functions for this service? >> You could think of it as... I mean it happens at a lot of different layers. It happens at the identity management layer, it happens at the networking layer, it happens at the database layer, it happens at the monitoring layer, at the management layer. So all those things have been integrated. So it's not one thing that you just go and do. You have to integrate all these different services together. You can access files in Azure from the Oracle database. Again, that's completely seamless. You, it's just like if it was local to our cloud you get your Azure files in your kind of S3 equivalent. So yeah, the, it's not one thing. There's a whole lot of pieces to the ecosystem. And what we've done is we've worked on each piece separately to make sure that it's completely seamless and transparent so you don't have to think about it, it just works. 
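To make the "it just looks local" point above concrete, here is a minimal sketch of what an application deployed in Azure might do to reach a database provisioned through Oracle Database Service for Azure: an ordinary driver connection against a connect string, with the SQL doing its heavy lifting on the database side. The hostname, credentials, service name, and table are placeholders for this sketch, and the timing is only a rough way to sanity-check the metro-region round trip described in the interview.

```python
# Minimal sketch: an app running in Azure talking to an Oracle database reached over
# the Azure-OCI interconnect. Hostname, credentials, service name, and table are
# placeholders; the point is that the app uses a normal driver connection and lets
# one SQL statement do the work server-side.
import time
import oracledb  # python-oracledb, the standard Oracle driver for Python

conn = oracledb.connect(
    user="app_user",
    password="app_password",
    dsn="db.interconnect.example.com:1521/orders_service",  # placeholder connect string
)

with conn.cursor() as cur:
    start = time.perf_counter()
    # One statement aggregates on the server and returns a single row, so very little
    # data actually crosses the interconnect -- the "SQL is low bandwidth" point made
    # later in this interview.
    cur.execute(
        "SELECT COUNT(*), SUM(amount) FROM orders WHERE order_date > SYSDATE - 1"
    )
    total_orders, total_amount = cur.fetchone()
    elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{total_orders} orders, {total_amount} total, round trip {elapsed_ms:.1f} ms")
conn.close()
```

A query like this returns one row regardless of how many rows it scans, which is why the cross-cloud link mostly needs to be low latency rather than high bandwidth.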
So you kind of answered my next question which is one of the technical hurdles. It sounds like the technical hurdles are that integration across the entire stack. That's the sort of architecture that you've built. What was the catalyst for this service? >> Yeah, the catalyst is just fulfilling our vision of an open cloud world. It's really like I said, Oracle, from the very beginning, has believed in open standards. Customers should be able to have choice, customers should be able to use whatever they want from wherever they want. And we saw that, you know, in the new world of cloud that had broken down, everybody had their own authentication system, management system, monitoring system, networking system, configuration system. And it became very difficult. There was a lot of friction to using services across cloud. So we said, "Well, okay we can fix that." It's work, it's a significant amount of work but we know how to do it and let's just go do it and make it easy for customers. >> So given Oracle's main focus is really on mission critical workloads. You talked about this low latency network, I mean but you still have physical distances, so how are you managing that latency? What's the experience been for customers across Azure and OCI? >> Yeah, so it, it's a good point. I mean, latency can be an issue. So the good thing about clouds is we have a lot of cloud data centers. We have dozens and dozens of cloud data centers around the world. And Azure has dozens and dozens of cloud data centers. And in most cases, they're in the same metro region because there's kind of natural metro regions within each country that you want to put your cloud data centers in. So most of our data centers are actually very close to the Azure data centers. There's the kind of northern Virginia, there's London, there's Tokyo, I mean, there's natural places where everybody puts their data centers, Seoul, et cetera. And so that's the real key. So that allows us to put a very high bandwidth and low latency network. The real problems with latency come when you're trying to go a long physical distance. If you're trying to connect, you know, across the Pacific or, you know, across the country or something like that, then you can get in trouble with latency. Within the same metro region, it's extremely fast. It tends to be around one, you know, the highest two milliseconds, that's roundtrip through all the routers and connections and gateways and everything else. With everything taken into consideration, what we guarantee is it's always less than two milliseconds which is a very low latency time. So that tends to not be a problem because it's extremely low latency. >> I was going to ask you less than two milliseconds. So, earlier in the program we had Jack Greenfield who runs architecture for Walmart, and he was explaining what we call their Supercloud, and it runs across Azure, GCP, and their on-prem. They have this thing called the triplet model. So my question to you is, are you in situations where you're guaranteeing that less than two milliseconds, do you have situations where you're bringing, you know, Exadata Cloud@Customer on-prem to achieve that? Or is this just across clouds? >> Yeah, in this case, we're talking public cloud data center to public cloud data center. >> Oh okay. >> So Azure public cloud data center to Oracle public cloud data center. They're in the same metro region. We set up the connections, we do all the technology to make it seamless. 
And from a customer point of view they don't really see the network. Also, remember that SQL is actually designed to have very low bandwidth and latency requirements. So it is a language. So you don't go to the database and say do this one little thing for me. You send it a SQL statement that can actually access lots of data while in the database. So the real latency requirement of a SQL database is within the database. So I need to access all that data fast. So I need very fast access to storage, very fast access across nodes. That's what Exadata gives you. But you send one request and that request can do a huge amount of work and then return one answer. And that's kind of the design point of SQL. So SQL inherently has low bandwidth requirements, it was used back in the eighties when we used to have 10 megabit networks and the biggest companies in the world ran back then. So right now we're talking over hundreds of gigabits. So it's really not much of a challenge. When you're designed to run on 10 megabit, to say, okay I'm going to give you 10,000 times what you were designed for, it's really, it's a pretty low hurdle to jump. >> What about the deployment models? How do you handle this? Is it a single global instance across clouds or do you sort of instantiate in each, you got Exadata in Azure and Exadata in OCI? What's the deployment model look like? >> It's pretty straightforward. So the customer decides where they want to run their application and database. So there's natural places where people go. If you're in Tokyo, you're going to choose the local Tokyo data centers for both, you know, Microsoft and Oracle. If you're in London, you're going to do that. If you're in California you're going to choose maybe San Jose, something like that. So a customer just chooses. We both have data centers in that metro region. So they create their service on Azure and then they go to our console which looks just like an Azure console and say all right, create me a database. And then we choose the closest Oracle data center which is generally a few miles away, and then it all gets created. So from a customer point of view, it's very straightforward. >> I'm always in awe about how simple you make things sound. All right what about security? You talked a little bit before about identity access, how you're sort of abstracting the Azure capabilities away so that you've simplified it for your customers but are there any other specific security things that you need to do? How much did you have to abstract the underlying primitives of Azure or OCI to present that common experience to customers? >> Yeah, so there's really two big things. One is the identity management. Like my name is X on Azure and I have this set of privileges. Oracle has its own identity management system, right? So what we didn't want is that you have to kind of like bridge these things yourself. It's a giant pain to do that. So we actually do what we call federate across these identity management systems. So you put your credentials into Azure and then they automatically get to use the exact same credentials and identity in the Oracle cloud. So again, you don't have to think about it, it just works. And then the second part is the whole bridging of the network. So within a cloud you generally have a virtual network that's private to your company. And so at Oracle, we bridge the private network that you created in, for example, Azure to the private network that we create for you in Oracle. 
So it is still a private network without you having to do a whole bunch of work. So it's just like if you were in your own data center, other people can't get into your network. So it's secured at the network level, it's secured at the identity management and encryption level. And again we did a lot of work to make that seamless for customers and they don't have to worry about it because we did the work. That's really as simple as it gets. >> That's what Supercloud's supposed to be all about. Alright, we were talking earlier about sort of the misperception around multicloud, your view of open, I think, which is you run the Oracle database wherever the customer wants to run it. So you got this database service across OCI and Azure, customers today, they run Oracle database in AWS. You got HeatWave, MySQL HeatWave that you announced on AWS, Google touts a bare metal offering where you can run Oracle on GCP. Do you see a day when you extend an OCI-Azure-like situation across multiple clouds? Would that bring benefits to customers or will the world of database generally remain largely fenced with maybe a few exceptions like what you're doing with OCI and Azure? I'm particularly interested in your thoughts on egress fees as maybe one of the reasons that there is a barrier to this happening and why maybe these stovepipes exist today and in the future. What are your thoughts on that? >> Yeah, we're very open to working with everyone else out there. Like I said, we've always been big believers that customers should have choice and you should be able to run wherever you want. So that's been kind of a founding principle of Oracle. We have the Azure, we did a partnership with them, we're open to doing other partnerships and you're going to see other things coming down the pipe. On the topic of egress, yeah, the large egress fees, it's pretty obvious what goes on with that. Various vendors like to have large egress fees because they want to keep things kind of locked into their cloud. So it's not a very customer friendly thing to do. And I think everybody recognizes that it's really trying to kind of coerce or put a lot of friction on moving data out of a particular cloud. And that's not what we do. We have very, very low egress fees. So we don't really do that and we don't think anybody else should do that. But I think customers, at the end of the day, will win that battle. They're going to have to go back to their vendor and say, well I have choice in clouds and if you're going to impose these limits on me, maybe I'll make a different choice. So that's ultimately how these things get resolved. >> So do you think other cloud providers are going to take a page out of what you're doing with Azure and provide similar solutions? >> Yeah, well I think customers want, I mean, I've talked to a lot of customers, this is what they want, right? I mean, there's really no doubt, no customer wants to be locked into a single ecosystem. There's nobody out there that wants that. And as the competition, when they start seeing an open ecosystem evolving, they're going to be like, okay, I'd rather go there than the closed ecosystem, and that's going to put pressure on the closed ecosystems. So that's the nature of competition. That's what ultimately will tip the balance on these things. >> So Juan, even though you have this capability of distributing a workload across multiple clouds as in our Supercloud premise it's still something that's relatively new. 
It's a big decision that maybe many people might consider somewhat of a risk. So I'm curious who's driving the decisions for your initial customers? What do they want to get out of it? What's the decision point there? >> Yeah, I mean, this is generally driven by customers that want a specific technology in a cloud. I think the risk, I haven't seen a lot of people worry too much about the risk. Everybody involved in this is a very well known, very reputable firm. I mean, Oracle's been around for 40 years. We run most of the world's largest companies. I think customers understand we're not going to build a solution that's going to put their technology and their business at risk. And the same thing with Azure and others. So I don't see customers too worried about this is a risky move because it's really not. And you know, everybody understands networking at the end the day networking works. I mean, how does the internet work? It's a known quantity. It's not like it's some brand new invention. What we're really doing is breaking down the barriers to interconnecting things. Automating 'em, making 'em easy. So there's not a whole lot of risk here for customers. And like I said, every single customer in the world loves an open ecosystem. It's just not a question. If you go to a customer would you rather put your technology or your business to run on a closed ecosystem or an open system? It's kind of not even worth asking a question. It's a no-brainer. >> All right, so we got to go. My last question. What do you think of the term "Supercloud"? You think it'll stick? >> We'll see. There's a lot of terms out there and it's always fun to see which terms stick. It's a cool term. I like it, but the decision makers are actually the public, what sticks and what doesn't. It's very hard to predict. >> Yeah well, it's been a lot of fun having you on, Juan. Really appreciate your time and always good to see you. >> All right, Dave, thanks a lot. It's always fun to talk to you. >> You bet. All right, keep it right there. More Supercloud two content from theCUBE Community Dave Vellante for John Furrier. We'll be right back. (upbeat music)
SUMMARY :
Oracle's Juan Loaiza explains how the Oracle Database Service for Azure makes the Oracle database look and behave as if it were native to Azure: identity is federated, private networks are bridged, monitoring and alerts are pushed into the Azure console, and round-trip latency between the two clouds' data centers in the same metro region is guaranteed at under two milliseconds. He argues that clouds should interconnect like the rest of the internet, that SQL's design keeps cross-cloud bandwidth needs modest, and that open ecosystems and low egress fees will ultimately win because customers refuse to be locked in.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Microsoft | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
San Jose | LOCATION | 0.99+ |
California | LOCATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Tokyo | LOCATION | 0.99+ |
Juan | PERSON | 0.99+ |
London | LOCATION | 0.99+ |
six | QUANTITY | 0.99+ |
10,000 times | QUANTITY | 0.99+ |
Jack Greenfield | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
second part | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
less than two millisecond | QUANTITY | 0.99+ |
less than two milliseconds | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
SQL | TITLE | 0.99+ |
10 megabit | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
AOL | ORGANIZATION | 0.98+ |
each piece | QUANTITY | 0.98+ |
MySQL | TITLE | 0.98+ |
first cloud | QUANTITY | 0.98+ |
single | QUANTITY | 0.98+ |
each country | QUANTITY | 0.98+ |
John Furrier | PERSON | 0.98+ |
two big things | QUANTITY | 0.98+ |
under two milliseconds | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
northern Virginia | LOCATION | 0.98+ |
CompuServe | ORGANIZATION | 0.97+ |
first step | QUANTITY | 0.97+ |
Mission Critical Database Technologies | ORGANIZATION | 0.97+ |
one request | QUANTITY | 0.97+ |
Seoul | LOCATION | 0.97+ |
Azure | TITLE | 0.97+ |
each | QUANTITY | 0.97+ |
two millisecond | QUANTITY | 0.97+ |
Azure | ORGANIZATION | 0.96+ |
one cloud | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.95+ |
cloud data centers | QUANTITY | 0.95+ |
one answer | QUANTITY | 0.95+ |
Supercloud | ORGANIZATION | 0.94+ |
Juan Tello, Deloitte | Snowflake Summit 2022
>>Welcome back to Vegas. Lisa Martin here covering Snowflake Summit 22. We are live at Caesars Forum. A lot of guests here, about 10,000 attendees, actually 10,000 plus, a lot of folks here, and the momentum and the buzz. I gotta tell you, the last day and a half we've been covering this event is huge. It's probably some of the biggest we've seen in a long time. We're very pleased to welcome back one of our CUBE alumni to the program, Juan Tello, principal and chief data officer at Deloitte. Juan, it's great to have you joining us. >>Yeah, no, thank you. Super excited to be here with you today. >>Isn't it great to be back in person? Oh, >>I love it. I mean the, the energy, the, you know, connections that we're making definitely, definitely loving and loving the experience. >>Good experience, but the opportunity to connect with customers. Yes. I'm hearing a lot of conversations from Snowflake folks, from their partners like Deloitte, from customers themselves. Like it's so great to be back in person. And they're really talking about some of the current challenges that are being faced by so many industries. >>That's right. Oh, that, that is, you know, I would say as a consultant, you know, it all comes down to that personal connection and that relationship. And so I am, I'm all for this and love, you know, being able to connect with our customers. >>Yeah. Talk to me about the Deloitte Snowflake partnership. Obviously a ton of news announced from Snowflake yesterday. Snowflake is a rocket ship. Talk to us about the partnership, what you guys do together, maybe some joint customer examples. >>Yeah. I mean, so Snowflake is a strategic alliance partner. We won the, you know, SI partner of the year award and for us, the, the shift and the opportunity to help our clients modernize and achieve a level of data maturity in their journey is, is strategically it's super important. And it's really about how do we help them leverage, you know, Snowflake's underlying data platform to ultimately achieve, you know, broader goals around, you know, their business strategy. And our approach is always very much connected to overarching business strategies in the sense of, is it a finance transformation, a supply chain transformation, a customer transformation, and what are the goals of those transformations and how do we ensure that data is a critical component to enabling that and with, you know, technologies and vendors and partners like Snowflake, allowing us to even do that at a faster, better, cheaper pace only increases the overall business case and the value and the impact that it generates. >>And so we are super, super excited about our partnership with Snowflake and we believe, you know, the journey is very, very bright. You know, we, this is the future, you know, I often tell folks that, you know, data has and will continue to be more valuable than sort of the systems that own it and manage it. And I think we're starting to see that. I think the topic that I discussed today around data collaboration and data sharing is an example of how we're starting to see, you know, the importance and the value of data, you know, become way more important and more of the focus around the strategy for, for organizations >>As the chief data officer, what do data sharing and data collaboration mean to somebody in your position and what are some of the conversations you have with other CDOs at customer organizations? >>Yeah, so, so my role is, is sort of twofold. I, I am responsible for our internal data strategy. 
So when you think about Deloitte as a professional service organization, across four unique businesses, I am a customer of Snowflake in our own data modernization journey, and we have our own strategy on how and what we share, not only internally across our businesses, but also externally across, you know, our partners. So, so I bring that perspective, but then I also am a client service professional and serve our clients in their own journey. So I often feel very privileged in, in the opportunity to be able to sort of not only share my own experience from a Deloitte perspective, but also in how we help our clients >>Talk about data maturation. You mentioned, you know, the volume of data just only continues to grow. We've seen so much growth in the last two years alone of data. We've seen all of us be so dependent on things like media and entertainment and retail, eCommerce, healthcare, and life sciences. What, how do you define data maturation and how do Deloitte and Snowflake help companies create a pathway to get there? >>Yeah. Yeah. So I would say step one for us is all about the overarching business strategy. And when you sort of double click on the big, broad business strategy and what that means from a data strategy perspective, we have to develop business models where there is an economical construct to the value of data. And it's extremely important specifically when we talk about sharing and collaborating data, I would say the, the, the, the assumption or the, or, or, or, or the posture typically seems to be, it's a one way relationship, our strategy and what we're pushing, you know, again, not only internally within ourselves, but also with our clients, is it has to be a bidirectional relationship. And so you, you hear of, of the concepts of, you know, the, the, the data clean room where you have two partners coming together and agreeing with certain terms to share data bidirectionally. Like I do believe that is the future in how we need to do, you know, more data collaboration, more data sharing at a scale that we've not quite seen yet. Yes. >>The security and privacy areas are increasingly critical. We've seen the threat landscape change so dramatically the last couple of years, it's not, will we get hit by a cyberattack? It's when. Yes. For every industry, right? The privacy legislation, we've just seen it with GDPR, CCPA is gonna become CPRA in California, other states doing the same thing. How do you help customers kind of balance that line of being able to share data equitably between organizations, between companies, do so in a secure way, and in a way that ensures data privacy will be maintained? >>Yeah. Yeah. So first, absolutely recognizing the evolving regulatory landscape. You mentioned, you know, California, there's actually now 22 states that have a, is it 22 now? Right? Yeah. 22 states that have a privacy act enacted. And our projection is in the next 12 to 18 months, all states will have one. And so absolutely a, a perceived challenge, but one that I think is, is addressable. And, and I think that gets to the spirit of the question for us. There's, there's four dimensions that an organization needs to work through when it comes to data sharing. The first one is back to the, the business goal and objective, like, is there truly a business need? And is there value in sharing data? And it needs to have a very solid business model. Okay. So, so that's the first step. The second step is what are the legal terms?
What can you do? What can't you do? Do you have primary rights, secondary rights? The third dimension is around risk. What is the risk and exposure, not only from a data security perspective, but what is the risk if someone uses a data inappropriately, and then the fourth one is around ethics and the ethical use of data. And we see lots of examples where an organization has consent has rights to the data, but the way they used it might have not necessarily been, you know, among the kind of ethical framing. And so for us, those four dimensions is what guides us and our clients in developing a very robust data, sharing data collaboration framework that ensures it's connected to the overall business strategy, but it provides enough of the guardrails to minimize legal and ethical risk. So >>With that in mind, what do the customer conversations look like? Cause you gotta have a lot of players, the business folks, the data folks, every line of business needs data for its functions. Talk to us about how the customer conversations and projects have evolved as data is increasingly important to every line of business. >>Yes. I would say the biggest channel, or maybe the, the, the denominator at this point that we're seeing bring the, let's say diversity of needs to more common denominator has been AI. So every organization at this point is driving massive AI programs. And in order to really scale AI, you know, the, the algorithm cannot execute without data. Yeah. And so for us, at least in our experience with our customers, AI has almost been the, the, the mechanism to have these conversations across the different business stakeholders and do it in a way that, you know, you're not necessarily boiling the ocean, cuz I think that's the other element that makes this a bit hard is, well, what, what data do you want me to share and for what purpose? And when you start to bring it into sort of more individual swim lanes and, and, and our experience with our customers is AI has sort of been that mechanism to say, am I automating, you know, our factory floor? Am I bringing AI and how we engage and serve our customers? Right? Like it be, it be begins to sort of bring a little bit more of, of that repeatability at a, at an individual level. So that's been a, a really good strategy for us in our customers >>In terms of the customer's strategy and kind of looking forward, what are some of the things that excite you about the, the future of data collaboration, especially given all of the news that snowflake announced just yesterday? >>Yes. Yeah. I think for me, and this is both the little bit of the ambition, as well as the push, it's no longer a question of should it's it's how and for what? And so, so yes, I mean the, the, the snowflake data cloud is a network that allows us to integrate, you know, disparate and unique data assets that have never, you know, been possible before. Right. So we're in this network, it's now a matter of figuring out how to use that and for what purpose. And so I, I go back to, we, each individual organization needs to be figuring out the how, and for what not, when this is the future, we all need it. Yeah. And we just need to figure out how that fits in our individual businesses >>In terms of the, how that's such an interesting, I love how you bring that up. It's not, it's not when it's definitely how, because there's gonna be another competing business or several right there in the rear view mirror, ready to take your place. Yep. 
If you don't act quickly, how do Deloitte and Snowflake help customers achieve the how quickly enough to be able to really take advantage of data sharing and data collaboration so that they can be very competitive? >>Yeah. So there's two main, maybe even three driving forces in this. What we see is when there's a common purpose across direct or indirect competitors and the need to share data. So I think the poster child of this was the pandemic, and we started to see organizations again, either competitively or non-competitively, share data in ways for a greater good, right. When there was a purpose, we believe when that element exists, the ability to share data is going to increase. We believe the next big sort of common purpose out there in the world is around ESG. And so that's gonna be a big driver for sharing data. So that's one element. The other one is the concept of developing integrated value chains. So when you think about any individual business and sort of where they are in that piece of the value chain, developing more integrated value across, let's say a manufacturer of goods with a distributor of those goods that ultimately get to an end customer. >>They're not sharing data in a meaningful way to really maximize their overall, you know, profitability. And so that's another really good, meaningful example that we're seeing is where there's value across, you know, a, what appears to be a siloed set of steps, and really looking at it more as an integrated value chain, the need to share data is the only way to unlock that. And so that's, that's the second one. The, the third one I would say is, is around the need to address the consumer across sort of the multiple personas that we all individually sit in. Right? So I go into a bank and I'm, I'm a client. I walk into a retail store and I'm a customer. I walk into my physician's office and I'm a patient. At the end of the day, I am still the same person. I am still one. And so that consumer element and the convergence of how we are engaging and serving that consumer is the third, big shift that is really going to bring data collaboration and sharing to the next level. >>Do you think Snowflake is, is the right partner, the de facto, for Deloitte to do that with? >>Absolutely. I think, you know, the head start of the cloud, the Data Cloud platform and the network that it's already established with all the sort of data privacy and security constraints around it. Like that's a big, that's a big, you know, check, right. That we don't have to worry about. It's there for sure. >>Awesome. Sounds like a great partnership, Juan. Thank you so much for joining me on the program. It's great to have you back on theCUBE in person sharing what Deloitte and Snowflake are doing and how you're really helping to transform organizations across every industry. We appreciate >>Your insights. Yeah. No, thank you for having me here. My pleasure. Always a pleasure. Thank you. >>All right. For Juan, I am Lisa Martin. You're watching theCUBE live from Snowflake Summit 22 at Caesars Forum. We'll be right back with our next guest.
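One way to picture the four-dimension evaluation Juan Tello walks through earlier in this interview — business value, legal terms, risk, and ethical use — is as a simple gating checklist run before any new sharing arrangement. The data structure and the all-four-must-pass rule below are illustrative assumptions, not an actual Deloitte framework or tool.

```python
# Illustrative sketch of the four-dimension evaluation described in the interview.
# The fields and the all-four-must-pass rule are assumptions added for clarity,
# not a Deloitte artifact.
from dataclasses import dataclass

@dataclass
class SharingProposal:
    partner: str
    business_value: bool      # is there a solid business model for sharing this data?
    legal_terms_agreed: bool  # primary/secondary rights, permitted uses, clean-room terms
    risk_acceptable: bool     # security exposure and misuse risk within tolerance
    ethical_use: bool         # consent exists AND the intended use passes an ethics review

    def approved(self) -> bool:
        # All four dimensions must clear before any data leaves the organization.
        return all([self.business_value, self.legal_terms_agreed,
                    self.risk_acceptable, self.ethical_use])

proposal = SharingProposal(
    partner="distributor-co",
    business_value=True,
    legal_terms_agreed=True,
    risk_acceptable=True,
    ethical_use=False,  # e.g. consent exists but the intended use fails the ethics review
)
print(proposal.partner, "approved" if proposal.approved() else "rejected")
```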
SUMMARY :
At Snowflake Summit 2022, Deloitte's Juan Tello describes the Deloitte-Snowflake partnership and argues that data sharing must become bidirectional, governed by four dimensions: business value, legal terms, risk, and ethical use. He points to AI, ESG, integrated value chains, and the convergence of consumer personas as the forces driving broader data collaboration, with Snowflake's Data Cloud supplying the network, privacy, and security foundation.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ron Tayo | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Juan | PERSON | 0.99+ |
Juan Tello | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
second step | QUANTITY | 0.99+ |
Vegas | LOCATION | 0.99+ |
two partners | QUANTITY | 0.99+ |
22 states | QUANTITY | 0.99+ |
third dimension | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
10,000 | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
one element | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
Snowflake Summit 2022 | EVENT | 0.98+ |
GDPR | TITLE | 0.98+ |
four dimensions | QUANTITY | 0.98+ |
second one | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
fourth one | QUANTITY | 0.98+ |
third | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.97+ |
22 | QUANTITY | 0.96+ |
third one | QUANTITY | 0.96+ |
first one | QUANTITY | 0.96+ |
both | QUANTITY | 0.96+ |
about 10,000 attendees | QUANTITY | 0.95+ |
last couple of years | DATE | 0.91+ |
one way | QUANTITY | 0.91+ |
three driving forces | QUANTITY | 0.9+ |
step one | QUANTITY | 0.9+ |
18 months | QUANTITY | 0.9+ |
snowflake summit 22 | EVENT | 0.88+ |
two main | QUANTITY | 0.88+ |
pandemic | EVENT | 0.88+ |
last two years | DATE | 0.88+ |
12 | QUANTITY | 0.86+ |
each individual organization | QUANTITY | 0.84+ |
Caesar | PERSON | 0.83+ |
ESG | TITLE | 0.74+ |
last day | DATE | 0.71+ |
four | QUANTITY | 0.71+ |
snowflake summit 22 | EVENT | 0.68+ |
double | QUANTITY | 0.63+ |
snowflake | ORGANIZATION | 0.62+ |
businesses | QUANTITY | 0.59+ |
CCPA | TITLE | 0.56+ |
news | QUANTITY | 0.52+ |
CPR | TITLE | 0.48+ |
Video Exclusive: Oracle EVP Juan Loaiza Announces Lower Priced Entry Point for ADB
(upbeat music) >> Oracle is in the midst of an acceleration of its product cycles. It really has pushed new capabilities across its database, the database platforms, and of course the cloud in an effort to really maintain its position as the gold standard for cloud database. We've reported pretty extensively on Exadata, most recently the X9M that increased database IOPS and throughput. Organizations running mission critical OLTP, analytics and mix workloads tell us that they've seen meaningfully improved performance and lower costs, which you expect in a technology cycle. I often say if Oracle calls you out by name it's a compliment and it means you've succeeded. So just a couple of weeks ago, Oracle turned up the heat on MongoDB with a Mongo compatible API, in an effort to persuade developers to run applications in a autonomous database and on OCI, Oracle cloud infrastructure. There was a big emphasis by Oracle on acid compliance transactions and automatic scaling as well as access to multiple data types. This caught my attention because in the early days of no SQL, there was a lot of chatter from folks about not needing acid capability in the database anymore. Funny how that comes around. And anyway, you see Oracle investing, they spend money in R&D We've always said that`, they're protecting their moat. Now in social I've seen some criticisms like Oracle still is not adding enough new logos, and Oracle of course will dispute that and give you some examples. But to me what's most impressive is the big name customers that Oracle gets to talk in public. Deutsche Bank, Telephonic, Experian, FedEx, I mean dozens and dozens and dozens. I work with a lot of companies and the quality of the customers Oracle puts in front of analysts like myself is very very high. At the top of the list I would say. And they're big spending customers. And as we said many times when it comes to mission critical workloads, Oracle is the king. And one of the executives behind the success is a longtime Cube alum, Juan Loaiza who's executive vice president of mission critical technologies at Oracle. And we've invited him back on today to talk about some news and Oracle's latest developments and database, Juan welcome back to the show and thanks for coming on today and talking about today's announcement. >> I'm very happy to be here today with you. >> Okay, so what are you announcing and how does this help organizations particularly with those existing Exadata cloud at customer installations? >> Yeah, the big thing we're announcing is our very successful cloud at customer platform. We're extending the capabilities of our autonomous database running on it. And specifically we're allowing much smaller configurations so customers can start small and grow with our autonomous database on our cloud customer platform. >> So let's get into granularity a little bit and double click on this. Can you go over how customers, carve up VM clusters for different workloads? What's the tangible benefit to them? >> Yeah, so it's pretty straightforward. We deploy our Cloud@Customer system anywhere the customer wants it, let's say in their data center. And then through our cloud APIs and GUIs they can carve up into pieces into basically VMs. They can say, Hey, I want a VM with eight CPUs to do this, I want a VM with 20 CPUs to that, I want a 500 CPUVM to do something else. And that's what we call a VM cluster because in Cloud@Customer, it is a highly available environment. 
So you don't just get one VM, you get a cluster of highly available VMs. So you carve it up. You hand it out to different aspects of a company. You might have development on one, testing on another one, some production sales on one VM, marketing on a different VM. And then you run your databases in there and that's kind of how it works and it's all done completely through our GUI and it's very, very simple 'cause they use it the same cloud APIs and GUIs that we use in the public cloud. It is the same APIs and GUIs that we use in the public cloud. >> Yeah, I was going to say sounds like cloud. So what about prerequisites? What do customers have to do to take advantage of the new capabilities? Can they run it on an Exadata cloud a customer that they installed a couple years ago? Do they have to upgrade the hardware? What migration pain is involved? >> Yeah, there's no pain, so it's just, (coughs) excuse me. I can take their existing system, they get our free software update and they can just deploy autonomous database as a VM in their existing Exadata cloud system. >> Oh nice okay what's the bottom line dollars? Our audience are always interested in cutting costs. It's one of the reasons they're moving to the cloud for example. So how does autonomous database on VM clusters, on Exadata Cloud at Customer? How does it help cut their cost? >> Well, it's pretty straightforward. So previous to this a customer would have to have dedicated a system to either autonomous database or to non autonomous data. So you have to choose one together. So on a system by system basis, you chose I want this thing autonomous, or I don't want it autonomous. Now you carve in the VMs and say for this VM I want that autonomous for that VM I want to run a regular database managed database on there. So lets customers now start small with any size they want. They could start with two CPUs and run an autonomous database and that's all they pay for is the two CPUs that they use. >> Let's talk a little about traction. I mean, I remember we covered the original Exadata announcement quite a long time ago and it's obviously evolved and taken many forms. Look, it's hard to argue that it hasn't been a big success. It has for Oracle and your target customers. Does this announcement make Exadata cloud a customer more attractive for smaller companies. In other words, does it expand the team for ADB? And if so, how? >> Yeah, absolutely. I mean our Exadata cloud platform is extremely successful. We have thousands of deployments, we have on our data platform we have about almost 90% of the global fortune 100 and thousands of smaller customers. In the cloud we have now up to 40% of the global 100 a hundred biggest companies in the world running on that. So it's been extremely successful platform and cloud a customer is super key. A lot of customers can't move their data to the public cloud. So we bring the public cloud to them with our cloud customer offering. And so that's the big customer is the fortune hundred but we have thousands of smaller customers also. And the nice thing about this offering is we can start with literally two CPUs. So we can be a very small customer and still run our autonomous data based on our cloud customer platform. >> Well, everybody cares about security and governance. I mean, especially the big guys, but the little guys that in many ways as well they want the capabilities of the large companies but they can't necessarily afford it. 
So I want to talk about security in particular governance and it's especially important for mission-critical apps. So how does this all change the security in governance paradigm? What do customers need to know there? >> Yeah, so the beauty of autonomous database which is the thing that we're talking about today is Oracle deals with all the security. So the OS, the hardware, firmware, VMs, the database itself all the interfaces to the VM, to the database all that is it's all done by Oracle. So, which is incredibly important because there's a constant stream of security alerts that are coming out and it's very difficult for customers to keep up with this stuff. I mean, it's hard for us and we have thousands of engineers. And so we take that whole burden away from customers. And you just don't have to think about it, we deal with it. So once you deploy an autonomous database it is always secure because anytime a security alert comes out, we will apply that and we do it in an online fashion also. So it's really, particularly for smaller customers it's even harder because to keep up with all the security that you you need a giant team of security experts and even the biggest customers struggle with that and a small customer's going to really struggle. There's just two, you have to look at the entire stack, all the different components switches, firmware, OS, VMs, database, everything. It's just very difficult to keep up. So we do it all and for small cut, they just can't do it. So really they really need to partner with a company like Oracle that has thousands of engineers that can keep up with this stuff. >> It's true what you say, even large customers this CSOs will tell you that lack of talent, lack of skill sets. They just don't have enough people and so even the big guys can't keep up. Okay, I want you to pitch me as though I'm a developer, which I'm not, but we got a lot of developers in our community. We'll be Cube con next month in Valencia, sell me on why a developer should lean into ADB on Exadata cloud as a customer? >> Yeah, it's very straightforward. So Oracle has the most advanced database in the industry and that's widely recognized by database analysts and experts in the field. Traditionally, it's been hard for a developer to use it because it's been hard to manage. It's been hard to set up, install, configure, patch, back up all that kind of stuff. Autonomous database does it all for you. So as a developer, you can just go into our console, click on creating a database. We ask you four questions, how big, how many CPUs how much storage and say, give your password. And within minutes you have a database. And at that point you can go crazy and just develop. And you don't have to worry about managing the database, patching the database, maintaining the security and the database backing up to all that stuff. You can instantly scale it. You can say, Hey, I want to grow it, you just click a button, take, grow it to much any size you want and you get all the mission critical capabilities. So it works for tiny databases but it is a stock exchange quality in terms of performance, availability, security it's a rock solid database that's super trivial. So what used to be a very complex thing is now completely trivial for a developer. So they get the best of both worlds, they get everything on the database side and it it's trivial for them to use. >> Wow, if you're doing all that stuff for 'em are they going to do on their weekends? Code? 
(chuckles) >> They should be developing their application and add value to their company that's kind of what they should focus on. And they can be looking at all sorts of new technologies like JSON and the database machine learning in the database graph in the database. So you can build very sophisticated applications because you don't have to worry about the database anymore. >> All right, let's talk about the competition. So it's always a topic I like to bring up with you. From a competitive perspective how is this latest and instantiation of Exadata cloud a customer X9M how's this different from running an AWS database service for instance on outpost, or let's say I want to run SQL server on Azure Stack or whatever Microsoft's calling it these days. Give us the competitive angle here. >> Yeah, there kind of is no real competition. So both Amazon and Microsoft have an at customer solution but they're very primitive. I mean, just to give you an example like Amazon doesn't run any of their premier database offerings at customers. So whether it's Aurora Redshift, doesn't run just plane does not run. It's not that it runs badly or it's got limited, just does not run. They can't run Oracle RDS on premise and same thing with Microsoft. They can't run Azure SQL, which is their premier database on their act customer platform. So that kind of tells you how limited that platform is when even their own premier offerings doesn't run on it. In contrast, we're running Exadata with our premier autonomous database. So it's our premier platform that's in use today by most of the biggest, banks, telecom to retailers et cetera in the world, thousands of smaller customers. So it's super mission critical, super proven with our premier cloud database, which is autonomous theory. So it couldn't be more black and white, this is a case where it's there really is no competition in the cloud of customer space on the database side. >> Okay, but let me follow up on that, Juan, if I may, so, okay. So it took you guys a while to get to the cloud, it's taken them a while to figure it on-prem. I mean, aren't they going to eventually sort of get there? What gives you confidence that you'll be able to to keep ahead? >> Well, there's two things, right? One is we've been doing this for a long time. I mean, that's what Oracle initially started as an on-prem and our Exadata platform has been available for over a decade. And we have a ton of experience on this. We run the biggest banks in the world already, it's not some hope for the future. This is what runs today. And our focus has always been a combination of cloud and on-prem their heart's not really in the on-prem stuff they really like. Amazon's really a public cloud only vendor and you can see from the result, it's not you can say, they can say whatever they want but you can see the results. Their outpost platform has been available for several years now and it still doesn't even run their own products. So you can kind of see how hard they're trying and how much they really care about this market. >> All right, boil it down if you just had a few things that you'd tell someone about why they should run ADB on Exadata cloud at customer, what would you say? >> It's pretty simple, which is it's the world's most sophisticated database made completely simple, that's it? 
So you get a stock exchange level database, you can start really small and grow and it's completely trivial to run because Oracle is automated everything within our autonomous data we use machine learning and a lot of automation to automate everything around the database. So it's kind of the best of both worlds. The best possible database starts as small as you want and is the simplest database in the world. >> So I probably should have asked you this while I was pushing the competitive question but this may be my last question, I promise. It's the age old debate It rages on, you got specialized databases kind of a right tool for the right job approach. That's clearly where Amazon is headed or what Oracle refers to is converge database. Oracle says its approach is more complete and "simpler." Take us through your thinking on this and the latest positioning so the audience can understand it a bit better. >> Yeah, so apps aren't what they used to business apps, data driven apps aren't what they used to be. They used to be kind of green screens where you just entered data. Now everyone's a very sophisticated app, they want to be have location, they want to have maps, they want to have graph in there. They want to have machine learning, they want machine learning built into the app. So they want JSON they want text, they want text search. So all these capabilities are what a modern app has to support. And so what Oracle's done is we provided a single solution that provides everything you need to build a modern app and it's all integrated together. It's all transactional. You have analytics built into the same thing. You have reporting built into the same thing. So it has everything you need to build a modern app. In contrast, what most of our competitors do is they give you these little solutions, say, okay here you do machine learning over here, you do analytics over there, you do JSON over here, you do spatial over here you do graph over there. And then it's left a developer to put an app together from all these pieces. So it's like getting the pieces of a card and having to assemble it yourself and then maintain it for the rest of your life, which is the even harder part. So one part upgrades, you got to test that. So of other piece upgrade or changes, you got to test that, you got to deal with all the security problems of all these different systems. You have to convert the data, you have to move the data back and forth it's extraordinarily complicated. Our converge database, the data sits in one place and all the algorithms come to the data. It's very simple, it is dramatically simpler. And then autonomous database is what makes managing it trivial. You don't really have to manage anything more because Oracle's automated the whole thing. >> So, Juan, we got a pretty good Cadence going here. I mean I really appreciate you coming on and giving us these little video exclusives. You can tell by again, that Cadence how frequently you guys are making new announcements. So that's great, congrats on yet another announcement. Thanks for coming back in the program appreciate it. >> Yeah, of course we invest heavily in data management. That's our core and we will continue to do that. I mean, we're investing billions of dollars a year and we intend to stay the leaders in this market. >> Great stuff and thank you for watching the Cube, your leader in enterprise tech coverage, this is Dave Vellante we'll see you next time.
SUMMARY :
and of course the cloud be here today with you. Yeah, the big thing we're announcing What's the tangible benefit to them? So you don't just get one VM, Do they have to upgrade the hardware? and they can just deploy It's one of the reasons So on a system by system basis, you chose and it's obviously evolved And so that's the big customer I mean, especially the big and even the biggest and so even the big guys can't keep up. and the database backing So you can build very about the competition. So that kind of tells you how limited So it took you guys a and you can see from the result, So it's kind of the best of both worlds. and the latest positioning and all the algorithms come to the data. I mean I really appreciate you coming on and we intend to stay the you for watching the Cube,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
FedEx | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Experian | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Juan | PERSON | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
Telephonic | ORGANIZATION | 0.99+ |
20 CPUs | QUANTITY | 0.99+ |
Valencia | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two CPUs | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Cadence | ORGANIZATION | 0.99+ |
four questions | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
One | QUANTITY | 0.98+ |
thousands of deployments | QUANTITY | 0.98+ |
eight CPUs | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
Azure Stack | TITLE | 0.98+ |
MongoDB | TITLE | 0.98+ |
Azure SQL | TITLE | 0.97+ |
both worlds | QUANTITY | 0.97+ |
JSON | TITLE | 0.97+ |
over a decade | QUANTITY | 0.96+ |
next month | DATE | 0.96+ |
single solution | QUANTITY | 0.94+ |
Aurora Redshift | TITLE | 0.94+ |
one VM | QUANTITY | 0.94+ |
ADB | ORGANIZATION | 0.94+ |
SQL | TITLE | 0.94+ |
thousands of engineers | QUANTITY | 0.94+ |
100 | QUANTITY | 0.94+ |
one part | QUANTITY | 0.93+ |
billions of dollars a year | QUANTITY | 0.93+ |
up to 40% | QUANTITY | 0.93+ |
500 CPUVM | QUANTITY | 0.92+ |
one place | QUANTITY | 0.92+ |
couple of weeks ago | DATE | 0.92+ |
couple years ago | DATE | 0.87+ |
dozens | QUANTITY | 0.87+ |
Mongo | TITLE | 0.87+ |
Exadata | ORGANIZATION | 0.86+ |
Cube | ORGANIZATION | 0.85+ |
Juan Loaiza, Oracle | CUBE Conversation, September 2021
(bright music) >> Hello, everyone, and welcome to this CUBE video exclusive. This is Dave Vellante, and as I've said many times what people sometimes forget is Oracle's chairman is also its CTO, and he understands and appreciates the importance of engineering. It's the lifeblood of tech innovation, and Oracle continues to spend money on R and D. Over the past decade, the company has evolved its Exadata platform by investing in core infrastructure technology. For example, Oracle initially used InfiniBand, which in and of itself was a technical challenge to exploit for higher performance. That was an engineering innovation, and now it's moving to RoCE to try and deliver best of breed performance by today's standards. We've seen Oracle invest in machine intelligence for analytics. It's converged OLTB and mixed workloads. It's driving automation functions into its Exadata platform for things like indexing. The point is we've seen a consistent cadence of improvements with each generation of Exadata, and it's no secret that Oracle likes to brag about the results of its investments. At its heart, Oracle develops database software and databases have to run fast and be rock solid. So Oracle loves to throw around impressive numbers, like 27 million AKI ops, more than a terabyte per second for analytics scans, running it more than a terabyte per second. Look, Oracle's objective is to build the best database platform and convince its customers to run on Oracle, instead of doing it themselves or in some other cloud. And because the company owns the full stack, Oracle has a high degree of control over how to optimize the stack for its database. So this is how Oracle intends to compete with Exadata, Exadata Cloud@Customer and other products, like ZDLRA against AWS Outposts, Azure Arc and do it yourself solutions. And with me, to talk about Oracle's latest innovation with its Exadata X9M announcement is Juan Loaiza, who's the Executive Vice President of Mission Critical Database Technologies at Oracle. Juan, thanks for coming on theCUBE, always good to see you, man. >> Thanks for having me, Dave. It's great to be here. >> All right, let's get right into it and start with the news. Can you give us a quick overview of the X9M announcement today? >> Yeah, glad to. So, we've had Exadata on the market for a little over a dozen years, and every year, as you mentioned, we make it better and better. And so this year we're introducing our X9M family of products, and as usual, we're making it better. We're making it better across all the different dimensions for OLTP, for analytics, lower costs, higher IOPs, higher throughputs, more capacity, so it's better all around, and we're introducing a lot of new software features as well that make it easier to use, more manageable, more highly available, more options for customers, more isolation, more workload consolidation, so it's our usual better and better every year. We're already way ahead of the competition in pretty much every metric you can name, but we're not sitting back. We have the pedal to the metal and we're keeping it there. >> Okay, so as always, you announced some big numbers. You're referencing them. I did in my upfront narrative. You've claimed double to triple digit performance improvements. Tell us, what's the secret sauce that allows you to achieve that magnitude of performance gain? >> Yeah, there's a lot of secret sauce in Exadata. 
First of all, we have custom designed hardware, so we design the systems from the top down, so it's not a generic system. It's designed to run database with a specific and sole focus of running database, and so we have a lot of technologies in there. Persistent memory is a really big one that we've introduced that enables super low response times for OLTP. The RoCE, the remote RDMA over convergency ethernet with a hundred gigabit network is a big thing, offload to storage servers is a big thing. The columnar processing of the storage is a huge thing, so there's a lot of secret sauce, most of it is software and hardware related and interesting about it, it's very unique. So we've been introducing more and more technologies and actually advancing our lead by introducing very unique, very effective technologies, like the ones I mentioned, and we're continuing that with our X9 generation. >> So that persistent memory allows you to do a right directly, atomic right directly to memory, and then what, you update asynchronously to the backend at some point? Can you double click on that a little bit? >> Yeah, so we use persistent memory as kind of the first tier of storage. And the thing about persistent memory is persistent. Unlike normal memory, it doesn't lose its contents when you lose power, so it's just as good as flash or traditional spinning disks in terms of storing data. And the integration that we do is we do what's called remote direct memory access, that means the hardware sends the new data directly into persistent memory and storage with no software, getting rid of all the software layers in between, and that's what enables us to achieve this extremely low latency. Once it's in persistent memory, it's stored. It's as good as being in flash or disc. So there's nothing else that we need to do. We do age things out of persistent memory to keep only hot data in there. That's one of the tricks that we do to make sure, because persistent memory is more expensive than flash or disc, so we tier it. So we age data in and out as it becomes hot, age it out as it becomes cold, but once it's in persistent memory, it's as good as being stored. It is stored. >> I love it. Flash is a slow tier now. So, (laughs) let's talk about what this-- >> Right, I mean persistent memory is about an order of magnitude faster. Flash is more than an order of magnitude faster than disk drive, so it is a new technology that provides big benefits, particularly for latency on OLTP. >> Great, thank you for that, okay, we'll get out of the plumbing. Let's talk about what this announcement means to customers. How does all this performance, and you got a lot of scale here, how does it translate into tangible results say, for a bank? >> Yeah, so there's a lot of ways. So, I mentioned performance is a big thing, always with Exadata. We're increasing the performance significantly for OLTP, analytics, so OLTP, 50, 60% performance improvements, analytics, 80% performance improvements in terms of costs, effectiveness, 30 to 60% improvement, so all of these things are big benefits. You know, one of the differences between a server product like Exadata and a consumer product is performance translates in the cost also. If I get a new smartphone that's faster, it doesn't actually reduce my costs, it just makes my experience a little better. 
But with a server product like Exadata, if I have 50% faster, I can translate that into I can serve 50% more users, 50% more workload, 50% more data, or I can buy a 50% smaller system to run the same workload. So, when we talk about performance, it also means lower costs, so if big customers of ours, like banks, telecoms, retailers, et cetera, they can take that performance and turn it into better response times. They can also take that performance and turn it into lower costs, and everybody loves both of those things, so both of those are big benefits for our customers. >> Got it, thank you. Now in a move that was maybe a little bit controversial, you stated flat out that you're not going to bother to compare Exadata cloud and customer performance against AWS Outposts and Azure Stack, rather you chose to compare to RDS, Redshift, Azure SQL. Why, why was that? >> Yeah, so our Exadata runs in the public cloud. We have Exadata that runs in Cloud@Customer, and we have Exadata that runs on Prem. And Azure and Azure Stack, they have something a little more similar to Cloud@Customer. They have where they take their cloud solutions and put them in the customer data center. So when we came out with our new X8, 9M Cloud@Customer, we looked at those technologies and honestly, we couldn't even come up with a good comparison with their equivalent, for example, AWS Outpost, because those products really just don't really run. For example, the two database products that Outposts promote or that Amazon promotes is Aurora for OLTP and Redshift for analytics. Well, those two can't even run at all on their Outposts product. So, it's kind of like beating up on a child or something. (laughs) It doesn't make sense. They're out of our weight class, so we're not even going to compare against them. So we compared what we run, both in public cloud and Cloud@Customer against their best product, which is the Redshifts and the Auroras in their public cloud, which is their most scalable available products. With their equivalent Cloud@Customer, not only does it not perform, it doesn't run at all. Their Premiere products don't run at all on those platforms. >> Okay, but RDS does, right? I think, and Redshift and Azure SQL, right, will run a their version, so you compare it against those. What were the results of the benchmarks when you did made those comparisons? >> Yeah, so compared against their public cloud or Cloud@Customer, we generally get results that are something like 50 times lower latency and close to a hundred times higher analytic throughput, so it's orders of magnitude. We're not talking 50%, we're talking 50 times, so compared to those products, there really is kind of, we're in a different league. It's kind of like they're the middle school little league and we're the professional team, so it's really dramatically different. It's not even in the same league. >> All right, now you also chose to compare the X9M performance against on-premises storage systems. Why and what were those results? >> Yeah, so with the on-premises, traditionally customers bought conventional storage and that kind of stuff, and those products have advanced quite a bit. And again, those aren't optimized. 
Those aren't designed to run database, but some customers have traditionally deployed those, you know, there's less and less these days, but we do get many times faster both on OLTP and analytic performance there, I mean, with analytics that can be up to 80 times faster, so again, dramatically better, but yeah, there's still a lot of on-premise systems, so we didn't want to ignore that fact and compare only to cloud products. >> So these are like to like in the sense that they're running the same level of database. You're not playing games in terms of the versioning, obviously, right? >> Actually, we're giving them a lot of the benefit. So we're taking their published numbers that aren't even running a database, and they use these low-level benchmarking tools to generate these numbers. So, we're comparing our full end-to-end database to storage numbers against their low-level IO tool that they've published in their data sheets, so again, we're trying to give them the benefit of the doubt, but we're still orders of magnitude better. >> Okay, now another claim that caught our attention was you said that 87% of the Fortune 100 organizations run Exadata, and you're claiming many thousands of other organizations globally. Can you paint a picture of the ICP, the Ideal Customer Profile for Exadata? What's a typical customer look like, and why do they use Exadata, Juan? >> Yeah, so the ideal customer is pretty straightforward, customers that care about data. That's pretty much it. (Dave laughs) If you care about data, if you care about performance of data, if you care about availability of data, if you care about manageability, if you care about security, those are the customers that should be looking strongly at Exadata, and those are the customers that are adopting Exadata. That's why you mentioned 87% of the global Fortune 100 have already adopted Exadata. If you look at a lot of industries, for example, pretty much every major bank almost in the entire world is running Exadata, and they're running it for their mission critical workloads, things like financial trading, regulatory compliance, user interfaces, the stuff that really matters. But in addition to the biggest companies, we also have thousands of smaller companies that run it for the same reason, because their data matters to them, and it's frankly the best platform, which is why we get chosen by these very, very sophisticated customers over and over again, and why this product has grown to encompass most of the major corporations in the world and governments also. >> Now, I know Deutsche bank is a customer, and I guess now an engineering partner from the announcement that I saw earlier this summer. They're using Cloud@Customer, and they're collaborating on things like security, blockchain, machine intelligence, and my inference is Deutsch Bank is looking to build new products and services that are powered by your platforms. What can you tell us about that? Can you share any insights? Are they going to be using X9M, for example? >> Yes, Deutsche Bank is a partnership that we announced a few months ago. It's a major partnership. Deutsche Bank is one of the biggest banks in the world. They traditionally are an on-premises customer, and what they've announced is they're going to move almost the entire database estate to our Exadata Cloud@Customer platform, so they want to go with a cloud platform, but they're big enough that they want to run it in their own data center for certain regulatory reasons. 
And so, the announcement that we made with them is they're moving the vast bulk of their data estate to this platform, including their core banking, regulatory applications, so their most critical applications. So, obviously they've done a lot of testing. They've done a lot of trials and they have the confidence to make this major transition to a cloud model with the Exadata Cloud@Customer solution, and we're also working with them to enhance that product and to work in various other fields, like you mentioned, machine learning, blockchain, that kind of project also. So it's a big deal when one of the biggest, most conservative, best respected financial institution in the world says, "We're going all in on this product," that's a big deal. >> Now outside of banking, I know a number of years ago, I stumbled upon an installation or a series of installations that Samsung found out about them as a customer. I believe it's now public, but they've something like 300 Exadatas. So help us understand, is it common that customers are building these kinds of Exadata farms? Is this an outlier? >> Yeah, so we have many large customers that have dozens to hundreds of Exadatas, and it's pretty simple, they start with one or two, and then they see the benefits, themselves, and then it grows. And Samsung is probably the biggest, most successful and most respected electronics company in the world. They are a giant company. They have a lot of different sub units. They do their own manufacturing, so manufacturing's one of their most critical applications, but they have lots of other things they run their Exadata for. So we're very happy to have them as one of our major customers that run Exadata, and by the way, Exadata again, very huge in electronics, in manufacturing. It's not just banking and that kind of stuff. I mean, manufacturing is incredibly critical. If you're a company like Samsung, that's your bread and butter. If your factory stops working, you have huge problems. You can't produce products, and you will want to improve the quality. You want to improve the tracking. You want to improve the customer service, all that requires a huge amount of data. Customers like Samsung are generating terabytes and terabytes of data per day from their manufacturing system. They track every single piece, everything that happens, so again, big deal, they care about data. They care deeply about data. They're a huge Exadata customer. That's kind of the way it works. And they've used it for many years, and their use is growing and growing and growing, and now they're moving to the cloud model as well. >> All right, so we talked about some big customers and Juan, as you know, we've covered Exadata since its inception. We were there at the announcement. We've always stressed the fit in our research with mission critical workloads, which especially resonates with these big customers. My question is how does Exadata resonate with the smaller customer base? >> Yeah, so we talk a lot about the biggest customers, because honestly they have the most critical requirements. But, at some level they have worldwide requirements, so if one of the major financial institutions goes down, it's not just them that's affected, that reverberates through the entire world. There's many other customers that use Exadata. Maybe their application doesn't stop the world, but it stops them, so it's very important to them. 
And so one of the things that we've introduced in our Cloud@Customer and public cloud Exadata platforms is the ability for Oracle to manage all the infrastructure, which enables smaller customers that don't have as much IT sophistication to adopt these very mission critical technology, so that's one of the big advancements. Now, we've always had smaller customers, but now we're getting more and more. We're getting universities, governments, smaller businesses adopting Exadata, because the cloud model for adopting is dramatically simpler. Oracle does all the administration, all the low-level stuff. They don't have to get involved in it at all. They can just use the data. And, on top of that comes our autonomous database, which makes it even easier for smaller customers to adapt. So Exadata, which some people think of as a very high-end platform in this cloud model, and particularly with autonomous databases is very accessible and very useful for any size customer really. >> Yeah, by all accounts, I wouldn't debate Exadata has been a tremendous success. But you know, a lot of customers, they still prefer to roll their own, do it themselves, and when I talk to them and ask them, "Okay, why is that?" They feel it limits their reliance on a single vendor, and it gives them better ability to build what I call a horizontal infrastructure that can support say non-Oracle workloads, so what do you tell those customers? Why should those customers run Oracle database on Exadata instead of a DIY infrastructure? >> Yeah, so that debate has gone on for a lot of years. And actually, what I see, there's less and less of that debate these days. You know, initially customers, many customers, they were used to building their own. That's kind of what they did. They were pretty good at it. What we have shown customers, and when we talk about these major banks, those are the kinds of people that are really good at it. They have giant IT departments. If you look at a major bank in the world, they have tens of thousands of people in their IT departments. These are gigantic multi-billion dollar organizations, so they were pretty good at this kind of stuff. And what we've shown them is you can't build this yourself. There's so much software that we've written to integrate with the database that you just can't build yourself, it's not possible. It's kind of like trying to build your own smartphone. You really can't do it, the scale, the complexity of the problem. And now as the cloud model comes in, customers are realizing, hey, all this attention to building my own infrastructure, it's kind of last decade, last century. We need to move on to more of an as a service model, so we can focus on our business. Let enterprises that are specialized in infrastructure, like Oracle that are really, really good at it, take care of the low-level details, and let me focus on things that differentiate me as a business. It's not going to differentiate them to establish their own storage for database. That's not a differentiator, and they can't do it nearly as well as we can, and a lot of that is because we write a lot of special technology and software that they just can't do themselves, it's not possible. It's just like you can't build your own smartphone. It's just really not possible. >> Now, another area that we've covered extensively, we were there at the unveiling, as well is ZDLRA, Zero Data Loss Recovery Appliance. 
We've always liked this product, especially for mission critical workloads, we're near zero data loss, where you can justify that. But while we always saw it as somewhat of a niche market, first of all, is that fair, and what's new with ZDLRA? >> Yeah ZDLRA has been in the market for a number of years. We have some of the biggest corporations in the world running on that, and one of the big benefits has been zero data loss, so again, if you care about data, you can't lose data. You can't restore to last night's backup if something happens. So if you're a bank, you can't restore everybody's data to last night. Suppose you made a deposit during the day. They're like, "Hey, sorry, Mr. Customer, your deposit, "well, we don't have any record of it anymore, "'cause we had to restore to last night's backup," you know, that doesn't work. It doesn't work for airlines. It doesn't work for manufacturing. That whole model is obsolete, so you need zero data loss, and that's why we introduced Zero Data Loss Recovery Appliance, and it's been very successful in the market. In addition to zero data loss, it actually provides much faster restore, much more reliable restores. It's more scalable, so it has a lot of advantages. With our X9M generation, we're introducing several new capabilities. First of all, it has higher capacity, so we can store more backups, keep data for longer. Another thing is we're actually dropping the price of the entry-level configuration of ZDLRA, so it makes it more affordable and more usable by smaller businesses, so that's a big deal. And then the other thing that we're hearing a lot about, and if you read the news at all, you hear a lot about ransomware. This is a major problem for the world, cyber criminals breaking into your network and taking the data ransom. And so we've introduced some, we call cyber vault capabilities in ZDLRA. They help address this ransomware issue that's kind of rampant throughout the world, so everybody's worried about that. There's now regulatory compliance for ransomware that particularly financial institutions have to conform to, and so we're introducing new capabilities in that area as well, which is a big deal. In addition, we now have the ability to have multiple ZDLRAs in a large enterprise, and if something happens to one, we automatically fail over backups to another. We can replicate across them, so it makes it, again, much more resilient with replication across different recovery appliances, so a lot of new improvements there as well. >> Now, is an air gap part of that solution for ransomware? >> No, air gap, you really can't have your back, if you're continuously streaming changes to it, you really can't have an air gap there, but you can protect the data. There's a number of technologies to protect the data. For example, one of the things that a cyber criminal wants to do is they want to take control of your data and then get rid of your backup, so you can't restore them. So as a simple example of one thing we're doing is we're saying, "Hey, once we have the data, "you can't delete it for a certain amount of days." So you might say, "For the 30 days, "I don't care who you are. "I don't care what privileges you have. "I don't care anything, I'm holding onto that data "for at least 30 days," so for example, a cyber criminal can't come in and say, "Hey, I'm going to get into the system "and delete that stuff or encrypt it," or something like that. So that's a simple example of one of the things that the cyber vault does. 
>> So, even as an administrator, I can't change that policy? >> That's right, that's one of the goals is doesn't matter what privileges you have, you can't change that policy. >> Does that eliminate the need for an air gap or would you not necessarily recommend, would you just have another layer of protection? What's your recommendation on that to customers? >> We always recommend multiple layers of protection, so for example, in our ZDLRA, we support, we offload tape backups directly from the appliance, so a great way to protect the data from any kind of thing is you put it on a tape, and guess what, once that tape drive is filed away, I don't care what cyber criminal you are, if you're remote, you can't access that data. So, we always promote multiple layers, multiple technologies to protect the data, and tape is a great way to do that. We can also now archive. In addition to tape, we can now archive to the public cloud, to our object storage servers. We can archive to what we call our ZFS appliance, which is a very low cost storage appliance, so there's a number of secondary archive copies that we offload and implement for customers. We make it very easy to do that. So, yeah, you want multiple layers of protection. >> Got it, okay, your tape is your ultimate air gap. ZDLRA is your low RPO device. You've got cloud kind of in the middle, maybe that's your cheap and deep solution, so you have some options. >> Juan: Yes. >> Okay, last question. Summarize the announcement, if you had to mention two or three takeaways from the X9M announcement for our audience today, what would you choose to share? >> I mean, it's pretty straightforward. It's the new generation. It's significantly faster for OLTP, for analytics, significantly better consolidation, more cost-effective. That's the big picture. Also there's a lot of software enhancements to make it better, improve the management, make it more usable, make it better disaster recovery. I talked about some of these cyber vault capabilities, so it's improved across all the dimensions and not in small ways, in big ways. We're talking 50% improvement, 80% improvements. That's a big change, and also we're keeping the price the same, so when you get a 50 or 80% improvement, we're not increasing the price to match that, so you're getting much better value as well. And that's pretty much what it is. It's the same product, even better. >> Well, I love this cadence that we're on. We love having you on these video exclusives. We have a lot of Oracle customers in our community, so we appreciate you giving us the inside scope on these announcements. Always a pleasure having you on theCUBE. >> Thanks for having me. It's always fun to be with you, Dave. >> All right, and thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (bright music)
SUMMARY :
and databases have to run It's great to be here. of the X9M announcement today? We have the pedal to the metal sauce that allows you to achieve and so we have a lot of that means the hardware sends the new data Flash is a slow tier now. that provides big benefits, and you got a lot of scale here, and everybody loves both of those things, Now in a move that was maybe and we have Exadata that runs on Prem. and Azure SQL, right, and close to a hundred times Why and what were those results? and compare only to cloud products. of the versioning, obviously, right? and they use these of the Fortune 100 and it's frankly the best platform, is looking to build new and to work in various other it common that customers and now they're moving to and Juan, as you know, is the ability for Oracle to and it gives them better ability to build and a lot of that is because we write first of all, is that fair, and so we're introducing new capabilities of one of the things That's right, that's one of the goals In addition to tape, we can now You've got cloud kind of in the middle, from the X9M announcement the price to match that, so we appreciate you It's always fun to be with you, Dave. and we'll see you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Deutsche Bank | ORGANIZATION | 0.99+ |
Juan | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
Deutsche bank | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
September 2021 | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
50 times | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
30 days | QUANTITY | 0.99+ |
Deutsch Bank | ORGANIZATION | 0.99+ |
50% | QUANTITY | 0.99+ |
30 | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
50 | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
87% | QUANTITY | 0.99+ |
ZDLRA | ORGANIZATION | 0.99+ |
60% | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
last night | DATE | 0.99+ |
last century | DATE | 0.99+ |
first tier | QUANTITY | 0.99+ |
dozens | QUANTITY | 0.98+ |
this year | DATE | 0.98+ |
more than a terabyte per second | QUANTITY | 0.98+ |
Redshift | TITLE | 0.97+ |
Exadata | ORGANIZATION | 0.97+ |
First | QUANTITY | 0.97+ |
hundreds | QUANTITY | 0.97+ |
X9M | TITLE | 0.97+ |
more than a terabyte per second | QUANTITY | 0.97+ |
Outposts | ORGANIZATION | 0.96+ |
Azure SQL | TITLE | 0.96+ |
Azure Stack | TITLE | 0.96+ |
zero data | QUANTITY | 0.96+ |
over a dozen years | QUANTITY | 0.96+ |
Juan Loaiza, Oracle | CUBE Conversation 2021
(upbeat music) >> The innovation around databases has exploded over the last few years. Not only do organizations continue to rely on database technology to manage their most mission critical business data. But new use cases have emerged that process and analyze unstructured data. They share data at scale, protect data, provide greater heterogeneity. New technologies are being injected into the database equation. Not just cloud which has been a huge force in the space, but also AI to drive better insights and automation, blockchain to protect data and provide better auditability, new file formats to expand the utility of database technology and more. Debates are bound as to who's the best number one, the fastest, the most cloudy, the least expensive, et cetera. But there is no debate, when it comes to leadership and mission critical database technologies. That status goes to Oracle. And with me to talk about the developments of database technology in the market is cube alum Juan Loaiza, who's executive vice president of Mission Critical Database Technology at Oracle. Juan always great to see you, thanks for making some time. >> Thanks, great to see you Dave, always a pleasure to join you. >> Yeah and I hope you have some time because they've got a lot of questions for you. (chuckles) I want to start with- >> All right I love questions. >> Good I want to start and we'll go deep if you're up for it. I want to start with the GoldenGate announcement. We're covering that recent announcement, the service on OCI. GoldenGate it's part of this your super high availability capabilities that Oracle is so well known for. What do we need to know about the new service and what it brings for your customers? >> Yeah, so first of all, GoldenGate is all about creating real time data throughout an enterprise. So it does replication, data integration, moving data into analytic workloads, streaming analytics of data, migrating of databases and making databases highly available. All those are use cases for real-time data movement. And GoldenGate is really the leading product in the market, has been for many years. We have about 80% of the global fortune 500 running GoldenGate today, in addition to thousands and thousands of smaller customers. So it is the premier data integration, replication, high availability, anything involving moving data in real time, GoldenGate is the premier platform. And so we've had that available as a product for many years. And what we just recently done is we've released it as a cloud service, as a fully managed and automated cloud service. So that's kind of the big new thing that's happening right now. >> So is that what's unique about this, is it's now a service, or there are other attributes that are unique to Oracle? >> Yeah, so the service is kind of the most basic part to it. But the big thing about the service is it makes this product dramatically easier to use. So traditionally the data integration, replication products, although very powerful, also are very complex to use. And one of the big benefits of the service is we've made a dramatically simpler. So not just super experts can use it, but anyone can use it. And also as part of releasing it as a cloud service, we've done a number of unique things including making it completely elastically scalable, pay per use and dynamic scalability. So just in time, real time scalability. So as your workload increases we automatically increase the throughput of GoldenGate. 
So previously you had to figure all this stuff out ahead of time. It was very static. All these products have been very static. Now it's completely dynamic a native cloud product and that's very unique in the market. >> So, I mean, from an availability standpoint, I guess IBM sort of has this with Db2 but it doesn't offer the heterogeneity that GoldenGate has. But at what about like AWS, Microsoft, Google, do they provide services like, like GoldenGate? >> There's really nothing like the GoldenGate service. When you're talking about people like Google and Azure, they really have do it yourself third-party products. So there'll be a third party data integration replication product, and it's kind of available in their marketplace and customers have to do everything. So it's basically a put it together, your own kit. And it's very complicated. I mean these data integration products have always been complicated, and they're even more complicated in the cloud, if you have to do everything yourself. Amazon has a product but it's really focused on basic data migration to their cloud. It doesn't have the same capabilities as Oracle has. It doesn't have the elasticity, it doesn't have pay peruse, so it's really not very clavy at all. >> Well, so I mean the biggest customers have always glommed onto GoldenGate because they need that super ultra high availability. And they're capable of do it yourself. So, tell us how this compares to two DIY. >> Yeah, so you have mentioned the big customers so you're absolutely right. The big customers have been big users of GoldenGate. Smaller customers or users as well, however, it's been challenging because it's complicated. Data integration has been a complicated area of data management. More and most complicated. And so one of the things this does, is that it expands the market. Makes it much dramatically easier for smaller companies that don't have as many it resources to use the product. Also, smaller companies obviously don't have as much data as the really large giants. So they don't have as much data throughput. So traditionally the price has been high for a small customer. But now, with pay per use in the cloud, it eliminates the two big blockers for smaller enterprises. Which are the costs, the high fixed costs and the complexity of the products. So in which, by the way, it's helpful for everyone also. And for big customers they've also struggled with elasticity. So sometimes a huge batch job will kick in, the rate of change increases and suddenly the replication product doesn't keep up. Because on-prem products aren't really very elastic. So it helps large customers as well. Everybody loves these reviews but the elasticity pay per use, on demand nature of it's really helpful for everybody. >> Well, and because it's delivered as a service I would imagine for the large customers that you're giving them more granularity, so they can apply it maybe for a single application, as opposed to trying to have to justify it across a whole suite. And because the cost is higher, but now if you're allowing me to pay by the drink, is that right? I could just sort of apply it in a more granular level. >> Yes, that's exactly right. It's really pay per use. You can use it as much or as little as you want. You just pay for what you use. And as I mentioned, it's not a static payment either. So if you have a lot of data loads going on and right now you pay a little more, at night when you have less going on, you pay a lot less. So you really just paying for what use. 
It's very easy to set it up for a single application or all your applications. >> How about for things like continuous replication or real-time analytics, is the service designed to support that? >> Yes, so that's the heritage of GoldenGate. GoldenGate has been around for decades and we've worked with some of the most demanding customers in the world on exactly those things. So real time data all over the enterprise is really the goal that everyone wants. Real-time data from OLTP to analytics, from one system to another system, and for availability. That is the key benefit of GoldenGate. And that's the key technology that we've been working on for decades. And now we have it very easy to use in the cloud. >> Well, what would be the overheads associated with that? I mean, for instance, you've got it, you need a second copy. You need the other database copies, and where does it make sense to incur that overhead? Obviously the super high availability apps that can exploit real time. Fraud detection is the obvious one, but what else can you add there? >> Well, GoldenGate itself doesn't require any extra copies of anything. However, it does enable customers that want to create, for example, an analytics system, a data warehouse, to feed data from all their systems in real time into that data warehouse. And it also enables real-time capabilities, enables high availability, and you can get high availability within the cloud with it, between on premises and the cloud, between clouds. Also, you can migrate data. Migrate databases without having to take them down. So all these capabilities are available now and they're very easy to use. >> Okay. Thanks for that clarification. What about autonomous? Is that on the roadmap, or what are you thinking? >> Yeah, GoldenGate is essentially an autonomous service. And it works with the Oracle Autonomous Database. So you can both use it as a source for data and as a sink for data, as a place you're writing data. So for example, you can have an autonomous OLTP database that's replicating to another autonomous OLTP database in real time. And both of them are replicating changes to the autonomous data warehouse. But it doesn't all have to be autonomous. You can have any mix of autonomous, not autonomous, on-prem, in cloud, in anybody's cloud. So that's the beauty of GoldenGate, it's extremely flexible. >> Well, you mentioned the elasticity a couple of times. I mean, why is that so important, that GoldenGate on OCI gives you that elastic billing, the auto-scaling? Talk to me in terms of what that does for the customer. >> Yeah, there's really two big benefits. One benefit is it's very difficult to predict workloads. So normally on an on-prem configuration, you have to say, okay, what is the max possible workload that's going to happen here? And then you have to buy the product, configure the product, get hardware, basically size everything for that. And then if you guess wrong, you're either spending too much because you oversized it, or you have a big real-time data problem. The data can't keep up in real time because you've undersized the configuration. So that's hard to do. So the beauty of elasticity and the dynamic elasticity, the pay per use, is you don't have to figure all this stuff out. So if you have more workload, we grow it automatically. If you have less workload, we shrink it automatically. And you don't have to guess ahead of time. You don't have to price ahead of time.
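The over- and under-provisioning trade-off described here can be made concrete with some back-of-the-envelope arithmetic. All numbers below are made up purely for illustration; the point is only the shape of the comparison between sizing statically for peak and paying for actual hourly usage.

```python
# Hypothetical hourly workload (units of replication throughput needed per hour).
hourly_demand = [2, 2, 2, 3, 8, 20, 20, 6, 3, 2, 2, 2]  # a bursty half-day
unit_price = 1.0   # made-up cost per throughput-unit-hour

# Static model: provision for the peak all the time.
static_capacity = max(hourly_demand)
static_cost = static_capacity * len(hourly_demand) * unit_price

# Elastic pay-per-use model: pay only for what each hour actually consumes.
elastic_cost = sum(hourly_demand) * unit_price

print(f"static (sized for peak): {static_cost:.0f}")
print(f"elastic (pay per use):   {elastic_cost:.0f}")
# With these made-up numbers the static configuration costs roughly 240 units
# versus 72, and it would still fall behind if a new peak ever exceeded the
# original sizing guess.
```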
So you just use what you use, right? You don't pay for something that you're not using. So it's a very big change in the whole model of how you use these data replication, integration, high availability technologies. >> Well, I think I'm correct to say GoldenGate primarily has been for big companies. You mentioned that small companies can now take advantage of this service. We talked about the granularity. And I could definitely see, can they afford it? I guess that's part one, and then the other part of the question is, I can see GoldenGate really satisfying your on-prem customers and them taking advantage of it, but do you think this will attract new customers beyond your core? So two part question there. >> Yeah, absolutely. So small customers have been challenged by the complexity of data integration. And that's one of the great things about the cloud service, it's dramatically simpler. So Oracle manages everything. Oracle does the patching, the upgrades. Oracle does the monitoring. It takes care of the high availability of the product. So all that management complexity, all the configuration setup, everything like that, that's all automated, that's owned by Oracle. So small customers were always challenged by the complexity of the product, along with everything else that they had to do. And then the other benefit, of course, is small customers were challenged by the large fixed price. So now with pay per use, they pay only for what they use. It's really easily usable by small customers also. So it really expands the market and makes it more broadly applicable. >> So kind of the same answer for beyond your existing customer base, beyond the on-prem, that's kind of... You answered >> Right. >> my two part question with one answer, so that was pretty efficient, (chuckles) pun intended. So the bottom line for me, squinting through this announcement, is you've got the heterogeneity piece with GoldenGate on OCI and as such it's going to give you the capability to create what I'll call an architecturally coherent decentralized data mesh. We're big on this data mesh these days, kind of decentralized data. With the proviso that I'm going to be able to connect to OCI, which of course you can do with Azure, or I guess you could bring Cloud at Customer on prem. First of all, is this correct? And can we expect you over time to do this with AWS or other cloud providers? >> It can move data from Amazon or to Amazon. It can actually handle any data wherever it lives. So, yeah, it's very flexible, and it's really just the automation of all the management that we're running in our public cloud. But the data can be from anywhere to anywhere. >> Cool, all right, let's switch topics here a little bit. Just talk about some of the things that you've been working on, some of the innovation. I sat through your blockchain announcement, it was very cool. Of course I love anything blockchain and crypto, NFTs are exploding, the Coinbase IPO. It's just really an exciting time out there. I think a lot of people don't really appreciate the innovation that's occurring. So you've been making a lot of big announcements the last several months. You've been taking your R&D and bringing it into product, so that's great, we always love to see that because that's where the rubber really meets the road. Just for the database side of the house, you announced 21c, the next generation of the self-driving data warehouse, ADW, blockchain tables, and now you've got GoldenGate running on OCI.
Take us inside the development organizations. What are the underlying drivers, other than your boss? >> When we talk about our autonomous database, it is the mission critical Oracle database, but it's dramatically easier to use. So Oracle does all the management, all the automation, but also we use machine learning to tune, and to make it highly available, and to make it highly secure. So that's been one of our biggest products we've been working on for many years. And recently we enhanced our autonomous data warehouse, taking it beyond being a data warehouse to a complete data analytics platform. So it includes things like ETL. So we built ETL into the autonomous data warehouse. We're building our GoldenGate replication into autonomous data warehousing. We built machine learning directly, natively into the database. So now, if someone wants to run some machine learning, they just run machine learning queries. They no longer have to stand up a separate system. So a big move that we've been making is taking it beyond just a database to a full analytic platform. And this goes beyond what anyone else in the industry is doing, because we have a lot more technology. So for example, the ML, machine learning, directly in the database, the ETL directly in the database. The data replication is directly in the database. All these things are very unique to Oracle. And they dramatically simplify for customers how they manage data. In addition to that, we've also been working on our database product. We've enhanced it tremendously. So our big goal there is to provide what we call a converged database. So everything you need, all the data types. Whether it's JSON, relational, spatial, graph, all the different kinds of data types, all the different kinds of workloads. Analytics, OLTP, things like blockchain, microservices, events, all built into the Oracle database, making it dramatically easier to both develop and deploy new applications. So those are some of our big, big goals. Make it simple, make it integrated. Take the complexity, we'll take on the complexity. So developers and customers find it easy to develop and easy to use. And we've made huge strides in all these areas in the last couple of years. >> That's awesome. I wonder if we could land on blockchain again, now that everything's kind of jumping, sort of, on crypto. Though you're not about crypto, but you are about applying blockchain. Maybe you can help our audience understand what are some of the real use cases where blockchain tech can be used with Oracle database. >> Yeah, so that's a very interesting topic. As you mentioned, blockchain is very current, we see a lot of cryptocurrencies and distributed applications for blockchain. So in general, in the past, we've had two worlds. We've had the enterprise data management world and we've had the blockchain world. And these are very distinct, right? And on the blockchain side the applications have mostly centered around distributed multi-party applications, right? So where you have multiple parties that all want to reach consensus and then that consensus is stored in a blockchain. So that's kind of been the focus of blockchain. And what we've done is very innovative. We're the first company to ever do this. We've taken the core architectural ideas, and really a lot of it has to do with the cryptography of blockchain, and we've engineered that natively into the mainstream Oracle database. So now in mainstream Oracle database, we have blockchain technology built in.
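For a rough sense of what "blockchain technology built in" can look like from an application, here is a hedged sketch using the python-oracledb driver and the CREATE BLOCKCHAIN TABLE syntax documented for recent 19c updates and 21c; availability, retention clauses, and exact error behavior depend on the release and patch level. The connection details and table name are invented for the example, and the error handling is only indicative.

```python
import oracledb  # python-oracledb driver (successor to cx_Oracle)

# Hypothetical connection details for this sketch.
conn = oracledb.connect(user="appuser", password="***", dsn="mydb_high")
cur = conn.cursor()

# Blockchain tables are insert-only: rows are chained with cryptographic
# hashes, and UPDATE/DELETE are rejected by the database itself.
cur.execute("""
    CREATE BLOCKCHAIN TABLE payment_ledger (
        payment_id NUMBER,
        amount     NUMBER,
        paid_on    DATE
    )
    NO DROP UNTIL 31 DAYS IDLE
    NO DELETE UNTIL 16 DAYS AFTER INSERT
    HASHING USING "SHA2_512" VERSION "v1"
""")

cur.execute("INSERT INTO payment_ledger VALUES (1, 250, SYSDATE)")
conn.commit()

try:
    # Even a fully privileged user cannot rewrite history in this table.
    cur.execute("UPDATE payment_ledger SET amount = 1 WHERE payment_id = 1")
except oracledb.DatabaseError as exc:
    print("update rejected:", exc)
```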
And it's dramatically simpler to use. And the use cases, you asked about the use cases, that's what we've done. And it's taken us about five years to do this. Now it's been released into the market in our mainstream 19c Oracle database. So the use case is different from the conventional blockchain use case, which I mentioned was really multi-party consensus-based apps. We're trying to make blockchain useful for mainstream enterprise and government applications. So any kind of mainstream government application, or enterprise application. And that idea of blockchain, the core concept of blockchain, is it addresses a different kind of security problem. So when you look at conventional security, it's really trying to keep people out. So we have things like firewalls, passwords, network encryption, data encryption. It's all about keeping bad people out of the data. And there's really two big problems that it doesn't address well. One problem is that there's always new security exploits being published. So you have hackers out there that are working overtime. Sometimes they're nation-states that are trying to attack data providers. And every week, every month there's a new security exploit that's discovered, and this happens all the time. So that's one big problem. So we're building up these elaborate walls of protection around our core data assets. And in the meantime, we have basically barbarians attacking on every side. (chuckles) And every once in a while, they get over the walls, and this is just what's happening. So that's one big problem. And the second big problem is illicit changes made by people with credentials. So sometimes you have an insider in your company. Whether it's an administrator or a sales person, a support person, that has valid credentials, but then uses those valid credentials in some illicit way. They go out and change somebody's data for their own gain. And even more common than that, because there's not that many bad guys inside the company, though they exist, is stolen credentials. So what's happened in many cases is hackers or nation-states will steal, for example, administrative credentials and then use those administrative credentials to come into a system and steal data. So that's the kind of problem that is not well addressed by security mechanisms. If you have privileges, the security mechanism says, yeah, you're fine. If somebody steals your privileges, again, you get a pass through the gate. And so what we've done with blockchain is we've taken the cryptography elements of blockchain. We call it crypto secure data management. And we've built those into the Oracle database. So think of it this way. If someone actually makes it over the walls that we've built and into the core data, what we've done with that cryptographic technology of blockchain is we've made that immutable. So you can't change it. So even if you make it over the gate you can't get into the core data assets and change those assets. And that's now built into the Oracle database and is super easy to adopt. And I think it's going to really enhance and expand the community of people that can actually use that blockchain technology. >> I mean, that's awesome. I could talk all day about blockchain. And I mean, when you think about hackers, it's all there. They're all about ROI, value over cost. And if you can increase the denominator they're going to go somewhere else, right? Because the value will decline. And this is really the intersection of software engineering and cryptography.
And I guess even when you bring cryptocurrency into it, it's like sort of the game theory. That's really kind of not what you're all about, but the first two pieces are really critical in terms of just the next generation of raising that security hurdle. Love it. Now, go ahead. >> Yeah, it's a different approach. I was just going to say, it's a different approach. Because think about trying to keep people out with things like passwords and firewalls; you can have basically bugs in that software that allow people to exploit and get in. When you're talking about cryptography, that's math, it's very difficult. I mean, you really can't get past math. Once the data is cryptographically protected on a blockchain, a hacker can't really do anything with that. It's just, math is math. There's nothing you can do to break it, right. It's very different from trying to get through some algorithm that's really trying to keep you out. >> Awesome. As I said, I could talk forever on this topic. But let me go into some competitive dynamics. You recently announced Autonomous Data Warehouse. You've got service capabilities that are really trying to appeal to the line of business. I want to get your take on that announcement and specifically how you think it compares. Name names. I'm going to name names, you don't have to. But Snowflake, obviously a lot of momentum in the marketplace. AWS with Redshift is doing very, very well. Obviously there are others. But those are two prominent ones that we've tracked in our data that show momentum. How do you compare? >> Yeah, so there's a number of different ways to look at the comparison. So the simplest and most straightforward is there's a lot more functionality in Oracle data warehousing. Oracle has been doing this for decades. We have a lot of built-in functionality. For example, machine learning natively built into the database makes it super easy to use. We have mixed workloads, we have spatial capabilities. We have graph capabilities. We have JSON capabilities. We have microservices capabilities. We have-- So there's a lot more capabilities. So that's number one. Number two, our cloud service is dramatically more elastic. So with our cloud service all you really do is you basically move the slider. You say hey, I want more resources, I want less resources. In fact, we'll do that automatically, that's called auto-scaling. In contrast, when you look at people like Snowflake or Redshift, they want you to stand up a new cluster. Hey, you have some more workload on Monday, stand up another cluster, and then we'll have two sets of clusters, or maybe you want a third cluster, maybe you want a fourth cluster. So you end up with all these different systems, which is how they scale. They say, hey, I can have multiple sets of servers access the same data. With Oracle you don't have to even think about those things. We auto-scale; you get more workload, we just give it more resources. You don't even have to think about that. And then the other thing is we're looking at the whole data management end to end problem. So starting with capturing the data, moving the data in real time, transforming the data, loading the data, running machine learning and analytics on the data. Putting all kinds of data in a single place so that you can do analytics on all of it together. And then having very rich capabilities for viewing the data, graphing the data, modeling the data, all those things. So it's all integrated. It makes it super easy to use.
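As a rough illustration of the "machine learning is just a query" point, the sketch below scores rows with Oracle's SQL PREDICTION functions from Python via the python-oracledb driver. The connection details, the churn_model model, and the customers table are all hypothetical; a model like this would have to be trained in-database beforehand, and exact capabilities depend on the database version and options.

```python
import oracledb  # python-oracledb driver

# Hypothetical connection to an Autonomous Database instance.
conn = oracledb.connect(user="analytics", password="***", dsn="mydb_high")

# Scoring an in-database model is just SQL: no separate ML system to stand up.
# 'churn_model' and 'customers' are placeholder names for this sketch.
sql = """
    SELECT cust_id,
           PREDICTION(churn_model USING *)             AS predicted_churn,
           PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
    FROM   customers
    ORDER  BY churn_probability DESC
    FETCH  FIRST 10 ROWS ONLY
"""

with conn.cursor() as cur:
    cur.execute(sql)
    for cust_id, flag, prob in cur:
        print(cust_id, flag, round(prob, 3))
```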
So a much easier, much more functionality and much more elastic than any of our competitors in the market. >> Interesting, thank you for those comments. I mean, it's a different world, right? I mean, you guys got all the market share, they got all the growth, those things over time, you've been around, you see it, they come together and you fight it out, and may the best approach win. >> So we'll be watching. >> Yeah, also I forgot to mention the obvious thing, which is Oracle runs everywhere. So you can run Oracle on premises. You can run Oracle on the public cloud. You can run what we call Cloud at Customer. Our competitors really are just public cloud only. So customers don't get the choice of where they want to run their data warehouse. >> Now Juan, a while ago I sat down with David Floyer and Marc Staimer. We reviewed how Gartner looks at the marketplace and it wasn't a surprise that when it came to operational workloads, Oracle stood out. I mean, that's kind of an understatement relative to the major competitors. Most of our viewers, I don't think, expected, for instance, Microsoft or AWS to be that far away from you. But at the same time, the database magic quadrant maybe didn't reflect that gap as widely. So there's some dissonance there; the detailed workload drill-downs were dramatic. And I wonder what your take is on the results. I mean, obviously you're happy with them. You came out leading in virtually every category, or you were one and two, and some of that was even the non-mission critical operational stuff. But what can you add to my narrative there? >> Yeah, so Gartner, first of all, we're talking about cloud databases. >> Right. >> Right, so this is not on premises databases, this is pure cloud databases. And what they did is they did two things. One is, the main thing was a technical rating of the databases, of the cloud databases. And there's other vendors that have had databases in the cloud for longer than we have. But in the most recent Gartner analysis report, as you mentioned, Oracle came out on top for cloud database technology, in almost every single operational use case including things like Internet of Things, things like JSON data, variable data, analytics, as well as traditional OLTP and mixed workloads. So Oracle was rated the highest technology, which isn't a big surprise. We've been doing this for decades. Over 90% of the global Fortune 500 run Oracle. And there's a reason, because this is what we're good at. This is our core strength. Our availability, our security, our scalability, our functionality, both for OLTP and analytics. All the capabilities, built-in machine learning, graph analytics, everything. So even when we compare narrowly things like Internet of Things or variable data against niche competitors where that's all they do, we came out dramatically ahead. But what surprised a lot of people is how far ahead of some of the other cloud vendors, like Amazon, like Azure, like Google, Oracle came out in the cloud database category. So a lot of people think, well, some of these other pure cloud vendors must be ahead of Oracle in cloud database. But actually not. I mean, if you look at the Gartner analyst report, it was very clear: Oracle was dramatically ahead of their cloud database technologies with our cloud database. >> So I'm pretty much out of time, but last question.
I've had some interesting discussions lately, and we've pointed out for years in our research that of course you're delivering the entire stack: the database, parts of the infrastructure, the applications, you have the whole engineered systems strategy. And for the most part you're kind of unique in this regard. I mean, Dell just announced that it's spinning off VMware, and it could have gone the other direction and become a more integrated hardware and software player for the data center. But look, it's working for Dell based on the reaction from the street post announcement. Cisco, they've got a hardware and software model that's sort of integrated, but the company's value peaked back in the dot-com boom and it's been very slow to bounce back. But my point is, for these companies the street doesn't value the integrated model. Oracle is kind of the exception. You know, it's trading at all-time highs, I know you're not going to comment on the stock price, and I guess SAP, until it missed and guided conservatively, was kind of on a good trajectory. So I'm wondering, why do you think Oracle's strategy resonates with investors, but not so much for those companies? Is it because you have the applications piece? I mean, maybe that's kind of my premise for SAP, but what's your take? Why is it working for you? >> Well, okay. I think it's pretty simple, which is some of our competitors, for example, they might have a software product and a hardware product. But mostly those are acquired, and they're separate products that just happen to be in a portfolio. They are not a single company with a single vision and joint engineering going on. It's really, hey, I got the software over here, I got the hardware over there, but they don't really talk to each other, they don't really work together. They're not trying to develop something where the stack is actually not just integrated but engineered together. And that is really the key. Oracle focuses on data management top to bottom. So we have everything from our ERP, CRM applications talking to our database, talking to our engineered systems, running in our cloud. And it's all completely engineered together. So Oracle doesn't just acquire these things and kind of glue them together. We actually engineer them, and that's fundamentally the difference. You can buy two things and have them as two separate divisions in your company, but it doesn't really get you a whole lot. >> Juan, it's always a pleasure, I love these conversations and hope we can do more in the future. Really appreciate your time. Thanks for coming to theCUBE. >> Pleasure, Dave, nice to talk to you. >> All right, keep it right there, everybody. This is Dave Vellante for theCUBE, we'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Juan Loaiza | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Juan | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Dell | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Monday | DATE | 0.99+ |
two things | QUANTITY | 0.99+ |
One problem | QUANTITY | 0.99+ |
Mark steamer | PERSON | 0.99+ |
One benefit | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
OCI | ORGANIZATION | 0.99+ |
fourth cluster | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
one answer | QUANTITY | 0.99+ |
third cluster | QUANTITY | 0.99+ |
one big problem | QUANTITY | 0.99+ |
two big problems | QUANTITY | 0.99+ |
two sets | QUANTITY | 0.99+ |
Coinbase | ORGANIZATION | 0.99+ |
two part | QUANTITY | 0.99+ |
about five years | QUANTITY | 0.98+ |
two big benefits | QUANTITY | 0.98+ |
first company | QUANTITY | 0.97+ |
two separate divisions | QUANTITY | 0.97+ |
Over 90% | QUANTITY | 0.97+ |
GoldenGate | ORGANIZATION | 0.97+ |
second copy | QUANTITY | 0.97+ |
David foyer | PERSON | 0.97+ |
first two pieces | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
two big blockers | QUANTITY | 0.96+ |
single application | QUANTITY | 0.96+ |
Juan Tello, Deloitte | Informatica World 2018
>> Live from Las Vegas, it's theCUBE covering Informatica World 2018. Brought to you by Informatica. >>
I am Peter Burris. Welcome back to day two coverage on theCUBE of Informatica World 2018. We're broadcasting from the Venetian here in beautiful Las Vegas. Certainly a lot of excitement, a lot of buzz; we just heard the general session, probably 1000 people in the room, excited to be here this morning. We're being joined by my co-host Jim Kobielus. Jim is lead analyst at Wikibon, SiliconANGLE, looking at a lot of the data and data practice issues. And our first guest is Juan Tello. Juan is a principal of data management and architecture at Deloitte. Juan, welcome to theCUBE. >> Great, thank you guys. Thank you guys for having me. >> So let's kick it off. What do you do at Deloitte? What's interesting? What are customers talking about? >> Yeah, no, absolutely. I mean, I think you know, we are absolutely at what I would call an inflection point around the importance of data. And so my role at Deloitte is to lead our data management and architecture practice, which essentially deals with everything from data strategy to execution and how we enable all their transformational initiatives, right, to truly take advantage of the power that data has to unlock, you know, better business processes, to unlock better insights, right, to take better action, right? I mean everything that we've been historically talking about, right? In terms of what can organizations do around their data asset? My job is to ensure that we are leading, guiding, driving and developing these solutions for our clients. >> So here's a simple question, just kind of kick it off and see where it goes. We think that data is becoming more important. Do you think the data is becoming more important? Are you finding yourself still talking to people that are data administrators, or are you finding yourselves being pulled into higher level conversations within the business? Talk about data assets, information, data asset returns. How is that changing? >> I would say it's evolving, right? I mean, so I have the privilege of running our practice nationally, right? So I have the approach of looking at all of the various industries and sectors, right. And so I think, you know, if you take the financial services, life sciences, healthcare industries, right, where there's a lot more regulatory demand on data, ensuring that you know what it is, where it's coming from, it's got the right data standards and quality, I would say they've gotten it long ago, right? And they've put in place data management organizations. We hear about the chief data officer, right? I would say those industries and sectors are a lot more prominent, and so the conversations are absolutely at the executive level, right? There is an executive owner that's responsible for ensuring that the data is correct. >> Tell us about the changing data landscape, Juan. Why do enterprises need to change their data strategy and architecture? What are you hearing from clients? >> Yeah, I think it's quite simple, right? It is to absolutely enable their business strategy, right. You can no longer enable your business strategy without the data dimension, right? I mean, for many, many years we've talked about, you know, people, process, technology, right? Well, now there's a fourth dimension, right? People, process, technology and data, and that's how we like to think about it. It's that important, right?
You need that executive, and I'll use two words very, very distinctly, right. You don't need an executive data sponsor. You need an executive data owner. Right? And that's the transformation, right, and the evolution that we're seeing in the market and that we're actually advocating for, right, to truly unlock that business strategy, that business outcome that they're looking for. >> So let's talk about, if we're gonna do that, then we need tools to do it. >> Yeah, absolutely. >> So we're talking about data, we're talking about data owners, we're talking about practices to actually create, generate value out of data. That's not something we're going to do manually, right? Talk about some of the tools generally that your clients are starting to apply to improve their productivity in doing these things. >> Yeah, I mean, I would say there's a sort of standard spectrum of data management tools, right, from, you know, the database to master data management to quality to metadata management, right. So each of these technical capabilities and tools, right, provide the capabilities required to manage that sort of data supply chain, right? There are infinite sources of data and there's infinite sources of demand, right? And it is the responsibility of, you know, the data management organization to manage that supply chain. And obviously you need tools and you need technology to sort of support that entire life cycle. >> What is the one thing that you tell clients they need to do with their data in order to stay competitive? Is there one imperative thing that they all need to do with their data just to stay in the thick of whatever it is they do in their industry? >> Yes. So the one thing I always advise our clients is all data is not created equal, right? So find and identify the data that truly drives value for your organization. Because that's been, I would say, one of the biggest challenges in this space, right, is everyone's drowning in data, right? And so to bring all these capabilities for your entire, you know, sort of landscape in your organization, it's massive, right? It's just too big, right? So tie value and outcomes to the data that matters, right? So I'll give an example, right? So in retail, right, I mean their value is around knowing their customers and the products that they sell to those customers, right? So let's start double clicking underneath that and figuring out and ensuring that that data, right, has all the right standards, is up to quality, so it can meet those business strategies, right? Don't go after everything, right; map business outcome and value to the data that supports that. >> What's the role of the chief data officer and the other C level executives in driving that sort of transformation? Yeah. How is their role changing? >> So I would say the chief data officer role is again evolving and still maturing. Not everyone has it, but I do see them as one of the next executive C level roles that will truly be a catalyst for change and innovation. Right, where, you know, I think we traditionally think about the CTO or the CIO or the chief strategy officer, right? Sort of back to the now four dimensions. It's no longer three. Their ability to understand the business strategy, understand where their data is to support that, and bring new, innovative ways to enable that, right? So it's absolutely critical. >> So what we think ultimately, and I'll test this on you, is that a chief is an executive that's responsible for demonstrating that they're generating return on shareholder capital. >> Exactly.
>> The chief data officer, therefore, would be the individual that's demonstrating that they're generating return on the company's data assets. When you take an asset approach, you could think about portfolio. But think about portfolio now. You're discriminating which data is most valuable, which data is less valuable. If you agree, that suggests that there is a new class of tool that has to be brought in around this notion of portfolio, catalogs, master data management. Give us a sense of that kind of new tool kit that's gonna be at the core of not just managing data inside an application like a DBMS, right, but something that's actually managing data assets. >> Right. Yeah, I think it's the entire ecosystem of how we bring it together and how we create what I would say are products and services around data, right, so back to this construct of you're managing the data supply chain, right? And so the responsibility of the CDO, and how you measure and manage that to, you know, outcomes, right, and shareholder value, is I've just created a product around this data, and we talked a lot about data monetization. And I would say it's from an outside-in perspective: am I selling my data? Am I making money? Right? Well, and of course, that's one angle. But I would say there's also the inside-out view where you're monetizing to create value back to your organization, right? So increase, you know, customer sales, right? Reduce churn, right. All those things matter. And so tie data products to those business outcomes. I think that's how you get to, you know, the return on investment and shareholder value as it relates to this role and the products and services that it's creating. >> All right, we're out of time. Juan Tello, principal of data architecture and management, data management and architecture, sorry, at Deloitte. Thank you very much for being on the Cube. >> Thank you. >> All right, so we'll be right back with another event or another segment from Informatica World 2018 here in Las Vegas.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Deloitte | ORGANIZATION | 0.99+ |
Juan Tello | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Jim Kabila | PERSON | 0.99+ |
two words | QUANTITY | 0.99+ |
Jim | PERSON | 0.99+ |
1000 people | QUANTITY | 0.99+ |
Dramatic World 2018 | EVENT | 0.98+ |
Lloyd | ORGANIZATION | 0.98+ |
one angle | QUANTITY | 0.98+ |
each | QUANTITY | 0.98+ |
Wicked Bond | ORGANIZATION | 0.97+ |
first guest | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
Monica | PERSON | 0.95+ |
Day two | QUANTITY | 0.95+ |
one thing | QUANTITY | 0.91+ |
Informatica World 2018 | EVENT | 0.9+ |
this morning | DATE | 0.88+ |
Deloitte one | ORGANIZATION | 0.87+ |
four dimensions | QUANTITY | 0.82+ |
one imperative thing | QUANTITY | 0.82+ |
three | QUANTITY | 0.81+ |
Attica | ORGANIZATION | 0.78+ |
1/4 | QUANTITY | 0.76+ |
DSO | ORGANIZATION | 0.75+ |
Venetian | OTHER | 0.67+ |
Peter Burroughs | ORGANIZATION | 0.62+ |
World 2018 | EVENT | 0.5+ |
Silicon | LOCATION | 0.48+ |
Angle | ORGANIZATION | 0.47+ |
Cube | COMMERCIAL_ITEM | 0.36+ |
One | ORGANIZATION | 0.32+ |
Juan Vega, Dell EMC | VMworld 2017
>> Announcer: Live from Las Vegas. It's the Cube. Covering VMworld 2017, brought to you by VMware and its ecosystem partners. (techno music) >> Okay, welcome back everyone, we are live here in Las Vegas for VMworld 2017. We are on the floor. I'm John Furrier with the Cube with Dave Vellante, our next guest is Juan Vega, director of ready solutions product management for Dell EMC. Welcome to the Cube. >> Thank you. For my first time, really looking forward to it. >> Okay, first, what's ready solutions mean? >> So ready solutions are literally a bunch of services that we apply to infrastructure to help build confidence, convenience, and a better customer experience, for folks who want to take a do-it-yourself approach to converged systems or SDDS. >> John: So I got the button that says Node-a-Rama. What does that button mean? >> Node-a-Rama, well, we're launching a bunch of nodes this year, right. We have a lot of nodes that we're putting out there for a variety of workloads including vSAN, right, and with vSAN we're introducing 14G technology to this, you know, this show. We just launched it recently and we're bringing lots of new performance technologies in that 14G space. It'll help a bunch with software defined storage. >> Node-a-Rama, John likes developers, he thought it was node JS or something, he was getting excited. So I wonder if you could talk to something that we've been addressing all week here on the Cube. You see in VMware's results a lot of momentum and it's not just, doesn't look like it's a one quarter, I mean three quarters of growth appears to be some momentum. The AWS deal sort of clarified for customers the Cloud strategy, and I think the other piece that we've been talking about is the reality that customers have that they're not able to reform their business and stick it in the Cloud. They're really trying to take the Cloud model and bring it to their data, and in order to do that they need simplification. So first of all, do you buy that, and what are you guys doing to facilitate that? >> I absolutely buy it. I mean if I look at Dell EMC's capabilities across the spectrum right, there's a broad variety of services that we can offer a customer to help them adopt that technology right. We call it sort of absorbing their tech debt as it were. And we can do that from very basic do-it-yourself hardware infrastructure right all the way up through, you talked to Colin earlier today and we talked about VxRail and VxRack, we're actually providing sort of life cycle management for those environments. With ready nodes and ready bundles, between those two, there's a little bit more service, a little bit more confidence, a little bit more convenience, a little faster time to value, right, on that infrastructure, without really moving the customer to an environment where we manage it for them. >> Okay, and so why do you need to do that, you know? I thought VMware was so simple, push a button and go. Talk about sort of how you're closing that gap. >> It can be simple, and once you're in the virtualization layer it absolutely is simple, but there's a relationship between the virtualization layer and the hardware that has to be maintained. So why is there an HCL, right? Why do we do that? Because there's a known relationship between that software and that hardware that enables that virtualization. We're making that easier and easier for customers all the time. >> And virtualization does not equate to Cloud. >> Juan: Of course. >> So how do you look at Cloud.
How do you sort of, I don't want to get into what you define as Cloud, but at what point do customers say yes, this is a viable alternative for me to attack my IT labor problem, for me to tick the box with my management that I'm, you know, cutting cost, et cetera? What are those attributes that you are driving toward that you see customers demanding today? >> Well, I see that space evolving, right. And the part that we're focusing on, ready nodes, is really focused on that software defined storage component. So as that piece of the puzzle evolves, all right, we're trying to remove complexity in that environment, right. Go back to that ability to confidently present you with a hardware solution that is absolutely adapted for that software environment. Make it faster time to value so that it's showing up pre-configured with services that help you enable that environment more quickly, right, and then should something need to be done, you know, downstream, say a drive fails or whatever, right, we can provide a better support experience by contextualizing that hardware in that environment. So it's a space for customers who are still very much doing it themselves, very much building their own environment, right, in the software defined storage space. But we're providing a set of services that increase that confidence for them, right, and make it more convenient, give them a better experience. >> It's interesting you know, this is our eighth VMworld. Dave and I have been here since 2010. It's been a great run, thank everyone for watching the Cube, we love coming every year. But it's been interesting watching the journey. Software defined data center, the hype was what, five years ago? Maybe four years ago. But now it's reality. NSX is baked in there, crown jewel. Cloud native coming in over the top. vSAN has been like this rising star. Server SAN from Wikibon has crushed it on the research side. But I got to ask you, now we're hearing customers deploy new use cases under digital transformation that merge software stacks with hardware stacks. What is the biggest challenge that customers have, 'cause they want more vSAN. How are you guys helping customers get more vSAN, and what are some of the key challenges that you guys solve? >> Well, I think there's a couple things that we're doing. First of all, we're enabling a very broad set of hardware including cutting edge technologies that are helping them improve the performance, improve the reliability of their implementations in this space. So today we're looking at six different hardware platforms with about 15 different configurations on the HCL, and we're expanding that this month significantly. All of those can be delivered. >> John: On the hardware side. >> On the hardware side. >> Okay, got it. >> All those can be delivered in a way that they fit seamlessly into a data center environment that's deploying software defined storage. So, I think helping them simplify that is really how we're trying to make this more of a reality, and Dell has always brought strong operationalization to any customer we worked with. >> So I got to ask you on the software side, again software's eating the world, Wikibon's true private Cloud report really validating a lot of the success that vSAN's having. I mean all the action's on premise. Transforming the Cloud operating model, which is to be more agile. What is the key software piece of it, because now you've got DevOps, the Cloud native side saying hey, infrastructure as code. I want it to run invisible.
The Ops guys saying wait a minute, we got hardware stacks, you got software stacks, they got to come together. >> Absolutely, so our OpenManage enterprise solution is our software connection for helping manage that hardware in the vSAN vCenter environment. And it allows them to actually move all of the controls for updating and managing that system into one pane of glass, which is their vSAN vCenter pane of glass. And so we're really trying to help drive that automation, enable that capability for the do-it-yourself customer. Now if the customer wants to have significantly less tech debt, then we're happy to talk to them about VxRail and VxRack where we start adding more management software capabilities to help drive an even better experience. >> One more thing, you mentioned tech debts. I want to get that on the table. The real issue is technology debt, meaning trying to move faster, take some shortcuts or, you know, move the needle too fast. What are some of the technical debts that customers are getting into, and where's, what's good technical debt and what's bad technical debt? >> Oh, that's a tough question. I think that in terms of good and bad technical debt, let's start there, right. Anything that is going to be sort of routine, spread across lots of different customers within a base, that could be offloaded to a service provider who can provide that sort of scale, is bad technical debt. So things like driver updates, managing your HCL, paying attention to how to go about replacing a hard drive in a server that's gone down in a node. Right, those are sort of bad technical debt. You shouldn't be wasting your resources that are focused on your business outcomes on that sort of technical debt. And even at our most basic level, the ready node, we're starting to provide that level of service to the customer. And I think we advance that even more as we get into our rails and racks. In terms of good technical debt, yet to be determined, but I would suspect that a lot of that has to do with developing the code. >> John: Debt you can pay back. >> Right, that you can pay back. >> As I tell Dave, we don't want to take on too much debt and then can't pay it back. We'll be bankrupt. >> And that's the sort of code that's directly tied to your environment, right. So for example, all of the AI infrastructure that they were building in the keynote today for the pizza company, right. That's a good example of I'm developing code that's intellectual property for my business, that's good technical debt. I'm going to pay it off. Gives me a competitive advantage. >> That you could use. >> Exactly right. >> Dave: To pay off the... >> Precisely. >> The investments that you've made. So you, you're a disrupter of sorts. I mean you've got, John talked about the server SAN. We, it's something we published years ago. And basically you're disrupting an install base that you guys own. Right? >> Sounds like a story I heard a long time ago when virtualization first came on the scene. Oh, we're going to be running out of servers. That didn't happen, we're selling more servers than we ever have. >> Oh yeah, not that you'll stop selling, but you've got this massive install base and you're essentially, where appropriate, migrating that install base to a new way. >> Of course. >> I wonder if you could talk about that dynamic and what those customer conversations are like. >> Well, I think it's important to us to be a trusted advisor to our customers. It's always been Dell's sort of way of doing business, right. We roll up our sleeves and we get to work with you. So as this transition is happening to the industry, I think it's up to us to provide those kind of, you know, feet on the street services that make it easier for customers to absorb and deal with that transition, right. And again, I know I sound like a broken record here, but it's about helping them have confidence that as they move into this transition, they're not having to deal with all the vagaries of mismatched hardware and software incompatibilities. It's about being able to get faster time to value because we did some of the basic steps, like pre-configuring that system so it's just ready to go, right. >> And what about workloads? How do you see those evolving? 'Cause that's one piece, simplification, and, you know, tackling the IT labor problem with non-differentiated patching and other stuff. The bad technical debt you guys were talking about. What about workloads, what are you seeing emerge in terms of the types of workloads that have an affinity to these types of systems? >> Well, I think you know we heard Chad talk earlier about how the network was becoming sort of the bottleneck, right. And I think that we're seeing more and more storage, workloads with an affinity for storage, moving into the Cloud space, right, into the converged space as that technology begins to evolve. And we're seeing things like the new NVMe drives in our 14G servers. All right, we have six times the capacity that we had before, which means applications, workloads that have a storage affinity are able to actually start moving into this more, I know we only use the word virtualized, but this more software defined space. >> Right. >> All right, bottom line, with ready nodes you guys are doing some good stuff, vSAN's hot, Gelsinger said the world's going to get much faster, today's the slowest day of your life going forward, or something along those lines. The implication is that it's going to get pretty crazy. Peter Burris, head of research for Wikibon.com, said the whole computer industry's been turned upside down, it's going to be landing on the table and it's going to re-sort itself out. When you deal with customers, what's that conversation like, because they're scrambling to lock down their true private Cloud on premise. They see hybrid Cloud as that pathway to multi Cloud. That's their end stage, but right now they got to take care of business at home. That's like cleaning up their own house in IT. What are some of those conversations, when there's that kind of disruption, chaos, complexity? >> Sure, I think everyone's looking for a little bit more of that confidence, right, in the whole relationship with their supply chain. We're doing it, our customers are doing it, and every time we have that conversation with them it basically boils down to, what can you do for me that is going to make it easier for me to deal with this transition? How can I trust that these-- >> So ease of use. Is a big thing. >> Juan: I'm sorry? >> Ease of use. Pretty big deal? >> Not just ease of use, but trust that I'll be supported downstream, right. So a ready node builds that, for example, into its value proposition. We want to make sure that you understand that downstream we know what you're using it for and we're able to help you in that context, and that's a real key example of how I think we help build that trust with our customers.
>> Michael Dell, final question for you, talk about just really, the final question for you is that Michael Dell was mentioning the technology synergy between Dell Technologies across the portfolio, including VMware. So the question for you is, what are some of the synergies that you guys are getting with VMware? How does that get put into motion? >> Sure, there are several actually. We've done a lot of our development work in the VxRail space around management in conjunction with VMware. I think that the evolution of the software defined space is being driven by them and we're happy to participate in it in every way we can. So I think there's a lot of, a lot of development and tech support opportunities that we're finding in that relationship. >> So positive outcomes, you guys are having a good time. Certainly VMware's doing great. Good to see Pat Gelsinger on the upslope, in terms of stock price up over a hundred and four as of yesterday, I haven't even checked what it was today, but certainly clarity in the community, clarity in the ecosystem, clarity in the product. Cloud and IoT Edge. I mean the wave slide is pretty much baked at this point. >> Yup. >> And execution. >> And I'm excited to see Dell EMC having a presence across that whole breadth. >> Yeah, Dave was commenting it seems like with the new Dell Technologies there's much more sanity now in the community, it's all sorted out. Looking good, congratulations. >> Thank you, thank you very much. >> Juan Vega, director, ready solutions with Dell EMC's product management. He's the product czar. Thanks for spending the time. It's the Cube coverage live here at VMworld 2017, day two of three days of wall to wall coverage. Be right back with more after this short break. (techno music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Juan | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Juan Vega | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Colin | PERSON | 0.99+ |
Gelsinger | PERSON | 0.99+ |
VMWare | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Wikibon.com | ORGANIZATION | 0.99+ |
one piece | QUANTITY | 0.99+ |
VMWorld 2017 | EVENT | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
vSAN | TITLE | 0.98+ |
three quarters | QUANTITY | 0.98+ |
four years ago | DATE | 0.98+ |
five years ago | DATE | 0.98+ |
2010 | DATE | 0.98+ |
three days | QUANTITY | 0.98+ |
Chad | PERSON | 0.98+ |
one quarter | QUANTITY | 0.98+ |
vSAN vCenter | TITLE | 0.97+ |
this year | DATE | 0.97+ |
Wikibon | ORGANIZATION | 0.96+ |
node JS | TITLE | 0.96+ |
Node-a-Rama | TITLE | 0.96+ |
VMworld 2017 | EVENT | 0.96+ |
VxRail | TITLE | 0.95+ |
First | QUANTITY | 0.95+ |
six different hardware platforms | QUANTITY | 0.94+ |
over a hundred and four | QUANTITY | 0.94+ |
NSX | ORGANIZATION | 0.94+ |
about 15 different configurations | QUANTITY | 0.93+ |
eighth | QUANTITY | 0.93+ |
first | QUANTITY | 0.92+ |
VxRack | TITLE | 0.92+ |
VMWare | TITLE | 0.92+ |
earlier today | DATE | 0.91+ |
day two | QUANTITY | 0.9+ |
this month | DATE | 0.84+ |
six X | QUANTITY | 0.82+ |
years ago | DATE | 0.82+ |
VxRack | ORGANIZATION | 0.81+ |
-a-Rama | TITLE | 0.8+ |
one pane | QUANTITY | 0.79+ |
vSan vCenter | TITLE | 0.78+ |
14 G | OTHER | 0.76+ |
One more thing | QUANTITY | 0.75+ |
Cube | COMMERCIAL_ITEM | 0.74+ |
14 G | QUANTITY | 0.73+ |
couple | QUANTITY | 0.73+ |
VxRail | ORGANIZATION | 0.66+ |
Juan Gaviria, ADP | VMworld 2017
>> Narrator: Live from Las Vegas, it's theCUBE covering VMworld 2017. Brought to you by VMware and its ecosystem partners. (upbeat tech music) >> Hi, I'm Stu Miniman here with my co-host Justin Warren, And we're at vmworld 2017. You're watching theCUBE worldwide leader in tech coverage. Happy to welcome to the program, first time guest, Juan Gaviria who's with ADP, and he's the senior director of technical systems engineering. Juan, thank you so much for joining us. >> Thank you for having me. It's a pleasure to be here. >> So vmworld, it's my 8th year coming to the show. I've been part of the community for a long time, but one of the things that people love at this show, about 20,000 maybe a little north of that, it's peers talking to peers. People that dig into the technology, find out what works and how to do things better and everything. Tell us a little bit about your role. I think most of us know ADP. We've gotten checks with the logo on it, or lots of areas of other services. But what's you're role inside the org? >> Yeah, sure. So really quick about ADP to your point, the logo is pretty well known. We actually pay one in six people in the United States, so over 25 million employees we pay. We have over 650,000 clients, and our mobile app, which is really the way I recommend you look at your pay stubs, 401K, benefits, etc., has been downloaded over 12 million times. So the ADP brand is doing well. It's a healthy business. My role specifically is that I manage all computer at ADP, so think about servers, server operating systems, and server virtualization; that's my role. >> Yeah, you brought up mobile, so maybe start there. Pat Gelsinger this morning was talking about kind digital transformation. We look at financial services, how do you reach those users? What does that kind of ripple through to all of the things that you manage? How long have you been there, and what changes have you been seeing? >> I've been there 15 years, and I've seen a lot of changes. >> Stu: 15 years ago they probably weren't even virtualized so... >> No, no, in fact, I remember rolling out ESX2.X and using the good ole mooey, so we've come a long way. And mobile has just been explosive. Ya know, from a product perspective the goal now, it's mobile first, right? So even now if you think about your benefits, when you go enroll in your benefits every year, the goal is to make that experience translate to mobile, and that's a little harder than it seems, but that's the goal for ADP. It's everything mobile. >> Bring us in. What's kind of the scope of what you manage? You said ADP globally what you handle, but what's kind of the team size? How many devices or VMs or however you manage, what are you listed in? >> Sure, so my team is responsible for computers, I mentioned, so think of everything from demand management through operations. We have globally about 50 associates that are responsible for that. We have over 3000 ESXI hosts deployed across seven global data centers with well over 40000 VMs. So it's a pretty good size infrastructure. >> And bring us inside VMware. How long have you been using it? What pieces of VMware in the ecosystem have you been using? >> We have been using VMware, again, since the early days of server virtualization. We're a VROPS customer, a VRA customer, in fact, VRA, we're leveraging it for infrastructure as service to our deaf community. We have, for ADP, thousands and thousands of developers, so just the amount of churn in our private cloud is tremendous. 
Airwatch, we're a big Airwatch customer, as well. >> Expand a little bit on the developer piece. What do they look for? How does that impact what you're doin? >> Yeah, sure. I don't know what they're looking for cause it's always changing to be honest with you. But we have somewhere around 6000 developers, and they're obviously developing ADP's next generation products. So they're just looking for us to get out of their way, right? They want VMs; they want 'em now. They want containers; they want 'em now. And every day I turn around they want bigger VMs, bigger containers, and it's getting harder and harder. So through VRA, we provide those pools of capacity and then they're able to spin up, tear down, rebuild VMs as needed. On a monthly basis what I see through VRA just in the developer community lab is about 3000 or so actions a month. So it's pretty high amount of change in that environment. >> Based on what was announced in the Kali, particularly around the partnership with AWS, do you thing that's going to resonate with the developers? >> Yeah, absolutely. Most of our, not most, but a fair amount of our next generation products are being developed on AWS, right? So everyone wants to be on AWS. In fact, we're bringing in a lot of college hires, and as soon as they come in they say, "I want to work on AWS." So for us it resonates because what ADP does, security is key, and we want to have a hybrid cloud, so we were actually part of the Lighthouse Program. So we were an early customer. Got to see the logo during the KeyNote which was nice. So, yeah we plan on leveraging that relationship to help us. For example, burst in that DevCloud. >> Unpack that for us. One of the things we look at, when I hear hybrid cloud I need you to explain that because every customer I talk to, it means different things to me, especially, you mention things like bursting that's a little scary sometimes. So maybe explain what that actually means in your environment. >> Yeah, so, in the Dev environment specifically, what it means is, as I mentioned, we get requests that come out of left field, right? I need a 300 gig memory VM and 10 terabytes of storage. You're just like, "Where, I don't have this," right? I don't have hundreds of those. So we can put that capacity out on AWS much faster, and as those projects materialize, we can then bring that back in. So that's what I mean by hybrid cloud for us. >> So you're using the VMware on AWS, you've been testing that out, you said? My understanding is you're also using Vsan, is that separate from that? Cause Vsan's part of the VMware Cloud or cloud foundation suite, a piece of it, so what's your interest been in Vsan, and how does that fit into the entire picture? >> So it is different. For us, the AWS relationship is going to be more of a manage service, obviously. We're actually going to become a consumer. So we're going to feel like our own customers. To answer your question on Vsan, yeah, we've made a huge investment in Vsan, so all of our VM storage, which again is 40000 VMs worth, which is well over 4+ petabytes of storage, we're moving that all to Vsan. >> What's happenin to all those arrays? >> They're going to be gone. >> Yeah? >> They're going to be gone. >> That's a really big move. Can you, you got to take us back, ya know. How did you is this a top-down or, ya know, bottoms-up walk us through some of that. >> Yeah what started that? Like how did you come to even begin contemplating replacing all of your storage? 
>> So it's been both to answer your first question. Both top-down and bottom-up. We've been looking at the technologies for a while, and just kind of keepin close to them. At this point, they're mature enough that we feel they can run our business-critical products. And it's been a journey, right? For the last year, we've spent looking at all the different market leading technologies and figuring out which ones make sense in an environment our size. How do we operationalize this thing? So it's been a journey and this is the beginning for us, so we're actually, as I speak, we're starting to deploy our first Vsan clusters in production and we're deploying it in hundreds of servers at a time so it's exciting and interesting times for the team and I. >> Yeah, one of the interesting things, some people look at Vsan and they're like "Oh well it's kind of small deployments," but we had some of the VMware people on earlier today, and they're like, "We're deploying internally," but it's lots of clusters because if you tell me hundreds of servers, I'm like, "Well that's not a single cluster that's lots of clusters." How do you carve that up? How do you manage that? How do you roll that out? What does that look like? >> That's the trickiest part, right? And, by the way, as we look at different solutions, the cluster size became one of the reasons why we chose Vsan. >> Okay. >> A lot of the other solutions that are out there will limit you to about eight node clusters, and to your point, we have thousands of hosts. That's hundreds of clusters. So Vsan gives us the ability to have slightly larger clusters. Today we're going to look at about 16 node clusters to start. That seems to be where VMware is going as well, so we'll follow their lead. We figure they know what they're doin'. And we'll manage that using Vroms as well. >> Yeah I was curious as to what was actually driving the change to Vsan, and what was it about Vsan that said, "Yes! This is great! "This is the one that we're going to pick." You've mentioned cluster size, were there other things that made you sort of decide that Vsan was the right choice for you? >> So to me, the way I look at Vsan from a Vsphere perspective is that they've made storage a feature. And our Vsphere administrators, they know how to run Vsphere and now they just have another feature. So that was one of the main reasons, just the operational efficiencies from a team perspective. There are a lot of other reasons as well. Security: some of the other competitors out there, for example, didn't have encryption when we were looking at it, which is, everything we do revolves around security, so that was another key reason for Vsan for us. And what drove us at first was really, with the traditional models, we found ourselves to not be very agile. Because our business is growing so fast, we're building about six months of capacity at a time, and if you can think about the cost of that much capacity at a shot it's millions of dollars, it's kind of sitting idle. So with HCI technologies and Vsan, specifically, we think we're going to be much more modular in our approach and closer to just in time. So we expect significant capital benefits from that. >> So if I hear you right, it's the pooled nature of what you're doing and that the building blocks are small enough that you're not getting to what people usually have is like, "Oh yeah, I have all this capacity and I'm three years in "and I'm still not using a lot of what I run into, "ya know, I overbuy so much because of that." 
>> Exactly, and think about that first purchase. You've got to sit with finance and say, "Hey I've got to go buy an array "and I've got to go buy a couple hundred servers." Now I don't have to buy that much up front so it's a huge benefit for us. >> And it sounds like it's going to be cord deployments as well, cause there are a lot of like the HCI deployments, traditionally, have been for remote office things, or just particular work loads like VDI will be one thing that it runs on, but it sounds like this is going to underpin pretty much everything that you do. >> Pretty much everything, yeah. And in addition to VDI we have a very large VDI deployment that supports all of our customer support reps, and it's going to underpin that in addition to underpinning all of the business products that you use to view your pay statement. >> Alright, so you talked about the finance people, what about the storage people? I have to imagine you had storage admins, you look at it and you say, "Okay are they out of a job? "Are they going to work on new challenges?" Can you walk us through how you approach them? How they've looked at this whole migration? And what happens to them versus the VMware people? The virtualization admins I should say. >> It's a funny question cause I've become a little bit more popular now with the storage scene. They've actually knocked on my door and said, "Hey, anything we can help you with?" But, no, it's a good partnership. My peer and I who run storage, we actually built a team together that's going to help us roll out Vsan so we know that there are skills in the storage team that we can leverage, and our vision of it is that we're no longer going to have Vsphere administrators or storage administrators. We're going to have cloud engineers, and they have to know, compute network storage really cause we view the skills as converging as well. It's not just the software and the hardware. >> How about the management of that though? Are you essentially going to be managing a team together rather than it being separate people managing different people? >> Correct it's one team. >> One team? >> It's really interesting, Juan, I'm just curious, in your kind of evaluation phase, what did you learn that if you had known it at the beginning might have either accelerated or you might have positioned things a little bit differently now that you're ready to kind of this massive roll out? >> I think I would have had maybe stricter entrance criteria. You think about a company our size and all the partners we have. We looked at a lot of different solutions. We spent a lot of time in the lab. Where in the end we knew that, for example, an eight node cluster, or not having encryption, were showstoppers, but yet we spent the time in the lab to do that, so my recommendation or advice to my peers out there is come up with good criteria that you know you have to have, and then from there, do the paper exercise and bring in the ones that you know will actually be able to get to production. >> What was that entire kind of evaluation phase? How long did that take? >> More than six months. >> And can I ask what underlying deployment you're going to use for Vsan? >> From a hardware perspective? >> Yeah. >> Sure, HP servers. DL360s. >> Okay, and what led you to choose that versus, ya know, the Dell people are all lined up to say, ya know, come on we own VMware, ya know, you should do VXrails? 
>> Vxrail to me is a little bit different than just Vsan, but yeah absolutely Dell was pretty interested in that business as well, and the beauty of Vsan is that it gives us the choice. We've been a long-time, happy HP customer, so for this first phase we'll continue to be with HP, and for some reason, if something changes we know with Vsan we have that flexibility. >> You've been with VMware for quite a while, I'm sure you've been watching Vsan. What are you still asking them for? They've had a very aggressive road map. I think they've got most of the basic check blocks done. I've heard a little bit about the road map, but what's on your to-do list for Vsan or any kind of the associated pieces? >> You mentioned VXrail as an example and the automation that they've brought with rail is significant. It's very valuable. I think they need to bring some of that same automation to Vsan's standalone. So as I think about patching thousands of hosts with Vsan and the drivers and that entire matrix of things. They've got to help us there. I think they've got some work to do in terms of improving the performance management of that because environments this size, managing that manually is too much work. So I think we've got some work to do there. But they've been a great partner. They've been listening to us, so I'm pretty happy about where they're headed. >> Earlier you mentioned deploying VMs and containers, is that like Docker or how do containers fit in? >> So Docker has been sort of a religious debate internally to be honest. Do you deploy it on bare metal? Do you deploy it on VMs? I think right now, we're settled on deploying Docker on VMs, but very large VMs. We're thinking 200 gigs, and the goal will be, we're going to try to do that on Vsan. So we're still in early development there, but that seems to be where we're finally landing on. >> Interesting, and I'm assuming that's Linux on top of the VMs to allow that. >> Yes. >> Alright, well, Juan Gaviria, really appreciate you sharing that really interesting use case. I wish ya best of luck on the rollout, and thank you for being on theCUBE. >> Thank you. Thanks for having me. >> Alright, for Justin, I'm Stu, and we'll be back with lots more coverage here from VMworld 2017, you're watching theCUBE.
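A quick back-of-the-envelope sketch of the cluster-count arithmetic from the conversation above: the host, VM, and cluster-size figures are the ones Juan and Stu mention, while everything derived from them is illustrative arithmetic rather than ADP-reported data.

```python
# Back-of-the-envelope sketch of the vSAN cluster-count discussion above.
# Host, VM, and cluster-size figures come from the interview; everything
# derived from them is illustrative arithmetic, not ADP-reported data.
import math

hosts = 3000          # "over 3000 ESXI hosts" across seven global data centers
vms = 40000           # "well over 40000 VMs"
small_cluster = 8     # the ~8-node limit Juan cites for some competing HCI stacks
planned_cluster = 16  # the ~16-node vSAN clusters ADP planned to start with

for nodes in (small_cluster, planned_cluster):
    clusters = math.ceil(hosts / nodes)
    print(f"{nodes}-node clusters -> roughly {clusters} clusters to manage")

print(f"average density -> about {vms / hosts:.0f} VMs per host")
```

The point it illustrates is the one Juan makes: doubling the cluster size roughly halves the number of clusters the team has to patch, monitor, and manage at that scale.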
Juan Loaiza, Oracle - Oracle OpenWorld - #oow16 - #theCUBE
>> Narrator: Live, from San Francisco. It's the CUBE. Covering Oracle Open World 2016. Brought to you by Oracle. Now, here's your host: John Furrier and Peter Burris. (Music) (Background Noise) >> Okay, welcome back everyone. We are here, live at Oracle OpenWorld 2016. This is SiliconANGLE Media, it's The CUBE. Our flagship program, we go out to the events and extract the signal from the noise. I'm John Furrier, the co-CEO of SiliconANGLE, with Peter Burris, head of research for SiliconANGLE Media, as well as General Manager of Wikibon Research. Our next guest, I'm excited to have him back because he's a product guy and we love to go deep into the products. CUBE alumni, Juan Loaiza Senior Vice President of Database Technologies, veteran of Oracle, welcome back to The CUBE. Great to see you. >> Thanks, great to be here. >> Love talking to the product guys on the development side because we get to go deep into the road map. And we're going to try to get as much information out of you as possible. But you'll do your best to hold back, like you did last year. Only kidding. >> I know. (laughter) >> Okay no. >> You must have me confused with somebody else. (laughter) >> Maybe that was Larry Ellison, well he hasn't been on yet. Larry, we'll get you on. >> He's not so good at holding back either. (laughter) >> That's why we don't let him on. That's why they won't let him on, I think. That's, Larry would be too comfortable in The CUBE. No, in all seriousness, joking aside, the hottest areas right now is in your wheel house. Engineered systems, which is going to be a real enabler for Oracle on the performance side. And as you make your own chips, ZF SPARC and Exodeum All this other cool stuff is going to go faster, faster, faster, lower cost, higher performance. The database... >> Better security. Better availability. >> Security, I mean. Amazing stuff. But the database is where the crown jewel is for Oracle, always has been. Before you put Web Logic on it, make it sticky. But now you've got the cloud. The cloud is a environment for great opportunity for the database, business and other databases, some Oracle, some not Oracle. What's going on with the database and the Cloud? Can you take a minute to explain the current situation? >> Yeah, so that's a big question. (laughter) What's going on? So what do you want to start with database or do you want to start out with the Cloud? >> John: Let's start with the database. What's going on with the database? And what does that mean for customers as it moves to the cloud? >> Yeah. So, database we're in the process of releasing our next big database. We don't release databases very often. It only really happens every few years. It's a very big deal. So, what we're trying to do with our next generation database is modernize the whole infrastructure, adjust to a lot of the big transformations that are happening in the marketplace. So among those are things like big data. Where do we go with big data? So, with our new generation database we're making big database and database work seamlessly together. So we have something called big data SOQL, where we can query data regardless of whether it's in Hadoop, NoSQL, Oracle. It's completely transparent. So customers no longer have these silos of information. Another big thing in database is datatype search engine. So new generation wanted JSON, it's called JSON, which is a new data format, so it's used in javascripts. So web developers develop in javascript. They represent data in JSON. And then they say, hey. 
I don't want to take my JSON data and convert it to relational data. It's a big pain. >> John: True. So, one of the things we've done in our new generation 12-step database was say, hey. Take that JSON, we'll put that directly into the database. We'll allow it to be queried. We'll make it highly available >> John: Without a schema. Without any kind of a schema, >> Nothing. >> just throw it in there unstructured. >> Juan: Just throw it in there. That's right. So we've made it very simple for new-age developers to use JSON with databases. That's another really big thing that's happening. >> So tell us what, just let's double down on that for a second. JSON has been a big trend in API based systems, lot of abilities in JSON endpoints. For user experience, whether it's mobile or web, very prevalent now. Pretty much standard. >> Juan: Yes. >> How does that get rendered itself from a customer's perspective? Are you saying that Oracle will just onboard it into the database itself? Or is it a separate product? Or is it, I mean... >> - [Juan] Directly into the data. So we have native JSON directly into the data. We've essentially added JSON as a datatype. We've added the sequel, we have SOQL extensions. You can access JSON like an index... >> John: So, I can run in single queries on JSON? >> You can, exactly right. You can very simply run SOQL queries on JSON. >> And what's the impact to the customer? >> Juan: And all the stuff that comes with that. >> John: And what does that solve? What problem does that solve? >> It solves two problems. One is, people like that datatype. So new-age developers, they're writing in javascript. They have JSON and they just want to use it. So they don't have to convert it. >> John: Which by the way, everyone's running in javascripts. >> Right, that's right. That's the big programming language. And the other thing is unstructured data. So, data that's not structured initially, that every piece of data has its own structure. So it's a representation for saying, that dynamic, unstructured representation that's very standard in the industry. A little bit like XML used to be before. JSON is kind of the new XML, the new-age XML. >> John: Yeah, that's true. How about the data lay concept? Because Hadoop as a market, just didn't make it, right? I mean, it's out, Hadoop is out there >> Juan: Yes. SPARC is certainly relevant because you have, you know, that kind of use space and memory and faster processing. But the real power is that that a batch oriented data set. As things like Hadoop and SPARC evolve, how does that relate to Oracle's product road map? >> Juan: Yeah, so we have our own Hadoop big data plans, where we run a cloudera-based Hadoop product and what we're trying to do is make those work seamlessly with existing databases. So there's certain kinds of workload and applications that hadoop is really good for. You know, kind of a frivolous example is if you want to find cats in pictures, you're not going to do that with an Oracle database. So you know, here's a billion pictures. Find all the pictures that contain cats. Not a good application for Oracle, right? On the other hand, if you're running analytic queries against relational data that's perfect for Oracle. So we see that these technologies can coexist. So there's certain kinds of applications that are really good for that dual kind of work. Or that certain kind of applications are really good for relational. And what we need to do is make sure that these things run seamlessly. 
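To make the JSON-in-the-database point concrete, here is a minimal sketch, assuming the python-oracledb driver and a reachable Oracle 12c-or-later instance, of storing a JSON document in a relational table and querying it with SQL. The table, column, and connection values are placeholders rather than anything Juan describes, so check the exact SQL/JSON syntax against the documentation for your release.

```python
# A minimal sketch (not Oracle documentation): storing JSON documents in a
# relational table and querying them with SQL/JSON functions. Table, column,
# and connection values are placeholders; verify the SQL against your release.
import json
import oracledb  # python-oracledb driver (pip install oracledb) -- assumption

conn = oracledb.connect(user="demo", password="demo", dsn="localhost/orclpdb1")
cur = conn.cursor()

# A check constraint marks the column as JSON so it can be queried natively.
cur.execute("""
    CREATE TABLE orders (
        id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        doc CLOB CHECK (doc IS JSON)
    )
""")

order = {"customer": {"name": "Acme"}, "total": 125.50, "items": ["widget"]}
cur.execute("INSERT INTO orders (doc) VALUES (:doc)", doc=json.dumps(order))

# SQL over JSON: no relational schema for the document was ever declared.
cur.execute("""
    SELECT JSON_VALUE(o.doc, '$.customer.name'),
           JSON_VALUE(o.doc, '$.total')
    FROM   orders o
""")
print(cur.fetchall())

conn.commit()
conn.close()
```

The notable part is the last query: the document was never mapped to a relational schema, yet it can still be filtered and projected with ordinary SQL.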
>> John: What's the glue between those two layers? >> Peter: Well that's just it, there's even more applications where they're going to want to use both. >> That's right. That's right. We can't, >> So, what's the glue? >> Eventually everyone goes to both, right? >> Peter: Yeah, so what's the glue? What is that glue? >> Well, there's a number of glues that we built. Which is, one is called big data SOQL. It lets you query seamlessly across them. We also have connectors that let you move data seamlessly between them. So, those are kind of the main glues between them. >> So one of the things that we've observed is that, to John's point, there's been a lot more downloads of hadoop than we've seen go into production. It's become a very, very complex ecosystem and it's got some limitations, batch-oriented, et cetera. The challenge that businesses have is that they try to run pilots around hadoop, because they find themselves piloting the hardware, hadoop, the clusters, all the way up to the use case. And a lot of times they end up failing. How does something like the big data pliers facilitate piloting? Because it looks like it should reduce the complexity of the infrastructure and give people the opportunity to spend more time on the use case. >> I mean, you've got it exactly right. Which is, you know, there's some people that are hobbyists. Right, like there's people that want to build their own log cabins. They want to cut their own trees, kind of build their own planks and put together their log cabin. And that's kind of how hadoop started. It was kind of a hobbyist model, right? And hadoop has kind of moved to the next level. Now, it's people that want to get stuff done. And it's like, I don't want to chop trees. You know, I want to be living in a, just give me a house. >> John: Well actually, I wouldn't say hobbyist. I mean Yahoo had a need, they needed log cabins. >> Right. >> So they built one. You know, but it was a use case. The web scaler guys needed an unstructured... >> Right. >> It has to be scalable. >> But a lot of people are very much, kind of thinking build your own. So now a lot of people want a solution. They're like, you know, I don't want to be building this. So that's where big data plans come in. Because it's a complete solution. It includes the hardware, it's been pre-tubed, pre-optimized. It includes the cloudera software. It includes all our connectors and it includes support for the whole thing. Because that's the other part. You know when you put together your own house, who are you going to call when it leaks? Right? You're on your own when it leaks. If Oracle puts it together, we can support the entire staff when you have any kind of issue, any kind of problem. And that's the kind of stuff enterprises want. It's not a hobby anymore once it becomes an enterprise >> Peter: So given that we're in a big data universe right now, where we've got use case that are proliferating very fast and we have limited experience about them. But the technologies underlying that we're deploying to build those use cases are also proliferating very fast. Is it going to be possible for the open source model that presumes downloads, try buy, not sales people, not a lot of learning, not a lot of hand-holding to make it possible to fix that whole thing or make it all come together? Or is a company like Oracle going to have to step in and take some responsibility for guiding how the market evolves? >> Yes, so open source and Oracle can work together. I mean, we have Lennox distributions. 
We own MYSOQL. So Oracle and Open Source is... >> Peter: You're not at odds. >> That's right. We, in fact, are one of the major Open Source companies in the world. But you know, like I said, real businesses are in it as a hobby. They want a solution. They're looking at this as a tool. And a lot of times they want somebody that can support it, that can physically assure that it's going to work for them. And they have someone they can call. It's not just hey, I'm going to post a message on a message board and hope that somebody responds. Right? I mean when you have, you know, airplanes in the air. when you have, you know, dollars flying across the network. You need a solution. You need somebody you can call and you can guarantee is going to solve the problem. And also that can ensure that the technology moves in the right direction, takes into account what users want. And that, you know, a certain level of quality and assurance is built into it. >> Peter: So let's build on that. When you look at the future of database, what do you see? >> Juan: Well, there's a lot of different, so database is in a very big change. There's some big changes happening in the database world right now. More than probably ever before. One that we've been kind of talking about is sort of this big data hadoop. Another thing is JSON. Another area is in-memory is a very big change that is happening in databases. The whole moving into in-memory, into these different kinds of formats. Along with that, Oracle is pioneering moving database algorithms directly into the chips. The chip technology, to make it run dramatically faster, to make it more available, make it more secure. That's another big thing. Building multitenancy directly into the database, that's another big area that Oracle is pioneering. Instead of having it, kind of cloudify the database directly, negatively inside the database. Another big area that we've been working on is putting native sharding of databases directly into the database. >> How about data protection? >> Well that's in the multitenancy, right? Take me through the multitenancy a little bit. How does multitenancy inside the database going to work? >> Well, okay. So that's what we call our multitenant database. It's a little bit like VM. So, Vms say, hey it looks like I have a physical machine. But in fact I have a fraction of a machine. It looks like, it looks to me like a physical machine. In fact, it's a virtual machine. >> Peter: Okay. >> We're doing the same kind of thing with the database. So it looks like I have a physical database to the application. But in fact, you're sharing a database among many users. So what is the advantage of that? The advantage of that is we don't have one database. Or thousands of databases anymore. So many of our customers have deployed thousands of databases. It becomes a huge maintenance headache to have thousands of databases. Especially in today's security world where you have to constantly patch and update these things. You can't just kind of leave them alone anymore. So if you have a small number of physical databases and lots of virtual databases it completely saves costs. It's more agile. Opex lower. Capex lower. That's the new world of multitenant cloud data. >> John: Also it's brand new with appliances. And I want to get your thoughts on last year the big range that I liked was this zero data loss >> Recovery plan, yes >> ZDLRA. >> Juan: That's right. You got it right. >> What's the, I mean very fascinating, basically zero data loss. 
>> Peter: It's cool technology. >> Juan: Yes. So what is, is that still on the, out there? What's going on with that? >> Zero data loss and recovery parts is our fastest growing appliance right now. >> John: It is? >> Yes. Easily. It's been very well received by the market. We have some of the biggest banks now, running it. Financial institutions, retailers. Why? Because its a very simple value proposition. Which is, hey I want to protect my data in a way that it's constantly protected that I don't lose any data. In a way that is scalable. In a way that offloads my production database. It's a very simple... >> That's a grace saving situation, right? So like the people that have these security breaches, is this where that fits? Where's the use case for ZDLRA? >> ZDLRA is not security, it's about availability. >> John: Okay, so if someone basically shuts the data center down. >> Right. If that database becomes corrupted... >> John: In one region. >> If there's some natural disaster. If there's a bomb. If there's a whatever. Is my data protected? Will I lose anything? Nobody can afford to lose data anymore. In the old days, when you did a backup, you did a nightly backup and then if something happened, then you'd restore it. Well guess what? That doesn't work anymore. We're too dependent. So, nobody wants to lose their airline records. Nobody wants to lose their bank records. Nobody wants to lose their retail records. We can't afford to lose data anymore. We need a solution that's zero data loss. >> I'm surprised aren't, there's not more fanfare at the show about that. I was really impressed last year I'm glad to hear it's doing well. Containers. Database containers. >> Juan: Yes. This is something that we talked about a little bit last time. >> Juan: That's the same as multitenants. >> Okay. That's multitenancy. >> Juan: It's different terminology for that. >> okay, now cloud based databases. Now we get to the cloud. Where does all this go to the cloud. >> Okay, so you know traditionally customers deployed on premeses. what we're doing now is we're taking the Oracle database that we've developed the last 40 years. It's the most sophisticated database in the world. And we're moving it onto the cloud. So what does the customer get? They get, they can provision it instantly. So you go onto our website and say I want a database. Here's the size. Here's the number of CPUs I want. Boom. They get it. They pay monthly instead of paying upfront. They don't pay for the licenses. They just pay us a monthly fee. And then Oracle operates the whole thing. It's like, I don't want manage it. I just want to use it. So that's the benefit of the cloud. I go somewhere. I need a database. I get it right away. I don't have to mess with it. And I pay monthly. >> John: So the Oracle, on your Oracle cloud you would then deploy all those other goodness, ZDLRA, all the other technology >> Juan: All that stuff, yes. (crosstalk) behind the curtain, so to speak. >> Juan: So we have a range of offerings in our cloud. So we have a regular database service. We have an enterprise service. And then we have a high end service, an exit data cloud service, right? >> That runs our exit data. Super fast, super available. And then we have something called exit data express, which is the lowest cost cloud database in the world. So we have kind of three things, depending on what the customer wants. They want a smaller database for really low cost. 
They want a super mission critical, high performance database or they kind of want something in the middle. So we span the whole range. And, by the way, our high end is higher than anybody else. Our low end is lower cost than anybody else. So we span a bigger range than anyone else. >> You know Juan, next year we need to get an hour with you. >> Juan: Yes. >> To cover all the... >> Juan: It's a lot of topics. >> No. You're a great guest. And you have a lot of experience and a lot of, and we appreciate the insight. I'll give you the final word, I want to get one more answer out of you because you're awesome. You're sharing great insight. For the folks watching, what's the one thing or one or two, three things they should know about Oracle, Cloud, the technology, the database? The things going on at Oracle that they may not be hearing about it could be the best selling things. Something that's not on the main stream press reporting. >> Well, you know our Oracle cloud is pretty simple. I mean, the main thing to understand is that it's 100% compatible with databases on premises. So it's very easy to move workloads back and forth. That's the main thing. And the other thing is, we are, we use the exact same infrastructure. So we've been developing, for example, our exit data product, which is kind of the precursor to cloud. It's a very specialized database system run on premises. And now we're running that in the cloud. So again, the customer can get the exact same thing. And our latest offering is cloud at customer. So we take those same cloud attributes and we can put them >> John: It's the cloud machine, right? >> inside the customer database. >> Juan: Yeah, so we have a cloud machine, an exelated cloud machine, and a big data cloud machine. >> John: So customers get all the choices of Oracle. >> That's right. So the customer has full choice, they can move to the cloud if and when they want at the speed they want. They can move back and forth. They can do disaster recovery in the cloud. They can do backup in the cloud. They can do development in the cloud. So all these range of offerings, all these range of choices are now the customers. >> So true or false? Larry Ellison is the master at the long game? >> Juan: Larry thinks long term, absolutely. >> John: Of course, true. >> Yes, absolutely. He's brilliant and he's shown it over and over again. >> I agree, big fan. Yesterday's key note, Larry could've done better. But he was too busy getting all those announcements out that he was mailing in at the end. It was so many announcements. >> Juan: It's hard these days because Oracle, there's so much happening at Oracle. There's so much happening at Oracle. Juan, Thanks so much for spending your valuable time with us at the CUBE, we really appreciate it. This is SiliconANGLE Media's The CUBE. We go out to the events I'm John Furrier, Juan Loaiza Senior Vice President Juan Laoiza, Senior Vice President of Database Platform Services. Live in San Francisco. We'll be right back. (Music)
Oracle Aspires to be the Netflix of AI | Cube Conversation
(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from recent ChatGPT to Bert, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there. At theCube, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave in database machine learning, and AI pushed into Oracle's business apps. Oracle, it tends to lead in AI, but not competing as a direct AI player per se, but rather embedding AI and machine learning into its portfolio to enhance its existing products, and bring new services and offerings to the market. Now, last October at Cloud World in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced an investment, a pretty significant investment to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure and build out Oracle's infrastructure for enterprise scale AI. Now, Oracle CEO, Safra Catz said something to the effect of this alliance is going to help customers across industries from healthcare, manufacturing, telecoms, and financial services to overcome the multitude of challenges they face. Presumably she was talking about just driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI. I mean, it's the hottest topic going for years you've been incorporating machine learning into your products and services, you know, could you tell us what you've been working on, how you got here? >> So great question. So as you mentioned, I think most of the original four-way into AI was on embedding AI and using AI to make our applications, and databases better. So inside mySQL HeatWave, inside our autonomous database in power, we've been driving AI, all of course are SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ELP, and whatnot has built in AI inside it. Most recently, NetSuite, our small medium business SaaS suite started using AI for things like automated invoice processing and whatnot. And most recently, over the last, I would say two years, we've started exposing and bringing these capabilities into the broader OCI Oracle Cloud infrastructure. So the developers, and ISVs and customers can start using our AI capabilities to make their apps better and their experiences and business workflow better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar anyway. 
And my premise has always been that most customers, they don't want to become AI experts, they want to buy applications and have AI embedded or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. So I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise. I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire a large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions Lego blocks that you can use to build anything that you want. But the majority of people like me or like my kids, they want the Lego death style kit or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. And still Lego blocks, you still need to build some things together, which just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. So whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line of business applications, integrating these things into your analytics dashboard, even all the way to, we have a new platform piece we call ML applications that allows you to take a machine learning model, and scale it for the thousands of tenants that you would be. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and then deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time making it easy to take these things into production in the context of your business application or your business use case that you're interested in right now. >> So you mentioned chat bots, and I want to talk about ChatGPT, but my question here is different, we'll talk about that in a minute. So when you think about these chat bots, the ones that are conversational, my experience anyway is they're just meh, they're not that great. But the ones that actually work pretty well, they have a conditioned response. Now they're limited, but they say, which of the following is your problem? And then if that's one of the following is your problem, you can maybe solve your problem. But this is clearly a trend and it helps the line of business. How does Oracle think about these use cases for your customers? >> Yeah, so I think the key here is exactly what you said. It's about task completion. The general purpose bots are interesting, but as you said, like are still limited. They're getting much better, I'm sure we'll talk about ChatGPT. But I think what most enterprises want is around task completion. I want to automate my expense report processing. So today inside Oracle we have a chat bot where I submit my expenses the bot ask a couple of question, I answer them, and then I'm done. Like I don't need to go to our fancy application, and manually submit an expense report. I do this via Slack. 
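The "conditioned response" pattern Dave describes and the expense-report flow Elad walks through share the same shape: a narrow menu of supported tasks, a few slots to fill, then completion. The sketch below is purely illustrative; none of the task names or structures come from an Oracle product, it simply shows that shape in code.

```python
# Purely illustrative sketch of the "conditioned response" / task-completion
# pattern discussed above; none of these names refer to an Oracle product.

# Each supported task lists the slots it needs before it can be completed.
TASKS = {
    "file_expense": ["amount", "currency", "category"],
    "reset_password": ["employee_id"],
}

def handle(task: str, answers: dict) -> str:
    if task not in TASKS:
        # A conditioned bot narrows the user to tasks it actually supports.
        options = ", ".join(sorted(TASKS))
        return f"I can help with one of the following: {options}."
    missing = [slot for slot in TASKS[task] if slot not in answers]
    if missing:
        return f"To finish '{task}' I still need: {', '.join(missing)}."
    return f"Done: '{task}' submitted with {answers}."

print(handle("book_travel", {}))
print(handle("file_expense", {"amount": 42}))
print(handle("file_expense", {"amount": 42, "currency": "USD", "category": "meals"}))
```

Setting expectations this narrowly is, as Elad notes, most of the battle: the bot only ever offers tasks it can actually finish.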
And the key is around managing the right expectations of what this thing is capable of doing. Like, I have a story from I think five, six years ago when technology was much inferior than it is today. Well, one of the telco providers I was working with wanted to roll a chat bot that does realtime translation. So it was for a support center for of the call centers. And what they wanted do is, Hey, we have English speaking employees, whatever, 24/7, if somebody's calling, and the native tongue is different like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with and will translate this on the fly and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support. And what they've done is they've changed the narrative. Instead of, you go to a support center, and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, Hey, if you want to talk to a Hebrew speaking person, there's a four hour wait, please leave your phone and we'll call you back. Or you can try a new amazing Hebrew speaking AI powered bot and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Plus one to try it out. And the feedback, even though it was the exact same technology was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they hated a few weeks earlier on. So I think the key lesson that I picked from this experience is it's all about setting the right expectations, and working around the right use case. If you are replacing a human, the level is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we are doing, picking up the tasks that people want to accomplish or that enterprise want to accomplish for the customers, for the employees. And using chat bots to make those specific ones better rather than, hey, this is going to replace all humans everywhere, and just be better than that. >> Yeah, I mean, to the point you mentioned expense reports. I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there. >> So over the last, I would say the two or three years we've developed a full suite of machine learning and AI services for, I would say probably much every use case that you would expect right now from applying natural language processing to understanding customer support tickets or social media, or whatnot to computer vision platforms or computer vision services that can understand and detect objects, and count objects on shelves or detect cracks in the pipe or defecting parts, all the way to speech services. It can actually transcribe human speech. And most recently we've launched a new document AI service. 
That can actually look at unstructured documents like receipts or invoices or government IDs or even proprietary documents, loan application, student application forms, patient ingestion and whatnot and completely automate them using AI. So if you want to do one of the things that are, I would say common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go, and use easily customized with their own data. You don't need to be an expert in deep learning or large language models. You could just use our automobile capabilities, and build your own version of the models. Just go ahead and use them. And if you do have proprietary complex scenarios that you need customer from scratch, we actually have the most cost effective platform for that. So we have the OCI data science as well as built-in machine learning platform inside the databases inside the Oracle database, and mySQL HeatWave that allow data scientists, python welding people that actually like to build and tweak and control and improve, have everything that they need to go and build the machine learning models from scratch, deploy them, monitor and manage them at scale in production environment. And most of it is brand new. So we did not have these technologies four or five years ago and we've started building them and they're now at enterprise scale over the last couple of years. >> So what are some of the state-of-the-art tools, that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. I think there's an infrastructure layer where the Nvidia's of the world come into play. For some of these things, you want massively efficient, massively scaled infrastructure place. So we are the most cost effective and performant large scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s. These are the new super powerful GPU's for large language model training. So we have that covered for you in case you need this 'cause you want to build these ginormous things. You need a data science platform, a platform where you can open a Python notebook, and just use all these fancy open source frameworks and create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need the, what I call the applied AI services. You need the Lego sets, the Lego death style, Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services of like, again, understanding images, or documents all the way to solving particular business problems. So an anomaly detection service, demand focusing service that will be the equivalent of these Lego sets. So if this is the business problem that you're looking to solve, we have services out there where we can bring your data, call an API, train a model, get the model and use it in your production environment. So wherever you want to play, all the way into embedding this thing, inside this applications, obviously, wherever you want to play, we have the tools for you to go and engage from infrastructure to SaaS at the top, and everything in the middle. >> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era if you will. I want to focus on two developers and data scientists. 
>> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era, if you will, I want to focus on two: developers and data scientists. The developers, they hate dealing with infrastructure, and they've got to deal with infrastructure. Now they're being asked to secure the infrastructure; they just want to write code. And data scientists, they're spending all their time trying to figure out, okay, what's the data quality? They're wrangling data, and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change? Are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest sets of scars that I have on my back from building these platforms in other companies is exactly that. Every persona had a set of tools, these tools didn't talk to each other, and the handoff was painful. And most of the machine learning things evaporate or die on the floor because of this problem. It's very rare that they are unsuccessful because the algorithm wasn't good enough. In most cases it's: somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint, and integrate it back; it's too painful. So the way we are approaching this is focused on exactly this problem. We have a single set of tools, so that if you publish a model as a data scientist, then developers, and even business analysts sitting inside a business application, can consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, borrowing a phrase that the Cerner folks used, and I really liked it, building insight highways, to make it easier to bring these insights into where you need them inside applications: inside our applications, inside our SaaS applications, but also inside custom third-party and even first-party applications. And this is where a lot of our focus goes, just because we have dealt with so much pain doing this inside our own SaaS that we have now built the tools, and we're making them available for others, to make this process of building a machine learning, outcome-driven insight in your app easier. And it's not just the model development, and it's not just the deployment, it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. And that's our focus area. Exactly this problem.
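To make that end-to-end loop more concrete, one place where a model is published, consumed, and then retrained on what comes back from the app, here is a generic Python sketch. It uses scikit-learn and a shared file path as a stand-in "model store"; it is not Oracle's SDK or tooling, and the names and storage choice are invented for the illustration.

```python
# Generic illustration of the train -> publish -> consume -> feedback loop.
# scikit-learn and a shared directory stand in for the real platform's
# model store and deployment endpoint; nothing here is Oracle-specific.
import os
import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_STORE = "model-store"          # stand-in for a shared model store

def train_and_publish(X, y, name="churn-v1"):
    """Data scientist's side: train, evaluate, and publish a model."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("holdout accuracy:", model.score(X_te, y_te))
    os.makedirs(MODEL_STORE, exist_ok=True)
    path = os.path.join(MODEL_STORE, f"{name}.joblib")
    joblib.dump(model, path)         # publishing = writing to the shared store
    return path

def score(features, name="churn-v1"):
    """Developer's side: load the published model and score one record."""
    model = joblib.load(os.path.join(MODEL_STORE, f"{name}.joblib"))
    return int(model.predict([features])[0])

# The feedback loop: predictions plus the outcomes observed in the app become
# the next training set passed back into train_and_publish().
```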
>> Well, thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loza on, and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so. But the truth is, if you think about Oracle Database, it never met a hardware platform that it didn't like, so in that sense it's open. But my point is, a big part of machine learning and AI is driven by open source tools and frameworks. What's your open source strategy? What do you support from an open source standpoint? >> So I'm a strong believer that you don't actually know, nobody knows, where the next leapfrog, the next industry-shifting innovation in AI, is going to come from. If you looked six months ago, nobody foresaw DALL-E, the magical text-to-image generation, and the explosion it brought into art and design types of experiences. If you looked six weeks ago, I don't think anybody foresaw ChatGPT and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows; if anybody claims that they know where the innovation is going to come from in a year or two, let alone in five or ten, they're just wrong or lying. So our strategy for Oracle is what I call the Netflix of AI. If you think about Netflix, they produced a bunch of high-quality shows on their own. A few years ago it was House of Cards; last month my wife and I binge-watched Ginny & Georgia. But they also curated a lot of shows that they found around the world and brought them to their customers. It started with things like Seinfeld or Friends, more recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix bought in. They bought it as is, and they gave it the Netflix value: you have captioning, you have the ability to speed up the playback, you have it inside your app, you can download it and watch it offline and everything, but nobody at Netflix was involved in the production of those first seasons. Now, if these things hit and they're great, then the third season or the fourth season will get the full Netflix production value, high-value budget, high-value location shooting, or whatever. But you as a customer don't care whether the producer and director and screenplay writer is a Netflix employee or somebody else's employee; it is fulfilled by Netflix. I believe that we will become, or we are looking to become, the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and we have some competitive advantage, like healthcare with the Cerner acquisition, or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment-by-Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle, but we don't have to be building the data science and the models for everything. This means both open source, where we recently announced a partnership with Anaconda, the leading provider of Python distributions in the data science ecosystem, a joint strategic partnership to bring all that goodness to Oracle customers, and we're in the process of doing the same with Nvidia and all those software libraries, not just the hardware, both for stuff like Triton and for healthcare-specific stuff, as well as other leading AI ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle, so that you can truly consume the best AI hardware and the best AI software in the world on Oracle. 'Cause that is what I believe our customers want: the ability to choose from any open source engine, and honestly from any ISV-type solution that is AI-powered, and to use it in their experiences. >> So you mentioned ChatGPT; I want to talk about some of the innovations that are coming. As an AI expert, you saw ChatGPT and, on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market and the hype are somewhat surprising.
You know, they say that we tend to under- or over-hype things in the early stages and under-hype them long term; you kind of used the internet as an example. What's your take on that premise? >> So, I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet-style moment, and the way software interfaces and software applications are being developed will dramatically change over the next year, two, or three because of this type of technology. I think there will be industries that will be shifted. I think education is a good example; I saw this thing open on my son's laptop, so I think education is going to be transformed. The design industry, like images or whatever, has already been transformed. But I think that for mass adoption, beyond the hype, beyond the peak of inflated expectations, to use Gartner terminology, certain things need to happen. One is, this thing needs to become more reliable. Right now it is a complete black box that sometimes produces magic and sometimes produces just nonsense. It needs to have better explainability and better lineage into how it got to an answer, 'cause I think enterprises are going to really care about the things that they surface to customers or use internally. So I think that is one thing that's going to come. And the other thing that's going to come, I think, is industry-specific large language models, or industry-specific ChatGPTs, something like how OpenAI did Copilot for writing code. I think we will start seeing these types of apps solving specific business problems: understanding contracts, understanding healthcare, writing doctors' notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. I think that will become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or, hopefully, Oracle, that will use this type of technology to solve specific, very high-value business needs. And I think this will change how interfaces happen. So going back to your expense report: the world of, I'm going to go into an app and click on seven buttons in order to get some job done, that world is gone. I'm going to say, hey, please do this and that, and I expect an answer to come out. I've seen a recent demo about marketing and sales, where a customer sends an email expressing interest in something and then a ChatGPT-powered thing just produces the answer. I think this is how the world is going to evolve. Yes, there's a ton of hype; yes, it looks like magic, and right now it is magic, but it's not yet productive for most enterprise scenarios. In the next 6, 12, 24 months, though, this will start getting more dependable, and it's going to change how these industries are being managed. I think it's an internet-level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we interact: instead of accessing the data center through APIs, we're going to access it through natural language, and that opens up technology to a huge audience. Last question, and it's a two-part question. The first part is what you guys are working on for the future, but the second part is, we've got data scientists and developers in our audience, and they love the new shiny toy.
So give us a little glimpse of what you're working on in the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there are two main things that we're doing. One is around healthcare. With a recent acquisition, we are spending a significant effort around revolutionizing healthcare with AI, across many scenarios: from patient care using computer vision and cameras, through automating and improving insurance claims, to research and pharma. We are making the best models, from leading organizations and internal ones, available for hospitals, researchers, and insurance providers everywhere, and we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. If you have a business problem that you want to apply AI to solve, we want to be your platform. You could use others if you want to build everything complicated and whatnot, and we have a platform for that as well, but if you want to apply AI to solve a business problem, we want to be your platform. We want to be, again, the Netflix of AI kind of thing, where we are the place for the greatest AI innovations, accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts, as well as developing a lot of the missing pieces and building blocks that we see are needed in this space to make a truly great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here: we have a free offer for all of our AI services, so it typically costs you nothing. I would highly recommend just going and trying these things out. Go play with it. If you are a Python-wielding developer and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have these customer feedback things and I want to see if I can understand them, apply AI, visualize, and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT got us there, 'cause I see people that have nothing to do with AI, can't even spell AI, going and trying it out, I think this is the time. Go play with these things, go play with these technologies, and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. Love that, and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this Cube conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
AMD & Oracle Partner to Power Exadata X9M
(upbeat jingle) >> The history of Exadata in the platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club for all you sailing buffs, rather it stands for Extreme Performance Yield Computing, the enterprise grade version of AMD's Zen architecture which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today, Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance, thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata, it's designed top down to be the best possible platform for database. It has a lot of unique capabilities, like we make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next generation platform and it does exactly that. We're always wanting to be, to get all the best that we can from the available hardware that our partners like AMD produce. And so that's what X9M in it is, it's faster, more capacity, lower latency, more iOS, pushing the limits of the hardware technology. So we don't want to be the limit, the software database software should not be the limit, it should be the actual physical limits of the hardware. That that's what X9M's all about. >> Why, Juan, AMD chips in X9M? >> We're introducing AMD chips. We think they provide outstanding performance, both for OTP and for analytic workloads. And it's really that simple, we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to X86, a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that, it uses a seven nanometer CPU that is a core that was designed to really bring throughput, bring really high efficiency to computing and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's really a balanced processor and it's implemented in a way to really optimize high performance. That is our whole focus of AMD. It's where we've reset the company focus on years ago. 
And again, great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah. It's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor and, I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They show the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where there is tuning that we could do to really boost the performance above that baseline you get in the generic benchmarks. And that's what the teams have done. So, for instance, you look at optimizing latency to RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking, then you can adjust; we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. We have the expertise on the CPU engineering; the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It is really exciting to see. >> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So our current third generation of EPYC, which is really what we call our EPYC server offerings, is the 7003 series, third gen, in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capability, and it's going to have expanded memory capabilities, because there's CXL, Compute Express Link, that will open up even more memory opportunities. And I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning going forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward.
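A toy illustration of the tuning loop Mark describes: trace a workload, adjust knobs, measure, and keep the best setting. The parameters, workload, and numbers below are invented for the sketch; real Exadata and EPYC tuning spans thousands of hardware and software parameters, not two.

```python
# Toy parameter sweep: measure a synthetic "workload" under different knob
# settings and keep the fastest combination. Purely illustrative -- the knobs
# and workload here are made up, not actual Exadata or EPYC tunables.
import itertools
import time

def run_workload(batch_size, prefetch_depth, n_items=200_000):
    """Synthetic stand-in for an OLTP/analytics workload; returns elapsed seconds."""
    start = time.perf_counter()
    total = 0
    step = max(batch_size * prefetch_depth, 1)
    for i in range(0, n_items, step):
        total += sum(range(i, min(i + step, n_items)))
    return time.perf_counter() - start

knobs = {
    "batch_size": [32, 128, 512],
    "prefetch_depth": [1, 4, 16],
}

results = []
for combo in itertools.product(*knobs.values()):
    setting = dict(zip(knobs.keys(), combo))
    results.append((run_workload(**setting), setting))

best_time, best_setting = min(results, key=lambda r: r[0])
print(f"best setting {best_setting} -> {best_time:.4f}s")
```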
And then we were on that for a while and then things started stagnating, but in the last two or three years, AMD has been leading this, there's been a dramatic acceleration in innovation so it's very exciting to be part of this and customers are getting a big benefit from this. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube and we'll see you next time. (upbeat jingle)
Bob Thome, Tim Chien & Subban Raghunathan, Oracle
>>Earlier this week, Oracle announced the new X nine M generation of exit data platforms for its cloud at customer and legacy on prem deployments. And the company made some enhancements to its zero data loss, recovery appliance. CLRA something we've covered quite often since its announcement. We had a video exclusive with one Louisa who was the executive vice president of mission critical database technologies. At Oracle. We did that on the day of the announcement who got his take on it. And I asked Oracle, Hey, can we get some subject matter experts, some technical gurus to dig deeper and get more details on the architecture because we want to better understand some of the performance claims that Oracle is making. And with me today is Susan. Who's the vice president of product management for exit data database machine. Bob tome is the vice president of product management for exit data cloud at customer. And Tim chin is the senior director of product management for DRA folks. Welcome to this power panel and welcome to the cube. >>Thank you, Dave. >>Can we start with you? Um, Juan and I, we talked about the X nine M a that Oracle just launched a couple of days ago. Maybe you could give us a recap, some of the, what do we need to know? The, especially I'm interested in the big numbers once more so we can just understand the claims you're making around this announcement. We can dig into that. >>Absolutely. They've very excited to do that. In a nutshell, we have the world's fastest database machine for both LTP and analytics, and we made that even faster, not just simply faster, but for all LPP we made it 70% faster and we took the oil PPV ops all the way up to 27.6 million read IOPS and mind you, this is being measured at the sequel layer for analytics. We did pretty much the same thing, an 87% increase in analytics. And we broke through that one terabyte per second barrier, absolutely phenomenal stuff. Now, while all those numbers by themselves are fascinating, here's something that's even more fascinating in my mind, 80% of the product development work for extra data, X nine M was done during COVID, which means all of us were remote. And what that meant was extreme levels of teamwork between the development teams, manufacturing teams, procurement teams, software teams, the works. I mean, everybody coming together as one to deliver this product, I think it's kudos to everybody who touched this product in one way or the other extremely proud of it. >>Thank you for making that point. And I'm laughing because it's like you the same bolt of a mission-critical OLT T O LTP performance. You had the world record, and now you're saying, adding on top of that. Um, but, okay. But, so there are customers that still, you know, build the builder and they're trying to build their own exit data. What they do is they buy their own servers and storage and networking components. And I do that when I talk to them, they'll say, look, they want to maintain their independence. They don't want to get locked in Oracle, or maybe they believe it's cheaper. You know, maybe they're sort of focused on the, the, the CapEx the CFO has him in the headlock, or they might, sometimes they talk about, they want a platform that can support, you know, horizontal, uh, apps, maybe not Oracle stuff, or, or maybe they're just trying to preserve their job. I don't know, but why shouldn't these customers roll their own and why can't they get similar results just using standard off the shelf technologies? >>Great question. 
It's going to require a little involved answer, but let's just look at the statistics to begin with. Oracle's exit data was first productized in Delaware to the market in 2008. And at that point in time itself, we had industry leadership across a number of metrics. Today, we are at the 11th generation of exit data, and we are way far ahead than the competition, like 50 X, faster hundred X faster, right? I mean, we are talking orders of magnitude faster. How did we achieve this? And I think the answer to your question is going to lie in what are we doing at the engineering level to make these magical numbers come to, uh, for right first, it starts with the hardware. Oracle has its own hardware server design team, where we are embedding in capabilities towards increasing performance, reliability, security, and scalability down at the hardware level, the database, which is a user level process talks to the hardware directly. >>The only reason we can do this is because we own the source code for pretty much everything in between, starting with the database, going into the operating system, the hypervisor. And as I, as I just mentioned the hardware, and then we also worked with the former elements on this entire thing, the key to making extra data, the best Oracle database machine lies in that engineering, where we take the operating system, make it fit like tongue and groove into, uh, a bit with the opera, with the hardware, and then do the same with the database. And because we have got this deep insight into what are the workloads that are, that are running at any given point in time on the compute side of extra data, we can then do micromanagement at the software layers of how traffic flows are flowing through the entire system and do things like, you know, prioritize all PP transactions on a very specific, uh, you know, queue on the RDMA. >>We'll converse Ethan at be able to do smart scan, use the compute elements in the storage tier to be able to offload SQL processing. They call them the longer I used formats of data, extend them into flash, just a whole bunch of things that we've been doing over the last 12 years, because we have this deep engineering, you can try to cobble a system together, which sort of looks like an extra data. It's got a network and it's got storage, tiering compute here, but you're not going to be able to achieve anything close to what we are doing. The biggest deal in my mind, apart from the performance and the high availability is the security, because we are testing the stack top to bottom. When you're trying to build your own best of breed kind of stuff. You're not going to be able to do that because it depended on the server that had to do something and HP to do something else or Dell to do something else and a Brocade switch to do something it's not possible. We can do this, we've done it. We've proven it. We've delivered it for over a decade. End of story. For as far as I'm concerned, >>I mean, you know, at this fine, remember when Oracle purchased Sohn and I know a big part of that purchase was to get Java, but I remember saying at the time it was a brilliant acquisition. I was looking at it from a financial standpoint. I think you paid seven and a half billion for it. And it automatically, when you're, when Safra was able to get back to sort of pre acquisition margins, you got the Oracle uplift in terms of revenue multiples. So then that standpoint, it was a no brainer, but the other thing is back in the Unix days, it was like HP. Oracle was the standard. 
And, and in terms of all the benchmarks and performance, but even then, I'm sure you work closely with HP, but it was like to get the stuff to work together, you know, make sure that it was going to be able to recover according to your standards, but you couldn't actually do that deep engineering that you just described now earlier, Subin you, you, you, you stated that the X sign now in M you get, oh, LTP IO, IOP reads at 27 million IOPS. Uh, you got 19 microseconds latency, so pretty impressive stuff, impressive numbers. And you kind of just went there. Um, but how are you measuring these numbers versus other performance claims from your competitors? What what's, you know, are you, are you stacking the deck? Can you give you share with us there? >>Sure. So Shada incidents, we are mentioning it at the sequel layer. This is not some kind of an ion meter or a micro benchmark. That's looking at just a flash subsystem or just a persistent memory subsystem. This is measured at the compute, not doing an entire set of transactions. And how many times can you finish that? Right? So that's how it's being measured. Now. Most people cannot measure it like that because of the disparity and the number of vendors that are involved in that particular solution, right? You've got servers from vendor a and storage from vendor B, the storage network from vendor C, the operating system from vendor D. How do you tune all of these things on your own? You cannot write. I mean, there's only certain bells and whistles and knobs that are available for you to tune, but so that's how we are measuring the 19 microseconds is at the sequel layer. >>What that means is this a real world customer running a real world. Workload is guaranteed to get that kind of a latency. None of the other suppliers can make that claim. This is the real world capability. Now let's take a look at that 19 microseconds we boast and we say, Hey, we had an order of magnitude two orders of magnitude faster than everybody else. When it comes down to latency. And one things that this is we'll do our magic while it is magical. The magic is really grounded in deep engineering and deep physics and science. The way we implement this is we, first of all, put the persistent memory tier in the storage. And that way it's shared across all of the database instances that are running on the compute tier. Then we have this ultra fast hundred gigabit ethernet RDMA over converged ethernet fabric. >>With this, what we have been able to do is at the hardware level between two network interface guides that are resident on that fabric, we create paths that enable high priority low-latency communication between any two end points on that fabric. And then given the fact that we implemented persistent memory in the storage tier, what that means is with that persistent memory, sitting on the memory bus of the processor in the storage tier, we can perform it remote direct memory access operation from the compute tier to memory address spaces in the persistent memory of the storage tier, without the involvement of the operating system on either end, no context, switches, knowing processing latencies and all of that. So it's hardware to hardware, communication with security built in, which is immutable, right? So all of this is built into the hardware itself. So there's no software involved. You perform a read, the data comes back 19 microseconds, boom. End of story. >>Yeah. 
So that's key to my next topic, which is security because if you're not getting the OSTP involved and that's, you know, very oftentimes if I can get access to the OSTP, I get privileged. Like I can really take advantage of that as a hacker. But so, but, but before I go there, like Oracle talks about, it's got a huge percentage of the Gayety 7% of the fortune 100 companies run their mission, critical workloads on exit data. But so that's not only important to the companies, but they're serving consumer me, right. I'm going to my ATM or I'm swiping my credit card. And Juan mentioned that you use a layered security model. I just sort of inferred anyway, that, that having this stuff in hardware and not have to involve access to the OS actually contributes to better security. But can you describe this in a bit more detail? >>So yeah, what Brian was talking about was this layered security set differently. It is defense in depth, and that's been our mantra and philosophy for several years now. So what does that entail? As I mentioned earlier, we designed our own servers. We do this for performance. We also do it for security. We've got a number of features that are built into the hardware that make sure that we've got immutable areas of form where we, for instance, let me give you this example. If you take an article x86 server, just a standard x86 server, not even express in the form of an extra data system, even if you had super user privileges sitting on top of an operating system, you cannot modify the bias as a user, as a super user that has to be done through the system management network. So we put gates and protection modes, et cetera, right in the hardware itself. >>Now, of course the security of that hardware goes all the way back to the fact that we own the design. We've got a global supply chain, but we are making sure that our supply chain is protected monitored. And, uh, we also protect the last mile of the supply chain, which is we can detect if there's been any tampering of form where that's been, uh, that's occurred in the hardware while the hardware shipped from our factory to the customers, uh, docks. Right? So we, we know that something's been tampered with the moment it comes back up on the customer. So that's on the hardware. Let's take a look at the operating system, Oracle Linux, we own article the next, the entire source code. And what shipping on exit data is the unbreakable enterprise Connell, the carnal and the operating system itself have been reduced in terms of eliminating all unnecessary packages from that operating system bundle. >>When we deliver it in the form of the data, let's put some real numbers on that. A standard Oracle Linux or a standard Linux distribution has got about 5,000 plus packages. These things include like print servers, web servers, a whole bunch of stuff that you're not absolutely going to use at all on exit data. Why ship those? Because the moment you ship more stuff than you need, you are increasing the, uh, the target, uh, that attackers can get to. So on AXA data, there are only 701 packages. So compare this 5,413 packages on a standard Linux, 701 and exit data. So we reduced the attack surface another aspect on this, when we, we do our own STIG, uh, ASCAP benchmarking. If you take a standard Linux and you run that ASCAP benchmark, you'll get about a 30% pass score on exit data. It's 90 plus percent. 
>>So which means we are doing the heavy lifting of doing the security checks on the operating system before it even goes out to the factory. And then you layer on Oracle database, transparent data encryption. We've got all kinds of protection capabilities, data reduction, being able to do an authentication on a user ID basis, being able to log it, being able to track it, being able to determine who access the system when and log back. So it's basically defend at every single layer. And then of course the customer's responsibility. It doesn't just stop by getting this high secure, uh, environment. They have to do their own job of them securing their network perimeters, securing who has physical access to the system and everything else. So it's a giant responsibility. And as you mentioned, you know, you as a consumer going to an ATM machine and withdrawing money, you would do 200. You don't want to see 5,000 deducted from your account. And so all of this is made possible with exited and the amount of security focus that we have on the system >>And the bank doesn't want to see it the other way. So I'm geeking out here in the cube, but I got one more question for you. Juan talked about X nine M best system for database consolidation. So I, I kinda, you know, it was built to handle all LTP analytics, et cetera. So I want to push you a little bit on this because I can make an argument that, that this is kind of a Swiss army knife versus the best screwdriver or the best knife. How do you respond to that concern and how, how do you respond to the concern that you're putting too many eggs in one basket? Like, what do you tell people to fear you're consolidating workloads to save money, but you're also narrowing the blast radius. Isn't that a problem? >>Very good question there. So, yes. So this is an interesting problem, and it is a balancing act. As you correctly pointed out, you want to have the economies of scale that you get when you consolidate more and more databases, but at the same time, when something happens when hardware fails or there's an attack, you want to make sure that you have business continuity. So what we are doing on exit data, first of all, as I mentioned, we are designing our own hardware and a building in reliability into the system and at the hardware layer, that means having redundancy, redundancy for fans, power supplies. We even have the ability to isolate faulty cores on the processor. And we've got this a tremendous amount of sweeping that's going on by the system management stack, looking for problem areas and trying to contain them as much as possible within the hardware itself. >>Then you take it up to the software layer. We used our reliability to then build high availability. What that implies is, and that's fundamental to the exited architecture is this entire scale out model, our based system, you cannot go smaller than having two database nodes and three storage cells. Why is that? That's because you want to have high availability of your database instances. So if something happens to one server hardware, software, whatever you got another server that's ready to take on that load. And then with real application clusters, you can then switch over between these two, why three storage cells. We want to make sure that when you have got duplicate copies of data, because you at least want to have one additional copy of your data in case something happens to the disc that has got that only that one copy, right? 
So the reason we have three is that you can then stripe data across these three different servers and deliver high availability. >>Now take that up to the rack level, where a lot of things happen. When you're really talking about the blast radius, you want to make sure that if something physically happens to the data center, you have infrastructure available for business continuity — which is why we have the Maximum Availability Architecture. Components like GoldenGate and Active Data Guard, and other ways we keep distant systems in sync, are extremely critical to delivering those high-availability paths. That makes the whole equation — how many eggs in one basket versus containing the blast radius — a lot easier to grapple with, because business continuity is paramount to us. I mean, Oracle the enterprise is running on Exadata; our high-value cloud customers are running on Exadata. And I'm sure Bob's going to talk a lot more about the cloud piece of it. So I think we have all the tools in place to go after that optimization of how many eggs in one basket versus the blast radius — it's a question of working through the solution and the criticalities of that particular instance. >>Okay, great. Thank you for that detail, Subin. We're going to give you a break — go take a breath, get a drink of water, and maybe we'll come back to you if we have time. Let's go to Bob — Bob Thome, Exadata Cloud@Customer X9M. Earlier this week Juan said, kind of cocky, why are we even bothering to compare Exadata Cloud@Customer against Outposts or Azure Stack. Can you elaborate on why that is? >>Sure. First of all, I want to say I love AWS Outposts. You know why? It affirms everything we've been doing for the past four and a half years with Cloud@Customer. It affirms that running cloud services in customers' data centers is a large and important market — large and important enough that AWS felt the need to provide these customers with an AWS option, even if it only supports a sliver of the functionality they provide in the public cloud. And that's what they're doing: they're giving it a sliver, and they're not exactly leading with the best they could offer. For that reason alone there's really nothing to compare, so we give them the benefit of the doubt and we actually compare against their public cloud solutions. Another point: most customers looking to deploy Oracle Cloud@Customer are looking for a performant, scalable, secure, and highly available platform for their most critical databases — most often Oracle databases. Does Outposts run Oracle Database? No. Does Outposts run a comparable database? Not really. Does Outposts run Amazon's top OLTP and analytics database services, the ones that are top in their public cloud? No. We couldn't find anything that runs on Outposts worth comparing against Exadata Cloud@Customer, which is why the comparisons are against their public cloud products. And even with that, we're looking at numbers like fifty times, a hundred times slower.
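To make the two-database-node, three-storage-cell redundancy Subin walked through a moment ago concrete, here is a small, hypothetical Python sketch — not Oracle's ASM or Exadata code, just an illustration — of striping blocks across three storage cells with two copies each, so the loss of any single cell still leaves every block readable.

```python
# Illustrative only: stripe each data block across three "storage cells"
# with two copies, then check that losing any one cell leaves all blocks readable.

CELLS = ["cell1", "cell2", "cell3"]

def place_blocks(num_blocks: int) -> dict:
    """Round-robin each block onto two different cells (primary + mirror)."""
    placement = {}
    for b in range(num_blocks):
        primary = CELLS[b % len(CELLS)]
        mirror = CELLS[(b + 1) % len(CELLS)]
        placement[b] = {primary, mirror}
    return placement

def survives_single_cell_failure(placement: dict) -> bool:
    """Every block must still have a copy on a surviving cell."""
    for failed in CELLS:
        for cells_with_copy in placement.values():
            if cells_with_copy <= {failed}:   # all copies were on the failed cell
                return False
    return True

if __name__ == "__main__":
    layout = place_blocks(12)
    print("Survives any single cell failure:", survives_single_cell_failure(layout))
```

The design point is simply that two copies spread across three independent cells keep every block available through any one failure, while still letting you stripe for throughput.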
So then there's the Azure Stack. One of the key benefits that customers love about the cloud — and I think it's really underappreciated — is that it's a single-vendor solution. You have a problem with a cloud service — it could be IaaS, PaaS, SaaS, doesn't matter — and there's a single vendor responsible for fixing your issue. Azure Stack is missing big here, because it's a multi-vendor cloud solution, like AWS Outposts. Also, they don't exactly offer the same services on-prem that they offer in the cloud. And from what I hear, it can be a management nightmare requiring specialized administrators to keep that beast running. >>Okay, well, thanks for that. I'll grant you, first of all, that Oracle was the first with that same vision — I always tell people, if they say "well, we were first," actually, no, Oracle was first. Having said that, Bob, I hear you that right now Outposts is a 1.0 version. It doesn't have all the bells and whistles — but neither did your cloud when you first launched it. So let's let it bake for a while, and we'll come back in a couple of years and see how things compare, if you're up for it. >>Just remember that we're still in the oven too. >>Okay, all right, good — I love the chutzpah. Juan also talked about Deutsche Bank, and I saw that Deutsche Bank announcement: how they're working with Oracle, modernizing their infrastructure around the database, building other services around that, and kind of building their own version of a cloud for their customers. How does Exadata Cloud@Customer fit into that whole Deutsche Bank deal? Is this solution unique to Deutsche Bank, or do you see other organizations adopting Cloud@Customer for similar reasons and use cases? >>Yeah, I'll start with that. First, I want to say I don't think Deutsche Bank is unique. They want what all customers want: to run their most important workloads — the ones today running in their data center on Exadata and other high-end systems — in a cloud environment where they can benefit from things like cloud economics, cloud operations, and cloud automation. But they can't move to the public cloud. They need to maintain the service levels, the performance, the scalability, the security, and the availability that their business has come to depend on. Most clouds can't provide that — although Oracle's cloud actually can, because our public cloud does run Exadata — but even with that, they can't do it, because as a bank they're subject to lots of rules and regulations: they cannot move their 40 petabytes of data to a point outside the control of their data center. >>They have thousands of interconnected databases and applications — it's like a rat's nest, and many large customers have this problem. How do you move that to the cloud? You can move it piecemeal — move these apps and not those apps — but you suddenly end up with some pieces up here and some pieces down there, and the thing just dies because of the long latency over a WAN connection; it just doesn't work. Or you can shut it down — shut it down on Friday and move everything all at once.
Unfortunately, when you look at the estates most customers have, you're not going to be able to do that — you're going to be down for a month. Who can tolerate that? So it's a big challenge, and Exadata Cloud@Customer lets them move to the cloud without losing control of their data.
These industries move very, very slowly and customers are content to, and in many cases required to retain complete control of their data and they will be running under their control. They'll be running with that data under their control and the data center for the foreseeable future. >>Oh, I got another question for kind of just, if I could take a little tangent, cause the other thing I hear from the, on the, the, the on-prem don't own, the cloud folks is it's actually cheaper to run in on-prem, uh, because they're getting better at automation, et cetera. When you get the exact opposite from the cloud guys, they roll their eyes. Are you kidding me? It's way cheaper to run it in the cloud, which is more cost-effective is it one of those? It depends, Bob. >>Um, you know, the great thing about numbers is you can make, you can, you can kind of twist them to show anything that you want, right? That's a have spreadsheet. Can I, can, I can sell you on anything? Um, I think that there's, there's customers who look at it and they say, oh, on-premise sheet is cheaper. And there's customers who look at it and say, the cloud is cheaper. If you, um, you know, there's a lot of ways that you may incur savings in the cloud. A lot of it has to do with the cloud economics, the ability to pay for what you're using and only what you're using. If you were to kind of, you know, if you, if you size something for your peak workload and then, you know, on prem, you probably put a little bit of a buffer in it, right? >>If you size everything for that, you're gonna find that you're paying, you know, this much, right? All the time you're paying for peak workload all the time with the cloud, of course, we support scaling up, scaling down. We supply, we support you're paying for what you use and you can scale up and scale down. That's where the big savings is now. There's also additional savings associated with you. Don't have the cloud vendors like work. Well, we manage that infrastructure for you. You no longer have to worry about it. Um, we have a lot of automation, things that you use to either, you know, probably what used to happen is you used to have to spend hours and hours or years or whatever, scripting these things yourselves. We now have this automation to do it. We have, um, you eyes that make things ad hoc things, as simple as point and click and, uh, you know, that eliminates errors. And, and it's often difficult to put a cost on those things. And I think the more enlightened customers can put a cost on all of those. So the people that were saying it's cheaper to run on prem, uh, they, they either, you know, have a very stable workload that never changes and their environment never changes, um, or more likely. They just really haven't thought through the, all the hidden costs out there. >>All right, you got some new features. Thank you for that. By the way, you got some new features in, in cloud, a customer, a what are those? Do I have to upgrade to X nine M to, to get >>All right. So, you know, we're always introducing new features for clouded customer, but two significant things that we've rolled out recently are operator access control and elastic storage expansion. 
As we discussed, many organizations are using Exadata Cloud@Customer — they're attracted to the cloud economics and the operational benefits, but they're required by regulation to retain control and visibility of their data, as well as of any infrastructure that sits inside their data center. With Operator Access Control enabled, cloud operations staff members must request access to a customer system. The customer's IT team grants a designated person specific access to a specific component, for a specific period of time, with specific privileges. The customer can then view audit controls in real time, and if they see something they don't like — hey, what's this guy doing, it looks like he's stealing my data — boom. >>They can kill that operator's access — the session, the connections, everything — right away. This gives everyone, especially customers that need to regulate remote access to their infrastructure, the confidence they need to use the Exadata Cloud@Customer service. The other new thing is elastic storage expansion. Customers can add additional storage servers to their system, either at initial deployment or after the fact, and this provides two important benefits. The first is that they can right-size their configuration: if they need only the minimum compute capacity, they don't need the maximum number of storage servers to get that capacity. They don't have to subscribe to a fixed shape — we used to have fixed shapes — with hundreds of unnecessary database cores just to get the storage capacity; they can select a smaller system and then incrementally add storage. The second benefit is key for many customers: when you run out of storage, guess what — you can add more, and when you're out of storage, that's really important. Now, to the last part of your question: do you need a new Exadata Cloud@Customer X9M system to get these features? No — they're available for all Gen 2 Exadata Cloud@Customer systems. That's really one of the best things about cloud: the service you subscribe to today just keeps getting better and better, and unless there's some technical limitation, which is rare, most new features are available even for the oldest Cloud@Customer systems.
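To make the Operator Access Control workflow Bob described above more concrete, here is a minimal Python sketch under stated assumptions — the class and method names are hypothetical, not Oracle's actual API — of a scoped, time-boxed access grant that the customer can audit in real time and revoke immediately.

```python
# Hypothetical illustration of a scoped, time-boxed operator access grant.
# Names and behavior are assumptions for the sketch, not Oracle's implementation.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessGrant:
    operator: str
    component: str                     # e.g. one storage cell or database node
    privileges: set                    # e.g. {"read-logs", "restart-service"}
    expires_at: datetime
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def is_active(self) -> bool:
        return not self.revoked and datetime.utcnow() < self.expires_at

    def record(self, action: str) -> None:
        """Customer-visible audit trail of everything the operator does."""
        self.audit_log.append(f"{datetime.utcnow().isoformat()} {self.operator}: {action}")

    def revoke(self) -> None:
        """The 'boom' moment: the customer kills the access immediately."""
        self.revoked = True

# Example: grant one operator two hours on one component with narrow privileges.
grant = AccessGrant(
    operator="ops-engineer-42",
    component="storage-cell-3",
    privileges={"read-logs"},
    expires_at=datetime.utcnow() + timedelta(hours=2),
)
grant.record("viewed alert log")
grant.revoke()                 # customer decides they don't like what they see
print(grant.is_active())       # False: session, connections, everything cut off
```

The point of the sketch is the shape of the control: specific person, specific component, specific privileges, specific time window, with the customer holding the kill switch.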
>>Cool — and you can bring that in as you need it. My last question for you, Bob, is another one on security. Obviously — and we talked to Subin about this — it's a big deal. How can customer data be secure in the cloud if somebody other than their own vetted employees is managing the underlying infrastructure? Is that a concern you hear a lot, and how do you handle it? >>You know, it is something we hear, because a lot of these customers have big security teams, and it's their job to be concerned about that kind of stuff. Security, however, is one of the biggest but least appreciated benefits of cloud. Cloud vendors such as Oracle hire the best and brightest security experts to ensure their clouds are secure — something only the largest customers can afford to do. If you're a small shop, you're not going to be able to hire that kind of expertise, so you're better off being in the cloud. Customers running in the Oracle cloud can also use Oracle's Data Safe tool, which we provide, and which basically lets you inspect your databases and make sure that everything is locked down and secure and your data is secure. But your question is actually a little bit different. >>It was about potential internal threats to a company's data, given that the cloud vendor's — not the customer's — employees have access to the infrastructure that sits beneath the databases. Really, the first and most important thing we do to protect customers' data is that we encrypt the database by default. Subin listed a whole laundry list of things, but that's the one thing I want to point out: we encrypt your database. Yes, it sits on our infrastructure; yes, our operations people can actually see those data files sitting on the infrastructure — but they can't see the data. The data is encrypted; all they see is a big encrypted blob, so they can't access the data themselves. And, as you'd expect, we have very tight controls over operations access to the infrastructure: they need to securely log in using mechanisms built to prevent unauthorized access. >>Then all access is logged, and suspicious activities are investigated. But that still may not be enough for some customers, especially the regulated industries I mentioned earlier, and that's why we offer Operator Access Control. As I mentioned, that gives customers complete control over access to the infrastructure — the when, the what ops can do, and for how long. Customers can monitor in real time, and if they see something they don't like, they stop it immediately. Lastly, I just want to mention Oracle's Database Vault feature. It prevents administrators from accessing data — protecting data from rogue operators and rogue operations, whether they be from Oracle or from the customer's own IT staff. And this database option — Database Vault — is included when running a license-included service on Exadata Cloud@Customer, so you basically get it with the service. >>Got it. Thank you so much — unbelievable, Bob, we've got a lot to unpack there, but we're going to give you a break now and go to Tim, Tim Chien — zero data loss recovery appliance; we always love that name, we think the big guy named it, but nobody will tell us. We've been talking about security, and there's been a lot of news around ransomware attacks in every industry around the globe. Any knucklehead with a high school diploma could become a ransomware attacker — go on the dark web, get ransomware as a service, put a stick in, take a piece of the vig, and hopefully get arrested. When you think about the database, how do you deal with the ransomware challenge? >>Yeah, Dave, that's an extremely important and timely question, and we are hearing this from our customers. We just talked about HA and backup strategies, and ransomware has been coming up more and more. The unfortunate thing is that these ransoms are actually being paid, in the hope of regaining the ability to access the data. What that tells me is that today's recovery solutions and processes are not sufficient to get these systems back in a reliable and timely manner — so you have to pay the ransom to have even a hope of getting the data back. Now, for databases this can have a huge impact, because we're talking about transactional workloads.
And so even a compromise of just a few minutes — a blip — can affect hundreds or even thousands of transactions. That can literally represent hundreds of lost orders if you're a big manufacturing company, or millions of dollars' worth of financial transactions in a bank. That's why protecting databases at a transaction level is especially critical for ransomware, and it's a huge contrast to traditional backup approaches. >>So how do you approach that? What do you do specifically for ransomware protection for the database? >>Yeah, so we have the Zero Data Loss Recovery Appliance, and we announced the X9M generation. It is really the only solution in the market that offers that transaction-level protection, which allows all transactions to be recovered with zero RPO — zero, again. And this is only possible because Oracle has very innovative and unique technology called real-time redo, which captures all the transactional changes from the databases at the appliance, where they are also stored. Moreover, the appliance validates all these backups and redo — you want to make sure you can recover them after you've sent them, right? It's not just a file-level integrity check on a file system; it's actual database-level validation that the Oracle blocks and the redo I mentioned can be restored and recovered as a usable database. Any kind of malicious attack on, or modification of, that backup data — in transit, or even once it's stored on the appliance — would be immediately detected and reported by that validation. >>So this allows administrators to take action, like removing that system from the network, and it's a huge leap in terms of what customers can get today. The last thing I want to point out is what we call our cyber vault deployment. A lot of customers in the industry are creating air-gapped environments: a separate location where their backup copies are stored, physically network-separated from the production systems, which prevents ransomware from infiltrating that last good copy of backups. You can deploy the Recovery Appliance in a cyber vault and have it synchronized at random times, when the network is available, to keep it in sync. So that, combined with our transaction-level, zero-data-loss validation, is a nice package and really a game changer in protecting and recovering your databases from modern-day cyber threats.
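To illustrate the kind of end-to-end validation Tim is describing — in spirit only; this is a hypothetical sketch, not the appliance's actual mechanism — here is a short Python example that fingerprints backup pieces when they are ingested and flags any later modification, which is the basic idea behind detecting a tampered or encrypted backup before you need it.

```python
# Hypothetical sketch: detect tampering of stored backup pieces via checksums.
# It illustrates the idea of continuously validating backups, not Oracle's code.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class BackupCatalog:
    def __init__(self) -> None:
        self._expected = {}   # piece name -> checksum recorded at ingest

    def ingest(self, name: str, data: bytes) -> None:
        """Record the checksum when the backup piece is first received."""
        self._expected[name] = fingerprint(data)

    def validate(self, name: str, data: bytes) -> bool:
        """Re-check the stored piece; a mismatch means it was modified."""
        return self._expected.get(name) == fingerprint(data)

catalog = BackupCatalog()
piece = b"redo and datafile blocks for db1, incremental backup"
catalog.ingest("db1_incr_001", piece)

tampered = piece + b" ...overwritten by ransomware"
print(catalog.validate("db1_incr_001", piece))      # True: still intact
print(catalog.validate("db1_incr_001", tampered))   # False: raise the alarm
```

The appliance Tim describes goes much further — validating that blocks and redo actually restore as a usable database — but the checksum comparison shows why tampering with stored backups is detectable at all.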
>>Okay, great — and thank you for clarifying that air-gap piece, because there was some confusion about that. Every data protection and backup company I know has a ransomware solution; it's like the hottest topic going. You've got newer players in recovery and backup like Rubrik and Cohesity — they've raised a ton of dough — Dell has solutions, HPE just acquired Zerto to deal with this problem, IBM has got stuff, Veeam seems to be doing pretty well, and Veritas has a range of recovery solutions. They're all out there. What's your take on these and their strategy, and how do you differentiate? >>Yeah, it's a pretty crowded market, like you said. I think the first thing you really have to keep in mind is that these new, up-and-coming vendors started in what we call the copy data management, or CDM, space — they're not traditional backup and recovery designs. The purpose of CDM products is to provide fast point-in-time copies for test/dev and non-production use, and that's a real problem that needs a solution. So you create a one-time copy, and then you create snapshots after you apply incremental changes to that copy; the snapshot can be quickly restored and presented as if it were a fully populated file. This is all done through the underlying storage's block pointers. All of this sounds really cool and modern, right — new and up-and-coming, with lots of people in the market doing it. Well, it's really not that modern, because storage snapshot technology has been around for years. What these new vendors have been doing is essentially repackaging old technology for backup and recovery use cases, with an easier-to-use automation interface wrapped around it. >>Yeah. So you mentioned copy data management — Actifio started that whole space, from what I recall; at one point they were valued at more than a billion dollars, and they were acquired by Google. As I say, they kind of created that category. So fast-forward a bit — nine months, a year, whatever it's been — do you see that Google-Actifio offering in customer engagements? Is that something you run into? >>We really don't. It was popular and well known some years ago, but we really don't hear about it anymore. After the acquisition, if you look at all the collateral and the marketing, they are really a CDM and backup solution exclusively for Google Cloud use cases; they're not being positioned for on-premises or any other use cases outside of Google Cloud. That's 90-plus percent of the market that isn't addressable now by Actifio, so we really don't see them in any of our engagements at this time. >>I want to come back and push a little bit on the tech that you said is really not that modern. They certainly position it as modern — a lot of the engineers building these new backup and recovery capabilities came from the hyperscalers. Whether it's copy data management or, quote-unquote, modern backup and recovery, it's kind of a data management, nice all-in-one solution that seems pretty compelling. How does the Recovery Appliance specifically stack up? A lot of people think it's a niche product for really high-end use cases — is that fair? How do you see it? >>Yeah. So I think it's so important to understand, again, that the fundamental use of this technology is to create data copies for test/dev, and that's really different from operational backup and recovery, in which you must have the ability to do full and point-in-time recovery in any production outage or DR situation. And then, more importantly, after you recover and your applications are back in business, performance must continue to meet service levels as before.
And when you look at a CDM product and you restore a snapshot — with that product, the application is brought up on that restored snapshot — what happens? Your production application is now running on read-writable snapshots on backup storage. Remember, they don't restore all the data back to production-level storage; they're restoring it as a snapshot, okay — onto their storage. And so you have a huge difference in performance running these applications on that instantly recovered, if you will, database. To meet true operational requirements, you have to fully restore the files to production storage, period. And the Recovery Appliance was first and foremost designed to accomplish this — it's an operational recovery solution. We accomplish that, as I mentioned, with real-time transaction protection, and we have incremental-forever backup strategies, so you're taking just the changes every day and you can create virtual full backups that are quickly restored — fully restored, if you will — at 24 terabytes an hour. We validate and document that performance very clearly on our website, and of course we provide continuous recovery validation for all the backups stored on the system. So it's a very nice, complete solution. >>It scales to meet your demands — hundreds of thousands of databases. These CDM products might seem great, and they work well for a few databases, but then you put a real enterprise load on them — hundreds of databases — and we've seen plenty of times where it just buckles; it can't handle that kind of load at that scale. And this is important, because customers read the marketing and the collateral — hey, instant recovery, why wouldn't I want that? Well, it always sounds better than it is. So we have to educate them about exactly what that means for database backup and recovery use cases, which aren't really handled well by those products. >>I know I'm way over — I had a lot of questions on this announcement and I was going to let you go, Tim, but you just mentioned something that gave me one more question, if I may. You talked about supporting hundreds of thousands of databases, petabytes — do you have real-world use cases that actually leverage the appliance in these types of environments? Where does it really shine? >>Yeah, let me give you two real quick ones. We have Energy Transfer, the major natural gas and pipeline operator in the U.S. — a big part of our country's critical infrastructure services. We know ransomware and these kinds of threats are very much viable; we saw the Colonial Pipeline incident, and these attacks go after critical services. Energy Transfer was running lots of databases, and their legacy backup environment just couldn't keep up with their enterprise needs: they had backups taking well over a day and restores taking several hours, so they had problems and couldn't meet their SLAs. They moved to the Recovery Appliance, and now they're seeing backups complete, with that incremental-forever approach, in just 15 minutes — that's a 48-times improvement in backup time. >>And they're also seeing restores completing in about 30 minutes versus several hours.
So it's a huge difference for them, and they also get that nice recovery validation and monitoring by the system — they know the health of their enterprise at their fingertips. The second quick one is a global financial services customer. They have over 10,000 databases globally, and they really couldn't find a solution other than a throw-more-hardware approach to fix their backups — and that didn't address the failures and the issues. So they moved to the Recovery Appliance, and they saw their failed-backup rates go down dramatically, they saw four times better backup and restore performance, and they also have a very nice centralized way to monitor and manage the system — a real-time view, if you will, of data protection health for their entire environment. They can show this to executive management and auditing teams, which is great for compliance reporting. And having done that, they now have north of 50-plus Recovery Appliances deployed across their global enterprise. >>Love it. Thank you for that. Guys, great power panel. We have a lot of Oracle customers in our community, and the best way to help them is for me to ask you a bunch of questions and get the experts to answer. So I wonder if you could bring us home — maybe give us the top takeaways you want your customers, and our audience, to remember from this announcement. >>Sure. I want to pick up from where Tim left off and talk about a real customer use case — this is hot off the press. One of the largest banks in the United States decided they needed to do a performance software update on 3,000 of their database instances, spanning 68 Exadata clusters — a massive undertaking. They finished the entire task in three hours. Three hours to update 3,000 databases across 68 Exadata clusters — talk about availability. Try doing this on any other infrastructure; no one is going to be able to achieve it. So that's on the availability front. We are engineering in all the aspects of database management — performance, security, availability, the ability to provide redundancy at every single level — as part of the design philosophy and how we engineer this product. And as far as we are concerned, the goal is forever: >>we are just going to continue down this path of increasing performance and increasing the security of the infrastructure as well as the Oracle Database, and keep going. While these have been great results that we've delivered with Exadata X9M, the journey is on. And to our customers, the biggest advantage you're going to get from the kind of performance metrics we're driving with Exadata is consolidation. Consolidate more — move more database instances onto the Exadata platform, gain the benefits from that consolidation, reduce your operational expenses, reduce your capital expenses, reduce your management expenses — bring all of those down, and your total cost of ownership is guaranteed to go down. Those are my key takeaways, Dave. >>Guys, you've been really generous with your time. Subin, Bob, Tim — I appreciate you taking my questions and your willingness to go toe to toe. Really, thanks for your time. >>You're welcome, David. Thank you. Thank you.
>>And thank you for watching this video exclusive from theCUBE. This is Dave Vellante, and we'll see you next time. Be well.
The New Data Equation: Leveraging Cloud-Scale Data to Innovate in AI, CyberSecurity, & Life Sciences
>> Hi, I'm Natalie Ehrlich, and welcome to the AWS Startup Showcase presented by theCUBE. We have an amazing lineup of great guests who will share their insights on the latest innovations and solutions in leveraging cloud-scale data in AI, security, and life sciences. And now we're joined by the co-founders and co-CEOs of theCUBE, Dave Vellante and John Furrier. Thank you, gentlemen, for joining me. >> Hey Natalie. >> Hey Natalie. >> How are you doing — hey, John. >> Well, I'd love to get your insights here. Let's kick it off: what are you looking forward to? >> Dave, I think one of the things we've been doing on theCUBE for 11 years is looking at the signal in the marketplace. I wanted to focus on this because AI is cutting across all industries, so we're seeing that with cybersecurity and life sciences. It's the first time we've had a life sciences track in the showcase, which is amazing because it shows the growth of cloud scale. So I'm super excited by that, and I think that's going to showcase some new business models. And of course the keynote's Ali Ghodsi, the CEO of Databricks, pushing a billion dollars in revenue — clear validation that startups can go from zero to a billion dollars in revenue. So that should be really interesting. And of course the top venture capitalists are coming in to talk about what the enterprise dynamics are all about. What about you, Dave? >> You know, I thought it was an interesting mix and choice of startups, when you think about AI, security, and healthcare — and I've been thinking about that. Healthcare is the perfect industry; it is ripe for disruption. If you think about healthcare, we all complain about how expensive and non-transparent it is. There's a lot of discussion about whether everybody can have equal access; certainly with COVID the staff is burned out; there's a real divergence in the quality of healthcare; and it all results in patients not being happy — if you did an NPS score on patients in healthcare, it would be pretty low, John. So when I think about AI and security in the context of healthcare and cloud, I ask questions like: when are machines going to be able to make better diagnoses than doctors? That's starting — it's really assistance being put into play today. But when you think about cheaper and more accurate image analysis, the overall patient experience and trust, personalized medicine, self-service, the remote medicine we've seen during the COVID pandemic, disease tracking, language translation — there are so many things where the cloud and data can help. And at the end of it, it's all about: how do I authenticate, how do I deal with privacy and personal information, and how do I get tamper resistance? That's where the security play comes in. So it's a very interesting mix of startups that I'm really looking forward to hearing from. >> You know, Natalie, one of the things we talked about — some of these companies, Dave, we've talked to a lot of these companies — and to me the business model innovations are coming out of two factors. The pandemic is kind of coming to an end, so that accelerated things and really showed who had the right stuff, in my opinion.
So you were either on the wrong side or the right side of history when it comes to the pandemic, and as we look back, we're coming out of it with clear growth in certain companies — the companies that adopted, let's say, cloud. And the other factor is cloud scale. So the focus of these startup showcases is really on how startups can align with enterprise buyers and create new kinds of business models — going from a re-pivot or a refactoring to more value. The other thing that's interesting is that the business model isn't just for the good guys. If you look at, say, ransomware, the business model of hackers has gotten amazing too — they're well-funded machines focused on how to extort cash from companies. So there are a lot of security issues around the business model as well. To me, business model innovation with cloud-scale tech, with the pandemic as a forcing function, has produced a lot of new kinds of decision-making in enterprises. You're seeing how enterprise buyers are changing their decision criteria — and, frankly, their existing suppliers. If you're an old-guard supplier, you're potentially out, because if you didn't deliver during the pandemic — this is the issue everyone's talking about, and it's not publicized much in the press, but it's actually happening. >> Well, thank you both very much for joining me to kick off our AWS Startup Showcase. Now we're going to go to our very special guest, Ali Ghodsi — John Furrier will sit with him for a fireside chat — and Dave and I will see you on the other side. >> Okay, Ali, great to see you. Thanks for coming on our AWS Startup Showcase — our second edition, second batch, season two, whatever we want to call it; it's our second version of this new series where we feature the hottest startups coming out of the AWS ecosystem. And you're one of them — I've been there — but you're not a startup anymore; you're here pushing serious success on the revenue side and as a company. Congratulations, and great to see you. >> Likewise. Thank you so much, good to see you again. >> You know, I remember the first time we chatted on theCUBE: you weren't really doing much software revenue, you were really talking about the new revolution in data, and you were all in on cloud. And I will say that from day one you were adamant that it was cloud scale before anyone was really talking about it — at that time it was on-premises with Hadoop and those kinds of things. You saw that early; I remember that conversation, and boy, that bet paid out great. So congratulations. >> Thank you so much. >> So I've got to ask you to jump right in. Enterprises are making decisions differently now, and you are an example of a company that has gone from literally zero software sales to pushing a billion dollars, as it's being reported. Certainly the success of Databricks has been written about, but what's not written about is how you aligned with the changing criteria of the enterprise customer. Take us through that — these companies here are aligning the same way, and enterprises want to change; they want to be on the right side of history. What's the success formula? >> Yeah. Basically, what we always did was look a few years out: how can we help these enterprises future-proof what they're trying to achieve?
They have, you know, 30 years of legacy software and baggage, and they have compliance and regulations — how do we help them move to the future? So we try to identify the kinds of secular trends that we think are going to matter; you may only see them a little bit right now — cloud was one of them — but they get bigger and bigger. So we identified those, and there were three or four that we latched onto. Then, every year that passes, we're a little bit more right, because it's a secular trend in the market, and eventually it becomes a force you can't fight anymore. >> Yeah. And I just want to put in a plug for your Clubhouse talks with Andreessen Horowitz. You're always on Clubhouse talking about — I won't say the killer instinct, but — being a CEO in a time when there's so much change going on. You're constantly under pressure; it's a lonely job at the top, I know that, but you've made some good calls. What were some of the key moments you can point to, where you thought, okay, the wave is coming in now, we'd better get on it? A lot of these startups want to be in your position, and a lot of buyers want to take advantage of the technology that's coming — they've got to figure it out. What were some of those key inflection points for you? >> If you're just listening to what everybody's saying, you're going to miss those trends — you're just going with the stream. So, John, you mentioned cloud: cloud was a thing at the time, and we thought it was going to be the thing that takes over everything. Today it's actually multi-cloud. Multi-cloud is a thing; more and more people are thinking, wow, I'm paying a lot to the cloud vendors — do I want to buy more from them, or do I want some optionality? So that's one. Two: open. People are worried about lock-in — lock-in has happened for many, many decades — so they want open architectures, open source, open standards. That's the second one we bet on. The third one, which initially wasn't so obvious, was AI and machine learning. Now it's super obvious — everybody's talking about it — but when we started, "artificial intelligence" mostly referred to robotics, and machine learning wasn't a term people really knew. Today everybody's doing machine learning and AI. So betting on those future trends — those secular trends, as we call them — is super critical. >> One of the things I want to get your thoughts on is this idea of re-platforming versus refactoring. You see a lot being talked about here — what does that even mean? People are trying to figure it out. Re-platforming I get: lift to cloud scale. But as you look at the cloud benefits, what do you say to customers and enterprises that are trying to use the benefits of the cloud — say, for data — and are in the middle of figuring out how they should be thinking about refactoring? And how can they make a better selection of suppliers? It used to be an RFP: you delivered the speeds and feeds and you got selected. Now there's a different science and methodology behind it. What are your thoughts on refactoring as a buyer — what do I have to do? >> Well, let's start with what you said about RFPs. Times have changed. Back in the day, you had to sign up for something and only get it much later, and you had to go through this arduous process.
In the cloud, with the pay-as-you-go model, elasticity, and so on, you can kind of try your way into it — you can try before you buy, and you can gradually use more and more. You don't need to go all in and, say, commit to $50 million only to find out six months later that the stuff is shelfware or doesn't work. So that's one thing that has changed, and it's beneficial. The second thing is: don't just mimic what you had on-prem in the cloud. That's what this refactoring is about. If you had a Hadoop data lake, don't just have an S3 data lake; if you had an on-prem data warehouse, don't just have a cloud data warehouse. That's just repeating what you did on-prem in the cloud — architect for the future instead. And for us, the most important thing we say is that this lakehouse paradigm is a cloud-native way of organizing your data that's different from how you would do things on-premises. So think through the right way of doing it in the cloud; don't just copy-paste what you had on-premises into the cloud. >> It's interesting — one of the things we're observing, and I'd love to get your reaction to this (Dave and I have been reporting on it), is that two personas in the enterprise are changing their organizations. One is what I call IT ops — there's an SRE role developing. And data teams are being dismantled and sprinkled through other teams — this notion of data pipelining being part of workflows, not just a department. Are you seeing organizational shifts in how people organize their resources — their human resources — to take advantage of the data problems that need to be solved with machine learning and cloud scale? >> Yeah, absolutely. So you're right: SRE became a thing, with lots of DevOps people, because when the cloud vendors launched their infrastructure as a service, you needed a lot of DevOps people to stitch all these things together and get it all working. But now things are maturing, so with vendors like Databricks and other multi-cloud vendors you can get much higher-level services, where you don't necessarily need lots and lots of DevOps people trying to stitch together many services to make this work. That's one trend. Secondly, you're seeing data teams becoming ubiquitous in these organizations. Before, you'd have one data team — "we'll have data and AI and we'll be done," one and done. But that's not how it works, and that's not how Google, Facebook, and Twitter did it: they had data throughout the organization, and every business unit was empowered — sales, marketing, finance, engineering. So how do you embed all those data teams and make them run fast? There's this concept of a data mesh, which is super important: you decentralize and enable all these teams to focus on their domains and run super fast. And that's really enabled by the lakehouse paradigm in the cloud we're talking about, where you're open, you're based on open standards, and you have flexibility in the data types and in how teams store their data — so you provide a lot of that flexibility, but at the same time you have centralized governance over it. So absolutely, things are changing in the market.
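As a rough illustration of the data mesh idea Ali describes — decentralized, domain-owned data products with centralized governance — here is a small, hypothetical Python sketch; the names and the policy checks are invented for the example and are not Databricks functionality.

```python
# Hypothetical sketch of a data mesh registry: domains own their data products,
# while a central policy layer enforces governance rules uniformly.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    domain: str            # owning team, e.g. "sales" or "finance"
    owner: str
    pii: bool              # does it contain personal data?
    format: str            # e.g. "delta", "parquet"

CENTRAL_POLICIES = [
    ("open format required", lambda p: p.format in {"delta", "parquet"}),
    ("PII products need a named owner", lambda p: not p.pii or bool(p.owner)),
]

def register(catalog: list, product: DataProduct) -> bool:
    """Domains self-serve registration; governance checks run centrally."""
    failures = [name for name, check in CENTRAL_POLICIES if not check(product)]
    if failures:
        print(f"rejected {product.name}: {failures}")
        return False
    catalog.append(product)
    return True

catalog = []
register(catalog, DataProduct("orders_daily", "sales", "ana@corp", pii=False, format="delta"))
register(catalog, DataProduct("patient_notes", "health", "", pii=True, format="csv"))
```

The second registration is rejected, which is the point: each domain moves at its own speed, but the governance rules apply the same way everywhere.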
>> Well, you're just the professor — the masterclass right here is amazing. Thanks for sharing that insight; you always keep us up to date, and that's why we have you on here — you're an amazing resource for the community. Ransomware is a huge problem, and it's now the government's focus: we're being attacked and we don't know where it's coming from. The business models around cyber are expanding rapidly — there's real revenue behind them — and there's a data problem, not just a security problem. One of the themes in all of these startup showcases is that data is ubiquitous in the value propositions, and one of them is ransomware. What are your thoughts on ransomware? Is it a data problem? Does cloud help? Some are saying the cloud has better security against ransomware than on-premises. What's your vision of how this ransomware problem gets addressed, besides the government taking over? >> Yeah, that's a great question. Let me start by saying, you know, we're a data company, right? And if you say you're a data company, you might as well have said you're a privacy company. Some people ask, well, what do you think about privacy — do you guys even do privacy? We're a data company, so yeah, we're a privacy company as well: you can't talk about data without talking about privacy, with every customer and every enterprise. So that's obviously top of mind for us. I do think that in the cloud, security is much better, because vendors like us are investing so many resources into security and making sure we harden the infrastructure — and because we actually run all of this infrastructure, we can monitor it, detect if an attack is happening, and immediately stop it. That's different from on-prem, where you have the separated duties: the software vendor, which would have been us, doesn't really see what's happening in the data center, and an IT team that didn't develop the software is responsible for its security. So I think things are much better now and we're much better set up, but of course things like cryptocurrencies are making it easier for attackers to hide — there are decentralized networks — so the attackers are getting more and more sophisticated as well. It's definitely super important and top of mind; we're all investing heavily in security and privacy, because that's going to be critical going forward. >> Yeah, we've got to move that red line, figure that out, and get more intelligence. The decentralized trend is not going away — it's going to be more of that, less of the centralized — but centralized does come into play with data; it's a mix, not mutually exclusive. And I'll get your thoughts on an architectural question: with 5G and the edge coming — Amazon's got Outposts and Wavelength, and Mobile World Congress is coming up this month — the focus on processing data at the edge is a huge issue, and enterprises are now going to be a commercial part of that. So architecture decisions are being made in enterprises right now, and this is a big issue. You mentioned multi-cloud — so, tools versus platforms. Now I'm an enterprise buyer and there are no more RFPs; I've got all these new choices of cloud-native startups and growing companies to choose from, and all kinds of new challenges and opportunities. How do I build my architecture so I don't foreclose a future opportunity?
>> Yeah, as I said, look, you're actually right. Cloud is becoming even more and more something that everybody's adopting, but at the same time, there is this thing that the edge is also more and more important. And the connectivity between those two, and making sure that you can really do that efficiently. My ask from enterprises, and I think this is top of mind for all the enterprise architects, is choose open, because that way you can avoid locking yourself in. So that's one thing that's really, really important. In the past, you know, all these vendors that locked you in, and then you tried to move off of them, they were highly innovative back in the day. In the 80's and the 90's, they were the best companies. You gave them all your data and it was fantastic. But then because you were locked in, they didn't need to innovate anymore. And you know, they focused on margins instead. And then over time, the innovation stopped and now you were kind of locked in. So I think openness is really important. I think preserving optionality with multi-cloud, because we see the different clouds have different strengths and weaknesses and it changes over time. All right. Early on AWS was the only game in town, then Azure showed up with much better security, Active Directory, and so on. Now Google with AI capabilities. Which one's going to win, which one's going to be better? Actually, probably all three are going to be around. So having that optionality that you can pick between the three, and then artificial intelligence. I think that's going to be the key to the future. You know, you asked about security earlier. That's how people detect zero day attacks, right? You asked about the edge, same thing there, that's where the predictions are going to happen. So make sure that you invest in AI and artificial intelligence very early on, because it's not something you can just bolt on later on and have a little data team somewhere, and then now you have AI and it's one and done. >> All right. Great insight. I've got to ask you, the folks may or may not know, but you're a professor at Berkeley as well, done a lot of great work. That's where you kind of came out of when Databricks was formed. And Berkeley basically invented distributed computing back in the 80's. I remember I was breaking in when Unix was proprietary, when software wasn't open, you actually had to deal under the table to get code. Now it's all open. The internet now is distributed computing, and look at how interconnects are happening. I mean, the internet didn't break during the pandemic, which proves the benefit of the internet. And that's a positive. But as you start seeing edge, it's essentially distributed computing. So I've got to ask you, from a computer science standpoint, what do you see as the key learnings or connect the dots for how this distributed model will work? I see hybrid clearly, hybrid cloud is clearly the operating model, but if you take it to the next level of distributed computing, what are some of the key things that you look for in the next five years as this starts to be completely interoperable? Obviously software is going to drive a lot of it. What's your vision on that? >> Yeah, I mean, you know, so Berkeley, you're right, for the geeks, you know, there was the NOW project 20, 30 years ago that basically is how we do things today. There was a project on how you search, in the very early days, with Inktomi, that became how Google and everybody else does search today. So Berkeley was super, super early, sometimes way too early.
And that was actually the mistake, that they were so early that people said that that stuff doesn't work. And then 20 years later it gets reinvented. So in 2009, Berkeley published "Above the Clouds," saying the cloud is the future. At that time, most industry leaders said, that's just, you know, that doesn't work. Today, recently, they published a research paper called Sky Computing. So sky computing is what you get above the clouds, right? So we have the cloud as the future, the next level after that is the sky. That's the one on top of them. That's what multi-cloud is. So a lot of the research at Berkeley, you know, in the distributed systems lab, is about this. And we're excited about that. And we're one of the sky computing vendors out there. So I think you're going to see much more innovation happening at the sky level than at the compute level, where you needed all those DevOps and SRE people to, like, you know, build everything manually themselves. >> I can just see the memes now coming, Ali, Skynet, Star Trek. You've got space too, by the way, space is another frontier that is seeing a lot of action going on, because now the surface area of data with satellites is huge. So again, I know you guys are doing a lot of business with folks in that vertical, where you're starting to see real time data acquisition coming from these satellites. What's your take on the whole space as the, not the final frontier, but certainly as a new congested and contested space for data? >> Well, I mean, as a data vendor, we see a lot of, you know, alternative data sources coming in, and people are using machine learning, AI, to eke out signal from the, you know, massive amounts of imagery that's coming out of these satellites. So that's actually pretty common in FinTech, which is a vertical for us. And also sort of in the public sector, lots of, lots of, lots of satellite imagery data that's coming. And these are massive volumes. I mean, it's like huge data sets and it's super, super exciting what they can do. Like, you know, extracting signal from the satellite imagery, and, you know, being able to handle that amount of data, it's a challenge for all the companies that we work with. So we're excited about that too. I mean, definitely that's a trend that's going to continue. >> All right. I'm super excited for you. And thanks for coming on The Cube here for our keynote. I've got to ask you a final question. As you think about the future, I see your company has achieved great success in a very short time, and again, you guys have done the work, I've been following your company as you know. We've been breaking that Databricks story for a long time. I've been excited by it, but now what's changed? You've got to start thinking about the next 20 mile stare when you look at, you know, the sky computing, you're thinking about these new architectures. As the CEO, your job is to, one, not run out of money, which you don't have to worry about anymore, so hiring. And then, you've got to figure out that next 20 mile stare as a company. What's going on in your mind? Take us through your mindset of what's next. And what do you see out in that landscape? >> Yeah, so what I mentioned around sky computing, optionality around multi-cloud, you're going to see a lot of capabilities around that. Like how do you get multi-cloud disaster recovery? How do you leverage the best of all the clouds while at the same time not having to just pick one?
So there's a lot of innovation there that, you know, we haven't announced yet, but you're going to see a lot of it over the next many years. Things that you can do when you have the optionality across the different clouds. And the second thing that's really exciting for us is bringing AI to the masses. Democratizing data and AI. So how can you actually apply machine learning to machine learning? How can you automate machine learning? Today machine learning is still quite complicated and it's pretty advanced. It's not going to be that way 10 years from now. It's going to be very simple. Everybody's going to have it at their fingertips. So how do we apply machine learning to machine learning? It's called AutoML, automatic, you know, machine learning. So that's an area, and that's not something that's fully done yet, right? But the goal is to eventually be able to automate away the whole machine learning engineer and the machine learning data scientist altogether. >> You know, what's really fun in talking with you is that, you know, for years we've been talking about this inside the ropes, inside the industry, around the future. Now people are starting to get some visibility, the pandemic's forced that. You're seeing the bad projects being exposed. It's like the tide pulled out and you see all the scabs and bad projects that were justified by old guard technologies. If you get it right you're on a good wave. And this is clearly what we're seeing. And you guys are an example of that. So as enterprises realize this, that they're going to have to double down on the right projects and probably trash the bad projects, new criteria, how should people be thinking about buying? Because again, we talked about the RFP before. I want to kind of circle back because this is something that people are trying to figure out. You're seeing, you know, organic, freemium models, as cloud scale becomes the advantage, and the lock-in frankly seems to be the value proposition. The more value you provide, the more lock-in you get. Which sounds like that's the way it should be, versus proprietary, you know, protocols. The protocol is value. How should enterprises organize their teams? Is it end to end workflows? And how should they evaluate the criteria for these technologies that they want to buy? >> Yeah, that's a great question. So, you know, it's very simple, try to future proof your decision-making. Make sure that whatever you're doing is not locking you in. So whatever decision you're making, what if the world changes in five years, make sure that if you're making a mistake now, that's not going to bite you five years later. So how do you do that? Well, open source is great. If you're leveraging open source, you can try it out already. You don't even need to talk to any vendor. Your teams can already download it and try it out and get some value out of it. If you're in the cloud, with these pay as you go models, you don't have to do a big RFP and commit big. You can try it, pay the vendor as you go, $10, $15. It doesn't need to be a million dollar contract, and slowly grow as you're providing value. And then make sure that you're not just locking yourself in to one cloud or, you know, one particular vendor. As much as possible preserve your optionality, because then that's not a one-way door. If it turns out later you want to do something else, you can, you know, pick other things as well. You're not locked in. So that's what I would say.
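Ali's point a moment ago about AutoML, applying machine learning to machine learning, can be sketched at its simplest as a loop that tries several candidate models and automatically keeps the best one. The snippet below is a hedged illustration using scikit-learn and a bundled demo dataset; the candidate list and scoring choices are assumptions for illustration, not Databricks' actual AutoML implementation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# A miniature AutoML loop: try several model families, score each with
# cross-validation, and keep the best performer automatically.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best_name = max(scores, key=scores.get)
print(scores)
print("selected model:", best_name)

# Fit the winner on the full dataset; a real AutoML system would also tune
# hyperparameters and automate feature engineering.
best_model = candidates[best_name].fit(X, y)
```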
Keep that top of mind, that you're not locking yourself into a particular decision that you made today, that you might regret in five years. >> I really appreciate you coming on and sharing your insights with our community and The Cube. And as always, great to see you. I really enjoy your Clubhouse talks, and I really appreciate how you give back to the community. And I want to thank you for coming on and taking the time with us today. >> Thanks John, always appreciate talking to you. >> Okay, Ali Ghodsi, CEO of Databricks, a success story that proves the validation of cloud scale, open, and create value; value is the new lock-in. So Natalie, back to you for continuing coverage. >> That was a terrific interview John, but I'd love to get Dave's insights first. What were your takeaways, Dave? >> Well, if we have more time I'll tell you how Databricks got to where they are today, but I'll say this, the most important thing to me that Ali said was he conveyed a very clear understanding of what data companies are doing right and getting right. He talked about four things. There's not one data team, there's many data teams. And he talked about data being decentralized, and data has to have context, and that context lives in the business. He said, look, think about it. The way that the data companies got it right, they have data and data teams in sales and marketing and finance and engineering. They all have their own data and data teams. And he referred to that as a data mesh. That's a term that Zhamak Dehghani coined, and the data warehouse or the data lake is merely a node in that global mesh. The mesh is discoverable, he talked about federated governance, and Databricks, they're breaking the model of shoving everything into a single repository and trying to make that the so-called single version of the truth. Rather what they're doing, which is right on, is putting data in the hands of the business owners. And that's what true data companies do. And the last thing he talked about, sky computing, which I loved, it's that future layer, we talked about multi-cloud a lot, that abstracts the underlying complexity of the technical details of the cloud and creates additional value on top. I always say that the cloud players like Amazon have given the gift to the world of 100 billion dollars a year they spend in CapEx. Thank you. Now we're going to innovate on top of it. Yeah. And I think the refactoring... >> How about you, John? >> That was great insight and I totally agree. The refactoring piece too was key, he brought that home. But to me, I think what Databricks and Ali shared there, and why he's been open and sharing a lot of his insights with the community, but what he's not saying, 'cause he's humble and polite, is they cracked the code on the enterprise, Dave. And to Dave's point, that's exactly the reason why they did it, they saw an opportunity to make it easier; at that time Hadoop was the rage, and they just made it easier. They were smart, they made good bets, they had a good formula and they cracked the code with the enterprise. They brought it in and they brought value. And see, that's the key to the cloud, as Dave pointed out. You replatform with the cloud, then you refactor. And I think he pointed out the multi-cloud, and that really kind of teases out the whole future and landscape, which is essentially distributed computing. And I think, you know, companies are starting to figure that out with hybrid and this on premises and now super edge, I call it, with 5G coming. So it's just pretty incredible. >> Yeah.
Databricks, the IPO is coming and people should know, I mean, they created Spark, as you know John, and what everybody thought they were going to do is mimic Red Hat and sell subscriptions and support. They didn't, they developed a managed service and they embedded AI tools to simplify data science. So to your point, enterprises could buy instead of build, we know this. Enterprises will spend money to make things simpler. They don't have the resources, and so what they got right was really embedding that, building a managed service, not mimicking the kind of Red Hat model, but actually creating a new value layer there. And that's a big part of their success. >> If I could just add one thing, Natalie, to what Dave is saying, it's really right on. And on the enterprise buyer side of the equation, it used to be that you had to be a known company, get PR, fill out RFPs, you had to meet all the specs. It's like going to the airport and getting a swab test, and getting a COVID test and all kinds of mechanisms to, like, block you and filter you. Most of the biggest success stories that have created the most value for enterprises have been the companies that nobody understood. And Andy Jassy's famous quote of, you know, being misunderstood is actually a good thing. Databricks was very misunderstood at the beginning and no one really knew who they were, but they did it right. And so for the enterprise buyers out there, don't be afraid to test the startups, because you know the next Databricks is out there. And I think that's where I see the psychology changing from the old IT buyers, Dave. It's like, okay, let's test this company. And there's plenty of ways to do that. He illuminated those: freemium, small pilots, you don't need to go all in on these big things. So I think that is going to be a shift in how companies are going to evaluate startups. >> Yeah. Think about it this way. Why should the large banks and insurance companies and big manufacturers and pharma companies, governments, why should they burn resources managing containers and figuring out data science tools if they can just tap into solutions like Databricks, which is an AI platform in the cloud, and let the experts manage all that stuff? Think about how much money and time that saves enterprises. >> Yeah, I mean, we've got 15 companies here we're showcasing in this batch, in this season if you want to call it that, or episode, whatever we're going to call it. They're awesome, right? And the next 15 will be the same. And these companies could be the next billion dollar revenue generator, because the cloud enables that today. I think that's the exciting part. >> Well, thank you both so much for these insights. Really appreciate it. The AWS startup showcase highlights the innovation that helps startups succeed. And no one knows that better than our very next guest, Jeff Barr. Welcome to the show, and I will send this interview now to Dave and John and see you in just a bit. >> Okay, hey Jeff, great to see you. Thanks for coming on again. >> Great to be back. >> So this is a regular community segment with Jeff Barr, who's a legend in the industry. Everyone knows your name. Everyone knows that. Congratulations on your recent blog posts, we have been reading them. Tons of news, I want to get your update because 5G has been all over the news, Mobile World Congress is right around the corner. I know Bill Vass was a keynote out there, virtual keynote. There's a lot of Amazon discussion around the edge with Wavelength. Specifically, this is the Outposts piece.
And I know there is news I want to get to, but top of mind is there's massive Amazon expansion and the cloud is going to the edge, it's here. What's up with Wavelength? Take us through the, I call it the power edge, the super edge. >> Well, I'm really excited about this mostly because it gives a lot more choice and flexibility and options to our customers. This idea with Wavelength, we announced quite some time ago, at least quite some time ago if we think in cloud years. We announced that we would be working with 5G providers all over the world to basically put AWS in the telecom providers' data centers or telecom centers, so that as their customers build apps, those apps would take advantage of the low latency, the high bandwidth, the reliability of 5G, and be able to get to some compute and storage services that are incredibly close, geographically and latency wise. That compute and storage is just going to give customers this new power and say, well, what are the cool things we can build? >> Do you see any correlation between Wavelength and some of the early Amazon services? Because to me, my gut feels like there's so much headroom there. I mean, I was just riffing on the notion of low latency packets. I mean, just think about the applications, gaming and VR, and metaverse kind of cool stuff like that, where having the edge, think about how much power is there. It just feels like a new, it feels like a new AWS. I mean, what's your take? You've seen the evolution and the growth of a lot of the key services. Like EC2 and S3. >> So welcome to my life. And so to me, the way I always think about this is it's like when I go to a home improvement store and I wander through the aisles, and I often wander through with no particular thing that I actually need, but I just go there and say, wow, they've got this and they've got this, they've got this other interesting thing. And I just let my creativity run wild. And instead of trying to solve a problem, I'm saying, well, if I had these different parts, well, what could I actually build with them? And I really think that with this breadth of different services and locations and options and communication technologies, I suspect a lot of our customers and customers-to-be are in this same mode, where they're saying, I've got all this awesomeness at my fingertips, what might I be able to do with it? >> He reminds me of when Fry's was around in Palo Alto, that store is no longer here, but it used to be, back in the day when it was good. You'd go in and just kind of spend hours, and then next thing you know, you built a computer. Like what, I didn't come in here for this, I just wanted to get some cables. Now I've got a motherboard. >> I clearly remember Fry's, and before that there was the Weird Stuff Warehouse, that was another really cool place to hang out, if you remember that. >> Yeah I do. >> I wonder if I could jump in, you guys are talking about the edge, and Jeff, I wanted to ask you about something that is, I think, people are starting to really understand and appreciate, what you did with the Annapurna acquisition, what you've done with Nitro and Graviton, and really driving costs down, driving performance up. I mean, there's like a compute renaissance. And I wonder if you could talk about the importance of that at the edge, because it's got to be low power, it has to be low cost. You've got to be doing processing at the edge. What's your take on how that's evolving?
>> Certainly, so you're totally right that we started working with and then ultimately acquired Annapurna Labs in Israel a couple of years ago. I've worked directly with those folks and it's really awesome to see what they've been able to do. Just really saying, let's look at all of these different aspects of building the cloud that were once effectively kind of somewhat software intensive, and say, where does it make sense to actually design, build, fabricate, deploy custom silicon? So from booting up the system, to doing all kinds of additional security checks, to running local IO devices, running NVMe as fast as possible to support EBS. Each of those things has been a contributing factor to not just the power of the hardware itself, but what I'm seeing, and have seen for the last probably two or three years at this point, is the pace of innovation on instance types just continues to get faster and faster. And it's not just cranking out new instance types because we can, it's because our awesomely diverse base of customers keeps coming to us and saying, well, we're happy with what we have so far, but here's this really interesting new use case. And we needed a different ratio of memory to CPU, or we need more cores based on the amount of memory, or we needed a lot of IO bandwidth. And having that Nitro as the base lets us really, I don't want to say plug and play, 'cause I haven't actually built this myself, but it seems like they can actually put the different elements together very, very quickly, and then come up with new instance types where our customers say, yeah, that's exactly what I asked for, and be able to do this entire range, from, like, micro and nano sized all the way up to incredibly large, with, just to me, incredible, when we talk about terabytes of memory that are actually just RAM memory. It's like, that's just an inconceivably large number by the standards of where I started out in my career. So it's all putting this power in customer hands. >> You used the term plug and play, but it does give you that, Nitro gives you that optionality. And the other thing that to me is really exciting is the way in which ISVs are writing to whatever's underneath. So you're making that, you know, transparent to the users, so I can choose as a customer the best price performance for my workload, and that's just going to grow that ISV portfolio. >> I think it's really important to be accurate and detailed and as thorough as possible as we launch each one of these new instance types, with, like, what kind of processor is in there and what clock speed does it run at? What kind of, you know, how much memory do we have? What are, just, the ins and outs, and is it Intel or Arm or AMD based? It's such an interesting contrast to me. I can still remember back in the very, very early days, you know, going back almost 15 years at this point, and effectively everybody said, well, not everybody. A few people looked and said, yeah, we kind of get the value here. Some people said, this just sounds like a bunch of generic hardware, just kind of generic hardware in a rack. And even back then it was something that we were very careful with, to design and optimize for use cases. But this idea that it's generic is so, so, so incredibly inaccurate that I think people are now getting this. And it's fine-tuned, not just for the cloud, but for very specific kinds of workloads and use cases.
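To ground Jeff's point about customers asking for different ratios of memory to CPU, here is a small hedged sketch using boto3 to compare a few EC2 instance types by their vCPU and memory figures. It assumes AWS credentials and a default region are configured, and the chosen instance type names are just examples.

```python
import boto3

ec2 = boto3.client("ec2")

# Compare the vCPU / memory ratio across a few example instance families
# (general purpose, memory optimized, compute optimized).
response = ec2.describe_instance_types(
    InstanceTypes=["m5.xlarge", "r5.xlarge", "c5.xlarge"]
)

for itype in response["InstanceTypes"]:
    name = itype["InstanceType"]
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"{name}: {vcpus} vCPUs, {mem_gib:.0f} GiB "
          f"({mem_gib / vcpus:.0f} GiB per vCPU)")
```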
>> And you guys have announced obviously the performance improvements on Lambda, it's getting faster, you've got the per-second billing on Windows and SQL Server on EC2. So I mean, obviously everyone kind of gets that, that's been your DNA, keep making it faster, cheaper, better, easier to use. But the other area I want to get your thoughts on, because this is also more on the footprint side, is the regions and Local Zones. So you've got more region news, take us through the update on the expansion of the footprint of AWS, because, you know, a startup can come in, and these 15 companies that are here, they're global with AWS, right? So this is a major benefit for customers around the world. And you know, Ali from Databricks mentioned privacy. Everyone's a privacy company now. So it's a huge issue, take us through the news on the regions. >> Sure, so the two most recent regions that we announced are in the UAE and in Israel. And we generally like to pre-announce these anywhere from six months to two years at a time, because we do know that customers want to start making longer term plans for where they can do their computing, where they can store their data. I think at this point we now have seven regions under construction. And again, it's all about customer choice. Sometimes it's because they have very specific reasons where, based on local laws, based on national laws, they must compute and store within a particular geographic area. Other times we say, well, a lot of our customers are in this part of the world, why don't we pick a region that is as close to that part of the world as possible. And one really important thing that I always like to remind our customers of, and my audience, is that anything you choose to put in a region stays in that region unless you very explicitly take an action that says, I'd like to replicate it somewhere else. So if someone says, I want to store data in the US, or I want to store it in Frankfurt, or I want to store it in Sao Paulo, or I want to store it in Tokyo or Osaka, they get to make that very specific choice. We give them a lot of tools to help copy and replicate and do cross region operations of various sorts. But at the heart, the customer gets to choose those locations. And in the early days I think there was this weird sense that you'd put things in the cloud and they would just mysteriously kind of propagate all over the world. That's never been true, and we're very, very clear on that. And I just always like to reinforce that point. >> That's great stuff, Jeff. Great to have you on again as a regular update here. Just for the folks watching who don't know Jeff, he's been blogging and sharing. He's been the one-man media band for Amazon since its early days. Now he's got departments, he's got people doing videos. It's a media franchise in and of itself, but without your blogging in the early days we wouldn't have gotten all the great news. We subscribe to it, we watch all the blog posts. It's essentially the flow coming out of AWS, which is just a tsunami of new announcements. Always great to read, a must read. Jeff, thanks for coming on, really appreciate it. That's great. >> Thank you John, great to catch up as always. >> Jeff Barr with AWS again, follow his stuff. He's got a great audience and community. They talk back, they collaborate and they're highly engaged. So check out Jeff's blog and his social presence. All right, Natalie, back to you for more coverage. >> Terrific.
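Jeff's point that data stays in the region you pick, unless you explicitly replicate it, is easy to see in code. Below is a minimal hedged sketch with boto3 that creates a bucket pinned to the Frankfurt region and writes an object there; the bucket name and credentials are illustrative assumptions, and nothing is copied to another region unless the customer configures replication or an explicit copy.

```python
import boto3

region = "eu-central-1"  # Frankfurt; objects written here stay in this region
s3 = boto3.client("s3", region_name=region)

# Bucket names are globally unique; this one is just a placeholder.
bucket = "example-frankfurt-data-bucket"
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": region},
)

# The object lives only in eu-central-1 unless cross-region replication
# or an explicit copy is configured by the customer.
s3.put_object(Bucket=bucket, Key="customers/record-001.json", Body=b'{"id": 1}')
print(s3.get_bucket_location(Bucket=bucket)["LocationConstraint"])
```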
Well, did you guys know that Jeff took a three week AWS road trip across 15 cities in America to meet with cloud computing enthusiasts? 5,500 miles he drove, really incredible, I didn't realize that. Let's unpack that interview though. What stood out to you, John? >> I think Jeff Barr's an example of what I call a direct-to-audience business model. He's been doing it from the beginning and I've been following his career. I remember back in the day when Amazon was started, he was always building stuff. He's a builder, he's classic. And he's been there from the beginning. At the beginning it was just the blog, and it became a huge audience. It's now morphed into, he was power blogging so hard, he now has support, and he still does it now. It's basically the conduit for information coming out of Amazon. I think Jeff has single-handedly made Amazon so successful at the community developer level, and that's where the startup action happened, and that got them going. And I think he deserves a lot of credit for the success of AWS. >> And Dave, how about you? What is your reaction? >> Well I think, you know, everybody knows about the cloud, CapEx to OpEx, and agility, and, you know, eliminating the undifferentiated heavy lifting and all that stuff. And one of the things that's often overlooked, which is why I'm excited to be part of this program, is the innovation. And the innovation comes from startups, and startups start in the cloud. And so I think that that's part of the flywheel effect. You just don't see a lot of startups these days saying, okay, I'm going to do something that's outside of the cloud. There are some, but for the most part, you know, if you're starting in software, you're starting in the cloud, it's so capital efficient. I think that's one thing. Throughout my career I've been obsessed with every part of the stack, whether it's, you know, close to the business process with the applications. And right now I'm really obsessed with the plumbing, which is why I was excited to talk about, you know, the Annapurna acquisition. Amazon bought Annapurna for a reported $350 million, you know, maybe a little bit more, but that is an amazing acquisition. And the reason why that's so important is because Amazon is continuing to drive costs down, drive performance up. And in my opinion, leaving a lot of the traditional players in their dust, especially when it comes to power and cooling, often overlooked things. And the other piece of the interview was that Amazon is actually getting ISVs to write to these new platforms so that you don't have to worry about whether the software runs on this chip or that chip, or x86 or Arm or whatever it is. It runs. And so I can choose the best price performance. And that's where people misunderstand, you always say it John, you just said that people are misunderstood. I think they misunderstand, they confuse, you know, the price of the cloud with the cost of the cloud. They ignore all the labor costs that are associated with that. And so, you know, there's a lot of discussion now about the cloud tax. I just think the pace is accelerating. The gap is not closing, it's widening. >> If you look at the one question I asked him about Wavelength, and I had a follow up there, when I said, you know, we riffed on it, you saw he lit up, he was beaming, because he said something interesting. It's not that there's a problem to solve, it's this opportunity. And he conveyed it, like I said, as walking through Fry's.
But like, you go into a store, and he's a builder. So he sees opportunity. And this comes back down to the Martin Casado paradox post he wrote about: do you optimize for CapEx or future revenue? And I think the tell sign is that the Wavelength edge piece is going to be so creative, and that's going to open up massive opportunities. I think that's the place to watch. That's the place I'm watching. And I think startups are going to come out of the woodwork, because that's where the action will be. And that's just Amazon at the edge, I mean, that's just cloud at the edge. I think that is going to be very effective. And that's a little tell sign, he kind of revealed a little bit there, a lot actually, with that comment. >> Well, that's a to-be-continued conversation. >> Indeed, I would love to introduce our next guest. We actually have Soma on the line. He's the managing director at Madrona Venture Group. Thank you Soma very much for coming for our keynote program. >> Thank you Natalie, I'm glad to be here and to have the opportunity to spend some time with you all. >> Well, you have a long, tenured history in the enterprise. How would you define the modern enterprise, also known as cloud scale? >> Yeah, so I would say, first of all, like, you know, we've all heard this now for the last, you know, say 10 years or so, like, software is eating the world. Okay. Put it another way, we think about like, hey, every enterprise is a software company first and foremost. Okay. And companies that truly internalize that, that truly think about that, and truly act that way, are going to continue running well, and companies that don't internalize that, and don't do that, are going to be left behind sooner than later. Right. And in the last few years you take that one step further and talk about like, hey, every enterprise is going through a digital transformation. Okay. So when you sort of think about the world from that lens, okay, the modern enterprise has to think about, like, I am first and foremost a technology company. I may be in the business of making a car or, you know, manufacturing paper, or, like, you know, manufacturing some healthcare products, or what have you out there. But technology and software is what is going to give me a unique, differentiated advantage that's going to let me do what I need to do for my customers in the best possible way [Indistinct]. So that sort of level of focus, level of execution, has to be there in a modern enterprise. The other thing is, every modern enterprise needs to think about, hey, I'm competing for talent, not anymore with my peers in my industry. I'm competing for technology talent and software talent with the top five technology companies in the world. Whether it is Amazon or Facebook or Microsoft or Google, or what have you, right? So you really have to have that mindset, and then everything flows from that. >> So I've got to ask you on the enterprise side again, you've seen many waves of innovation. You've, you know, been in the industry for many, many years. The old way was enterprises want the best proven product and the startups want that lucrative contract, right? Yeah. And get that beachhead in. And it used to be, and we addressed this in our earlier keynote with Ali and how it's changing, the buyers are changing because the cloud has enabled this new kind of execution. I call it agile, call it what you want.
Developers are driving modern applications, so enterprises are still, well, there's no playbook, the playbook's evolving, right? So we see that with the pandemic, people had needs, urgent needs, and they tried new stuff and it worked. The parachute opened, as they say. So how do you look at this as you look at startups you're investing in and you're coaching them? What's the playbook? What's the secret sauce of how to crack the enterprise code today? And if you're an enterprise buyer, what do I need to do? I want to be more agile. Is there a clear path? Is there a TSA pre-check to let stuff go through faster? I mean, what is the modern playbook for buying and being a supplier? >> That's a fantastic question, John, because I think that sort of playbook is changing, even as we speak here currently. A couple of key things to understand first of all is, like, you know, decision-making inside an enterprise is getting more and more decentralized. Particularly decisions around what technology to use and what solutions to use to be able to do what people need to do. That decision making is no longer sort of, you know, all done in the CEO's office or the CTO's office, kind of thing. Developers are more and more, like you rightly said, sort of the center of the workflow and the decision making process. So it behooves both the enterprises, as well as the startups, to really understand that. So what does it mean now from a startup perspective? From a startup perspective, it means like, right, in addition to thinking about, hey, do I go create an enterprise sales force, do I sell to the enterprise like what I might have done in the past, is that the best way of moving forward, or should I be thinking about a product led growth go to market initiative? You know, build a product that is easy to use, where the self serve motion really works, you know, get the developers to start using it, to see the value, to fall in love with the product, and then you think about, like, hey, how do I go translate that into a contract with the enterprise, right? And more and more, what I call particularly, you know, startups and technology companies that are focused on the developer audience are thinking about, like, you know, how do I have a bottom up go to market motion? And sometimes I may sort of, you know, overlap that with the top down enterprise sales motion that we know has been going on for many, many years or decades, kind of thing. But really this product led growth, bottom up go to market motion is something that we are seeing on the rise. I would say more than half the startups that we come across today have that in some way, shape or form. And so the enterprise also needs to understand this. The CIO or the CTO needs to know that, hey, decision-making is getting decentralized. I need to empower my engineers and my engineering managers and my engineering leaders to be able to make the right decision, and trust them. I'm going to give them some guard rails so that I don't find myself in a soup, you know, sometime down the road. But once I give them the guard rails, I'm going to enable people to make the decisions. People who are closer to the problem, to make the right decision. >> Well Soma, what are some of the ways that startups can accelerate their enterprise penetration? >> I think that's another good question. First of all, you need to think about, like, hey, what are enterprises really wanting? Okay.
If you sort of take, like, two steps back and think about what the enterprise is really thinking about: hey, I'm a software company, but I'm really manufacturing paper, what do I do? Right? The core thing that most enterprises care about is, like, hey, how do I better engage with my customers? How do I better serve my customers? And how do I do it in the most optimal way? At the end of the day, that's what most enterprises really care about. So startups need to understand, what are the problems that the enterprise is trying to solve? What kind of tools and platform technologies and infrastructure support, and, you know, everything else, do they need to be able to do what they need to do, and what only they can do, in the most optimal way? Right? So to the extent you are providing either a tool or a platform or some technology that is going to enable your enterprise to make progress on what they want to do, you're going to get more traction within the enterprise. In other words, stop thinking about technology, and start thinking about the customer problem that they want to solve. And the more you anchor your company, and the more you anchor your conversation with the customer around that, the more the enterprise is going to get excited about wanting to work with you. >> So I've got to ask you on the enterprise and developer equation, because CSOs and CXOs, depending who you talk to, have that same answer. Oh yeah, in the 90's and 2000's, we kind of didn't, we throttled down, we were using the legacy developer tools, and cloud came and then we had to rebuild and we didn't really know what to do. So you're seeing a shift, and this has kind of been going on for at least the past five to eight years, a lot more developers being hired. I mean, FinTech is clearly a vertical, they always had developers and everyone had developers, but there's a fast ramp up of developers now, and the role of open source has changed. Just looking at the participation. They're not just consuming open source, open source is part of the business model for mainstream enterprises. How is this, first of all, do you agree? And if so, how has this changed the course of an enterprise's human resource selection? How they're organized? What's your vision on that? >> Yeah. So as I mentioned earlier, John, in my mind the first thing is, and this sort of, you know, like you said, financial services has always been sort of hiring people [Indistinct]. And this is like a five-year-old story, so bear with me, I'll tell you the following story and then come back to the point. I was talking to the cloud CIO of Goldman Sachs. Okay. And this is five years ago, when people were still like, hey, is this cloud thing real, and is cloud going to take over the world? You know, am I really ready to put my data in the cloud? So there were a lot of questions and conversations. In fact, the CIO of Goldman Sachs told me two things that I remember to this day. One is, hey, we've got an internal edict, we made a decision that in the next five years, everything in Goldman Sachs is going to be on the public cloud. And I literally jumped out of the chair and I said, how are you going to get there? And then he laughed and said, well, it really doesn't matter whether we get there or not. We want to set the tone, set the direction for the organization, that hey, public cloud is here, public cloud is real, and we need to, like, you know, move as fast as we realistically can, and think about all the financial regulations and security and privacy.
And all these things that we care about deeply. But given all of that, the world is going towards the public cloud and we better be on the leading edge, as opposed to the lagging edge. And the second thing he said, we were talking about, like, hey, how are you hiring, you know, engineers at Goldman Sachs, kind of thing? And he said, hey, my team goes out to the top 20 schools in the US, and the people we really compete with, and he was saying this, hey, we don't compete with JP Morgan or Morgan Stanley, or pick any of your favorite financial institutions. We really think about, like, hey, we want to get the best talent into Goldman Sachs out of these schools. And we really compete head to head with Google. We compete head to head with Microsoft. We compete head to head with Facebook. And we know that the caliber of people that we want to get is no different than what these companies want. If you want to continue being a successful, leading, you know, financial services player. That sort of tells you what's going on. You also talked a little bit about, like, hey, open source is here to stay. What does that really mean, kind of thing. In my mind, like, you know, given my pedigree at Microsoft, I can tell you that we were not the first embracers of open source in this world. So I'll say that right off the bat. But having said that, we did turn around and say, hey, this open source is real, this open source is going to be great. How can we embrace it and how can we participate? And you fast forward to today, Microsoft is probably as good at open source as any other large company, I would say, right? Including, like, the work that the company has done in terms of acquiring GitHub and letting it stay true to its original promise of open source and community, kind of thing, right? I think Microsoft has come a long way, kind of thing. But the thing that all these enterprises need to think about is, you want your developers to have access to the latest and greatest tools, to the latest and greatest that the software world can provide. And you really don't want your engineers to be reinventing the wheel all the time. So if there is something available in the open source world, go ahead, by all means, think about whether that makes sense for you to use it. And likewise, if you think there is something you can contribute to the open source world, go ahead and do that. So it's really a two-way, symbiotic relationship that enterprises need to have, and they need to enable their developers to want to have that symbiotic relationship. >> Soma, fantastic insights. Thank you so much for joining our keynote program. >> Thank you Natalie, and thank you John. It was always fun to chat with you guys. Thank you. >> Thank you. >> John, we would love to get your quick insight on that. >> Well, I think first of all, he's a prolific investor, a great one, from Madrona Venture Group, which is well known in tech circles. They're in Seattle, which is the hub of what I call cloud city. You've got Amazon and Microsoft there. He'd been at Microsoft and he knows the developer ecosystem. And the reason why I like his perspective is that he understands the value of having developers as a core competency in Microsoft. That's their DNA. You look at Microsoft, their number one thing from day one, besides software, was developers. That was their army, the thousand centurions that won everything for them. That has shifted.
And he brought up open source, and .NET, and how they've embraced Linux. But Satya Nadella, before he became CEO, we interviewed him in theCube at an Accel Partners event at Stanford. He was open before he was CEO. He was talking about opening up. They opened up a lot of their open source infrastructure projects to the Open Compute Foundation early. So they already had that going, and since that time, the stock price of Microsoft has skyrocketed, because as Ali said, open always wins. And I think that is what you see here, and as an investor now, he's picking startups and investing in them. He's got to read the tea leaves. He's got to be on the right side of history. So he brings a great perspective, because he sees the old way and he understands the new way. That is the key for success we've seen in the enterprise and with the startups. The people who get the future and can create the value are going to win. >> Yeah, really excellent point. And just really quickly, what do you think were some of our greatest hits on this hour of programming? >> Well, first of all, I'm really impressed that Ali took the time to come join us, because I know he's super busy. I think they're at a $28 billion valuation now, they're pushing a billion dollars in revenue, GAAP revenue. And again, just a few short years ago, they had zero software revenue. So of these 15 companies we're showcasing today, you know, there's a next Databricks in there. They're all going to be successful. They already are successful. And they're all on this rocket ship trajectory. Ali is smart, he's also got the advantage of being part of that Berkeley community, which is early on a lot of things. Being early means you're wrong a lot, but you're also right, and you're right big. So Berkeley and Stanford are obviously big research areas here in the bay area. He is smart, he's got a great team and he's really open. So having him share his best practices, I thought that was a great highlight. Of course, Jeff Barr highlighting some of the insights that he brings, and honestly having the perspective of a VC. And we're going to have Peter Wagner from Wing VC, who's a classic enterprise investor, super smart. So he'll add some insight. Of course, one of the community sessions, we have our influencers coming on at the end, as well as Katie Drucker. Another Madrona person is going to talk about growth hacking, growth strategies, but yeah, excited about everyone coming on. >> Terrific, well thank you so much for those insights, and thank you to everyone who is watching the first hour of our live coverage of the AWS startup showcase. For myself, Natalie Ehrlich, John Furrier and Dave Vellante, we want to thank you very much for watching, and do stay tuned for more amazing content, as well as a special live segment that John Furrier is going to be hosting. It takes place at 12:30 PM Pacific time, and it's called cracking the code, lessons learned on how enterprise buyers evaluate new startups. Don't go anywhere.
Maria Colgan & Gerald Venzl, Oracle | June CUBEconversation
(upbeat music) Developers have become the new king makers in the world of digital and cloud. The rise of containers and microservices has accelerated the transition to cloud native applications. A lot of people will talk about application architecture and the related paradigms and the benefits they bring for the process of writing and delivering new apps. But a major challenge continues to be, the how and the what when it comes to accessing, processing and getting insights from the massive amounts of data that we have to deal with in today's world. And with me are two experts from the data management world who will share with us how they think about the best techniques and practices based on what they see at large organizations who are working with data and developing so-called data-driven apps. Please welcome Maria Colgan and Gerald Venzl, two distinguish product managers from Oracle. Folks, welcome, thanks so much for coming on. >> Thanks for having us Dave. >> Thank you very much for having us. >> Okay, Maria let's start with you. So, we throw around this term data-driven, data-driven applications. What are we really talking about there? >> So data-driven applications are applications that work on a diverse set of data. So anything from spatial to sensor data, document data as well as your usual transaction processing data. And what they're going to do is they'll generate value from that data in very different ways to a traditional application. So for example, they may use machine learning, they are able to do product recommendations in the middle of a transaction. Or we could use graph to be able to identify an influencer within the community so we can target them with a specific promotion. It could also use spatial data to be able to help find the nearest stores to a particular customer. And because these apps are deployed on multiple platforms, everything from mobile devices as well as standard browsers, they need a data platform that's going to be both secure, reliable and scalable. >> Well, so when you think about how the workloads are shifting I mean, we're not talking about, you know it's not anymore a world of just your ERP or your HCM or your CRM, you know kind of the traditional operational systems. You really are seeing an explosion of these new data oriented apps. You're seeing, you know, modeling in the cloud, you are going to see more and more inferencing, inferencing at the edge. But Maria maybe you could talk a little bit about sort of the benefits that customers are seeing from developing these types of applications. I mean, why should people care about data-driven apps? >> Oh, for sure, there's massive benefits to them. I mean, probably the most obvious one for any business regardless of the industry, is that they not only allow you to understand what your customers are up to, but they allow you to be able to anticipate those customer's needs. So that helps businesses maintain that competitive edge and retain their customers. But it also helps them make data-driven decisions in real time based on actual data rather than on somebody's gut feeling or basing those decisions on historical data. So for example, you can do real-time price adjustments on products based on demand and so forth, that kind of thing. So it really changes the way people do business today. >> So Gerald, you think about the narrative in the industry everybody wants to be a platform player all your customers they are becoming software companies, they are becoming platform players. 
Everybody wants to be like, you know, name a company that is huge, trillion dollar market cap or whatever, and those are data-driven companies. And so it would seem to me that with data-driven applications, there's really no company that shouldn't be data-driven. Do you buy that? >> Yeah, absolutely. I mean, data-driven, and naturally the whole industry is data-driven, right? It's like, we all have information technologies about processing data and deriving information out of it. But when it comes to app development, I think there is a big push to, kind of, we have to do machine learning in our applications, we have to get insights from data. And when you actually look back a bit and take a step back, you see that there are of course many different kinds of applications out there as well, that's not to be forgotten, right? So there are your usual front end user interfaces, where really all the application does is just enter some piece of information that's stored somewhere, or perhaps a microservice that's not attached to a data tier at all, but just receives or answers calls (indistinct). So I think it's not necessarily so important for every developer to kind of go on the bandwagon that they have to be data-driven. But I think it's equally important for those applications and those developers that build applications that drive the business, that make business critical decisions, as Maria mentioned before. Those guys should take a really close look into what data-driven apps mean and what the data tier can actually give to them. Because what we see also happening a lot is that a lot of the things that are well known and out there, just ready to use, are being reimplemented in the applications. And for those applications, they essentially just end up spending more time writing code that is already there, and then have to maintain and debug that code as well, rather than just going to market faster. >> Gerald, can you talk to the prevailing approaches that developers take to build data-driven applications? What are the ones that you see? Let's dig into that a little bit more, and maybe differentiate the different approaches and talk about that? >> Yeah, absolutely. I think right now the industry is in two camps, it's like sort of a religious war going on, as you'll often see happening with different architectures and so forth. So we have single purpose databases or data management technologies, which are technologies that are, as the name suggests, built around a single purpose. So, you know, a typical example would be your ordinary key-value store. And a key-value store, all it does is it allows you to store and retrieve a piece of data, whatever that may be, really, really fast, but it doesn't really go beyond that. And then the other side of the house, or the other camp, would be multimodal databases, multimodal data management technologies. Those are technologies that allow you to store different types of data, different formats of data, in the same technology, in the same system, alongside each other. And, you know, when you look at the landscape out there of what we have in technology, pretty much any relational database, or any database really, has evolved into such a multimodal database. Whether that's MySQL, that allows you to store JSON alongside relational, or even a MongoDB that gives you native graph support since (mumbles), as well alongside the JSON support.
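To make the multimodal idea Gerald describes a bit more concrete, keeping JSON documents in the relational database you already run and querying them with plain SQL, here is a minimal hedged sketch in Python. It uses SQLite's built-in JSON functions only because they ship with Python; databases like MySQL or Oracle Database expose analogous JSON support through their own SQL syntax, so treat the table and field names as illustrative assumptions.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Store JSON documents alongside ordinary relational columns in one database.
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, doc TEXT)")
conn.executemany(
    "INSERT INTO orders (id, doc) VALUES (?, ?)",
    [
        (1, json.dumps({"customer": "Acme", "total": 120.0, "items": 3})),
        (2, json.dumps({"customer": "Globex", "total": 80.5, "items": 1})),
    ],
)

# The same SQL engine queries inside the documents, no extra NoSQL store needed.
# (Requires a SQLite build with the JSON1 functions, which modern Python ships.)
rows = conn.execute(
    "SELECT json_extract(doc, '$.customer'), json_extract(doc, '$.total') "
    "FROM orders WHERE json_extract(doc, '$.total') > 100"
).fetchall()
print(rows)  # [('Acme', 120.0)]
```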
We've talked about this a lot in The Cube. We know where Oracle stands on this. I mean, you just mentioned MySQL, but I mean, Oracle Database you've been extending, you've mentioned JSON, we've got blockchain now in there, you're infusing, you know, ML and AI into the database, graph database capabilities, you know, on and on and on. We talked a lot about that, we compared that to Amazon, which is kind of the right tool for the right job approach. So maybe you could talk about, you know, your point of view, the benefits for developers of using that converged database, if I can use that word, approach, being able to store multiple data formats? Why do you feel like that's a better approach? >> Yeah, I think on a high level it comes down to complexity. You are actually avoiding additional complexity, right? So not every use case that you have necessarily warrants yet another data management technology or yet another specially built technology for managing that data, right? It's like many use cases that we see out there just want to store a piece of JSON, a document, in a database and then perhaps retrieve it again afterwards, or write some simple queries over it. And you really don't have to get a new database technology or a NoSQL database into the mix if you already have one, just to fulfill that exact use case. You could just happily store that information as well in the database you already have. And what it really comes down to is the learning curve for developers, right? So it's like, as you use the same technology to store other types of data, you don't have to learn a new technology, you don't have to familiarize yourself with and learn new drivers. You don't have to find new frameworks and you don't have to know how to necessarily operate or best model your data for that database. You can essentially just reuse your knowledge of the technology as well as the libraries and code you have already built in house, perhaps in another application, perhaps, you know, a framework that you used against the same technology, because it is still the same technology. So it kind of all comes down again to avoiding complexity rather than fragmenting across, you know, the many different technologies we have. If you were to look at the different data formats that are out there today it's like, you know, you would end up with many different databases just to store them if you were to fully religiously follow the single-purpose, best-built technology for every use case paradigm, right? And then you would just end up having to manage many different databases more than actually focusing on your app and getting value to your business or to your user. >> Okay, so I get that and I buy that by the way. I mean, especially if you're a larger organization and you've got all these projects going on, but before we go back to Maria, Gerald, I want to just, I want to push on that a little bit. Because the counter to that argument would be an analogy. And I wonder if you, I'd love for you to, you know, knock this analogy off the blocks. The counter would be, okay, Oracle is the Swiss Army knife and it's got, you know, all in one. But sometimes I need that specialized long screwdriver and I go into my toolbox and I grab that. It's better than the screwdriver in my Swiss Army knife. Why, are you the Swiss Army knife of databases? Or are you the all-in-one that has that best-of-breed screwdriver for me? How do you think about that? >> Yeah, that's a fantastic question, right?
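(A small sketch of what Gerald describes, assuming an Oracle Database recent enough to have the IS JSON condition and JSON_TABLE, roughly 12.2 onward; the orders table and the document shape are made up. The document lives in the database that is already there and is queried with the same SQL and drivers used for the relational data.)

-- keep the document in the database you already have...
CREATE TABLE orders (
  order_id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  doc       CLOB CHECK (doc IS JSON)
);

INSERT INTO orders (doc) VALUES (
  '{"customer":"Dave","items":[{"sku":"A100","qty":2},{"sku":"B200","qty":1}]}'
);

-- ...and query it with the SQL you already know, no extra document store required
SELECT o.order_id, jt.sku, jt.qty
FROM   orders o,
       JSON_TABLE(o.doc, '$.items[*]'
         COLUMNS (sku VARCHAR2(20) PATH '$.sku',
                  qty NUMBER       PATH '$.qty')) jt;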
And I think, first of all, you have to separate between Oracle the company, which actually has multiple data management technologies and databases out there as you said before, right? And Oracle Database. And I think Oracle Database is definitely a Swiss Army knife, it has gained many capabilities over the last 40 years. You know, we've seen object support coming, that's still in the Oracle Database today. We have seen XML coming, it's still in the Oracle Database, graph, spatial, et cetera. And so you have many different ways of managing your data, and then on top of that, going into the converged idea, not only do we allow you to store the different data models in there, but we actually allow you also to apply all the security policies and so forth on top of it, something Maria can talk more about with the vision around converged database. I would also argue though that for some aspects, we do actually have that screwdriver that you talked about as well. So especially in the relational world people get very quickly hung up on this idea that, oh, if you only do rows and columns, well, that's kind of what you put down on disk. And that was never true, the relational model is actually a logical model. What's actually being put down on disk is blocks that align themselves nicely with block storage, and always has been. So that allows you to actually model and process the data sort of differently. And one common example, or one good example that we have, that we introduced a couple of years ago, was when columnar databases were very strong and, you know, the competition came, it's like, yeah, we have in-memory columnar stores now, they're so much better. And we were like, well, orienting the data row-based or column-based really doesn't matter in the sense that we store them as blocks on disk. And so we introduced the In-Memory technology which gives you an in-memory columnar representation of your data as well, alongside your relational. So there is an example where you go like, well, actually, you know, if you have this use case of columnar analytics all in-memory, I would argue Oracle Database is also that screwdriver you want to go down to and it gives you that capability. Because it not only gives you the columnar representation, but also, which many people then forget, all the analytic power on top of SQL. It's one thing to store your data columnar, it's a completely different story to actually be able to run analytics on top of that and have all the built-in functionalities and stuff that you want to do with the data on top of it as you analyze it. >> You know, that's a great example, the columnar one, 'cause I remember there was like a lot of hype around it. Oh, it's the Oracle killer, you know, that Vertica. Vertica is still around but, you know, it never really hit escape velocity. But you know, good product, good company, whatever. Netezza, it kind of got buried inside of IBM. ParAccel kind of became, you know, Redshift with that deal, so that kind of went away. Teradata bought a company, I forget which company it bought but. So that hype kind of dissipated and now it's like, oh yeah, columnar. It's kind of like In-Memory, we've had in-memory databases ever since we've had databases, you know, it's kind of a feature not a sector. But anyway, Maria, let's come back to you. You've got a lot of customer experience. And you speak with a lot of companies, you know, during your time at Oracle.
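(A rough sketch of the in-memory columnar capability Gerald refers to, Oracle Database In-Memory, available from 12.1.0.2 as a separately licensed option; the sales table here is hypothetical. The rows keep serving transactions while the in-memory columnar copy serves the analytics, with full SQL on top.)

-- populate an in-memory columnar representation alongside the row format
ALTER TABLE sales INMEMORY;

-- analytic queries can then scan the columnar copy, using ordinary SQL aggregation
SELECT product_id, SUM(amount_sold) AS revenue
FROM   sales
GROUP  BY product_id
ORDER  BY revenue DESC;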
What else are you seeing in terms of the benefits to this approach that might not be so intuitive and obvious right away? >> I think one of the biggest benefits to having a multimodel multiworkload or as we call it a converged database, is the fact that you can get greater data synergy from it. In other words, you can utilize all these different techniques and data models to get better value out of that data. So things like being able to do real-time machine learning, fraud detection inside a transaction or being able to do a product recommendation by accessing three different data models. So for example, if I'm trying to recommend a product for you Dave, I might use graph analytics to be able to figure out your community. Not just your friends, but other people on our system who look and behave just like you. Once I know that community then I can go over and see what products they bought by looking up our product catalog which may be stored as JSON. And then on top of that I can then see using the key-value what products inside that catalog those community members gave a five star rating to. So that way I can really pinpoint the right product for you. And I can do all of that in one transaction inside the database without having to transform that data into different models or God forbid, access different systems to be able to get all of that information. So it really simplifies how we can generate that value from the data. And of course, the other thing our customers love is when it comes to deploying data-driven apps, when you do it on a converged database it's much simpler because it is that standard data platform. So you're not having to manage multiple independent single purpose databases. You're not having to implement the security and the high availability policies, you know across a bunch of different diverse platforms. All of that can be done much simpler with a converged database 'cause the DBA team of course, is going to just use that standard set of tools to manage, monitor and secure those systems. >> Thank you for that. And you know, it's interesting, you talk about simplification and you are in Juan's organization so you've big focus on mission critical. And so one of the things that I think is often overlooked well, we talk about all the time is recovery. And if things are simpler, recovery is faster and easier. And so it's kind of the hallmark of Oracle is like the gold standard of the toughest apps, the most mission critical apps. But I wanted to get to the cloud Maria. So because everything is going to the cloud, right? Not all workloads are going to the cloud but everybody is talking about the cloud. Everybody has cloud first mentality and so yes, it's a hybrid world. But the natural next question is how do you think the cloud fits into this world of data-driven apps? >> I think just like any app that you're developing, the cloud helps to accelerate that development. And of course the deployment of these data-driven applications. 'Cause if you think about it, the developer is instantly able to provision a converged database that Oracle will automatically manage and look after for them. But what's great about doing something like that if you use like our autonomous database service is that it comes in different flavors. So you can get autonomous transaction processing, data warehousing or autonomous JSON so that the developer is going to get a database that's been optimized for their specific use case, whatever they are trying to solve. 
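(A compressed sketch of the recommendation flow Maria describes, with invented tables and the graph step left out; in practice that step would come from the database's graph capabilities, here it is simply assumed to have produced a community table. The JSON catalog lookup and the five-star filter then happen in one SQL statement, inside one transaction.)

-- community: customers identified as similar to Dave (output of the graph step, assumed here)
-- ratings:   relational table of (customer_id, product_id, stars)
-- catalog:   one JSON document per product in column doc
SELECT JSON_VALUE(c.doc, '$.name') AS product_name,
       COUNT(*)                    AS five_star_votes
FROM   ratings r
JOIN   catalog c ON c.product_id = r.product_id
WHERE  r.stars = 5
AND    r.customer_id IN (SELECT customer_id FROM community)
GROUP  BY JSON_VALUE(c.doc, '$.name')
ORDER  BY five_star_votes DESC
FETCH FIRST 3 ROWS ONLY;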
And it's also going to contain all of that great functionality and capabilities that we've been talking about. So what that really means to the developer though is, as the project evolves and inevitably the business needs change a little, there's no need to panic when one of those changes comes in, because your converged database or your autonomous database has all of those additional capabilities. So you can simply utilize those to be able to address those evolving changes in the project. 'Cause let's face it, none of us normally know exactly what we need to build right at the very beginning. And on top of that they also kind of get a built-in buddy in the cloud, especially in the autonomous database. And that buddy comes in the form of built-in workload optimizations. So with the autonomous database we do things like automatic indexing, where we're using machine learning to be that buddy for the developer. So what it'll do is it'll monitor the workload and see what kind of queries are being run on that system. And then it will actually determine if there are indexes that should be built to help improve the performance of that application. And not only does it build those indexes, but it verifies that they help improve the performance before publishing them to the application. So by the time the developer is finished with that app and it's ready to be deployed, it's actually also been optimized by the developer's buddy, the Oracle autonomous database. So, you know, it's a really nice helping hand for developers when they're building any app, especially data-driven apps. >> I like how you sort of gave us, you know, the truth here, is you don't always know where you're going when you're building an app. It's like it goes from 'build it and they will come' to 'start building it and we'll figure out where it's going to go.' With Agile that's kind of how it works. But so I wonder, can you give some examples of maybe customers, or maybe genericize them if you need to. Data-driven apps in the cloud where customers were able to drive more efficiency, where the cloud buddy allowed the customers to do more with less? >> No, we have tons of these but I'll try and keep it to just a couple. One that comes to mind straight away is Retrace. These folks built a blockchain app in the Oracle Cloud that allows manufacturers to actually share the supply chain with the consumer. So the consumer can see exactly, who made their product? Using what raw materials? Where they were sourced from? How it was done? All of that is visible to the consumer. And in order to be able to share that they had to work on a very diverse set of data. So they had everything from JSON documents to images as well as your traditional transactions in there. And they store all of that information inside the Oracle autonomous database, they were able to build their app and deploy it on the cloud. And they were able to do all of that very, very quickly. So, you know, that ability to work on multiple different data types in a single database really helped them build that product and get it to market in a very short amount of time. Another customer that's doing something really, really interesting is MineSense. So these guys operate the largest mines in Canada, Chile, and Peru. But what they do is they put these x-ray devices on the massive mechanical shovels that are at the cove or at the mine face. And what that does is it senses the contents of the buckets inside these mining machines.
And it's looking at that content to see how it can optimize the processing of the ore inside that bucket. So they're looking to minimize the amount of power and water that it's going to take to process that. And also of course, minimize the amount of waste that's going to come out of that project. So all of that sensor data is sent into an autonomous database where it's going to be processed by a whole host of different users. So everything from the mine engineers to the geo scientists, to even their own data scientists utilize that data to drive their business forward. And what I love about these guys is they're not happy with building just one app. MineSense actually use our built-in low-code development environment, APEX, that comes as part of the autonomous database, and they actually produce applications constantly for different aspects of their business using that technology. And it's actually able to accelerate those new apps to the business. It takes them now just a couple of days or weeks to produce an app, instead of months or years to build those new apps. >> Great, thank you for that Maria. Gerald, I'm going to push you again. So, I said upfront and talked about microservices and the cloud and containers, and you know, anybody in the developer space follows that very closely. But some of the things that we've been talking about here, people might look at that and say, well, they're kind of antithetical to microservices. This is Oracle's monolithic approach. But when you think about the benefits of microservices, people want freedom of choice, technology choice, that's seen as a big advantage of microservices and containers. How do you address such an argument? >> Yeah, that's an excellent question and I get that quite often. The microservices architecture in general, as I said before with other architectures, Linux distributions, et cetera, it's kind of always a bit like there's an academic approach and there's a pragmatic approach. And when you look at microservices, the original definitions that came out in the early 2010s, they actually never said that each microservice has to have a database. And they also never said that if a microservice has a database, you have to use a different technology for each microservice. Just like they never said, you have to write a microservice in a different programming language, right? So where I'm going with this is like, yes, you know, sometimes when you look at some vendors out there, some niche players, they push this message or they jump on this academic approach of, like, each microservice has to use the best tool at hand, or use a different database for each purpose, et cetera. Which often comes across like, you know, 'we want to stay part of the conversation.' Nothing stops a developer from, you know, using a multimodal database for the microservice and just using that as a document store, right? Or just using that as a relational database. And, you know, sometimes, I mean, it was actually something that happened that was really interesting yesterday, I don't know whether you follow Dave or not. But Facebook had an outage yesterday, right? And Facebook is one of those companies that are seen as the Silicon Valley, you know, the companies that know how to do microservices. And when you read through the outage, well, what happened, right? Some unfortunate logical error with a configuration, of course, that took a database cluster down.
So, you know, there you have it, where you go like, well, maybe not every microservice is actually in fact talking to its own database or its own special purpose database. I think, you know, rather than the industry focusing so much on this argument of which technology to use, what's the right tool for the job, it's more to ask themselves, what business problem are we actually trying to solve? And therefore what's the right approach and the right technology for this. And so therefore, just as I said before, you know, multimodal databases, they do have strong benefits. They have many built-in functionalities that are already there and they allow you to reduce this complexity of having to know many different technologies, right? And so it's not only to store different data models either, you know, treat a multimodal database as a JSON document store or a relational database, and most databases have been multimodal for 20-plus years. But it's also actually being able to, perhaps if you store that data together, you can perhaps actually derive additional value for somebody else, but perhaps not for your application. But like for example, if you were to use Oracle Database you can actually write queries on top of all of that data. It doesn't really matter to our query engine whether the data is formatted as JSON or the data is formatted in rows and columns, you can just query over it. And that's actually very powerful for those guys that have to, you know, get the reporting done at the end of the day, the end of the week. And for those guys that are the data scientists, they want to figure out, you know, which product performed really well, or can we tweak something here and there. When you look into that space you still see a huge divergence between the guys that put data in, kind of the OLTP style, and guys that try to derive new insights. And there's still a lot of ETL going around and, you know, we have big data technologies, some of them came and went and some of them are still around, like Apache Spark, which is still like a SQL engine on top of any of your data, kind of going back to the same concept. And so I will say that, you know, for developers when we look at microservices it's like, first of all, is the argument you are making because the vendor of the technology you want to use tells you this argument, or, you know, you kind of want to have an argument to use a specific technology? Or is it really more because it is the best technology to use for this given use case, for this given application that you have? And if so, there's of course also nothing wrong with using a single purpose technology either, right? >> Yeah, I mean, whenever I talk about Oracle I always come back to the most important applications, the mission critical. It's very difficult to architect databases with microservices and containers. You have to be really, really careful. And so again, it comes back to what we were talking about before with Maria, the complexity and the recovery. But Gerald I want to stay with you for a minute. So there's other data management technologies popping up out there. I mean, I've seen some people saying, okay, just leave the data in an S3 bucket. We can query that, then we've got some magic sauce to do that. And so why are you optimistic about, you know, traditional database technology going forward? >> I would say because of the history of databases.
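(To illustrate the one-query-engine point, a small sketch with invented table names: stores is an ordinary relational table, orders carries an order_date column plus a JSON document, and the end-of-week report joins the two without any ETL in between.)

SELECT s.region,
       SUM(JSON_VALUE(o.doc, '$.total' RETURNING NUMBER)) AS weekly_revenue
FROM   orders o
JOIN   stores s
  ON   s.store_id = JSON_VALUE(o.doc, '$.store_id' RETURNING NUMBER)
WHERE  o.order_date >= TRUNC(SYSDATE) - 7
GROUP  BY s.region;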
So one thing that struck me when I came to Oracle and got to meet great people like Juan Loaiza and Andy Mendelsohn, who have been here for a long, long time, is I came to the realization that relational databases have been around for about 45 years now. And, you know, I was like, I'm too young to have been around then, right? So I was like, what else has been around 45 years? It's like, just look at the tech stack that we have today. It's like, what does this look like? Well, Linux only came out in 93. Well, databases pre-date Linux by a lot, and as I started digging I saw a lot of technologies come and go, right? And you mentioned before the data management systems that we had that came and went, like the columnar databases or XML databases, object databases. And even before relational databases, before Codd gave us the relational model, there were apparently these network stores, network databases, which to some extent look very similar to JSON documents. They were a way of storing data in a hierarchical format. And, you know, when you then start actually reading the Codd paper and diving a little bit more into the relational model, there's I think one important crux in there that most of the industry keeps forgetting, or hasn't been around to even know. And that is that when Codd created the relational model, he actually focused not so much on the application putting the data in, but on future users and applications still being able to make sense out of the data, right? And that's kind of, like I said before, we had those network models, we had XML databases, you have JSON document stores. And the one thing that they all have in common is that the application that puts the data in decides the structure of the data. And that's all well and good while you have the application and the developer writing the application. It can become really tricky when 10 years later you still want to look at that data and the application and the developer are no longer around, then you go like, what does this all mean? Where is the structure defined? What is this attribute? What does it mean? How does it correlate to others? And the one thing that people tend to forget is that it's actually the data that's here to stay, not so much the applications around it. Ideally, every company wants to store every single byte of data that they have because there might be future value in it. Economically it may not always make sense, but that's now much more feasible than just years ago. But if you could, why wouldn't you want to store all your data, right? And sometimes you actually have to store the data for seven years or whatever because the laws require you to. And so coming back then, you know, like 10 years from now, and looking at the data, making sense of that data can actually become a lot more difficult and a lot more challenging than having first figured out how we store this data for general use. And that kind of was what the relational model was all about. We decompose the data structures into tables and columns with relationships between each other. So that if somebody wants to, you know, a typical example would be, well, you store some purchases from your web store, right? There's a customer attribute in it. There's some credit card payment information in it, and some product information on what the customer bought.
Well, in the relational model, if you just want to figure out which products were sold on a given day or week, you would just query the payment and products tables to get that sense out of it. You don't need to touch the customer and so forth. And with the hierarchical model you have to first sit down and understand what the structure is, what is the customer? Where is the payment? You know, does the document start with the payment or does it start with the customer? Where do I find this information? And then in the very early days those databases even struggled to not have to scan all the documents to get the data out. So coming back to your question a bit, I apologize for going on here. But you know, it's like relational databases have been around for 45 years. I actually argue it's one of the most successful software technologies that we have out there when you look at the overall industry, right? 45 years in IT terms is like from a star being born to one going supernova. You said it before, many technologies came and went, right? And I just want to add one more really interesting example, by the way, which is Hadoop and HDFS, right? They kind of gave us this additional promise, like, you know, in the 2010s, like 2012, 2013, the hype of Hadoop and so forth and (mumbles) and HDFS. And people were just like, just put everything into HDFS and worry about the data later, right? And we can query it and map reduce it and whatever. And we had customers actually coming to us, they were like, great, we have half a petabyte of data on an HDFS cluster and we have no clue what's stored in there. How do we figure this out? What are we going to do now? Now you had a big data cleansing problem. And so I think that is why databases and also data modeling is something that will not go away anytime soon. And I think databases and database technologies are here to stay for quite a while. Because many of those people don't think about what's happening to the data five years from now. And many of the niche players, and also frankly even Amazon, you know, following this single purpose thing, it's like, just use the right tool for the job for your application, right? Just put the data in there the way you want it. And it's like, okay, so you use technologies all over the place and then five years from now you have your data fragmented everywhere in different formats and, you know, inconsistencies, and, and, and. And when you come back to these data-driven, business critical business decision applications, that is usually the worst case scenario you can have, right? Because now you need an army of people to actually do data cleansing. And it's not a coincidence that data science has become very, very popular in recent years as we kind of went on with this proliferation of different database or data management technologies, some of which are not even databases. But I think I'll leave it at that. >> It's an interesting talk track because you're right. I mean, no schema on write was alluring, but it definitely created some problems. It also created an entire, you know, you referenced the hyper-specialized roles and the data cleansing component. I mean, maybe technology will eventually solve that problem but it hasn't, at least up till tonight. Okay, last question, Maria maybe you could start off, and Gerald if you want to chime in as well it'd be great. I mean, it's interesting to watch this industry, when Oracle sort of won the top database mantle. I mean, I watched it, I saw it.
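(Gerald's purchases example, written out as a minimal sketch with invented table and column names: because the data was decomposed into tables up front, the "which products sold on a given day" question only touches the tables it needs, and nobody has to know how the customer or card details happen to be structured.)

CREATE TABLE customers (customer_id NUMBER PRIMARY KEY, name VARCHAR2(100));
CREATE TABLE products  (product_id  NUMBER PRIMARY KEY, name VARCHAR2(100));
CREATE TABLE payments  (
  payment_id  NUMBER PRIMARY KEY,
  customer_id NUMBER REFERENCES customers,
  product_id  NUMBER REFERENCES products,
  paid_on     DATE
);

-- products sold on a given day: only payments and products are involved
SELECT p.name, COUNT(*) AS units_sold
FROM   payments pay
JOIN   products p ON p.product_id = pay.product_id
WHERE  pay.paid_on = DATE '2021-06-01'
GROUP  BY p.name;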
It was, remember it was Informix and it was (indistinct) too, and of course, Microsoft, you got to give them credit with SQL Server, but Oracle won the database wars. And then everything got kind of quiet for a while, database was sort of boring. And then it exploded, you know, all the, you know, NoSQL and the key-value stores and the cloud databases, and this is really a hot area now. And when we looked at Oracle we said, okay, Oracle, it's all about Oracle Database, but we've seen the kind of resurgence in MySQL, which everybody thought, you know, once Oracle bought Sun they were going to kill MySQL. But now we see you investing in HeatWave, TimesTen, we talked about in-memory databases before. So where do those fit in, Maria, in the grand scheme? How should we think about Oracle's database portfolio? >> So there's lots of places where you'd use those different things. 'Cause just like any other industry there are going to be new and boutique use cases that are going to benefit from a more specialized product or single purpose product. So good examples off the top of my head of the kind of systems that would benefit from that would be things like a stock exchange system or a telephone exchange system. Both of those are latency critical transaction processing applications where they need microsecond response times. And that's going to exceed perhaps what you might normally get or deploy with a converged database. And so Oracle's TimesTen database, our in-memory database, is perfect for those kinds of applications. But there's also a host of MySQL applications out there today, and you said it yourself there Dave, HeatWave is a great place to provision and deploy those kinds of applications because it's going to run 100 times faster than AWS (mumbles). So, you know, there really is a place in the market and in our customers' systems and the needs they have for all of these different members of our database family here at Oracle. >> Yeah, well, the internet is basically running on the LAMP stack so I don't see MySQL going away. All right Gerald, we'll give you the final word, bring us home. >> Oh, thank you very much. Yeah, I mean, as Maria said, I think it comes back to what we discussed before. There are obviously still needs for special technologies or different technologies than a relational database or multimodal database. Oracle actually has many more databases than people may first think of. Not only the three that we have already mentioned, but there's even, for example, Oracle's NoSQL database. And, you know, on a high level Oracle is a data management company, right? And we want to give our customers the best tools and the best technology to manage all of their data. And therefore there has to be, or there should be, a part of the business that also focuses on these highly specialized systems and these highly specialized technologies that address those use cases. And I think it makes perfect sense. It's like, you know, when the customer comes to Oracle they're not only getting this 'take this one product, you know, and if you don't like it, it's your problem,' but actually you have choice, right? And choice allows you to make a decision based on what's best for you and not necessarily best for the vendor you're talking to. >> Well guys, really appreciate your time today and your insights. Maria, Gerald, thanks so much for coming on The Cube. >> Thank you very much for having us. >> And thanks for watching this Cube conversation, this is Dave Vellante and we'll see you next time. (upbeat music)
Maria Colgan & Gerald Venzl, Oracle | June CUBEconversation
(upbeat music) >> It'll be five, four, three and then silent two, one, and then you guys just follow my lead. We're just making some last minute adjustments. Like I said, we're down two hands today. So, you good Alex? Okay, are you guys ready? >> I'm ready. >> Ready. >> I got to get get one note here. >> So I noticed Maria you stopped anyway, so I have time. >> Just so they know Dave and the Boston Studio, are they both kind of concurrently be on film even when they're not speaking or will only the speaker be on film for like if Gerald's drawing while Maria is talking about-- >> Sorry but then I missed one part of my onboarding spiel. There should be, if you go into gallery there should be a label. There should be something labeled Boston live switch feed. If you pin that gallery view you'll see what our program currently being recorded is. So any time you don't see yourself on that feed is an excellent time to take a drink of water, scratch your nose, check your notes. Do whatever you got to do off screen. >> Can you give us a three shot, Alex? >> Yes, there it is. >> And then go to me, just give me a one-shot to Dave. So when I'm here you guys can take a drink or whatever >> That makes sense? >> Yeah. >> Excellent, I will get my recordings restarted and we'll open up when Dave's ready. >> All right, you guys ready? >> Ready. >> All right Steve, you go on mute. >> Okay, on me in 5, 4, 3. Developers have become the new king makers in the world of digital and cloud. The rise of containers and microservices has accelerated the transition to cloud native applications. A lot of people will talk about application architecture and the related paradigms and the benefits they bring for the process of writing and delivering new apps. But a major challenge continues to be, the how and the what when it comes to accessing, processing and getting insights from the massive amounts of data that we have to deal with in today's world. And with me are two experts from the data management world who will share with us how they think about the best techniques and practices based on what they see at large organizations who are working with data and developing so-called data-driven apps. Please welcome Maria Colgan and Gerald Venzl, two distinguish product managers from Oracle. Folks, welcome, thanks so much for coming on. >> Thanks for having us Dave. >> Thank you very much for having us. >> Okay, Maria let's start with you. So, we throw around this term data-driven, data-driven applications. What are we really talking about there? >> So data-driven applications are applications that work on a diverse set of data. So anything from spatial to sensor data, document data as well as your usual transaction processing data. And what they're going to do is they'll generate value from that data in very different ways to a traditional application. So for example, they may use machine learning, they are able to do product recommendations in the middle of a transaction. Or we could use graph to be able to identify an influencer within the community so we can target them with a specific promotion. It could also use spatial data to be able to help find the nearest stores to a particular customer. And because these apps are deployed on multiple platforms, everything from mobile devices as well as standard browsers, they need a data platform that's going to be both secure, reliable and scalable. 
>> Well, so when you think about how the workloads are shifting I mean, we're not talking about, you know it's not anymore a world of just your ERP or your HCM or your CRM, you know kind of the traditional operational systems. You really are seeing an explosion of these new data oriented apps. You're seeing, you know, modeling in the cloud, you are going to see more and more inferencing, inferencing at the edge. But Maria maybe you could talk a little bit about sort of the benefits that customers are seeing from developing these types of applications. I mean, why should people care about data-driven apps? >> Oh, for sure, there's massive benefits to them. I mean, probably the most obvious one for any business regardless of the industry, is that they not only allow you to understand what your customers are up to, but they allow you to be able to anticipate those customer's needs. So that helps businesses maintain that competitive edge and retain their customers. But it also helps them make data-driven decisions in real time based on actual data rather than on somebody's gut feeling or basing those decisions on historical data. So for example, you can do real-time price adjustments on products based on demand and so forth, that kind of thing. So it really changes the way people do business today. >> So Gerald, you think about the narrative in the industry everybody wants to be a platform player all your customers they are becoming software companies, they are becoming platform players. Everybody wants to be like, you know name a company that is huge trillion dollar market cap or whatever, and those are data-driven companies. And so it would seem to me that data-driven applications, there's nobody, no company really shouldn't be data-driven. Do you buy that? >> Yeah, absolutely. I mean, data-driven, and that naturally the whole industry is data-driven, right? It's like we all have information technologies about processing data and deriving information out of it. But when it comes to app development I think there is a big push to kind of like we have to do machine learning in our applications, we have to get insights from data. And when you actually look back a bit and take a step back, you see that there's of course many different kinds of applications out there as well that's not to be forgotten, right? So there is a usual front end user interfaces where really the application all it does is just entering some piece of information that's stored somewhere or perhaps a microservice that's not attached to a data to you at all but just receives or asks calls (indistinct). So I think it's not necessarily so important for every developer to kind of go on a bandwagon that they have to be data-driven. But I think it's equally important for those applications and those developers that build applications, that drive the business, that make business critical decisions as Maria mentioned before. Those guys should take really a close look into what data-driven apps means and what the data to you can actually give to them. Because what we see also happening a lot is that a lot of the things that are well known and out there just ready to use are being reimplemented in the applications. And for those applications, they essentially just ended up spending more time writing codes that will be already there and then have to maintain and debug the code as well rather than just going to market faster. >> Gerald can you talk to the prevailing approaches that developers take to build data-driven applications? 
What are the ones that you see? Let's dig into that a little bit more and maybe differentiate the different approaches and talk about that? >> Yeah, absolutely. I think right now the industry is like in two camps, it's like sort of a religious war going on that you'll see often happening with different architectures and so forth going on. So we have single purpose databases or data management technologies. Which are technologies that are as the name suggests build around a single purpose. So it's like, you know a typical example would be your ordinary key-value store. And a key-value store all it does is it allows you to store and retrieve a piece of data whatever that may be really, really fast but it doesn't really go beyond that. And then the other side of the house or the other camp would be multimodal databases, multimodal data management technologies. Those are technologies that allow you to store different types of data, different formats of data in the same technology in the same system alongside. And, you know, when you look at the geographics out there of what we have from technology, is pretty much any relational database or any database really has evolved into such a multimodal database. Whether that's MySQL that allows you to store or chase them alongside relational or even a MongoDB that allows you to do or gives you native graph support since (mumbles) and as well alongside the adjacent support. >> Well, it's clearly a trend in the industry. We've talked about this a lot in The Cube. We know where Oracle stands on this. I mean, you just mentioned MySQL but I mean, Oracle Databases you've been extending, you've mentioned JSON, we've got blockchain now in there you're infusing, you know ML and AI into the database, graph database capabilities, you know on and on and on. We talked a lot about we compared that to Amazon which is kind of the right tool, the right job approach. So maybe you could talk about, you know, your point of view, the benefits for developers of using that converged database if I can use that word approach being able to store multiple data formats? Why do you feel like that's a better approach? >> Yeah, I think on a high level it comes down to complexity. You are actually avoiding additional complexity, right? So not every use case that you have necessarily warrants to have yet another data management technology or yet the special build technology for managing that data, right? It's like many use cases that we see out there happily want to just store a piece of a chase and document, a piece of chase in a database and then perhaps retrieve it again afterwards so write some simple queries over it. And you really don't have to get a new database technology or a NoSQL database into the mix if you already have some to just fulfill that exact use case. You could just happily store that information as well in the database you already have. And what it really comes down to is the learning curve for developers, right? So it's like, as you use the same technology to store other types of data, you don't have to learn a new technology, you don't have to associate yourself with new and learn new drivers. You don't have to find new frameworks and you don't have to know how to necessarily operate or best model your data for that database. 
You can essentially just reuse your knowledge of the technology as well as the libraries and code you have already built in house perhaps in another application, perhaps, you know framework that you used against the same technology because it is still the same technology. So, kind of all comes down again to avoiding complexity rather than not fragmenting you know, the many different technologies we have. If you were to look at the different data formats that are out there today it's like, you know, you would end up with many different databases just to store them if you were to fully religiously follow the single purpose best built technology for every use case paradigm, right? And then you would just end up having to manage many different databases more than actually focusing on your app and getting value to your business or to your user. >> Okay, so I get that and I buy that by the way. I mean, especially if you're a larger organization and you've got all these projects going on but before we go back to Maria, Gerald, I want to just, I want to push on that a little bit. Because the counter to that argument would be in the analogy. And I wonder if you, I'd love for you to, you know knock this analogy off the blocks. The counter would be okay, Oracle is the Swiss Army knife and it's got, you know, all in one. But sometimes I need that specialized long screwdriver and I go into my toolbox and I grab that. It's better than the screwdriver in my Swiss Army knife. Why, are you the Swiss Army knife of databases? Or are you the all-in-one have that best of breed screwdriver for me? How do you think about that? >> Yeah, that's a fantastic question, right? And I think it's first of all, you have to separate between Oracle the company that has actually multiple data management technologies and databases out there as you said before, right? And Oracle Database. And I think Oracle Database is definitely a Swiss Army knife has many capabilities of since the last 40 years, you know that we've seen object support coming that's still in the Oracle Database today. We have seen XML coming, it's still in the Oracle Database, graph, spatial, et cetera. And so you have many different ways of managing your data and then on top of that going into the converge, not only do we allow you to store the different data model in there but we actually allow you also to, you apply all the security policies and so forth on top of it something Maria can talk more about the mission around converged database. I would also argue though that for some aspects, we do actually have to or add a screwdriver that you talked about as well. So especially in the relational world people get very quickly hung up on this idea that, oh, if you only do rows and columns, well, that's kind of what you put down on disk. And that was never true, it's the relational model is actually a logical model. What's probably being put down on disk is blocks that align themselves nice with block storage and always has been. So that allows you to actually model and process the data sort of differently. And one common example or one good example that we have that we introduced a couple of years ago was when, column and databases were very strong and you know, the competition came it's like, yeah, we have In-Memory column that stores now they're so much better. And we were like, well, orienting the data role-based or column-based really doesn't matter in the sense that we store them as blocks on disks. 
And so we introduced the in memory technology which gives you an In-Memory column, a representation of your data as well alongside your relational. So there is an example where you go like, well, actually you know, if you have this use case of the column or analytics all In-Memory, I would argue Oracle Database is also that screwdriver you want to go down to and gives you that capability. Because not only gives you representation in columnar, but also which many people then forget all the analytic power on top of SQL. It's one thing to store your data columnar, it's a completely different story to actually be able to run analytics on top of that and having all the built-in functionalities and stuff that you want to do with the data on top of it as you analyze it. >> You know, that's a great example, the kilometer 'cause I remember there was like a lot of hype around it. Oh, it's the Oracle killer, you know, at Vertica. Vertica is still around but, you know it never really hit escape velocity. But you know, good product, good company, whatever. Natezza, it kind of got buried inside of IBM. ParXL kind of became, you know, red shift with that deal so that kind of went away. Teradata bought a company, I forget which company it bought but. So that hype kind of disapated and now it's like, oh yeah, columnar. It's kind of like In-Memory, we've had a In-Memory databases ever since we've had databases you know, it's a kind of a feature not a sector. But anyway, Maria, let's come back to you. You've got a lot of customer experience. And you speak with a lot of companies, you know during your time at Oracle. What else are you seeing in terms of the benefits to this approach that might not be so intuitive and obvious right away? >> I think one of the biggest benefits to having a multimodel multiworkload or as we call it a converged database, is the fact that you can get greater data synergy from it. In other words, you can utilize all these different techniques and data models to get better value out of that data. So things like being able to do real-time machine learning, fraud detection inside a transaction or being able to do a product recommendation by accessing three different data models. So for example, if I'm trying to recommend a product for you Dave, I might use graph analytics to be able to figure out your community. Not just your friends, but other people on our system who look and behave just like you. Once I know that community then I can go over and see what products they bought by looking up our product catalog which may be stored as JSON. And then on top of that I can then see using the key-value what products inside that catalog those community members gave a five star rating to. So that way I can really pinpoint the right product for you. And I can do all of that in one transaction inside the database without having to transform that data into different models or God forbid, access different systems to be able to get all of that information. So it really simplifies how we can generate that value from the data. And of course, the other thing our customers love is when it comes to deploying data-driven apps, when you do it on a converged database it's much simpler because it is that standard data platform. So you're not having to manage multiple independent single purpose databases. You're not having to implement the security and the high availability policies, you know across a bunch of different diverse platforms. 
All of that can be done much simpler with a converged database 'cause the DBA team of course, is going to just use that standard set of tools to manage, monitor and secure those systems. >> Thank you for that. And you know, it's interesting, you talk about simplification and you are in Juan's organization so you've big focus on mission critical. And so one of the things that I think is often overlooked well, we talk about all the time is recovery. And if things are simpler, recovery is faster and easier. And so it's kind of the hallmark of Oracle is like the gold standard of the toughest apps, the most mission critical apps. But I wanted to get to the cloud Maria. So because everything is going to the cloud, right? Not all workloads are going to the cloud but everybody is talking about the cloud. Everybody has cloud first mentality and so yes, it's a hybrid world. But the natural next question is how do you think the cloud fits into this world of data-driven apps? >> I think just like any app that you're developing, the cloud helps to accelerate that development. And of course the deployment of these data-driven applications. 'Cause if you think about it, the developer is instantly able to provision a converged database that Oracle will automatically manage and look after for them. But what's great about doing something like that if you use like our autonomous database service is that it comes in different flavors. So you can get autonomous transaction processing, data warehousing or autonomous JSON so that the developer is going to get a database that's been optimized for their specific use case, whatever they are trying to solve. And it's also going to contain all of that great functionality and capabilities that we've been talking about. So what that really means to the developer though is as the project evolves and inevitably the business needs change a little, there's no need to panic when one of those changes comes in because your converged database or your autonomous database has all of those additional capabilities. So you can simply utilize those to able to address those evolving changes in the project. 'Cause let's face it, none of us normally know exactly what we need to build right at the very beginning. And on top of that they also kind of get a built-in buddy in the cloud, especially in the autonomous database. And that buddy comes in the form of built-in workload optimizations. So with the autonomous database we do things like automatic indexing where we're using machine learning to be that buddy for the developer. So what it'll do is it'll monitor the workload and see what kind of queries are being run on that system. And then it will actually determine if there are indexes that should be built to help improve the performance of that application. And not only does it bill those indexes but it verifies that they help improve the performance before publishing it to the application. So by the time the developer is finished with that app and it's ready to be deployed, it's actually also been optimized by the developers buddy, the Oracle autonomous database. So, you know, it's a really nice helping hand for developers when they're building any app especially data-driven apps. >> I like how you sort of gave us, you know the truth here is you don't always know where you're going when you're building an app. It's like it goes from you are trying to build it and they will come to start building it and we'll figure out where it's going to go. With Agile that's kind of how it works. 
But so I wonder, can you give some examples of maybe customers or maybe genericize them if you need to. Data-driven apps in the cloud where customers were able to drive more efficiency, where the cloud buddy allowed the customers to do more with less? >> No, we have tons of these but I'll try and keep it to just a couple. One that comes to mind straight away is retrace. These folks built a blockchain app in the Oracle Cloud that allows manufacturers to actually share the supply chain with the consumer. So the consumer can see exactly, who made their product? Using what raw materials? Where they were sourced from? How it was done? All of that is visible to the consumer. And in order to be able to share that they had to work on a very diverse set of data. So they had everything from JSON documents to images as well as your traditional transactions in there. And they store all of that information inside the Oracle autonomous database, they were able to build their app and deploy it on the cloud. And they were able to do all of that very, very quickly. So, you know, that ability to work on multiple different data types in a single database really helped them build that product and get it to market in a very short amount of time. Another customer that's doing something really, really interesting is MindSense. So these guys operate the largest mines in Canada, Chile, and Peru. But what they do is they put these x-ray devices on the massive mechanical shovels that are at the cove or at the mine face. And what that does is it senses the contents of the buckets inside these mining machines. And it's looking to see at that content, to see how it can optimize the processing of the ore inside in that bucket. So they're looking to minimize the amount of power and water that it's going to take to process that. And also of course, minimize the amount of waste that's going to come out of that project. So all of that sensor data is sent into an autonomous database where it's going to be processed by a whole host of different users. So everything from the mine engineers to the geo scientists, to even their own data scientists utilize that data to drive their business forward. And what I love about these guys is they're not happy with building just one app. MindSense actually use our built-in low core development environment, APEX that comes as part of the autonomous database and they actually produce applications constantly for different aspects of their business using that technology. And it's actually able to accelerate those new apps to the business. It takes them now just a couple of days or weeks to produce an app instead of months or years to build those new apps. >> Great, thank you for that Maria. Gerald, I'm going to push you again. So, I said upfront and talked about microservices and the cloud and containers and you know, anybody in the developer space follows that very closely. But some of the things that we've been talking about here people might look at that and say, well, they're kind of antithetical to microservices. This is our Oracles monolithic approach. But when you think about the benefits of microservices, people want freedom of choice, technology choice, seen as a big advantage of microservices and containers. How do you address such an argument? >> Yeah, that's an excellent question and I get that quite often. The microservices architecture in general as I said before had architectures, Linux distributions, et cetera. 
It's kind of always a bit like there's an academic approach and there's a pragmatic approach. And when you look at microservices, the original definitions that came out in the early 2010s, they actually never said that each microservice has to have a database. And they also never said that if a microservice has a database, you have to use a different technology for each microservice. Just like they never said you have to write each microservice in a different programming language, right? So where I'm going with this is, yes, you know, sometimes when you look at some vendors out there, some niche players, they push this message or they jump on this academic approach of, like, each microservice should use the best tool at hand, or use a different database for each purpose, et cetera. Which often comes across as, you know, wanting to stay part of the conversation. Nothing stops a developer from, you know, using a multimodal database for the microservice and just using that as a document store, right? Or just using that as a relational database. And, you know, something really interesting actually happened yesterday, I don't know whether you followed it, Dave. Facebook had an outage yesterday, right? And Facebook is one of those companies that are seen as the Silicon Valley, you know, "knows how to do microservices" companies. And when you read through the outage, well, what happened, right? Some unfortunate logical error with configuration was the cause that took a database cluster down. So, you know, there you have it, where you go, like, well, maybe not every microservice is actually in fact talking to its own database or its own special purpose database. I think, rather than the industry focusing so much on this argument of which technology to use, what's the right tool for the job, it's more to ask themselves, what business problem are we actually trying to solve? And therefore what's the right approach and the right technology for this. And so, just as I said before, you know, multimodal databases do have strong benefits. They have many built-in functionalities that are already there and they allow you to reduce this complexity of having to know many different technologies, right? And it's not only being able to store different data models, you know, treating a multimodal database as a JSON document store or a relational database, and most databases have been multimodal for 20 plus years. It's also that if you store that data together, you can perhaps derive additional value for somebody else, perhaps not for your application. For example, if you were to use Oracle Database you can actually write queries on top of all of that data. It doesn't really matter for our query engine whether the data is formatted as JSON or the data is formatted in rows and columns, you can just query over it. And that's actually very powerful for those guys that have to, you know, get the reporting done at the end of the day, the end of the week. And for those guys that are the data scientists, they want to figure out, you know, which product performed really well or can we tweak something here and there. When you look into that space you still see a huge divergence between the guys that put data in, kind of the OLTP side, and the guys that try to derive new insights.
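As a concrete illustration of Gerald's point about one query engine spanning both shapes of data, the sketch below joins ordinary relational rows to fields inside JSON purchase documents in a single SQL statement. The table and column names are invented for the example; it assumes an Oracle database with a JSON document column and the python-oracledb driver.

```python
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="mydb_high")
cur = conn.cursor()

# ORDERS.DOC holds JSON purchase documents; PRODUCTS is a plain relational
# table. JSON_TABLE projects the document's line items into rows so they
# can be joined and aggregated like any other relational source.
cur.execute("""
    SELECT p.product_name,
           SUM(jt.qty) AS units_sold
    FROM   orders o,
           JSON_TABLE(o.doc, '$.lines[*]'
               COLUMNS (product_id NUMBER PATH '$.productId',
                        qty        NUMBER PATH '$.quantity')) jt
           JOIN products p ON p.product_id = jt.product_id
    WHERE  o.order_date >= TRUNC(SYSDATE) - 7
    GROUP  BY p.product_name
    ORDER  BY units_sold DESC
""")

for product_name, units_sold in cur:
    print(product_name, units_sold)

conn.close()
```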
And there's still a lot of ETL going around and, you know, we have big data technologies, some of them came and went and some that came in are still around, like Apache Spark, which is still like a SQL engine on top of any of your data, kind of going back to the same concept. And so I will say that, you know, for developers, when we look at microservices it's like, first of all, is the argument you were making because the vendor or the technology you want to use tells you this argument, or, you know, you kind of want to have an argument to use a specific technology? Or is it really more because it is the best technology, the best to use for this given use case, for this given application that you have? And if so, there's of course also nothing wrong with using a single purpose technology either, right? >> Yeah, I mean, whenever I talk about Oracle I always come back to the most important applications, the mission critical. It's very difficult to architect databases with microservices and containers. You have to be really, really careful. And again, it comes back to what we were talking about before with Maria, the complexity and the recovery. But Gerald, I want to stay with you for a minute. So there's other data management technologies popping up out there. I mean, I've seen some people saying, okay, just leave the data in an S3 bucket. We can query that, then we've got some magic sauce to do that. And so why are you optimistic about, you know, traditional database technology going forward? >> I would say because of the history of databases. So one thing that struck me when I came to Oracle and got to meet great people like Juan Loaiza and Andy Mendelsohn, who have been here for a long, long time, is the realization that relational databases have been around for about 45 years now. And, you know, I was like, I'm too young to have been around then, right? So I was like, what else has been around for 45 years? Just look at the tech stack that we have today, how does it look? Well, Linux only came out in '93. Well, databases pre-date Linux by a lot. And as I started digging I saw a lot of technologies come and go, right? And you mentioned before the data management systems that we had that came and went, like the columnar databases or XML databases, object databases. And even before relational databases, before Codd gave us the relational model, there were these network stores, network databases, which to some extent look very similar to JSON documents. It was a way of storing data in a hierarchical format. And, you know, when you then start actually reading the Codd paper and diving a little bit more into the relational model, there's I think one important crux in there that most of the industry keeps forgetting, or hasn't been around long enough to even know. And that is that when Codd created the relational model, he actually focused not so much on the application putting the data in, but on future users and applications still being able to make sense of the data, right? And that's, like I said before, we had those network models, we had XML databases, you have JSON document stores. And the one thing that they all have in common is that the application that puts the data in decides the structure of the data. And that's all well and good while you have the application and the developer writing that application.
It can become really tricky when 10 years later you still want to look at that data and the application that the developer is no longer around then you go like, what does this all mean? Where is the structure defined? What is this attribute? What does it mean? How does it correlate to others? And the one thing that people tend to forget is that it's actually the data that's here to stay not someone who does the applications where it is. Ideally, every company wants to store every single byte of data that they have because there might be future value in it. Economically may not make sense that's now much more feasible than just years ago. But if you could, why wouldn't you want to store all your data, right? And sometimes you actually have to store the data for seven years or whatever because the laws require you to. And so coming back then and you know, like 10 years from now and looking at the data and going like making sense of that data can actually become a lot more difficult and a lot more challenging than having to first figure out and how we store this data for general use. And that kind of was what the relational model was all about. We decompose the data structures into tables and columns with relationships amongst each other so therefore between each other. So that therefore if somebody wants to, you know typical example would be well you store some purchases from your web store, right? There's a customer attribute in it. There's some credit card payment information in it, just some product information on what the customer bought. Well, in the relational model if you just want to figure out which products were sold on a given day or week, you just would query the payment and products table to get the sense out of it. You don't need to touch the customer and so forth. And with the hierarchical model you have to first sit down and understand how is the structure, what is the customer? Where is the payment? You know, does the document start with the payment or does it start with the customer? Where do I find this information? And then in the very early days those databases even struggled to then not having to scan all the documents to get the data out. So coming back to your question a bit, I apologize for going on here. But you know, it's like relational databases have been around for 45 years. I actually argue it's one of the most successful software technologies that we have out there when you look in the overall industry, right? 45 years is like, in IT terms it's like from a star being the ones who are going supernova. You have said it before that many technologies coming and went, right? And just want to add a more really interesting example by the way is Hadoop and HDFS, right? They kind of gave us this additional promise of like, you know, the 2010s like 2012, 2013 the hype of Hadoop and so forth and (mumbles) and HDFS. And people are just like, just put everything into HDFS and worry about the data later, right? And we can query it and map reduce it and whatever. And we had customers actually coming to us they were like, great we have half a petabyte of data on an HDFS cluster and we have no clue what's stored in there. How do we figure this out? What are we going to do now? Now you had a big data cleansing problem. And so I think that is why databases and also data modeling is something that will not go away anytime soon. And I think databases and database technologies are here for quite a while to stay. 
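A small sketch of the decomposition Gerald is describing, with invented table names: the purchase is split into customer, product, and payment tables whose relationships are declared in the catalog, so a later reporting question only has to touch the tables it actually needs. It assumes python-oracledb and placeholder connection details.

```python
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="mydb_high")
cur = conn.cursor()

# The relational decomposition: each entity gets its own table, tied
# together by declared keys that any future user can read from the catalog.
for ddl in (
    """CREATE TABLE customers (
           customer_id NUMBER PRIMARY KEY,
           name        VARCHAR2(200))""",
    """CREATE TABLE products (
           product_id   NUMBER PRIMARY KEY,
           product_name VARCHAR2(200))""",
    """CREATE TABLE payments (
           payment_id  NUMBER PRIMARY KEY,
           customer_id NUMBER REFERENCES customers,
           product_id  NUMBER REFERENCES products,
           amount      NUMBER,
           paid_on     DATE)""",
):
    cur.execute(ddl)

# The reporting question "which products sold today?" touches only the
# payments and products tables; the customer data is never read.
cur.execute("""
    SELECT pr.product_name, COUNT(*)
    FROM   payments pa JOIN products pr ON pr.product_id = pa.product_id
    WHERE  pa.paid_on >= TRUNC(SYSDATE)
    GROUP  BY pr.product_name
""")
print(cur.fetchall())

conn.close()
```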
Because many of those people don't think about what's happening to the data five years from now. And many of the niche players, and frankly even Amazon, you know, following this single purpose thing, it's like, just use the right tool for the job for your application, right? Just put the data in there the way you want it. And it's like, okay, so you use technologies all over the place and then five years from now you have your data fragmented everywhere, in different formats, and, you know, inconsistencies, and so on. And when you come back to these data-driven, business critical decision applications, that's usually the worst case scenario you can have, right? Because now you need an army of people to actually do data cleansing. And it's not a coincidence that data science has become very, very popular in recent years as we kind of went on with this proliferation of different database or data management technologies, some of which are not even databases. But I think I'll leave it at that. >> It's an interesting talk track because you're right. I mean, no schema on write was alluring, but it definitely created some problems. It also created, you know, you referenced the hyper specialized roles, and the data cleansing component. I mean, maybe technology will eventually solve that problem but it hasn't, at least not as of tonight. Okay, last question, Maria maybe you could start off and Gerald if you want to chime in as well it'd be great. I mean, it's interesting to watch this industry. Oracle sort of won the top database mantle. I mean, I watched it, I saw it. Remember, it was Informix and it was (indistinct) too and of course, Microsoft, you've got to give them credit with SQL Server, but Oracle won the database wars. And then everything got kind of quiet for a while, database was sort of boring. And then it exploded, you know, all the, you know, NoSQL and the key-value stores and the cloud databases and this is really a hot area now. And when we looked at Oracle we said, okay, Oracle, it's all about Oracle Database, but we've seen the kind of resurgence in MySQL, which everybody thought, you know, once Oracle bought Sun they were going to kill MySQL. But now we see you investing in HeatWave, TimesTen, we talked about In-Memory databases before. So where do those fit in, Maria, in the grand scheme? How should we think about Oracle's database portfolio? >> So there's lots of places where you'd use those different things. 'Cause just like any other industry there are going to be new and boutique use cases that are going to benefit from a more specialized product or single purpose product. So good examples off the top of my head of the kind of systems that would benefit from that would be things like a stock exchange system or a telephone exchange system. Both of those are latency critical transaction processing applications where they need microsecond response times. And that's going to exceed perhaps what you might normally get or deploy with a converged database. And so Oracle's TimesTen database, our In-Memory database, is perfect for those kinds of applications. But there's also a host of MySQL applications out there today and, you said it yourself there Dave, HeatWave is a great place to provision and deploy those kinds of applications because it's going to run 100 times faster than AWS (mumbles).
So, you know, there really is a place in the market, and in our customers' systems and the needs they have, for all of these different members of our database family here at Oracle. >> Yeah, well, the internet is basically running on the LAMP stack so I don't see MySQL going away. All right Gerald, we'll give you the final word, bring us home. >> Oh, thank you very much. Yeah, I mean, as Maria said, I think it comes back to what we discussed before. There are obviously still needs for special technologies, or different technologies than a relational database or multimodal database. Oracle actually has many more databases than people may first think of. Not only the three that we have already mentioned, but there's also Oracle's NoSQL database, for example. And, you know, on a high level Oracle is a data management company, right? And we want to give our customers the best tools and the best technology to manage all of their data. And therefore there has to be a need, or there should be a part of the business, that also focuses on these highly specialized systems and these highly specialized technologies that address those use cases. And I think it makes perfect sense. It's like, you know, when the customer comes to Oracle they're not only getting this "take this one product, you know, and if you don't like it, that's your problem," but actually you have choice, right? And choice allows you to make a decision based on what's best for you and not necessarily best for the vendor you're talking to. >> Well guys, really appreciate your time today and your insights. Maria, Gerald, thanks so much for coming on The Cube. >> Thank you very much for having us. >> And thanks for watching this Cube conversation, this is Dave Vellante and we'll see you next time. (upbeat music)
Christian Craft, Oracle | CUBE Conversation
(upbeat music) >> Hello everyone, and welcome to this Cube conversation. We're going to dig into some of the more specific and sometimes gory details of managing the nuances of database, database management systems. You know, it's a lot of fun to get it to the daily buzz of cloud and database competition and get a little snarky on Twitter, but there are a lot of mundane issues that you have to address to really do proper database sizing, capacity planning, and you know whether or not database consolidation makes sense. These are not trivial issues. And decades ago they spawned an entire role around the database administrator. They had to do the dirty work of database management so that users and customers would be satisfied. And while automation and cloud are changing that role, at the end of the day, somebody actually has to make the databases work in the cloud and make sure that the business doesn't feel any impact on the transition along the way. So on that note, we have with us Oracle senior director of product management for mission critical databases. He works in Juan Loaiza's group, Chris Craft, and Steve Zivanic whom we know well on the cube says this guy is the Jedi master when it comes to consolidating databases in the cloud. Nobody knows more on the face of the planet Earth. So we're really excited Chris, to have you inside the Cube. Welcome. >> Thanks, thanks Dave. >> That's a very humble thanks. So when it comes to running databases in the cloud can you explain the difference between sizing and capacity planning? Aren't they two sides of the same coin? >> Yeah, you know, they really are. It's like, you know sizing is really part of capacity to planning. It's really, I look at sizing as a one-time effort whereas capacity planning is more your ongoing. You perform sizing initially when the application is deployed. And then, then when you're changing platforms, like going from on-prem to the Cloud you're going to go through a sizing exercise 'cause you're looking at going to a new platform. That's more of a one-time effort, and then ongoing, you're looking at your capacity management over time. So yeah, they are very related so. >> Okay, thank you. So we're going to talk about database consolidation. A lot of people would say, look the cloud makes consolidating databases maybe not irrelevant, but maybe not the best strategy because I got all these different purpose-built databases. Why consolidate databases if they're already going to consolidate it in the cloud in one location? >> Yeah. So, so we're really talking about in in the cloud, you're running virtual machines but consolidation still applies on the virtual machines. So if you have a virtual machine that's dedicated to a database that database is that server, that virtual machine is going to be under utilized over time. So what we're doing with consolidation is running multiple databases within a virtual machine or what it, Oracle virtual cluster. We do everything on clusters. So multiple machines multiple databases within that will drive up the utilization and improve your cost structure. So it's a sizing it's it's absolutely critical on even in the cloud. >> Okay. But, but wouldn't it, I might say to that, wouldn't it be better to have each database have a dedicated VM? I mean, from a performance perspective, it doesn't try to make the database do too much affect performance. >> Yeah. 
So whenever, so we know historically that a database on a dedicated server, back in the day that was a physical server, today it's a virtual machine. When you do that, your utilization will be in the range of 15 to 20%. And that's, you know, very highly under utilized systems when you do that. So we don't need to isolate things onto dedicated virtual machines from a performance perspective. There are other ways that we can manage that: we have resource management built into Oracle and the Oracle database, and then on Exadata we have integrated IO resource management as well, so we can deal with that in different ways. >> Okay. So you're basically proposing that you're putting these databases onto a single VM and managing it accordingly. Are there additional details you can provide on that? >> So, you know, we don't put everything into, you know, literally one VM. You want to have some isolation built in there, but we take a more pragmatic approach. You know, every single database in one VM, that's the wrong way to go. Each database in a dedicated VM is the other extreme, also the wrong way to go. So we're kind of right down the middle, we're more pragmatic about it, and do some level of consolidation to drive up utilization. >> I remember when I first started following tech I was studying up on, you know, kind of how disc drives work and so forth. And there was probably, I can't even remember what it was, it was probably like 10 megabytes under an actuator. And people were saying, oh my God, that's so much data, your blast radius is so big, you've got to split that up. So it's the same concept, applied to availability. Some would say there's a problem because you're consolidating all this data and you've got this blast radius that increases. How do you address that? >> And so, you know, redundancy. So we have redundancy at all levels. So if you look at a single machine, and we're talking about Exadata here, in an Exadata machine we can lose up to 24 disc drives out of 36, that'd be 12 per storage cell. You can lose two storage cells, that's 24 out of 36 drives, and we keep on running. We also do clustering. So the database servers are clustered together for high availability. So we can suffer multiple simultaneous failures and keep on running, without performance impact either. So recovery, we handle that in different ways. So look at blast radius from that standpoint: you want some isolation for blast radius, but physical failures are just not something that we're concerned with. >> How do you deal with taking down a VM? Doesn't that normally mean there's going to be some kind of disruption? >> Oh, so you know, with Oracle database, you're talking about real application clusters on Oracle database, on Exadata. We have very fast detection of failures and then resolution of the failure. So we're looking at a small blip in performance, you know, we're looking at a few milliseconds to detect a failure and then maybe around three seconds to actually affect the failover. So the applications are not getting disconnected, they continue operating in that kind of condition. So that's kind of unique to the Exadata platform. And so, you know, in our cloud, we're running Exadata. We have this built in there. So we're resilient to that type of failure.
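One way to picture the built-in resource management Chris mentions is instance caging, where each consolidated database instance is capped at a slice of the server's CPUs and a resource manager plan enforces the cap. The sketch below is illustrative, with placeholder values and a DBA-privileged python-oracledb connection; in Oracle's managed cloud services much of this is handled for you.

```python
import oracledb

# Illustrative DBA connection to one of the consolidated instances.
conn = oracledb.connect(user="system", password="secret", dsn="db1_high")
cur = conn.cursor()

# Cap this instance at 4 CPUs so a noisy neighbor cannot starve the other
# databases sharing the same virtual machine or cluster node.
cur.execute("ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH")

# A resource manager plan must be active for the CPU cap to be enforced.
cur.execute("ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH")

conn.close()
```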
>> And sorry, you mentioned real application clusters. You're saying because you're running real application clusters, that's how you're able to become more resilient? >> So yeah, Oracle database real application clusters runs on top of clustered virtual machines on Exadata. We have integration that optimizes the failover times of that clustering. So it's not the same clustering; those optimizations are only built into Exadata. So we have much faster, much tighter integration, and much more scalability because of that integration that we have. >> Can I run RAC in other clouds? Can I put that into Amazon's cloud? >> So, real application clusters requires two things: you require shared storage and a fast interconnect, a fast networking interconnect. And those things just don't exist in the other clouds. We have those built into Exadata in our cloud. And we also allow real application clusters in our regular database cloud service offering as well. But really the highest implementation of that is in Exadata. >> Well, of course I was tongue in cheek joking, but this is why, you know, I was listening to Arvind Krishna the other day at IBM Think. And he was saying only 25% of mission critical applications have moved into the cloud. I didn't think it was that high. But what you're doing is basically building a mission critical cloud, a cloud for mission critical databases. And that's unique. I mean, I would expect other cloud vendors eventually, you know, are going to get there, but you're kind of starting with the hard stuff and working backwards. But that is what I've always interpreted as unique to Oracle. So how does that affect cost? Isn't that more expensive? >> Actually, no. We're taking services that start out at a very similar price point, and then we drive utilization up. What we've seen from customers that are running in, like, Amazon, for example, is databases on dedicated virtual machines that run anywhere from 15 to 20% utilization. So what we do is take that low utilization and triple it. So we run maybe 50% utilization. At that point we still have full redundancy, but we've now made the service one third of the cost. So we're starting at a very similar cost, and then we drive it to, you know, three times the utilization. These are not crazy numbers. You know, 50% is fine, and we retain the redundancy at that level as well. >> Got it, well so. >> What we've seen is about a third the cost. >> Really? Okay. Well, so, but what about, for instance, on AWS, couldn't I run this in a multi availability zone, running RDS or some other cloud database? >> So, you can run a Multi-AZ environment in Amazon, for example, you can run what we call a local standby. If you do that, then instead of being three times more expensive, you're now six times more expensive. Because that is another copy of the entire platform, the entire instance, the storage, everything, in the other availability zone. Instead of being three times more, it's now six. >> Because you're essentially replicating everything in a brute force mode, right? >> Yeah. It's a Data Guard standby, a local standby in another AZ, or what we call an availability domain in our cloud. >> So let's maybe geek out a little bit. So, let's talk more about availability.
You know, for years, I mean, I remember going back to reading about this stuff with tandem computers, you know, coincident failures. How are you dealing with those in today's modern world? >> So what we call simultaneous failures is, so we, we deal with that with redundancy in the system. So we have redundancy at all layers in the storage. Like I said earlier, we can take across, you know, two storage cells and each storage cell has a dozen drives. So that's 24 disc drives. That's eight flashcard failures simultaneously. And we keep on running no data loss, no loss of service. That's at the storage layer. We have multiple, multiple redundant networking switches at that, at the networking layer, the internal network. Then we go up into the database server. We then have redundancy across the nodes of a cluster. You have multiple virtual machines that comprise a virtual cluster. So it's at each and every level, we have redundancy. And then we drive the redundancy into the application using what's called application continuity. So the application connections have knowledge of the failure, failure modes of the database. They can follow to the surviving node, and continue operating. >> And you do this with math, you're doing some kind of magic bit slicing, or how do you do that? >> That, so that is that particular thing, application continuity, so technology that's been built into Oracle database since, since 12c, and that it's been around for quite a long time. And it allows the application to follow the rack cluster, any kind of issues with the rack cluster. We can drain connections off. It's very well-proven technology in, you know, prior to to proactive maintenance, we can drain connections over and then it will also handle a failure of a connection as well. And the application following that, yes. >> I learned from my old mainframe days and hanging around with David Floyer. It's always ask, what happens when something goes wrong and it's all about recovery. And you guys have the gold standard there. I mean, we've talked about this a lot. So you got Exadata. That's what is behind your Exadata cloud service, X8M I think you call it, and you've got autonomous database. I'm not great with model numbers, but, but talk about the way you can handle simultaneous failures. I mean, are there like triple redundancies that you've built in? >> Yeah. So everything what we do in our cloud is everything is triple redundancy by default. So we, you can suffer, that way we can suffer two failures and continue operating. So the, the other thing, so recovery, if you look at transaction recovery, when a failure occurs a transaction will flip that session, will flip to the machine that keeps running. It'll reposition all in the work that's in flight, any kind of inflight transactions, any in flight queries that are going on, reposition and continue operating. >> So you've essentially created like the old three site data centers, but you're in a single platform because you're synchronous. But, that same concept in a package. >> It's, you know, it's a lot of times you show a picture of an Exadata. It looks like a single box, but in the box there's some redundancy built in the box. And in fact, in the cloud it's actually across an entire aisle. So it's, we kind of obscure that a little bit, from your provisioning, you know, our database nodes and our storage cells and in the cloud but it's actually across an entire aisle of a dataset. >> Okay, and of course, that's within a synchronous location. 
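From the application's side, the connection failover Chris describes is mostly a matter of connecting through a database service that the DBA has configured for application continuity and using a connection pool, so sessions can be drained or replayed on a surviving node. The sketch below is illustrative; the host, the service name, and the assumption that the service was created with continuity enabled are all placeholders.

```python
import oracledb

# Assumes the DBA has created a RAC service (here called "orders_tac")
# with application continuity enabled on the server side; the client just
# connects through that service and uses a pool so its sessions can be
# drained during maintenance or picked up on a surviving node.
pool = oracledb.create_pool(
    user="app",
    password="secret",
    dsn="dbhost.example.com:1521/orders_tac",
    min=2, max=8, increment=1,
)

with pool.acquire() as conn:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM orders")
    print(cur.fetchone()[0])

pool.close()
```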
Let's talk about disaster recovery, and what you're doing in that area, around Oracle Cloud What are my options there? What's different from other cloud providers we were talking earlier about, AZs, how are you different and what are you doing there? >> Yeah, so we, we talked earlier about the Multi-AZ deployment, what we call it availability domain, AD, so a little different terminology. But we can deploy another, another copy of the database into another availability domain, if you like. It's not often that you lose an entire AZ or AD, it's more, we're protecting from regional failures. So across another region. And that's where we look at, we really look at that as that technology, as a standby, as a data, disaster recovery solution not for HA. HA, we build HA into the machine itself. >> So you're saying, we were talking earlier about AZ, you're saying that's for HA versus DR. Is that, is that what you're contending? >> Yeah, like, you know again, pick on Amazon for a second here. Amazon uses a standby database. What we would normally use for disaster recovery, they're using that for availability. And you're looking at a few minutes of time to flip over to another AZ, whereas within an Exadata frame, we can flip over in milliseconds. We keep continue running. There is no loss of conductivity. And then we use the standby in another region for disaster. That's a true disaster solution. >> As opposed to incurring that penalty of latency, or whatever, to spin up the other resource. >> Right, right. >> Okay, so that's clear how kind of you guys address that, that challenge. Last question, maybe you could give us your take, again folks, coming out of Oracle's mouth, but what's the bottom line cost Delta based on your experience between your service and competitive services? I love these conversations because you're not afraid to talk about the competition, so bring it on. >> I've seen, so we've just based on what we've seen with customers deploying databases in Amazon, versus what, you know we've replaced that within, in our cloud service. We're seeing from just a list price perspective. Now, you know, we discount, I know Amazon discounts, but the only thing I can really speak to is list price perspective. It's about a third the cost. So we're talking about a more powerful platform, runs faster. We get these incredible, we haven't even talked about performance here. Talk about availability, performance where we're getting IO rates, IO latencies in the 19 microsecond range. Now with Exadata, that's going to be 50 times faster than what you get with these traditional cloud vendors. So much, much faster, and a third the cost. >> So talk about discounts, I mean, I know Oracle discounts, Oracle from list price, Oracle provides significant discounts. I'm not as familiar with your cloud pricing but I mean, Amazon's discounts are really in the form of like reserved instances. Is your pricing similar in that regard or different? I mean, if I'm just paying on demand, I'm paying through the nose. I presume it's same with you. If I, but if I buy in bulk getting a discount, is that what you mean by discount? Or is it more similar to the way you've traditionally discounted, you know large customers, the more you spend, the more you you get kind of thing. >> It's a, there's a discount structure. So it's, we don't have the same kind of lock-in like with reserved instance structure, but yeah, it's, there are discounts and that's going to be very customer specific. >> Right. 
>> So, but I think that the end result we're starting at, a three X differential on the price. >> But the reason I'm asking the question is that the stats you gave me are for list price, right? >> Yeah, yes, yeah. >> Okay, and sure, you're saying that at list price you're, you're less expensive. I, and again, my contention would be just by experiences that your discounts would be more aggressive traditionally in Oracle's traditional business. You know, I've done a lot of Oracle negotiation in my days. And if you're, you know, if you're a big customer you can get good deals. And again, I'm not as familiar with the cloud pricing, but still that's, that's good. If you're doing it on a list price basis, to me, that's a conservative statement if that makes any sense. >> Right, that's where it starts. We know that's where it's starting out. So I, you know, once you get into discounts, it's very customer specific. >> Right. >> We know the starting point is at three X differential. Before you do something in the Multi-AZ would be a six X differential, by the way, so. >> Yeah, okay. All right, Chris. Well, Hey, I appreciate you taking us through this, good stuff, and best of luck, good work. You know, you guys keep, I always say Oracle invest you guys spend a lot of money in RD and, and, you know you're quiet for a while in the cloud and all of a sudden you came out like you invented it. So good job! >> All right. >> All right, thanks. Thanks for coming on. All right. >> Thanks. >> Thank you for watching everybody. This is Dave Vellante for Cube conversations. We'll see you next time. (upbeat music)
Wim Coekaerts, Oracle | CUBEconversations
(bright upbeat music) >> Hello everyone, and welcome to this exclusive Cube Conversation. We have the pleasure today to welcome, Wim Coekaerts, senior vice president of software development at Oracle. Wim, it's good to see you. How you been, sir? >> Good, it's been a while since we last talked but I'm excited to be here, as always. >> It was during COVID though and so I hope to see you face to face soon. But so Wim, since the Barron's Article declared Oracle a Cloud giant, we've really been sort of paying attention and amping up our coverage of Oracle and asking a lot of questions like, is Oracle really a Cloud giant? And I'll say this, we've always stressed that Oracle invests in R&D and of course there's a lot of D in that equation. And over the past year, we've seen, of course the autonomous database is ramping up, especially notable on Exadata Cloud@Customer, we've covered that extensively. We covered the autonomous data warehouse announcement, the blockchain piece, which of course got me excited 'cause I get to talk about crypto with Juan. Roving Edge, which for everybody who might not be familiar with that, it's an edge cloud service, dedicated regions that you guys announced, which is a managed cloud region. And so it's clear, you guys are serious about cloud. These are all cloud first services using second gen OCI. So, Oracle's making some moves but the question is, what are customers doing? Are they buying this stuff? Are they leaning into these new deployment models for the databases? What can you tell us? >> You know, definitely. And I think, you know, the reason that we have so many different services is that not every customer is the same, right? One of the things that people don't necessarily realize, I guess, is in the early days of cloud lots of startups went there because they had no local infrastructure. It was easy for them to get started in something completely new. Our customers are mostly enterprise customers that have huge data centers in many cases, they have lots of real estate local. And when they think about cloud they're wondering how can we create an environment that doesn't cause us to have two ops teams and two ways of managing things. And so, they're trying to figure out exactly what it means to take their real estate and either move it wholesale to the cloud over a period of years, or they say, "Hey, some of these things need to be local maybe even for regulatory purposes." Or just because they want to keep some data locally within their own data centers but then they have to move other things remotely. And so, there's many different ways of solving the problem. And you can't just say, "Here's one cloud, this is where you go and that's it." So, we basically say, if you're on prem, we provide you with cloud services on-premises, like dedicated regions or Oracle Exadata Cloud@Customer and so forth so that you get the benefits of what we built for cloud and spend a lot of time on, but you can run them in your own data center or people say, "No, no, no. I want to get rid of my data centers, I do it remotely." Okay, then you do it in Oracle cloud directly. Or you have a hybrid model where you say, "Some stays local, some is remote." The nice thing is you get the exact same API, the exact same way of managing things, no matter how you deploy it. And that's a big differentiator. >> So, is it fair to say that you guys have, I think of it as a purpose built club, 'cause I talk to a lot of customers. 
I mean, take an insurance app like Claims, and customers tell me, "I'm not putting that into the public cloud." But you're making a case that it actually might make sense in your cloud because you can support those mission critical applications with the exact same experience, same API, same... I can get, you know, take Rack for instance, I can't get, you know, real application clusters in an Amazon cloud but presumably I can get them in your cloud. So, is it fair to say you have a purpose built cloud specifically for the most demanding applications? Is that a right way to look at it or not necessarily? >> Well, it's interesting. I think the thing to be careful of is, I guess, purpose built cloud might for some people mean, "Oh, you can only do things if it's Oracle centric." Right, and so I think that fundamentally, Oracle cloud provides a generic cloud. You can run anything you want, any application, any deployment model that you have. Whether you're an Oracle customer or not, we provide you with a full cloud service, right? However, given that we know and have known, obviously for a long time, how our products run best, when we designed OCI gen two, when we designed the networking stack, the storage layer and all that stuff, we made sure that it would be capable of running our more complex environments because our advantage is, Oracle customers have a place where they can run Oracle the best. Right, and so obviously the context of purpose-built fits that model, where yes, we've made some design choices that allow us to run Rack inside OCI and allow us to deploy Exadatas inside OCI which you cannot do in other clouds. So yes, it's purpose built in that sense but I would caution on the side of that it sometimes might imply that it's unique to Oracle products and I guess one way to look at it is if you can run Oracle, you can run everything else, right? Because it's such a complex suite of products that if you can run that then it it'll support any other (mumbling). >> Right. Right, it's like New York city. You make it there, you can make it anywhere. If I can run the most demanding mission critical applications, well, then I can run a web app for instance, okay. I got a question on tooling 'cause there's a lot of tooling, like sometimes it makes my eyes bleed when I look at all this stuff and doesn't... Square the circle for me, doesn't autonomous, an autonomous database like Autonomous Linux, for instance, doesn't it eliminate the need for all these management tools? >> You know, it does. It eliminates the need for the management at the lower level, right. So, with the autonomous Linux, what we offer and what we do is, we automatically patch the operating system for you and make sure it's secure from a security patching point of view. We eliminate the downtime, so when we do it then you don't have to restart applications. However, we don't know necessarily what the app is that is installed on top of it. You know, people can deploy their own applications, they can run third party applications, they can use it for development environments and so forth. So, there's sort of the core operating system layer and on the database side, you know, we take care of database patching and upgrades and storage management and all that stuff. So the same thing, if you run your own application inside the database, we can manage the database portion but we don't manage the application portion just like on the operating system. 
And so, there's still a management level that's required, no matter what, a level above that. And the other thing and I think this is what a lot of the stuff we're doing is based on is, you still have tons of stuff on-premises that needs full management. You have applications that you migrate that are not running Autonomous Linux, could be a Windows application that's running or it could be something on a different Linux distribution or you could still have some databases installed that you manage yourself, you don't want to use the autonomous or you're on a third-party. And so we want to make sure that we can address all of them with a single set of tools, right. >> Okay, so I wonder, can you give us just an overview, just briefly of the products that comprise into the cloud services, your management solution, what's in that portfolio? How should we think about it? >> Yeah, so it basically starts with Enterprise Manager on-premises, right? Which has been the tool that our Oracle database customers in particular have been using for many years and is widely used by our customer base. And so you have those customers, most of their real estate is on-premises and they can use enterprise management with local. They have it running and they don't want to change. They can keep doing that and we keep enhancing as you know, with newer versions of Enterprise Manager getting better. So, then there's the transition to cloud and so what we've been doing over the last several years is basically, looking at the things, well, one aspect is looking at things people, likes of Enterprise Manager and make sure that we provide similar functionality in Oracle cloud. So, we have Performance Hub for looking at how the database performance is working. We have APM for Application Performance Monitoring, we have Logging Analytics that looks at all the different log files and helps make sense of it for you. We have Database Management. So, a lot of the functionality that people like in Enterprise Manager mentioned the database that we've built into Oracle cloud, and, you know, a number of other things that are coming Operations Insights, to look at how databases are performing and how we can potentially do consolidation and stuff. So we've basically looked at what people have been using on-premises, how we can replicate that in Oracle cloud and then also, when you're in a cloud, how you can make make use of all the base services that a cloud vendor provides, telemetry, logging and so forth. And so, it's a broad portfolio and what it allows us to do with our customers is say, "Look, if you're predominantly on-prem, you want to stay there, keep using Enterprise Manager. If you're starting to move to Oracle cloud, you can first use EM, look at what's happening in the cloud and then switch over, start using all the management products we have in the cloud and let go of the Enterprise Manager instance on-premise. So you can gradually shift, you can start using more and more. Maybe you start with analytics first and then you start with insights and then you switch to database management. So there's a whole suite of possibilities. >> (indistinct) you mentioned APM, I've been watching that space, it's really evolved. I mean, you saw, you know, years ago, Splunk came out with sort of log analytics, maybe simplified that a little bit, now you're seeing some open source stuff come out. 
You're seeing a lot of startups come out, you saw Cisco made an acquisition with AppD and that whole space is transforming it seems that the future is all about that end to end visibility, simplifying the ability to remediate problems. And I'm thinking, okay, you just mentioned, you guys have a lot of these capabilities, you got Autonomous, is that sort of where you're headed with your capabilities? >> It definitely is and in fact, one of the... So, you know, APM allows you to say, "Hey, here's my web browser and it's making a connection to the database, to a middle tier" and it's hard for operations people in companies to say, hey, the end user calls and says, "You know, my order entry system is slow. Is it the browser? Is it the middle tier that they connect to? Is it the database that's overloaded in the backend?" And so, APM helps you with tracing, you know, what happens from where to where, where the delays are. Now, once you know where the delay is, you need to drill down on it. And then you need to go look at log files. And that's where the logging piece comes in. And what happens very often is that these log files are very difficult to read. You have networking log files and you have database log files and you have reslog files and you almost have to be an expert in all of these things. And so, then with Logging Analytics, we basically provide sort of an expert dashboard system on top of that, that allows us to say, "Hey! When you look at logging for the network stack, here are the most important errors that we could find." So you don't have to go and learn all the details of these things. And so, the real advantages of saying, "Hey, we have APM, we have Logging Analytics, we can tie the two together." Right, and so we can provide a solution that actually helps solve the problem, rather than, you need to use APM for one vendor, you need to use Logging Analytics from another vendor and you know, that doesn't necessarily work very well. >> Yeah and that's why you're seeing with like the ELK Stack it's cool, you're an open source guy, it's cool as an open source, but it's complicated to set up all that that brings. So, that's kind of a cool approach that you guys are taking. You mentioned Enterprise Manager, you just made a recent announcement, a new release. What's new in that new release? >> So Enterprise Manager 13.5 just got released. And so EM keeps improving, right? We've made a lot of changes over over the years and one of the things we've done in recent years is do more frequent updates sort of the cloud model frequent updates that are not just bug fixes but also introduce new functionality so people get more stuff more frequently rather than you know, once a year. And that's certainly been very attractive because it shows that it's a lively evolving product. And one of the main focus areas of course is cloud. And so a lot of work that happens in Enterprise Manager is hybrid cloud, which basically means I run Enterprise Manager and I have some stuff in Oracle cloud, I might have some other stuff in another cloud vendors environment and so we can actually see which databases are where and provide you with one consolidated view and one tool, right? And of course it supports Autonomous Database and Exadata in cloud servers and so forth. So you can from EM see both your databases on-premises and also how it's doing in in Oracle cloud as you potentially migrate things over. So that's one aspect. And then the other one is in terms of operations and automation. 
One of the things that we started doing again with Enterprise Manager in the last few years is making sure that everything has a REST API. So we try to make the experience with Enterprise Manager be very similar to how people work with a cloud service. Most folks now writing automation tools are used to calling REST APIs. EM in the early days didn't have REST APIs, now we're making sure everything works that way. And one of the advantages is that we can do extensibility without having to rewrite the product, that we just add the API clause in the agent and it makes it a lot easier to become part of the modern system. Another thing that we introduced last year but that we're evolving with more dashboards and so forth is the Grafana plugin. So even though Enterprise Manager provides lots of cool tools, a lot of cloud operations folks use a tool called Grafana. And so we provide a plugin that allows customers to have Grafana dashboards but the data actually comes out of Enterprise Manager. So that allows us to integrate EM into a more cloudy world in a cloud environment. I think the other important part is making sure that again, Enterprise Manager has sort of a cloud feel to it. So when you do patching and upgrades, it's near zero downtime which basically means that we do all the upgrades for you without having to bring EM down. Because even though it's a management tool, it's used for operations. So if there were downtime for patching Enterprise Manager for an hour, then for that hour, it's a blackout window for all the monitoring we do. And so we want to avoid that from happening, so now EM is upgrading, even though all the events are still happening and being processed, and then we do a very short switch. So that help our operations people to be more available. >> Yes. I mean, I've been talking about Automated Operations since, you know, lights out data centers since the eighties back in (laughs). I remember (indistinct) data center one-time lights out there were storage tech libraries in there and so... But there were a lot of unintended consequences around, you know, automated ops, and so people were sort of scared to go there, at least lean in too much but now with all this machine intelligence... So you're talking about ops automation, you mentioned the REST APIs, the Grafana plugins, the Cloud feel, is that what you're bringing to the table that's unique, is that unique to Oracle? >> Well, the integration with Oracle in that sense is unique. So one example is you mentioned the word migration, right? And so database migration tends to be something, you know, customers obviously take very serious. We go from one place, you have to move all your data to another place that runs in a slightly different environment. And so how do you know whether that migration is going to work? And you can't migrate a thousand databases manually, right? So automation, again, it's not just... Automation is not just to say, "Hey, I can do an upgrade of a system or I can make sure that nothing is done by hand when you patch something." It's more about having a huge fleet of servers and a huge fleet of databases. How can you move something from one place to another and automate that? And so with EM, you know, we start with sort of the prerequisite phase. So we're looking at the existing environment, how much memory does it need? How much storage does it use? Which version of the database does it have? How much data is there to move? 
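To show what "everything has a REST API" means for an automation engineer, here is a sketch of the calling pattern. The endpoint path, port, and response fields are hypothetical placeholders rather than actual Enterprise Manager routes, so the real resource names should be taken from the EM REST documentation; it assumes the Python requests library and basic authentication.

```python
import requests

EM_BASE = "https://em.example.com:7803/em/api"   # hypothetical base URL
AUTH = ("automation_user", "secret")             # placeholder credentials

# Hypothetical call: list managed database targets so a script can act on
# them (patching, compliance checks, etc.) instead of clicking through a UI.
resp = requests.get(f"{EM_BASE}/targets",
                    params={"type": "oracle_database"},
                    auth=AUTH,
                    verify="/path/to/em_ca.pem",
                    timeout=30)
resp.raise_for_status()

for target in resp.json().get("items", []):
    print(target.get("name"), target.get("status"))
```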
Then on the target side, we see whether the target can actually run in that environment. Then we go and look at, you know, how do you want to migrate? Do you want to migrate everything from a sort of a physical model, or do you want to migrate it from a logical model? Do you want to do it while your environment is still running, so that you start backing up the data to the target database while your existing production system is still running? Then we do a short switch afterwards, or you say, "No, I want to bring my database down. I want to do the migration and then bring it back up." So there's different deployment models that we can let our customers pick. And then when the migration is done, we have a ton of health checks that can validate whether the target database will run basically the exact same way. And then you can say, "I want to migrate 10 databases or 50 databases" and it'll work. It's all automated out of the box. >> So you're saying, I mean, you've looked at the prevailing way you've done migrations, historically you'd have to freeze the code and then migrate, and it would take forever, it was a function of the number of lines of code you had. And then a lot of times, you know, people would say, "We're not going to freeze the code" and then they would almost go out of business trying to merge the two. You're saying in 2021, you can give customers the choice, you can migrate, you could change the, you know, refuel the plane while you're in midair? Is that essentially what you're saying? >> That's a good way of describing it, yeah. So your existing database is running and we can do a logical backup and restore. So while transactions are happening we're still migrating it over, and then you can do a cutoff. It makes the transition a lot easier. But the other thing is that in the past, migrations would typically be two things. One is one database version to the next, more upgrades than migration. Then the second one is that old hardware with one CPU architecture is moving to newer hardware with a new CPU architecture. Those were sort of the typical migrations that you had prior to cloud. And from a sysadmin point of view or a DBA, it was all something you could touch, that you could physically touch the boxes. When you move to cloud, it's this nebulous thing somewhere in a data center that you have no access to. And that by itself creates a barrier to a lot of admins and DBAs from saying, "Oh, it'll be okay." There's a lot of concern. And so by baking in all these tests and the prerequisites and all the dashboards to say, you know, "This is what you use. These are the features you use. We know that they're available on the other side so you can do the migration." It helps solve some of these problems and remove the barriers. >> Well, that was just kind of the same vision when you guys came up with it. I don't know, quite a while ago now. And it took a while to get there with, you know, you had gen one and then gen two, but that is, I think, unique to Oracle. I know maybe some others that are trying to do that as well, but you were really the first to do that and so... I want to switch topics to talk about security. It's a hot topic. You guys, you know, like many companies really focused on security. Does Enterprise Manager bring any of that over? I mean, the prevailing way to do security oftentimes is to do scripts and write, you know, custom security policy scripts; scripts are fragile, they break, what can you tell us about security? >> Yeah.
So there's really two things, you know. One is, we obviously have our own best security practices. How we run a database inside Oracle for our own world, we've learned about that over the years. And so we sort of baked that knowledge into Enterprise Manager. So we can say, "Hey, if you install this way, we do the install and the configuration based on our best practice." That's one thing. The other one is the compliance standards, there's STIG, there's PCI and so on; those are the main ones. And so customers can do it their own way. They can download the documentation and do it manually. But what we've done is, and we've done this for a long time, is basically bake those policies into Enterprise Manager. So you can say, "Here's my database, this needs to be PCI compliant or it needs to be HIPAA compliant," and you push a button and then we validate the policies in those documents or in those prescribed files. And we make sure that the database is compliant with that. And so we take that manual work and all that stuff basically out of the picture, we say, "Push this button and we'll take care of it." >> Now, Wim, but just quick sidebar here, last time we talked, it was under a year ago. It was definitely during COVID and it's still during COVID. We talked about the state of the penguin. So I'm wondering, you know, what's the latest update for Linux, any Linux developments that we should be aware of? >> Linux, we're still working very hard on Autonomous Linux and that's something where we can really differentiate and solve a problem. Of course, one of the things to mention is that Enterprise Manager can do HIPAA compliance on Oracle Linux as well. So the security practices are not just for the database, it can also go down to the operating system. Anyway, so on the Autonomous Linux side, you know, management in Oracle Cloud's OS Management is evolving. We're spending a lot of time on integrating log capturing, so that if something were to go wrong, we can analyze a log file on the fly and send you a notification saying, "Hey, you know, there was this bug and here's the cause," and potentially a fix for it, to Autonomous Linux, and we're putting a lot of effort into that. And then also sort of IT operations management, where we can look at the different applications that are running. So you're running a web server on a Linux environment or you're running some Java processes, we can see what's running. We can say, "Hey, here's the CPU utilization over the past week or the past year." And then how is this evolving? Say, if something suddenly spikes we can say, "Well, that's normal, because every Monday morning at 10 o'clock there's a spike, or this is abnormal." And then you can start drilling this down. And this comes back to, over time, integration with, whether it's APM or Logging Analytics, we can tie the dots together, right? We can connect them, we can say, "Push this thing, then click on that link." We give you the information. So it's that integration with the entire cloud platform that's really happening now.
What can you tell us there? >> You know, it's a little bit of everything. You know, one is, of course, the idea that data center maintenance costs are very high. The other one is that when you run your own data center, you know, we obviously have this problem too, when you're a cloud vendor you have these problems, but we're in this business. But if you buy a server, then in three years that server basically is depreciated by new versions and you have to do migration stuff. And so one of the advantages with cloud is you push a button, you have a new version of the hardware, basically, right? So the refreshes happen on a regular basis. You don't have to go and recycle that yourself. Then the other part is the subscription model. It's a lot easier to pay for what you use, rather than having a data center where, whether it's used or not, you pay for it. So there's the cost advantages and predictability of what you need, you pay for, you can say, "Oh, next year we need to get X more EMs." And it's easier to scale that, right? We take care of dealing with capacity planning. You don't have to deal with capacity planning of hardware, we do that as the cloud vendor. So there's all these practical advantages you get from doing it remotely, and that's really what the appeal is. >> Right. So, as it relates to Enterprise Manager, did you guys have to like tear down the code and rebuild it? Was it an entire redo? How did you achieve that? >> No, no, no. So, Enterprise Manager keeps evolving and, you know, we changed the underlying technologies here and there, piecemeal, not sort of a wholesale replacement. And so in 13.5, there's a lot of new stuff, but it's built on the existing EM core. And so we're just, you know, improving certain areas. One of the things is, stability is important for our customers, obviously. And so by picking things piecemeal, we replace one engine rather than the whole thing. It allows us to introduce change more slowly, right? And then it's well-tested as a unit, and then we go on to the next thing. And then the other one is, as I mentioned earlier, a lot of the automation and extensibility comes from REST APIs. And so instead of basically rewriting everything we just provide a REST endpoint, and we make all the new features that we built automatically be REST enabled. So that makes it a lot easier for us to introduce new stuff. >> Got it. So if I want to poke around with this new version of Enterprise Manager, can I do that? Is there a place I can go, do I have to call a rep? How does that work? >> Yeah, so for information you can just go to oracle.com/enterprise manager. That's the website that has all the data. The other thing is if you're already playing with Oracle Cloud or you use Oracle Cloud, we have Enterprise Manager images in the marketplace. So if you have never used EM, you can go to Oracle Cloud, push a button in the marketplace, and you get a full Enterprise Manager installation in a matter of minutes. And then you can just start using that as well. >> Awesome. Hey, I wanted to ask you about, you know, people forget that you guys are the stewards of MySQL, and we've been looking at MySQL Database Cloud Service with HeatWave. Did you name that? And so I wonder if you could talk about what you're doing with regard to managing HeatWave environments? >> So, HeatWave is the MySQL option that helps with analytics, right? And it really accelerates MySQL usage by 100x, and in some cases more, and it's transparent to the customer.
So as a MySQL user, you connect with standard MySQL applications and APIs and SQL and everything. And the HeatWave part is all done within the MySQL server. The engine itself says, "Oh, this SQL query, we can offload to the backend HeatWave cluster," which then does in-memory operations and blazingly fast returns it to you. And so the nice thing is that it turns every single MySQL database into also a data warehouse, without any change whatsoever in your application. So it's been widely popular and it's quite exciting. I didn't personally name it, HeatWave, that was not my decision, but it sounds very cool. >> That's very cool. >> Yeah, it's a very cool name. >> We love MySQL, we started our company on the LAMP stack, so like many >> Oh? >> Yeah, yeah. >> Yeah, yeah. That's great. So, yeah. And so with HeatWave, or MySQL in general, we're basically doing the same thing as we have done for the Oracle Database. So we're going to add more functionality in our database management tools to also look at HeatWave. So whether it's doing things like Performance Hub or generic database management and monitoring tools, we'll expand that, you know, in the near future. >> That's great. Well, Wim, it's always a pleasure. Thank you so much for coming back on theCUBE and letting me ask all my Columbo questions. It was really a pleasure having you. (mumbling) >> It's good to be here. Thank you so much. >> You're welcome. And thank you for watching, everybody, this is Dave Vellante. We'll see you next time. (bright music)
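To make the HeatWave point above a bit more concrete, here is a minimal sketch of the kind of ordinary MySQL statements an application keeps issuing once a table has been loaded into the HeatWave secondary engine; the optimizer then decides whether to offload a query. The connection details, schema, and table names are hypothetical, and the secondary-engine loading statements reflect recent MySQL HeatWave releases, so verify the exact syntax against current documentation before relying on it.

```python
# Minimal sketch (not an official example): standard MySQL SQL issued from
# Python against a HeatWave-enabled instance. Names and endpoint are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="myheatwave.example.com",  # hypothetical endpoint
    user="analyst",
    password="...",
    database="sales",
)
cur = conn.cursor()

# One-time step: load the table into the HeatWave (RAPID) secondary engine.
# (Per recent MySQL HeatWave releases; check the current docs for your version.)
cur.execute("ALTER TABLE orders SECONDARY_ENGINE = RAPID")
cur.execute("ALTER TABLE orders SECONDARY_LOAD")

# From here on, the application issues ordinary MySQL SQL with no API changes;
# the MySQL optimizer decides whether to offload the query to HeatWave.
cur.execute(
    """
    SELECT o_custkey, COUNT(*) AS order_count, SUM(o_totalprice) AS total_spend
    FROM orders
    WHERE o_orderdate >= '2020-01-01'
    GROUP BY o_custkey
    ORDER BY total_spend DESC
    LIMIT 10
    """
)
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```

The design point Wim describes is that the acceleration is transparent: nothing in the query text or the client API refers to HeatWave, so an existing application gains the analytic speed-up without code changes.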
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Wim Coekaerts | PERSON | 0.99+ |
50 databases | QUANTITY | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
10 databases | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
Enterprise Manager | TITLE | 0.99+ |
New York | LOCATION | 0.99+ |
Enterprise | TITLE | 0.99+ |
MySQL | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
last year | DATE | 0.99+ |
two things | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Linux | TITLE | 0.99+ |
an hour | QUANTITY | 0.99+ |
Enterprise Manager | TITLE | 0.99+ |
Windows | TITLE | 0.99+ |
SQL | TITLE | 0.99+ |
100 x | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
one tool | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
second one | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
one example | QUANTITY | 0.98+ |
Enterprise Manager 13.5 | TITLE | 0.98+ |
one aspect | QUANTITY | 0.98+ |
one engine | QUANTITY | 0.97+ |
Wim | PERSON | 0.97+ |
gen one | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
once a year | QUANTITY | 0.97+ |
Oracle Cloud | TITLE | 0.97+ |
one way | QUANTITY | 0.97+ |
Grafana | TITLE | 0.97+ |
Barron | PERSON | 0.97+ |
first services | QUANTITY | 0.96+ |
HeatWave | ORGANIZATION | 0.96+ |
past year | DATE | 0.96+ |
one-time | QUANTITY | 0.96+ |
gen two | QUANTITY | 0.96+ |
one place | QUANTITY | 0.96+ |
past week | DATE | 0.96+ |
two ways | QUANTITY | 0.95+ |
Andy Mendelsohn, Oracle | CUBE Conversation, March 2021
>> The cloud has dramatically changed the way providers think about delivering database technologies. Not only has cloud first become a mandate for many if not most, but customers are demanding more capabilities from their technology vendors. Examples include a substantially similar experience for cloud and on-prem workloads, increased automation, and a never-ending quest for more secure platforms. Broadly, there are two prevailing models that have emerged. One is to provide highly specialized database products that focus on optimizing for a specific workload signature. The other end of the spectrum combines technologies in a converged platform to satisfy the needs of a much broader set of use cases. And with me to get a perspective on these and other issues is Andy Mendelsohn, Executive Vice President of Oracle, the world's leading database company. Andy leads Database Server Technologies. Hello, Andy, thanks for coming on. >> Hey, Dave, glad to be here. >> Okay, so we saw the recent announcements, this is kind of your baby, around next generation Autonomous Data Warehouse. Maybe you could take us through the path you took from the original cloud data warehouses to where we are today. >> Yeah, when we first brought Autonomous Database out, we were basically a second generation technology at that point. You know, we decided that what customers wanted was, at the push of a button, provision the really powerful Oracle Database technology that they've been using for years, and we did that with Autonomous Database. And beyond that, we provided a very unique capability around self-tuning, self-driving of the database, which is something the first generation vendors didn't provide. And this is really important, because customers today, you know, developers and data analysts, can at the push of a button build out their data warehouses, but you know, they're not experts in tuning. And so what we thought was really important is that customers get great performance out of the box, and that's one of the really unique things about Autonomous Data Warehouse, Autonomous Database in particular. And then this latest generation that we just came out with also answers the questions we got from, you know, the data analysts and developers. They said, you know, it's really great that I can press a button and provision this very powerful data warehouse infrastructure or database infrastructure from Oracle, but you know, if I'm an analyst, I want data. You know, so it's still hard for me to go and, you know, get data from various data sources, transform them, clean them up and get them to a place where I can start querying the data. Now I still need data engineers to help me do that. And so what we've done in the new release, we said, okay, we want to give data analysts and data engineers, data scientists, developers a true self-service experience where they can do their job completely without bringing in any engineers from their IT organization. And so that's what this new version is all about. >> Yeah, awesome. I mean, look, years ago you guys identified the IT labor problem and you've been focused on R&D, putting it in your R&D to solve that problem for customers, so we're really starting to see that hit now. Now, Gartner recently did some analysis, they ranked and rated some of the more popular cloud databases, and Oracle did very well, I mean particularly in operational categories, I mean on the operational side and the mission critical stuff, you smoked everybody. We had Marc Staimer and
David Floyer on, and our big takeaways were that you're again dominating in those mission critical workloads, that dominance continues, but your approach of converging functionality really differs from some others that we saw. I mean, obviously when you get high ratings from Gartner you're pretty stoked about that, but what do you think contributed to those rankings, and what are you finding specifically in customer interactions? >> Yeah, so Gartner does a lot of its analysis based on talking to customers, finding out how these products that sound great on paper actually work in practice, and I think that's one of the places where Oracle Database technology really shines. It solves real-world problems, it's been doing it for a long time, and as we've moved that technology into the cloud, you know, that continues, the differentiation we've built up over the years really stands out. You know, you look at, like, Amazon's databases, they generally take some open source technology that isn't that new, it could be 30 years old, 25 years old, and they put it up on the cloud and they say, oh, it's cloud native, it's great. But in fact it's the same old, you know, technology that doesn't really compete, you know, a decade behind Oracle's database technology. So I think the Gartner analysis really showed that sort of thing quite clearly. >> Yeah, so let's talk about that a little bit, because obviously I've learned a lot, you know, one of the things I've learned over the last many years of following this business: a lot of ways to skin a cat. And cloud database vendors, if you think about, you mentioned AWS, you know, look at Snowflake, kind of a right tool for the right job approach. They're going to say that their specialty databases, they're focused, are better than your converged approach, which they make, you know, think of as a, you know, Swiss Army knife. What's your take on that? >> Yeah, well, the converged approach is something of course we've been working on for a long time. So the idea is pretty simple, you know, think about your smartphone. You know, if you can think back, you know, over 10 years ago you used to have, you know, a camcorder and a camera and a messaging device and also a dumb phone device. All those different devices got converged into what we now call the smartphone. Why did the smartphone win? It's just simply much more productive for you to carry one device around that is actually best of breed in all the different categories, instead of lots of separate devices. And that's what we're doing with converged database. Over the years, you know, we've been able to build out technologies that are really good at transactions, at analytics for data warehousing, now we're working on, you know, JSON technologies, graph technologies. The other vendors basically can't do this. I mean, it's much easier to build a specialty database that does one thing than to build out a converged database that does n things really well, and that's what we've been doing for years. >> And again, it's based on technology that you've invested in for quite a long time, and it's something that I think customers and developers and analysts find to be a much more productive way of doing their jobs. It's very unique and not common at all to see a technology that's been around as long as Oracle Database morph into a more modern platform. I mean, you mentioned AWS leverages open source a lot, you know, Snowflake would say, okay, hey, we are born in the cloud, and they are, I think Google
BigQuery would be another good example. But that notion of, boy, I want to get your take on this, born in the cloud. Those folks would say, well, we're superior to Oracle's because, you know, they started, you know, decades ago, not necessarily, you know, native cloud services. How have you been able to address that? I know, you know, cloud first is kind of the buzzword, but how have you made that sort of transparent to users, or irrelevant to users, because you are cloud first? Maybe you could talk about how you've been able to achieve that and convince us that you actually really are cloud native. >> Now, you know, one of the things we sort of like pointing out is that Oracle very uniquely has had this scale out technology for running all kinds of workloads, not just analytic workloads, which is what you see out in the cloud there, but we can also scale out transaction processing workloads. Now that was another one of the reasons we do so well in, for example, the Gartner analysis for operational workloads. And that technology is really valuable as we went to cloud, it lets us do some really unique things, and the most obvious unique thing we have is something we like to call, you know, cloud native, you know, instant elasticity. And so with our technology, if you want to provision, you know, some amount of compute to run your workloads, you can provision exactly what you need. You know, if you need 17 CPUs to get your job done, you do 17 CPUs when you provision your Autonomous Database. Our competitors who claim to be born in the cloud, like Snowflake and Amazon, they still use this archaic way of provisioning servers based on shapes. You know, Snowflake, you know, says which shape cluster do you want, you want 16, you want 32, you want 64? It goes up by a power of 2, which means, if you compare that to what Oracle does, you have to provision up to like twice as much CPU as you really need. So if you really need 17, they make you provision 32. If you really need 33, they make you provision 64.
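A small back-of-the-envelope sketch of the over-provisioning effect Mendelsohn is describing: with power-of-two shapes you round up to the next shape size, while per-CPU provisioning allocates exactly what you ask for. The shape sizes below are just the examples from the conversation, not any vendor's actual catalog.

```python
# Illustrative arithmetic only; shape sizes mirror the power-of-two examples
# mentioned in the conversation, not a real price list.
SHAPES = [16, 32, 64, 128]

def shape_based(cpus_needed: int) -> int:
    """Round up to the smallest power-of-two shape that fits the request."""
    return next(s for s in SHAPES if s >= cpus_needed)

def granular(cpus_needed: int) -> int:
    """Provision exactly the number of CPUs requested."""
    return cpus_needed

for need in (17, 33, 51):
    shape = shape_based(need)
    waste = (shape - need) / need * 100
    print(f"need {need:3d} CPUs -> shape model allocates {shape:3d} "
          f"({waste:.0f}% over), granular model allocates {granular(need)}")
```

Running this prints 32 for a 17-CPU request (about 88% extra) and 64 for a 33-CPU request (about 94% extra), which is the gap the per-CPU model avoids.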
So this is not a cloud native experience at all, it's an archaic way of doing things. And we like to point out, with our instant elasticity, you know, we can go from 17 to 18 to 19, you know, whatever you want. Plus we have something called auto scale, so you can set your baseline to be 17, let's say, but we will automatically, based on your workload, scale you up to three times that, so in this case 51. And because of that true elasticity we have, we are really the only ones that can deliver true pay as you go, kind of, you know, just pay for what you need kind of capability, which is certainly what Amazon was talking about when they first called their cloud elastic. But it turns out for database services these guys still do this archaic thing with shapes. So that's a really good example of where we're quite a bit better than the other guys, and it's much more cloud native than the other guys. >> I want to follow up on that, just stay here for a second, because you're basically saying we have better granularity than the so-called cloud native guys. Now you mentioned Snowflake, right, you got the shapes, you got to choose which shape you want, and it sounds like Redshift is the same. And of course I know the way in which Amazon separates compute from storage is largely a tiering exercise, so it's not as smooth as you might expect, but nonetheless it's good. How is it that you were able to achieve this with a database that was, you know, born, you know, many decades ago? I mean, what is it, from a technical standpoint, an R&D standpoint, that you were able to do? I mean, did you design that in the 1980s? How did you get here? >> Yeah, well, it's a combination of interesting technologies. So Autonomous Database, you know, it has the Oracle Database software, and that software is running on a very powerful optimized infrastructure for database, based on the Exadata technology that we've had on-prem for many years. We brought that to the cloud, and that technology is a scale-out infrastructure that supports, you know, thousands of CPUs. And then we use our multi-tenant technology, which is a way of sharing large infrastructures amongst separate clients, and we divide it up dynamically on the fly. So if there's thousands of CPUs, you know, this guy wants 20 and this one wants 30, we divide it up and give them exactly what they need, and if they want to grow, we just take some extra CPUs that are in reserve and we give it to them instantly. And so that's a very different way of doing things than a shape based approach, where, you know, what Snowflake and Amazon do under the covers, they give you a real physical server, you know, or a cluster, and that's how they provision. If you want to grow, they give you another big physical cluster, which takes a long time to get the data populated, to get it working. We just have that one infrastructure that we're sharing among lots of users, and we just give you a little extra capacity, it's done instantly, there's no need for data to be moved to populate the new clusters that, you know, Snowflake or Amazon are provisioning for you. So it's a very different way of doing things. >> And you're able to do that because of the tight integration between, you mentioned Exadata, tight integration between the hardware and software. We got David Floyer, he calls it the iPhone of enterprise, sometimes you get some grief for that, but it's not a bad metaphor. But is that really the sort of secret? >> Well, the big
secret under the covers is this, you know, Exadata technology, our Real Application Clusters scale out technologies, our multi-tenant technologies. So these are things we've been working on for a long time, and they are very mature, very powerful technologies, and they really provide very unique benefits in a cloud world where people want things to happen instantly and they want it to work well for any kind of workload. You know, that's why we talk about being converged, we can do mixed workloads, you can do transactions and analytics all on the same data. The other guys can't do that, you know, they're really good at, like you said, a narrow workload, like I can do analytics or I can do graph, you know, I can do JSON, but they can't really do the combination, which is what real world applications are like, they're not pure one thing. >> Right, thank you for that. So one of the questions people want to know is, can Oracle attract, you know, new customers that aren't existing Oracle customers? So maybe you could talk about that, and, you know, why should somebody who's not an existing Oracle customer think about using Autonomous Database? >> Yeah, that's a really good question. You know, Oracle, if you look at our customer base, has a lot of really large enterprises, you know, the biggest banks and the biggest telcos, you know, they run Oracle, they run their businesses on Oracle, and these guys are sort of the most conservative of the bunch out there, and they are moving to cloud at a somewhat slower rate than the smaller companies. And so if you look at who's using Autonomous Database now, it's actually the smaller companies, you know, the same type of people that first decided Amazon was an interesting cloud 10 years ago, they're also using our technologies. And it's for the same reason, they're finding, you know, they don't have large IT organizations, they don't have large numbers of engineers to engineer their infrastructure, and that's why cloud is so attractive to them. And Autonomous Database on top of cloud is really attractive as well, because, you know, information is the lifeblood of every organization, and if they can empower their analysts to get their job done without lots of help from IT organizations, they're going to do it. And you know, that's really what's made Autonomous Database really interesting, you know, the whole self-driving nature is very attractive to the smaller shops that don't have a lot of sophisticated IT expertise. >> All right, let's talk about developers. You guys are the stewards of the Java community, so obviously, you know, big, probably, you know, the biggest, most popular programming language out there. But when I think of developers I think of guys in hoodies pounding away, and when I think of Oracle developers I might think of maybe an app dev team inside of maybe some of those large customers that you talked about. But why would developers and/or analysts be interested in using Oracle as opposed to some of those more focused, narrow use databases that we were talking about earlier? >> Yeah, so if you're a developer, you want to get your job done as fast as possible, and so having a database that gives you the most productive application development experience is important to you. And so, you know, we've been talking about converged database off and on, so if I'm a developer, I have a given job to do, a converged database that lets me do a combination of analytics and transactions and do a little JSON and a little graph all in one is a much more productive place to
go. Because if I don't have something like that, then I'm stuck taking my application and breaking it up into pieces, you know, this piece I'm going to run on, say, Aurora on Amazon, and this piece I have to run on the graph database, and here's some JSON, I've got to run that on some document database, and then I have to move the data around. The data gets sort of fragmented between these databases, and I have to do all this data, you know, integration and whatever. With a converged database I have a much simpler world where I can just use one technology stack, I can get my job done, and then I'm future proof against change. You know, requirements change all the time, so you build the initial version of the application and your users say, you know, this is not what I want, I want something else, and it turns out that something else often is, well, I want analytics, and you used something like a, you know, a document store technology that has really poor analytic capabilities, and so then you have to take that data and you have to move it to another database. And so with our converged approach you don't have to do that, you know, you're already in a place where everything works, everything that you need, or could possibly need in the future, is going to be there as well. And so for developers I think, you know, converged is the right way to go. Plus, for people who are what we call citizen developers, you know, like the data analysts, they dabble, they write a little code occasionally, but they're really after getting value out of the data, we have this really fabulous no code, low code tool called APEX. And APEX is again a very mature technology, it's been around for years, and it lets somebody who's just a data analyst, who knows a little SQL but doesn't want to write code, get their job done really fast. And we've published some benchmarks on our website showing, you know, basically you can get the job done 20 to 40 times faster using a no code, low code tool like APEX versus something like, you know, just writing lots of traditional code. >> I'm glad you brought up APEX. We recently interviewed one of your former colleagues, Amit Zavery, and all he would talk about is low code, no code. And then in the APEX announcement you said something to the effect of coding should be the exception, not the rule. Did you mean that? What do you mean by that? >> Yeah, so APEX is a tool that people use with our database technology for building what we call data driven applications. So if you've got a bunch of data and you want to get some value out of it, you want to build maybe dashboards or more sophisticated reports, APEX is an incredible tool for doing that. And it's modern, you know, it builds applications that look great on your smartphone, and it automatically, you know, renders that same user interface on a bigger device, like a laptop or desktop device, as well. And it's one of these things that the people that use it just go bonkers with it, it's a viral technology, they get really excited about how productive they've been using it, and they tell all their friends. And I think we decided, I guess about a year ago when we came up with this APEX service, that, you know, we really want to start going bigger on the marketing around it, because it's very unique, nobody else has anything quite like it, and it again just adds value to the whole developer productivity story around an Oracle database. So that's why we have the APEX service now, and we also have APEX available with every Oracle database on the cloud. >> I want to
ask you about some of the features around 21c, there are a lot of them you announced earlier this year. Maybe you could tease out some of the top things that we should be paying attention to in 21c. >> Yeah, sure. So one of the ways to look at 21c is we're continuing down this path of a converged database, and so one of the marquee features in 21c is something we call blockchain tables. So what is blockchain? Well, blockchain was this technology that's under the covers behind Bitcoin, you know, it's a way of creating a tamper-proof data store that was used by the original Bitcoin algorithms. Well, developers actually like having tamper-proof data objects in databases too, you know. And so what we decided to do was say, well, if I create a SQL table in an Oracle database, what if there's a new option that just says I want that table implemented using blockchain technology, to make the table tamper-proof and fully audited, et cetera. And so we just did that, and so in 21c you can now get basically another feature of the converged database that says, you know, give me a SQL table, I can do everything, I can query it, I can insert rows into it, but it's tamper-proof, I can't ever update it, I can't delete rows from it. Amazon did their usual thing, they took again some open source technology and they said, hey, we got this great thing called Quantum Ledger Database and it does blockchain tables, but if you want to do blockchain tables in any of their other databases, you're out of luck, they don't have it, you have to go move the data into this new thing. And it's again showing sort of the problem with their proprietary approach of having specialty databases, versus just having one converged database that does it all. So that's the blockchain table feature. We did a bunch of other things. The one I think is worth mentioning the most is support for persistent memory. So a lot of people out there haven't noticed this very interesting technology that Intel shipped a couple years ago called Optane data center memory, and what it is, it's basically a hybrid of flash memory, which is persistent, and standard DRAM, which is not persistent, meaning you can't store a database in DRAM. And so with this persistent memory, you can basically have a database stored persistently in memory all the time. And so it's a very innovative new technology, and from a database standpoint it's a very disruptive technology to the database market, because now you can have an in-memory database basically, period, all the time, 24/7.
And so 21c is the first database out there that has native support for this new kind of persistent memory technology, and we think it's really important, so we're actually making it available to our 19c customers as well. And, you know, that's another technology I'd call out that we think is very unique, we're way ahead of the game there, and we're going to continue investing moving forward in that space as well. >> Yeah, so that layer in between DRAM and persistent flash, that's a great innovation, and game changing from a performance standpoint and actually the way you write applications. But I gotta ask you, all the analysts were on with Juan recently, Juan Loaiza, and listening to that introduction of blockchain, everybody wants to know, is Safra going to start putting Bitcoin on the Oracle balance sheet? I'm about to get that leap. >> Yeah, that's a good question, who knows. Yeah, I can't comment on speculation. >> Ah, that would be interesting. Okay, last question, then we got to go. Look, the narrative on Oracle is you're expensive and you're mean, you know, it's hard to do business with. Do you care? Are you doing things to maybe change that perception in the cloud? >> Yeah, I think we've made a very conscious decision that as we move to the cloud, we're offering a totally new business model on the cloud that is a cloud-native model. You pay for what you use, you have everyday low prices, you don't have to negotiate with some salesman for months to get a good price. So yeah, we really like the message to get out there that, those of you who think you know what Oracle's all about, and how it might be to work with Oracle from your on-premises days, you should really check out how Oracle is now on the cloud. We have this Autonomous Database technology, really easy to use, really simple, any analyst can get value out of the data without any help from any other engineers. It's very unique, it's the same technology you're used to, but now it's delivered in a way that's much easier to consume and much lower cost. And so yeah, you should definitely take a look at what we've got out there on the cloud, and it's all free to try out, we've got this free tier, you can provision free VMs, free databases, free APEX, whatever you want, and try it out and see what you think. >> Well, thanks for that. I was kidding about the mean, I've got a lot of friends at Oracle, some relatives as well. And thanks, Andy, for coming on theCUBE today, it's really great to talk to you. >> Yeah, it's my pleasure. >> And thanks for watching, this is Dave Vellante, we'll see you next time.
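As a concrete illustration of the 21c blockchain table feature Mendelsohn describes, here is a minimal sketch of creating, inserting into, and querying such a table from Python using the python-oracledb driver. The connection details and table definition are hypothetical, and the retention clauses are shown as they appear in the Oracle Database 21c documentation; consult the docs for the exact options your version supports.

```python
# Minimal sketch (assumes an Oracle Database 21c instance and the python-oracledb
# driver); connection details, schema, and table names are hypothetical.
import oracledb

conn = oracledb.connect(user="demo", password="...", dsn="db21c.example.com/orclpdb")
cur = conn.cursor()

# A blockchain table is created like a normal SQL table, with extra retention
# and hashing clauses that make rows tamper-proof and non-updatable.
# (Clause values here are illustrative; check the 21c docs for allowed options.)
cur.execute("""
    CREATE BLOCKCHAIN TABLE payment_ledger (
        payment_id   NUMBER,
        amount       NUMBER(12, 2),
        recorded_at  TIMESTAMP DEFAULT SYSTIMESTAMP
    )
    NO DROP UNTIL 31 DAYS IDLE
    NO DELETE UNTIL 16 DAYS AFTER INSERT
    HASHING USING "SHA2_512" VERSION "v1"
""")

# Inserts and queries work exactly like any other table...
cur.execute("INSERT INTO payment_ledger (payment_id, amount) VALUES (:1, :2)",
            [1, 99.50])
conn.commit()
cur.execute("SELECT payment_id, amount, recorded_at FROM payment_ledger")
print(cur.fetchall())

# ...but updates and deletes are rejected by the database, which is the point
# of the feature: rows become tamper-proof once committed.
# cur.execute("UPDATE payment_ledger SET amount = 0")  # would raise an error

cur.close()
conn.close()
```

The appeal Mendelsohn points to is that this is just another option on a SQL table in the converged database, so existing tools, drivers, and queries continue to work against it unchanged.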
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Andy Mendelsohn | PERSON | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
March 2021 | DATE | 0.99+ |
20 | QUANTITY | 0.99+ |
gartner | ORGANIZATION | 0.99+ |
oracle | ORGANIZATION | 0.99+ |
apex | TITLE | 0.99+ |
juan loyza | PERSON | 0.99+ |
first database | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
david floyer | PERSON | 0.99+ |
two prevailing models | QUANTITY | 0.99+ |
twice | QUANTITY | 0.98+ |
dave vellante | PERSON | 0.98+ |
today | DATE | 0.98+ |
first generation | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
thousands of cpus | QUANTITY | 0.98+ |
decades ago | DATE | 0.98+ |
40 times | QUANTITY | 0.98+ |
51 | OTHER | 0.97+ |
25 years old | QUANTITY | 0.97+ |
30 | QUANTITY | 0.96+ |
first | QUANTITY | 0.96+ |
andy mendelson | PERSON | 0.96+ |
17 | OTHER | 0.96+ |
1980s | DATE | 0.96+ |
second generation | QUANTITY | 0.96+ |
33 | OTHER | 0.96+ |
one | QUANTITY | 0.96+ |
30 years old | QUANTITY | 0.96+ |
json | ORGANIZATION | 0.95+ |
earlier this year | DATE | 0.95+ |
one device | QUANTITY | 0.94+ |
amit xavery | PERSON | 0.94+ |
mark stamer | PERSON | 0.92+ |
ORGANIZATION | 0.91+ | |
years | DATE | 0.91+ |
32 | OTHER | 0.9+ |
about a year ago | DATE | 0.9+ |
oracle | TITLE | 0.9+ |
over 10 years ago | DATE | 0.89+ |
safra | ORGANIZATION | 0.88+ |
16 | OTHER | 0.88+ |
one thing | QUANTITY | 0.87+ |
many decades ago | DATE | 0.87+ |
lot of people | QUANTITY | 0.83+ |
Sandy Carter, AWS | CUBE Conversation, February 2021
(upbeat music) >> Hello and welcome to this Cube conversation. I'm John Furrier, your host of theCube here in Palo Alto, California. We're here in 2021 as we get through the pandemic and vaccine on the horizon all around the world. It's great to welcome Sandy Carter, Vice President of Partners and Programs with Amazon Web Services. Sandy, great to see you. I wanted to check in with you for a couple of reasons. One is just get a take on the landscape of the marketplace as well as you've got some always good programs going on. You're in the middle of all the action. Great to see you. >> Nice to see you too, John. Thanks for having me. >> So one of the things that's come out of this COVID and as we get ready to come out of the pandemic you starting to see some patterns emerging, and that is cloud and cloud-native technologies and SAS and the new platforming and refactoring using cloud has created an opportunity for companies. Your partner group within public sector and beyond is just completely exploding and value creation. Changing the world's society is now accelerated. We've covered that in the past, certainly in detail last year at re:Invent. Now more than ever it's more important. You're doing some pretty cutting things. What's your update here for us? >> Well, John, we're really excited because you know the heartbeat of countries of the United States globally are small and medium businesses. So today we're really excited to launch Think Big for Small Business. It's a program that helps accelerate public sector serving small and diverse partners. So you know that these small and medium businesses are just the engine for inclusive growth and strategy. We talked about some stats today, but according to the World Bank, smaller medium business accounts for 98% of all companies, they contribute a 50% of the GDP, two-thirds of the employment opportunities, and the fastest growing areas are in minority owned businesses, women, black owned, brown owned, veteran owned, aborigine, ethnic minorities who are just vital to the economic role. And so today this program enables us as AWS to support this partner group to overcome the challenges that they're seeing today in their business with some benefits specifically targeted for them from AWS. >> Can I ask you what was the driver behind this? Obviously, we're seeing the pandemic and you can't look at on the TV or in the news without seeing the impact that small businesses had. So I can almost imagine that might be some motivation, but what is some of the conversations that you're having? Why this program? Why think Big for Small Business pilot experience that you're launch? >> Well, it's really interesting. The COVID obviously plays a role here because COVID hit small and medium businesses harder, but we also, you know, part of Amazon is working backwards from the customers. So we collected feedback from small businesses on their experience in working with us. They all want to work with us. And essentially they told us that they need a little bit more help, a little bit more push around programmatic benefits. So we listened to them to see what was happening. In addition, AWS grew up with a startup community. That's how we grew up. And so we wanted to also reflect our heritage and our commitment to these partners who represent such a heartbeat of many different economies. That was really the main driver. And today we had, John, one of our follow the sun. So we're doing sessions in Latin America, Canada, the US, APJ, Europe. 
And if you had heard these partners today it was just such a great story of how we were able to help them and help them grow. >> One of the cultural changes that we've been reporting on SiliconANGLE, you're seeing it all over the world is the shift in who's adopting, who's starting businesses. And you're seeing, you mentioned minority owned businesses but it goes beyond that. Now you have complete diverse set entrepreneurial activity. And cloud has generated this democratization wave. You starting to see businesses highly accelerated. I mean, more than ever, I've never seen in the entrepreneurial equation the ability to start, get started and get to success, get to some measurable MVP, minimal viable product, and then ultimately to success faster than ever before. This has opened up the doors to anyone to be an entrepreneur. And so this brings up the conversation of equality in entrepreneurship. I know this is close to your heart. Share your thoughts on this big trend. >> Yeah, and that's why this program it's not just a great I think achievement for AWS, but it's very personal to the entire public sector team. If you look at entrepreneurs like, Lisa Burnett, she's the President and Managing Director of DLZP. They are a female owned minority owned business from Texas. And as you listen to her story about equity, she has this amazing business, migrating Oracle workloads over to AWS, but as she started growing she needed help understanding a little bit more about what AWS could bring to the table, how we could help her, what go to market strategies we could bring, and so that equalizer was this program. She was part of our pilot. We also had John Wieler on. He is the Vice President of Biz Dev from IMT out of Canada. And he is focused on government for Canada. And as a small business, he said today something that was so impactful, he goes, "Amazon never asked me if I'm a small business. They now treat me like I'm big. I feel like I'm one of the big guys and that enables me grow even bigger." And we also talked today to Juan Pablo De Rosa. He's the CEO of Technogi. And it's a small business in Mexico. And what do they do? They do migrations. They just migrate legacy workloads over. And again, back to that equality point you made, how cool was it that here's this company in Mexico, and they're doing all these migrations and we can help them even be more successful and to drive more jobs in the region. It's a very equalizing program and something that we're very proud of. >> You know what I love about your job and I love talking to you about this (Sandy laughs) because it's so much fun. You have a global perspective. It's not just United States. There's a global perspective. This event you're having this morning that you kicked off with is not just in the US, it's a follow the sun kind of a community. You got quite the global community developing there, Sandy. Can you share some insight behind the curtain, behind AWS, how this is developing? How you're handling it? What you're doing to nurture and grow that community that really wants to engage with you because you are making them feel big because (laughs) that's what cloud does. It makes them punch above their weight class and innovate. >> Yeah, that's very correct. >> This is the core thesis of Amazon. So you've got a community developing, how are you handling it? How are you building it? How are you nurturing it? What are your thoughts? >> You know what, John? You're so insightful because that's actually the goal of this program. 
We want to help these partners. We want to help them grow. But our ultimate goal is to build that small and medium business community that is based on AWS. In fact, at re:Invent this year, we were able to talk about MST which is based out of Malaysia, as well as cloud prime based out of Korea. And just by talking about it, those two CEOs reached out to each other from Korea and Malaysia and started talking. And then we today introduced folks from Mexico, and Canada, and the US, and Bulgaria. And so, we really pride ourselves on facilitating that community. Our dream here, our vision here is that we would build that small business community to be much more scalable but starting out by making those connections, having that mentoring that will be built in together, doing community meetings that advisory meetings together. We piloted this program in 2020. We already have 37 partners. And they told me as I met with them, they already feel like this small and medium business community or family. Family was the word they used, I think, moving forward. So you nailed it. That's the goal here is to create that community where people can share their thoughts and mentor each other. >> And it's on the ground floor too. It's just beginning. I think it's going to be so much larger. And to piggyback off that I want to also point out and highlight and get your reaction to is the success that you've been having and Amazon Web Services in general but mainly in the public sector side with the public private partnership. You're seeing this theme emerge really been a big way. I've been enclose to it and hosting and being interviewing a lot of folks at that, your customers whether it's cybersecurity in space, the Mars partnership that you guys just got on Mars with partnerships. So it's a global and interstellar soon to be huge everywhere. But this is a big discussion because as from cybersecurity, geopolitical to space, you have this partnership with public private because you can't do it alone. The public markets, the public sector cannot do it alone. And it pretty much everyone's agreeing to that. So this dynamic of public sector and partnering private public is a pretty big deal. Unpack that for us real quickly. >> Yeah, it really is a big deal. And in fact, we've worked with several companies. I'll just use one sector. Public Safety and Disaster Response. We just announced the competency at re:Invent for our tech partners. And what we found is that when communities are facing a disaster, it really is government or the public sector plus the private sector. We had many solutions where citizens are providing data that helps the government manage a disaster or manage or help in a public safety scenario to things like simple things you would think, but in one country they were looking at bicycle routes and discovered that certain bicycle routes there were more crashes. And so one of our partners decided to have the community provide the data. And so as they were collecting that data, putting in the data lake in AWS, the community or the private sector was providing the data that enabled the application, our Public Sector Partner application to identify places where bicycle accidents happen most often. And I love the story, John, because the CEO of the partner told me that they measured their results in terms of ELO, I'm sorry, ROL, Return on Lives not ROI, because they save so many lives just from that simple application. >> Yeah, and the data's all there. 
You just saw on the news, Tiger Woods got into a car accident and survived. And as it turns out to your point that's a curve in the road where a lot of accidents happen. And if that data was available that could have been telegraphed right into the car itself and slow down, kind of like almost a prevention. So he just an example of just all the innovation possibilities that are abound out there. >> And that's why we love our small businesses and startups too, John. They are driving that innovation. The startups are driving that innovation and we're able to then open access to that innovation to governments, agencies, healthcare providers, space. You mentioned Mars. One of our partners MAXR helped them with the robotics. So it's just a really cool experience where you can open up that innovation, help create new jobs through these small businesses and help them be successful. There's really nothing, nothing better. >> Can I ask you- >> Small, small is beautiful. >> Can I asked you a personal question on this been Mars thing? >> Yeah. >> What's it like at Amazon Web Services now because that was such a cool mission. I saw Teresa Carlson, had a post on the internet and LinkedIn as well as her blog post. You had posted a picture of me and you had thumbs were taking an old picture from in real life. Space is cool, Mars in particular, everyone's fixated on it. Pretty big accomplishment. What's it like at Amazon? People high five in each other pretty giddy, what's happening? >> Oh yeah. The thing about Amazon is people come here to change the world. That's what we want to do. We want to have an impact on history. We want to help make history. And we do it all on behalf of our customers. We're innovating on behalf of our customers. And so, I think we get excited when our customers are successful, when our partners are successful, which is why I'm so excited right now, John, because we did that session this morning, and as I listened to Juan Pablo Dela Rosa, and just all the partners, Lisa, John, and just to hear them say, "You helped us," that's what makes us giddy. And that's what makes us excited. So it could be something as big as Mars. We went to Mars but it's also doing something for small businesses as well. It runs the spectrum that really drives us and fuels that energy. And of course, we've got great leadership as you know, because you get to talk to Andy. Andy is such a great leader. He motivates and he inspires us as well to do more on behalf of our customer. >> Yeah, you guys are very customer focused and innovative which is really the kind of the secret sauce. I love the fact that small medium sized business can also be part of the solutions. And I truly believe that, and why I wanted us to promote and amplify what you're working on today is because the small medium size enterprise and business is the heart of the recovery on a global scale. So important and having the resources to do that, and doing it easily and consuming the cloud so that they can apply the value. It's going to change lives. I think the thing that people aren't really talking much about right now, is that the small medium size businesses will be the road to recovery. >> I agree with you. And I love this program because it does promote diversity, something that Amazon is very much focused on. It's global, so it has that global reach and it supports small business, and therefore the recovery that you talked about. So it is I think an amazing emphasis on all the things that really matter now. 
During COVID, John, we learned about what really matters, and this program focuses on those things and helping others. >> Well, great to see you. I know you're super busy. Thanks for coming on and sharing the update, and certainly talking about the small mid size business program. I'm sure you're busy getting ready to give the awards out to the winners this year. Looking forward to seeing that come up soon. >> Great. Thank you, John. And don't forget if you are a small and medium business partner 'cause this program is specifically for partners, check out Think Big for Small Business. >> Think Big for Small Business. Sandy Carter, here on theCube, sharing our insight, of course all the updates from the worldwide public sector partner program, doing great things. I'm John Furrier for theCube. Thanks for watching. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Andy | PERSON | 0.99+ |
World Bank | ORGANIZATION | 0.99+ |
Lisa Burnett | PERSON | 0.99+ |
John | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Sandy Carter | PERSON | 0.99+ |
Mexico | LOCATION | 0.99+ |
Sandy Carter | PERSON | 0.99+ |
Juan Pablo De Rosa | PERSON | 0.99+ |
Canada | LOCATION | 0.99+ |
2020 | DATE | 0.99+ |
Texas | LOCATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Malaysia | LOCATION | 0.99+ |
John Wieler | PERSON | 0.99+ |
Sandy | PERSON | 0.99+ |
Teresa Carlson | PERSON | 0.99+ |
50% | QUANTITY | 0.99+ |
Technogi | ORGANIZATION | 0.99+ |
Korea | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
98% | QUANTITY | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
37 partners | QUANTITY | 0.99+ |
Lisa | PERSON | 0.99+ |
February 2021 | DATE | 0.99+ |
today | DATE | 0.99+ |
2021 | DATE | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Juan Pablo Dela Rosa | PERSON | 0.99+ |
last year | DATE | 0.99+ |
United States | LOCATION | 0.99+ |
MAXR | ORGANIZATION | 0.99+ |
Bulgaria | LOCATION | 0.99+ |
pandemic | EVENT | 0.99+ |
Latin America | LOCATION | 0.99+ |
two-thirds | QUANTITY | 0.99+ |
Mars | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Tiger Woods | PERSON | 0.99+ |
one country | QUANTITY | 0.98+ |
Biz Dev | ORGANIZATION | 0.98+ |
APJ | LOCATION | 0.98+ |
One | QUANTITY | 0.98+ |
Mars | ORGANIZATION | 0.98+ |
DLZP | ORGANIZATION | 0.97+ |
Oracle | ORGANIZATION | 0.97+ |
two CEOs | QUANTITY | 0.96+ |
External Data | Beyond.2020 Digital
>> Welcome back, and thanks for joining us for our second session, External Data: Your New Leading Indicators. We'll be hearing from industry leaders as they share best practices and challenges in leveraging external data. This panel will be a true conversation on the art of the possible. All right, let's get to it. Today we're excited to be joined by ThoughtSpot's Chief Data Strategy Officer Cindy Howson, Deloitte's Chief Data Officer Juan Tello, the founder and CEO of Eagle Alpha, Emmett Kilduff, and Snowflake's VP of Data Marketplace and Customer Product Strategy, Matt Glickman. Cindy, without further ado, the floor is yours. >> Thank you, Mallory. And I am thrilled to have this brilliant team joining us from around the world, and they each bring a very unique perspective. So I'm going to start from further away. Emmett, welcome. Where are you joining us from? >> Thanks for having us, Cindy. I'm joining from Dublin, Ireland. >> Great. And tell us a little bit about Eagle Alpha. What do you do? >> From a company's perspective, think of Eagle Alpha as an aggregator of all the external data sets, and that's a word I'll use a few times today. A big advantage we can bring companies is we have a data concierge service. There's so much data, we can help identify the right data sets depending on the specific needs of the company. >> Yeah. And so, Emmett, you know, people think I was a little, I kind of shocked the industry going from Gartner to a tech startup. You have had a brave journey as well, going from financial services to starting this company, really pioneering it, with I think the most data sets of any of these. Is that right? >> Yes, it was. It was a big jump to go from Morgan Stanley, to leave the comforts of that environment for a PowerPoint deck and myself, raising funding eight years ago. So it was a big jump, and we were very early in our market. It's in the last few years where there's been real momentum and adoption by various types of verticals. The hedge funds were first, maybe then private equity, but corporates are following quite quickly from behind. They will be the biggest users, in our view, by a significant distance. >> Yeah, great. Thank you, Emmett. So we're going to go a little farther afield now, but back to the U.S. So, Juan, where are you joining us from? >> Hey, Cindy. Thanks for having me. I'm joining you from Houston, Texas. >> Great. Used to be my home. Yeah, I can probably see Rice University back there. And you have a distinct perspective, serving both Deloitte customers externally, but also internally. Can you tell us about that? >> Yeah, absolutely. So I serve as Deloitte Consulting's chief data officer, and as a professional services firm, I have the responsibility for overseeing our overall data agenda, which includes both the way we use data and insights to run and operate our own business, but also how we develop data and insights services that we then take to market and how we serve our clients. >> Great. Thank you, Juan. And last but not least, Matt Glickman, kind of in my own backyard in New York. Right, Matt? >> Correct. I haven't been into the city in many months, but yes, I'm based in New York. >> Okay, great. And so, Matt, you and Emmett are also, you know, brave pioneers in this space, and I'm remembering a conversation you and I shared when you were still at J.P. Morgan, I believe. Sorry, Goldman Sachs. Sorry, Goldman. Can you share that with us? >> Sure. I made the move back in 2015.
Um, when everyone thought, you know, my wife, my wife included that I was crazy. I don't know if I would call it Comfortable was emitted, but particularly had been there for a long time on git suffered in some ways. A lot of the pains we're talking about today, given the number of data, says that the amount of of new data sets that are always demand for having run analytics teams at Goldman, seeing the pain and realizing that this pain was not unique to Goldman Sachs, it was being replicated everywhere across the industry, um, in a mind boggling way and and the fortuitous, um, luck to have one of snowflakes. Founders come to pitch snowflake to Goldman a little bit early. Um, they became a customer later, but a little bit early in 2014. And, you know, I realized that this was clearly, you know, the answer from first principles on bond. If I ever was going to leave, this was a problem. I was acutely aware of. And I also was aware of how much the man that was in financial services for a better solution and how the cloud could really solve this problem in particular the ability to not have to move data in and out of these organizations. And this was something that I saw the future of. Thank you, Andi, that this was, you know, sort of the pain that people just expected to pay. Um, this price if you need a data, there was method you had thio. You had to use you either ftp data in and out. You had data that was being, you know, dropped off and, you know, maybe in in in a new ways and cloud buckets or a P i s You have to suck all this data down and reconstruct it. And God forbid the formats change. It was, you know, a nightmare. And then having issues with data, you had a what you were seeing internally. You look nothing like what the data vendors were seeing because they want a completely different system, maybe model completely differently. Um, but this was just the way things were. Everyone had firewalls. Everyone had their own data centers. There was no other way on git was super costly. And you know this. I won't even share the the details of you know, the errors that would occur in the pain that would come from that, Um what I realized it was confirmed. What I saw it snowflake at the time was once everyone moves to run their actual workloads in this in the cloud right where you're now beyond your firewall, you'll have all this scale. But on top of that, you'll be able to point at data from these vendors were not there the traditional data vendors. Or, you know, this new wave of alternative data vendors, for example, like the ones that eagle out for brings together And bring these all these data sets together with your own internal data without moving it. Yeah, this was a fundamental shift of what you know, it's in some ways, it was a side effect of everyone moving to the cloud for costs and scale and elasticity. But as a side effect of that is what we talked about, You know it snowflake summit, you know, yesterday was this notion of a data cloud that would connect data between regions between cloud vendors between customers in a way where you could now reference data. Just like your reference websites today, I don't download CNN dot com. I point at it, and it points me to something else. I'm always seeing the latest version, obviously, and we can, you know, all collaborate on what I'm seeing on that website. That's the same thing that now can happen with data. So And I saw this as what was possible, and I distinctly asked the question, you know, the CEO of the time Is this possible? 
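To make the contrast Matt is drawing concrete, pulling vendor files over FTP and reconstructing them versus simply querying a live share in place, here is a minimal sketch. It assumes the snowflake-connector-python package, and every host, account, database, and table name below is a hypothetical placeholder rather than anything referenced in the conversation.

```python
# Hypothetical sketch: the "old way" (copy files in) vs. querying a live share in place.
# Assumes `pip install snowflake-connector-python`; all names below are placeholders.
import ftplib
import snowflake.connector

def old_way_download(host: str, remote_path: str, local_path: str) -> None:
    """Pull a vendor file over FTP; it still has to be parsed, loaded, and kept current."""
    with ftplib.FTP(host) as ftp:
        ftp.login()  # anonymous login, for the sketch only
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {remote_path}", f.write)

def new_way_query_share() -> list:
    """Query a dataset a vendor has shared with the account: no copy, always the latest version."""
    conn = snowflake.connector.connect(
        account="my_account",      # placeholder
        user="analyst",            # placeholder
        password="***",
        warehouse="ANALYTICS_WH",
    )
    try:
        cur = conn.cursor()
        # VENDOR_SHARE_DB is a database created from a share; the data stays with the provider.
        cur.execute(
            "SELECT region, SUM(spend) FROM VENDOR_SHARE_DB.PUBLIC.CARD_SPEND GROUP BY region"
        )
        return cur.fetchall()
    finally:
        conn.close()
```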
And not only was it possible it was a fundamental construct that was built into the way that snowflake was delivered. And then, lastly, this is what we learned. And I think this is what you know. M It also has been touting is that it's all great if data is out there and even if you lower that bar of access where data doesn't have to move, how do I know? Right? If I'm back to sitting at Goldman Sachs, how do I know what data is available to me now in this this you know, connected data network eso we released our data marketplace, which was a very different kind of marketplace than these of the past. Where for us, it was really like a global catalog that would elect a consumer data consumer. Noah data was available, but also level the playing field. Now we're now, you know, Eagle, Alfa, or even, you know, a new alternative data vendor build something in their in their basement can now publish that data set so that the world could see and consume and be aligned to, you know, snowflakes, core business, and not where we wouldn't have to be competing or having to take, um, any kind of custody of that data. So adding that catalog to this now ubiquitous access, um really changed the game and, you know, and then now I seem like a genius for making this move. But back then, like I said, we've seen I seem like instant. I was insane. >>Well, given, given that snowflake was the hottest aipo like ever, you were a genius. Uh, doing this, you know, six years in advance. E think we all agree on that, But, you know, a lot of this is still visionary. Um, you know, some of the most leading companies are already doing this. But one What? What is your take our Are you best in class customers still moving the data? Or is this like they're at least thinking about data monetization? What are you seeing from your perspective? >>Yeah, I mean, I did you know, the overall appreciation and understanding of you know, one. I got to get my house in order around my data, um, has something that has been, you know, understood and acted upon. Andi, I do agree that there is a shift now that says, you know, data silos alone aren't necessarily gonna bring me, you know, new and unique insights on dso enriching that with external third party data is absolutely, you know, sort of the the ship that we're seeing our customers undergo. Um, what I find extremely interesting in this space and what some of the most mature clients are doing is, you know, really taking advantage of these data marketplaces. But building data partnerships right there from what mutually exclusive, where there is a win win scenario for for you know, that organization and that could be, you know, retail customers or life science customers like with pandemic, right the way we saw companies that weren't naturally sharing information are now building these data partnership right that are going are going into mutually benefit, you know, all organizations that are sort of part of that value to Andi. I think that's the sort of really important criteria. And how we're seeing our clients that are extremely successful at this is that partnership has benefits on both sides of that equation, right? Both the data provider and then the consumer of that. And there has to be, you know, some way to ensure that both parties are are are learning right, gaining you insights to support, you know, whatever their business organization going on. >>Yeah, great one. 
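The data partnerships Juan describes are typically set up on the provider side as governed shares. A hedged sketch of what that setup can look like, again using the snowflake-connector-python client; the share, view, and partner account identifiers are purely illustrative:

```python
# Hypothetical provider-side sketch: publish a governed slice of data to a partner.
# The database, view, share, and account identifiers are illustrative placeholders.
import snowflake.connector

PROVIDER_DDL = [
    # Expose only a curated secure view, not the raw tables.
    "CREATE OR REPLACE SECURE VIEW SALES_DB.PUBLIC.PARTNER_SPEND_V AS "
    "SELECT region, week, SUM(spend) AS spend FROM SALES_DB.PUBLIC.CARD_SPEND GROUP BY region, week",
    "CREATE SHARE IF NOT EXISTS PARTNER_SPEND_SHARE",
    "GRANT USAGE ON DATABASE SALES_DB TO SHARE PARTNER_SPEND_SHARE",
    "GRANT USAGE ON SCHEMA SALES_DB.PUBLIC TO SHARE PARTNER_SPEND_SHARE",
    "GRANT SELECT ON VIEW SALES_DB.PUBLIC.PARTNER_SPEND_V TO SHARE PARTNER_SPEND_SHARE",
    # The partner account allowed to read the share (placeholder identifier).
    "ALTER SHARE PARTNER_SPEND_SHARE ADD ACCOUNTS = PARTNER_ORG.PARTNER_ACCOUNT",
]

def publish_partner_share(conn: snowflake.connector.SnowflakeConnection) -> None:
    """Run the DDL above; the partner then queries the share in place, with no copies made."""
    cur = conn.cursor()
    for stmt in PROVIDER_DDL:
        cur.execute(stmt)
```

Both sides of the partnership see the same governed view, which is what makes the win-win arrangement Juan mentions practical to operate.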
So those data partnerships getting across the full value chain of sharing data and analytics Emmett, you work on both sides of the equation here, helping companies. Let's say let's say data providers maybe, like, you know, cast with human mobility monetize that. But then also people that are new to it. Where you seeing the top use cases? Well, >>interestingly, I agree with one of the supply side. One of the interesting trends is we're seeing a lot more data coming from large Corporates. Whether they're listed are private equity backed, as opposed to maybe data startups that are earning money just through data monetization. I think that's a great trend. I think that means a lot of the best. Data said it data is yet to come, um, in terms off the tough economy and how that's changed. I think the category that's had the most momentum and your references is Geo location data. It's that was the category at our conference in December 2000 and 12 that was pipped as the category to watch in 2019. On it didn't become that at all. Um, there were some regulatory concerns for certain types of geo data, but with with covert 19, it's Bean absolutely critical for governments, ministries of finance, central banks, municipalities, Thio crunch that data to understand what's happening in a real time basis. But from a company perspective, it's obviously critical as well. In terms of planning when customers might be back in the High Street on DSO, fourth traditionally consumer transaction data of all the 26 categories in our taxonomy has been the most popular. But Geo is definitely catching up your slide. Talked about being a tough economy. Just one point to contradict that for certain pockets of our clients, e commerce companies are having a field day, obviously, on they are very data driven and tech literate on day are they are really good client base for us because they're incredibly hungry, firm or data to help drive various, uh, decision making. >>Yeah, So fair enough. Some sectors of the economy e commerce, electron, ICS, healthcare are doing great. Others travel, hospitality, Um, super challenging. So I like your quote. The best is yet to come, >>but >>that's data sets is yet to come. And I do think the cloud is enabling that because we could get rid of some of the messy manual data flows that Matt you talked about, but nonetheless, Still, one of the hardest things is the data map. Things combining internal and external >>when >>you might not even have good master data. Common keys on your internal data. So any advice for this? Anyone who wants to take that? >>Sure I can. I can I can start. That's okay. I do think you know, one of the first problems is just a cataloging of the information that's out there. Um, you know, at least within our organization. When I took on this role, we were, you know, a large buyer of third party data. But our organization as a whole didn't necessarily have full visibility into what was being bought and for what purpose. And so having a catalog that helps us internally navigate what data we have and how we're gonna use it was sort of step number one. Um, so I think that's absolutely important. Um, I would say if we could go from having that catalog, you know, created manually to more automated to me, that's sort of the next step in our evolution, because everyone is saying right, the ongoing, uh, you know, creation of new external data sets. It's only going to get richer on DSO. 
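As a small illustration of the internal catalog Juan describes, here is one way such an entry could be modeled. This is a sketch of the idea, not Deloitte's actual catalog, and every field name is an assumption:

```python
# Illustrative sketch of a catalog entry for externally purchased data: a record of each
# licensed data set so the organization can see what it already has, for which domain,
# and who owns the relationship.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExternalDatasetEntry:
    name: str                  # e.g. "Card spend panel" (hypothetical)
    vendor: str                # data provider
    domain: str                # internal domain it maps to: "client", "employee", ...
    refresh_cadence: str       # "daily", "weekly", ...
    license_expires: date
    business_owner: str
    tags: list = field(default_factory=list)

CATALOG: list[ExternalDatasetEntry] = []

def find_by_domain(domain: str) -> list[ExternalDatasetEntry]:
    """Answer 'what external data do we already license for this domain?'"""
    return [e for e in CATALOG if e.domain == domain]
```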
We wanna be able to take advantage of that, you know, at the at the pacing speed, that data is being created. So going from Emanuel catalog to anonymous >>data >>catalog, I think, is a key capability for us. But then you know, to your second point, Cindy is how doe I then connect that to our own internal data to drive greater greater insights and how we run our business or how we serve our customers. Andi, that one you know really is a It's a tricky is a tricky, uh, question because I think it just depends on what data we're looking toe leverage. You know, we have this concept just around. Not not all data is created equal. And when you think about governance and you think about the management of your master data, your internal nomenclature on how you define and run your business, you know that that entire ecosystem begins to get extremely massive and it gets very broad and very deep on DSO for us. You know, government and master data management is absolutely important. But we took a very sort of prioritized approach on which domains do we really need to get right that drive the greatest results for our organization on dso mapping those domains like client data or employee data to these external third party data sources across this catalog was really the the unlocked for us versus trying to create this, you know, massive connection between all the external data that we're, uh, leveraging as well as all of our own internal data eso for us. I think it was very. It was a very tailored, prioritized approach to connecting internal data to external data based on the domains that matter most to our business. >>So if the domains so customer important domain and maybe that's looking at things, um, you know, whether it's social media data or customer transactions, you prioritized first by that, Is that right? >>That's correct. That's correct. >>And so, then, Matt, I'm going to throw it back to you because snowflake is in a unique position. You actually get to see what are the most popular data sets is is that playing out what one described are you seeing that play out? >>I I'd say Watch this space. Like like you said. I mean this. We've you know, I think we start with the data club. We solve that that movement problem, which I think was really the barrier that you tended to not even have a chance to focus on this mapping problem. Um, this notion of concordance, I think this is where I see the big next momentum in this space is going to be a flurry of traditional and new startups who deliver this concordance or knowledge graph as a service where this is no longer a problem that I have to solve internal to my organization. The notion of mastering which is again when everyone has to do in every organization like they used to have to do with moving data into the organization goes away. And this becomes like, I find the best of breed for the different scopes of data that I have. And it's delivered to me as a, you know, as a cloud service that just takes my data. My internal data maps it to these 2nd and 3rd party data sets. Um, all delivered to me, you know, a service. >>Yeah, well, that would be brilliant concordance as a service or or clean clean master data as a service. Um, using augmented data prep would be brilliant. So let's hope we get there. Um, you know, so 2020 has been a wild ride for everyone. If I could ask each of you imagine what is the art of the possible or looking ahead to the next to your and that you are you already mentioned the best is yet to come. 
Can you want to drill down on that. What what part of the best is yet to come or what is your already two possible? >>Just just a brief comment on mapping. Just this week we published a white paper on mapping, which is available for for anyone on eagle alfa dot com. It's It's a massive challenge. It's very difficult to solve. Just with technology Onda people have tried to solve it and get a certain level of accuracy, but can't get to 100% which which, which, which makes it difficult to solve it. If if if there is a new service coming out against 100% I'm all ears and that there will be a massive step forward for the entire data industry, even if it comes in a few years time, let alone next year, I think going back to the comment on data Cindy. Yes, I think boards of companies are Mawr and Mawr. Viewing data as an asset as opposed to an expense are a cost center on bond. They are looking therefore to get their internal house in order, as one was saying, but also monetize the data they are sitting on lots of companies. They're sitting on potentially valuable data. It's not all valuable on a lot of cases. They think it's worth a lot more than it is being frank. But in some cases there is valuable data on bond. If monetized, it can drop to the bottom line on. So I think that bodes well right across the world. A lot of the best date is yet to come on. I think a lot of firms like Deloitte are very well positioned to help drive that adoption because they are the trusted advisor to a lot of these Corporates. Um, so that's one thing. I think, from a company perspective. It's still we're still at the first base. It's quite frustrating how slow a lot of companies are to move and adopt, and some of them are haven't hired CDO. Some of them don't have their internal house in order. I think that has to change next year. I think if we have this conference at this time next year, I would expect that would hopefully be close to the tipping point for Corporates to use external data. And the Malcolm Gladwell tipping point on the final point I make is I think, that will hopefully start to see multi department use as opposed to silos again. Parliaments and silos, hopefully will be more coordinated on the company's side. Data could be used by marketing by sales by r and D by strategy by finance holds external data. So it really, hopefully will be coordinated by this time next year. >>Yeah, Thank you. So, to your point, there recently was an article to about one of the airlines that their data actually has more value than the company itself now. So I know, I know. We're counting on, you know, integrators trusted advisers like Deloitte to help us get there. Uh, one what? What do you think? And if I can also drill down, you know, financial services was early toe all of this because they needed the early signals. And and we talk about, you know, is is external data now more valuable than internal? Because we need those early signals in just such a different economy. >>Yeah, I think you know, for me, it's it's the seamless integration of all these external data sources and and the signals that organizations need and how to bring those into, you know, the day to day operations of your organization, right? So how do you bring those into, You know, you're planning process. How do you bring that into your sales process on DSO? I think for me success or or where I see the that the use and adoption of this is it's got to get down to that level off of operations for organizations. 
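To picture the mapping problem Emmett and Matt discuss, matching an external vendor's entities to internal master-data keys, here is a deliberately naive sketch in plain Python. Real concordance services are far more sophisticated; the names and IDs below are invented:

```python
# A deliberately simple sketch of entity mapping: joining an external vendor's company
# names to internal master-data keys via crude normalization. Names and IDs are made up.
import re

INTERNAL_MASTER = {
    "international business machines": "CUST-0001",
    "home depot": "CUST-0002",
}

def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation, drop common articles and suffixes."""
    name = re.sub(r"[^a-z0-9 ]", " ", name.lower())
    name = re.sub(r"\b(the|inc|corp|corporation|co|ltd|plc)\b", " ", name)
    return re.sub(r"\s+", " ", name).strip()

def map_external_row(external_name: str) -> str | None:
    """Return the internal master key for an external entity name, if it can be matched."""
    return INTERNAL_MASTER.get(normalize(external_name))

# e.g. map_external_row("The Home Depot, Inc.") -> "CUST-0002"
```

Exact-match lookups like this are why, as Emmett notes, pure technology struggles to get past a certain accuracy; fuzzier matching and human review usually sit on top of a crosswalk like this one.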
For this to continue to move at the pace and deliver the value that you know, we're all describing. I think we're going to get there. But I think until organizations truly get down to that level of operations and how they're using this data, it'll sort of seem like a Bolton, right? So for me, I think it's all about Mawr, the seamless integration. And I think to what Matt mentioned just around services that could help connect external data with internal data. I'll take that one step beyond and say, How can we have the data connect itself? Eso I had references Thio, you know, automation and machine learning. Um, there's significant advances in terms of how we're seeing, you know, mapping to occur in a auto generated fashion. I think this specific space and again the connection between external and internal data is a prime example of where we need to disrupt that, you know, sort of traditional data pipeline on. Try to automate that as much as possible. And let's have the data, you know, connect itself because it then sort of supports. You know, the first concept which waas How do we make it more seamless and integrated into, you know, the business processes of the organization's >>Yeah, great ones. So you two are thinking those automated, more intelligent data pipelines will get us there faster. Matt, you already gave us one. Great, Uh, look ahead, Any more to add to >>it, I'll give you I'll give you two more. One is a bit controversial, but I'll throw that you anyway, um, going back to the point that one made about data partnerships What you were saying Cindy about, you know, the value. These companies, you know, tends to be somehow sometimes more about the data they have than the actual service they provide. I predict you're going to see a wave of mergers and acquisitions. Um, that it's solely about locking down access to data as opposed to having data open up. Um to the broader, you know, economy, if I can, whether that be a retailer or, you know, insurance company was thes prime data assets. Um, you know, they could try to monetize that themselves, But if someone could acquire them and get exclusive access that data, I think that's going to be a wave of, um, in a that is gonna be like, Well, we bought this for this amount of money because of their data assets s. So I think that's gonna be a big wave. And it'll be maybe under the guise of data partnerships. But it really be about, you know, get locking down exclusive access to valuable data as opposed to trying toe monetize it itself number one. And then lastly, you know. Now, did you have this kind of ubiquity of data in this interconnected data network? Well, we're starting to see, and I think going to see a big wave of is hyper personalization of applications where instead of having the application have the data itself Have me Matt at Snowflake. Bring my data graph to applications. Right? This decoupling of we always talk about how you get data out of these applications. It's sort of the reverse was saying Now I want to bring all of my data access that I have 1st, 2nd and 3rd party into my application. Instead of having to think about getting all the data out of these applications, I think about it how when you you know, using a workout app in the consumer space, right? I can connect my Spotify or connect my apple music into that app to personalize the experience and bring my music list to that. Imagine if I could do that, you know, in a in a CRM. Imagine I could do that in a risk management. 
Imagine I could do that in a marketing app where I can bring my entire data graph with me and personalize that experience for, you know, for given what I have. And I think again, you know, partners like thoughts. But I think in a unique position to help enable that capability, you know, for this next wave of of applications that really take advantage of this decoupling of data. But having data flow into the app tied to me as opposed to having the APP have to know about my data ahead of time, >>Yeah, yeah, So that is very forward thinking. So I'll end with a prediction and a best practice. I am predicting that the organizations that really leverage external data, new data sources, not just whether or what have you and modernize those data flows will outperform the organizations that don't. And as a best practice to getting there, I the CDOs that own this have at least visibility into everything they're purchasing can save millions of dollars in duplicate spend. So, Thio, get their three key takeaways. Identify the leading indicators and market signals The data you need Thio. Better identify that. Consolidate those purchases and please explore the data sets the range of data sets data providers that we have on the thought spot. Atlas Marketplace Mallory over to you. >>Wow. Thank you. That was incredible. Thank you. To all of our Panelists for being here and sharing that wisdom. We really appreciate it. For those of you at home, stay close by. Our third session is coming right up and we'll be joined by our partner AWS and get to see how you can leverage the full power of your data cloud complete with the demo. Make sure to tune in to see you >>then
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Matt Glickman | PERSON | 0.99+ |
Cindy | PERSON | 0.99+ |
Juan | PERSON | 0.99+ |
Emma | PERSON | 0.99+ |
Matt | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Emmett | PERSON | 0.99+ |
New York | LOCATION | 0.99+ |
2019 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
December 2000 | DATE | 0.99+ |
Goldman | ORGANIZATION | 0.99+ |
Goldman Sachs | ORGANIZATION | 0.99+ |
Eagle Alfa | ORGANIZATION | 0.99+ |
Eagle | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
Andi | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Alfa | ORGANIZATION | 0.99+ |
third session | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
12 | DATE | 0.99+ |
Houston, Texas | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
second session | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
Mallory | PERSON | 0.99+ |
both parties | QUANTITY | 0.99+ |
Morgan Stanley | ORGANIZATION | 0.99+ |
second point | QUANTITY | 0.99+ |
Today | DATE | 0.99+ |
today | DATE | 0.99+ |
Cindy Housing | PERSON | 0.99+ |
Rice University | ORGANIZATION | 0.98+ |
26 categories | QUANTITY | 0.98+ |
Dublin, Ireland | LOCATION | 0.98+ |
2014 | DATE | 0.98+ |
eight years ago | DATE | 0.98+ |
Malcolm Gladwell | PERSON | 0.98+ |
2nd | QUANTITY | 0.98+ |
first principles | QUANTITY | 0.98+ |
Thio | PERSON | 0.97+ |
U. S. | LOCATION | 0.97+ |
first | QUANTITY | 0.97+ |
Mawr | ORGANIZATION | 0.97+ |
1st | QUANTITY | 0.97+ |
one point | QUANTITY | 0.97+ |
2020 | DATE | 0.96+ |
PowerPoint | TITLE | 0.96+ |
fourth | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
first base | QUANTITY | 0.95+ |
each | QUANTITY | 0.92+ |
CNN dot com | ORGANIZATION | 0.92+ |
Onda | ORGANIZATION | 0.92+ |
Spotify | ORGANIZATION | 0.92+ |
Benoit & Christian Live
>> Okay, we're now going into the technical deep dive. We're going to geek out here a little bit. Benoit Dageville is here. He's co-founder of Snowflake and president of products. And also joining us is Christian Kleinerman, who's the senior vice president of products. Gentlemen, welcome. Good to see you. >> Yeah, good to see you. >> Great to see you. Thanks for having us. >> Very welcome. So, Benoit, we've heard a lot this morning about the data cloud, and it's becoming, in my view anyway, the linchpin of your strategy. I'm interested in what technical decisions you made early on that led you to this point and even enabled the data cloud. >> Yes. So I would say that the data cloud was built in three phases, really. The initial phase, as you call it, was really about one region, building the data cloud in that region. What was important is to make that region infinitely scalable, right? And that's our architecture, which we call the multi-cluster shared data architecture, so that you can plug in as many workloads in that region as you want, without any limits. The limit is really the underlying resources that the cloud provider's region can supply, and those have really no limits. So that region architecture, I think, was really the building block of the Snowflake data cloud. But it really didn't stop there. The second aspect was really data sharing: how, you know, you share data within the region, how to share data between tenants of that region, between different customers. And that was also enabled by the architecture, because we decoupled, you know, compute and storage, so compute clusters can access any storage within the region. So that's phase two of the data cloud. And then really phase three, which is critical, is the expansion, the global expansion: how we made, you know, our cloud-agnostic layer so that we could run, you know, the Snowflake vision on different clouds. And now we are running on top of three cloud providers. We started with AWS in US West. We moved to Azure and then, uh, Google GCP. And this cloud region, we started with one cloud region, as I said, in AWS US West, and then we created, you know, many different regions. We have 22 regions today, all over the world and across the different cloud providers. And what's more important is that these regions are not isolated. You know, Snowflake is one single, you know, system for the world, where we created this global data mesh which connects every region, such that not only can the Snowflake system as a whole be aware of all these regions, but customers can replicate data across regions and, you know, share their data across the planet if need be. So this is one single system, really. I call it the World Wide Web of data. That's, you know, the vision of the data cloud, and it really started with this building block, which is a cloud region. >> Thank you for that, Benoit. Christian, you and I have talked about this. I mean, that notion of stripping away the complexity, and that's kind of what the data cloud does. But if you think about data architectures, historically they really had no domain knowledge.
They've really been focused on the technology to ingest and analyze and prepare and then, you know, push data out to the business. And you're really flipping that model, allowing the sort of domain leaders to be first-class citizens, if you will, uh, because they're the ones creating data value, and they're worrying less about infrastructure. But I wonder, do you feel like customers are ready for that change? >> I love the observation, Dave. So much energy goes in, in enterprises, in organizations today, just dealing with infrastructure and dealing with pipes and plumbing and things like that, and something that was insightful from Benoit and our founders from day one was: this is a managed service. We want our customers to focus on the data, getting the insights, getting the decisions in time, not just managing pipes and plumbing and patches and upgrades. And the other piece, it's an interesting reality, is that there is this belief that the cloud is simplifying this and all of a sudden there's no problem, but actually understanding each of the public cloud providers is a large undertaking, right? Each of them has 100-plus services, uh, shipping upgrades and updates on a constant basis. And that just distracts from the time that it takes to go and say: here's my data, here's my data model, here's how I make better decisions. So at the heart of everything we do is we want to abstract the infrastructure. We don't want our customers to have to deal with the nuance of each of the cloud providers. And as you said, have companies focus on the domain expertise, the knowledge for their industry. Are all companies ready for it? I think it's a mixed bag. We talk to customers on a regular basis, every week, every day, and some of them are full on. They've sort of burned the bridges and said, I'm going to the cloud, I'm going to embrace a new model. Some others, you can see the complete, like, shocked expressions: what do you mean I don't have all these knobs I can turn? Uh, but I think the future is very clear on how we get companies to be more competitive through data. >> Well, Benoit, it's interesting that Christian mentioned the managed service, and that used to mean hosting, guys running around in lab coats and plugging things in. And of course, you're looking at this differently. It's high degrees of automation. But, you know, one of those areas is workload management. And I wonder how you think about workload management and how that changes with the data cloud. >> Yeah, this is a great question. Actually, workload management used to be a nightmare on, you know, traditional systems. It was a nightmare for the DBAs, and they had to spend a lot of their time, you know, just managing workloads. And why is that? It's because all these workloads are running on a single, you know, system, a single cluster, and they compete for resources. So managing workloads, I always explain it as playing Tetris, right? You had to first know when to run this workload, make sure that two big workloads are not overlapping. You know, maybe the ETL is pushed to run at night, and you have this nightly window, which is not, you know, efficient, of course, for your ETL, because you have delays because of that. But you have no choice, right? You have a fixed amount of resources and you have to get the best out of these fixed resources.
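For readers who want to see the alternative Benoit describes next, a dedicated, independently sized virtual warehouse per workload instead of one shared cluster, here is a hedged sketch using the snowflake-connector-python client; the warehouse names and sizes are illustrative only:

```python
# Hypothetical sketch: one virtual warehouse per workload, so ETL, BI dashboards, and
# data science never compete for the same compute. Auto-suspend and auto-resume keep
# the cost proportional to actual use. Assumes an existing connection `conn` created
# with snowflake-connector-python.

WORKLOADS = {
    "ETL_WH": "MEDIUM",         # nightly or intraday loads
    "DASHBOARD_WH": "SMALL",    # BI and reporting queries
    "DATA_SCIENCE_WH": "LARGE", # feature engineering, model training pulls
}

def create_isolated_warehouses(conn) -> None:
    cur = conn.cursor()
    for name, size in WORKLOADS.items():
        cur.execute(
            f"CREATE WAREHOUSE IF NOT EXISTS {name} "
            f"WAREHOUSE_SIZE = '{size}' "
            f"AUTO_SUSPEND = 60 "    # suspend after 60 seconds idle
            f"AUTO_RESUME = TRUE"
        )

def run_on(conn, warehouse: str, sql: str):
    """Route a query to the warehouse dedicated to that workload."""
    cur = conn.cursor()
    cur.execute(f"USE WAREHOUSE {warehouse}")
    return cur.execute(sql).fetchall()
```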
And for sure, you don't want your ETL to impact your dashboarding workload or your reports, you know, or impact your data science. And this became a true nightmare, because everyone wants to be data driven, meaning that the entire company wants to run new workloads on this system, and these systems are completely overwhelmed. So workload management was a nightmare before Snowflake, and Snowflake made it really easy. The reason is that with Snowflake, we leverage the cloud to dedicate, you know, compute resources to each workload. In Snowflake terminology it's called a warehouse, a virtual warehouse, and each workload can run in its own virtual warehouse, and each virtual warehouse has its own dedicated compute resources, its own, you know, I/O bandwidth. And you can really control how much resource each workload gets by sizing these warehouses, you know, adjusting the compute resources that they can use. When the workload, you know, stops executing, automatically the warehouse, the compute resources, are turned off, and they are turned on by Snowflake, resuming the warehouse, when the workload starts again. And you can dynamically resize this warehouse. It can be done by the system automatically, you know, if the concurrency of the workload increases, or it can be done manually by the administrator, you know, just adjusting, you know, the compute power for each workload. And the best of that model is that not only does it give you very fine-grained control on the resources that each workload can get, not only are workloads not competing and not impacting any other workload, but because of that model you can handle as many workloads as you want. And that's really critical because, as I said, you know, everyone in the organization wants to use data to make decisions, so you have more and more workloads running. And that Tetris game, you know, would have been impossible in a centralized, single compute cluster system. On the flip side, as an administrator of the system, you have to justify that the workload is worth running for your organization, right? It's so easy, literally in seconds you can stand up a new warehouse and start to run your queries on that new compute cluster. And of course, you have to justify the cost of that, because there is a cost, right? Snowflake charges by the second of compute, so that cost, you know, has to be justified. It's so easy now to add new workloads and do new things with Snowflake that you have to look at the trade-off, the cost, of course, and manage costs. >> So, Christian, Benoit used the term nightmare. I'm thinking about previous days of workload management. I mean, I talk to a lot of customers that are trying to reduce the elapsed time of going from data to insights, and their nightmare is they've got this complicated data lifecycle. And I'm wondering how you guys think about that, that notion of compressing elapsed time to data value, from raw data to insights. >> Yeah, so we obsess, or we think a lot, about this time to insight, from the moment that an event happens to the point that it shows up in a dashboard or a report, or some decision or action happens based on it. There are three parts that we think on: how do we reduce that life cycle? The first one, which ties to our previous conversation, is related to this:
Where is their muscle memory on processes or ways of doing things that don't actually make us much sense? My favorite example is you say you ask any any organization. Do you run pipelines and ingestion and transformation at two and three in the morning? And the answer is, Oh yeah, we do that. And if you go in and say, Why do you do that? The answer is typically, well, that's when the resource is are available Back to Ben Wallace. Tetris, right? That's that's when it was possible. But then you ask, Would you really want to run it two and three in the morning? If if you could do it sooner, we could do it. Mawr in time, riel time with when the event happened. So first part of it is back to removing the constraints of the infrastructures. How about running transformations and their ingestion when the business best needs it? When it's the lowest time to inside the lowest latency, not one of technology lets you do it. So that's the the the easy one out the door. The second one is instead of just fully optimizing a process, where can you remove steps of the process? This is where all of our data sharing and the snowflake data marketplace come into place. How about if you need to go in and just data from a SAS application vendor or maybe from a commercial data provider and imagine the dream off? You wouldn't have to be running constant iterations and FTP s and cracking C S V files and things like that. What if it's always available in your environment, always up to date, And that, in our mind, is a lot more revolutionary, which is not? Let's take away a process of ingesting and copying data and optimize it. How about not copying in the first place? So that's back to number two on, then back to number three is is what we do day in and day out on making sure our platform delivers the best performance. Make it faster. The combination of those three things has led many of our customers, and and And you'll see it through many of the customer testimonials today that they get insights and decisions and actions way faster, in part by removing steps, in part by doing away with all habits and in part because we deliver exceptional performance. >>Thank you, Christian. Now, Ben Wa is you know, we're big proponents of this idea of the main driven design and data architecture. Er, you know, for example, customers building entire applications and what I like all data products or data services on their data platform. I wonder if you could talk about the types of applications and services that you're seeing >>built >>on top of snowflake. >>Yeah, and And I have to say that this is a critical aspect of snowflake is to create this platform and and really help application to be built on top of this platform. And the more application we have, the better the platform will be. It is like, you know, the the analogies with your iPhone. If your iPhone that no applications, you know it would be useless. It's it's an empty platforms. So So we are really encouraging. You know, applications to be belong to the top of snowflake and from there one actually many applications and many off our customers are building applications on snowflake. We estimated that's about 30% are running already applications on top off our platform. And the reason is is off course because it's it's so easy to get compute resources. There is no limit in scale in our viability, their ability. 
So all these characteristics are critical for for an application on DWI deliver that you know from day One Now we have improved, you know, our increased the scope off the platform by adding, you know, Java in competition and Snow Park, which which was announced today. That's also you know, it is an enabler. Eso in terms off type of application. It's really, you know, all over and and what I like actually needs to be surprised, right? I don't know what well being on top of snowflake and how it will be the world, but with that are sharing. Also, we are opening the door to a new type of applications which are deliver of the other marketplace. Uh, where, You know, one can get this application died inside the platform, right? The platform is distributing this application, and today there was a presentation on a Christian T notes about, >>you >>know, 20 finds, which, you know, is this machine learning, you know, which is providing toe. You know, any users off snowflake off the application and and machine learning, you know, to find, you know, and apply model on on your data and enrich your data. So data enrichment, I think, will be a huge aspect of snowflake and data enrichment with machine learning would be a big, you know, use case for these applications. Also, how to get there are, you know, inside the platform. You know, a lot of applications led him to do that. Eso machine learning. Uh, that engineering enrichments away. These are application that we run on the platform. >>Great. Hey, we just got a minute or so left in. Earlier today, we ran a video. We saw that you guys announced the startup competition, >>which >>is awesome. Ben, while you're a judge in this competition, what can you tell us about this >>Yeah, >>e you know, for me, we are still a startup. I didn't you know yet, you know, realize that we're not anymore. Startup. I really, you know, you really feel about you know, l things, you know, a new startups, you know, on that. That's very important for Snowflake. We have. We were started yesterday, and we want to have new startups. So So the ends, the idea of this program, the other aspect off that program is also toe help, you know, started to build on top of snowflake and to enrich. You know, this this pain, you know, rich ecosystem that snowflake is or the data cloud off that a cloud is And we want to, you know, add and boost. You know that that excitement for the platform, so So the ants, you know, it's a win win. It's a win, you know, for for new startups. And it's a win, ofcourse for us. Because it will make the platform even better. >>Yeah, And startups, or where innovation happens. So registrations open. I've heard, uh, several, uh, startups have have signed up. You goto snowflake dot com slash startup challenge, and you can learn mawr. That's exciting program. An initiative. So thank you for doing that on behalf of of startups out there and thanks. Ben Wa and Christian. Yeah, I really appreciate you guys coming on Great conversation. >>Thanks for David. >>You're welcome. And when we talk, Thio go to market >>pros. They >>always tell us that one of the key tenets is to stay close to the customer. Well, we want to find out how data helps us. To do that in our next segment. Brings in to chief revenue officers to give us their perspective on how data is helping their customers transform. Business is digitally. Let's watch.
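As a rough illustration of the Snowpark programming model Benoit mentions above: the announcement at the time covered Java and Scala, so this sketch uses the later Python incarnation of the same idea (snowflake-snowpark-python), and the connection parameters and table names are placeholders:

```python
# A hedged sketch of the DataFrame-style Snowpark model: the pipeline is expressed in
# Python but pushed down and executed inside Snowflake, next to the data, rather than
# pulled out into the client. Names and parameters below are illustrative only.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

def weekly_spend_by_region(connection_parameters: dict):
    session = Session.builder.configs(connection_parameters).create()
    orders = session.table("SALES_DB.PUBLIC.ORDERS")  # placeholder table
    result = (
        orders
        .filter(col("ORDER_DATE") >= "2020-01-01")
        .group_by(col("REGION"))
        .agg(sum_(col("AMOUNT")).alias("TOTAL_SPEND"))
    )
    return result.collect()
```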
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Christian Kleinerman | PERSON | 0.99+ |
Ben Wallace | PERSON | 0.99+ |
Ben White | PERSON | 0.99+ |
Ben Wa | PERSON | 0.99+ |
three parts | QUANTITY | 0.99+ |
Ben Ben | PERSON | 0.99+ |
Each | QUANTITY | 0.99+ |
Ben | PERSON | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Ben Wa Dodgeville | PERSON | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
Christian | PERSON | 0.99+ |
Benoit | PERSON | 0.99+ |
today | DATE | 0.99+ |
Thio | PERSON | 0.99+ |
yesterday | DATE | 0.99+ |
first part | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
each | QUANTITY | 0.99+ |
three things | QUANTITY | 0.99+ |
22 regions | QUANTITY | 0.98+ |
second aspect | QUANTITY | 0.98+ |
Java | TITLE | 0.98+ |
about 20 minutes | QUANTITY | 0.98+ |
first one | QUANTITY | 0.98+ |
10 | QUANTITY | 0.98+ |
each work | QUANTITY | 0.98+ |
about 30% | QUANTITY | 0.97+ |
Ben Juan | PERSON | 0.97+ |
second one | QUANTITY | 0.97+ |
nine man | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
90 window | QUANTITY | 0.97+ |
single | QUANTITY | 0.97+ |
each virtual warehouse | QUANTITY | 0.96+ |
two | QUANTITY | 0.96+ |
each workload | QUANTITY | 0.96+ |
DWI | ORGANIZATION | 0.95+ |
100 plus servi | QUANTITY | 0.94+ |
20 finds | QUANTITY | 0.94+ |
one single | QUANTITY | 0.91+ |
3 | QUANTITY | 0.91+ |
three | DATE | 0.91+ |
three phases | QUANTITY | 0.91+ |
this morning | DATE | 0.91+ |
ORGANIZATION | 0.89+ | |
three | QUANTITY | 0.89+ |
Tetris | TITLE | 0.89+ |
Snow Park | TITLE | 0.88+ |
US West | LOCATION | 0.87+ |
Christian T | PERSON | 0.87+ |
Patriots | ORGANIZATION | 0.87+ |
this year | DATE | 0.86+ |
single cluster | QUANTITY | 0.84+ |
day One | QUANTITY | 0.82+ |
two | DATE | 0.8+ |
SAS | ORGANIZATION | 0.79+ |
one single computer | QUANTITY | 0.78+ |
Snowflake | TITLE | 0.78+ |
one crowd region | QUANTITY | 0.76+ |
three cloud providers | QUANTITY | 0.76+ |
W S U S West | LOCATION | 0.74+ |
One regions | QUANTITY | 0.73+ |
Christian | ORGANIZATION | 0.73+ |
Day one | QUANTITY | 0.71+ |
Earlier today | DATE | 0.68+ |
ws | ORGANIZATION | 0.61+ |
number three | QUANTITY | 0.58+ |
g c p | TITLE | 0.57+ |
2 | QUANTITY | 0.53+ |
Snowflake | EVENT | 0.45+ |
Tetris | ORGANIZATION | 0.35+ |
Debanjan Saha, Google Cloud | October 2020
(gentle music) >> From the cube studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a Cube conversation. >> With Snowflake's, enormously successful IPO, it's clear that data warehousing in the cloud has come of age and a few companies know more about data and analytics than Google. Hi, I'm Paul Gillen. This is a cube conversation. And today we're going to talk about data warehousing and data analytics in the cloud. Google BigQuery, of course, is a popular, fully managed server less data warehouse that enables rapid SQL queries and interactive analysis of massive data sets. This summer, Google previewed BigQuery Omni, which essentially brings the capabilities of BigQuery to additional platforms including Amazon web services and soon Microsoft Azure. It's all part of Google's multicloud strategy. No one knows more about this strategy than Debanjan Saha, General Manager and Vice President of engineering for data analytics and Google cloud. And he joins me today. Debanjan, thanks so much for joining me. >> Paul, nice to meet you and thank you for having me today. >> So it's clear the data warehousing is now part of many enterprise data strategies. How has the rise of cloud change the way organizations are using data science in your view? >> Well, I mean, you know, the cloud definitely is a big enabler of data warehousing and data science, as you mentioned. I mean, it has enabled things that people couldn't do on-prem, for example, if you think about data science, the key ingredient of data science, before you can start anything is access to data and you need massive amount of data in order to build the right model that you want to use. And this was a big problem on-prem because people are always thinking about what data to keep, what to discard. That's not an issue in cloud. You can keep as much of data as you want, and that has been a big boon for data science. And it's not only your data, you can also have access to other data your, for example, your partner's data, public data sets and many other things that people have access to right? That's number one, number two of course, it's a very compute intensive operation and you know, large enterprises of course can afford them build a large data center and bring in lots of tens of thousands of CPU codes, GPU codes, TPU codes whatever have you, but it is difficult especially for smaller enterprises to have access to that amount of computing power which is very very important for data science. Cloud makes it easy. I mean, you know, it has in many ways democratize the use of data science and not only the big enterprises everyone can take advantage of the power of the computing power that various different cloud vendors make it available on their platform. And the third, not to overlook that, cloud also makes it available to customers and users, lots of various different data science platform, for example, Google's own TensorFlow and you have many other platforms Spark being one example of that, right? Both a cloud native platform as well as open source platforms, which is very very useful for people using data science and managed to open source, Spark also makes it very very affordable. And all of these things have contributed to massive boon in data science in the cloud and from my perspective. >> Now, of course we've seen over the last seven months a rush to the cloud triggered by the COVID-19 pandemic. How has that played out in the analytics field? Do you see any longterm changes to, to the landscape? 
The way customers are using analytics as a result of what's happened these last seven months? >> You know, I think as you know about kind of a digitization of our business is happening over a long period of time, right? And people are using AIML analytics in increasing numbers. What I've seen because of COVID-19 that trend has accelerated both in terms of people moving to cloud, and in terms of they're using advanced analytics and AIML and they have to do that, right? Pretty much every business is kind of leaning heavily on their data infrastructure in order to gain insight of what's coming next. A lot of the models that people are used to, is no longer valid things are changing very very rapidly right? So in order to survive and thrive people have to lean on data, lean on analytics to figure out what's coming around the corner. And that trend in my view is only going to accelerate. It's not going to go the other way round. >> One of the problems with cloud databases, We often hear complaints about is that there's so many of them. Do you see any resolution to that proliferation? >> Well, you know, I do think a one size does not fit all right. So it is important to have choice. It's important to have specialization. And that's why you see a lot of cloud databases. I don't think the number of cloud databases is going to go down. What I do expect to happen. People are going to use interoperable data formats. They are going to use open API so that it's very, very portable as people want to move from one database to another. The way I think the convergence is going to come is two ways, One, you know, a lot of databases, for example, use Federation. If you look at BigQuery, for example, you can start with BigQuery, but with BigQuery, you can have also access to data in other databases, not only in GCP or Google cloud but also in AWS with BigQuery Omni, for example, right? So that provides a layer of Federation, which kind of create convergence with respect, to weighing various different data assets people may have. I have also seen with, for example, with Looker, you know creation of enterprise wide data models and data API is gives people a platform so that they can build their custom data app and data solutions on top up and even from data API. Those I believe are going to be the points of convergence. I think data is probably going to be in different databases because different databases do different things well, that does not mean people wouldn't have access to all their data through one API or one set of models. >> Well, since we're on the subject of BigQuery. Now this summer, you introduced BigQuery Omni which is a database data warehouse, essentially a version of BigQuery that can query data in other cloud platforms, what, what is the strategy there? And what is the customer reaction been so far? >> Well, I mean, you know as you probably have seen talking to customers more than 80% of the customers that we talk to use multiple clouds and that trend is probably not going to change. I mean, it happens for various different reasons sometime because of compliance sometimes because they want to have different tools and different platform sometime because of M and a, we are a big believer of multi-cloud strategy and that's what we are trying to do with BigQuery Omni. We do realize people have choices. Customers will have their data in various different places and we will take our analytics wherever the data is. 
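A hedged sketch of the federation Debanjan describes: one BigQuery statement that reads a native table and, via EXTERNAL_QUERY, reaches into a Cloud SQL database without moving the data. It assumes the google-cloud-bigquery client and application default credentials, and the project, dataset, and connection names are placeholders:

```python
# Hypothetical sketch of BigQuery federation: join a native BigQuery table with data
# living in Cloud SQL, in a single query. Assumes `pip install google-cloud-bigquery`.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # placeholder project

SQL = """
SELECT w.store_id, w.visits, o.orders
FROM `my-analytics-project.web.store_visits` AS w          -- native BigQuery table
JOIN EXTERNAL_QUERY(
  'my-analytics-project.us.orders_cloudsql',               -- Cloud SQL connection (placeholder)
  'SELECT store_id, COUNT(*) AS orders FROM orders GROUP BY store_id'
) AS o
USING (store_id)
"""

for row in client.query(SQL).result():
    print(row.store_id, row.visits, row.orders)
```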
So customers won't have to worry about moving data from one place to another, and that's what we are trying to do with BigQuery Omni. You're going to see, you know, for example with Anthos, we have created a platform over which you can build these various different data stacks and applications that span multiple clouds. I believe we are going to see more of that, and BigQuery Omni is just the beginning. >> And how have your customers reacted to that announcement? >> They reacted very, very positively. This is the first time they have a major cloud vendor offering a fully managed, serverless data warehouse platform on multiple clouds. And as I mentioned, we have many customers who have some of their data assets, for example, in GCP, and they really love BigQuery, and they also have, for example, applications running on AWS and Azure. Today the only option they have is to essentially shuttle their data between various different clouds in order to gain insight across the collective pool of data sets that they have. With BigQuery Omni they don't need to do that: they can keep their data wherever it is, they can still join across that data and get insights irrespective of which cloud their data is in. >> You recently wrote on Forbes about the shortage of data scientists and the need to make data analytics more accessible to the average business user. What is Google doing in that respect? >> So, I mean, you know, one of our goals is to make data, and insight from data, available to everybody in the business, right? That is the way you can democratize the use of analytics and AI/ML. And, you know, one way to do that is to teach everybody R or Python or some specific tools, but that's going to take a long time. So our approach is to make the power of data analytics and AI/ML available to our users no matter what tools they're comfortable with. So, for example, if you look at BQML, BigQuery ML, we have made it possible for our users who like SQL very much to use the power of ML without having to learn anything else, and without having to move their data anywhere else. We have a lot of business users, for example, who prefer spreadsheets, and, you know, with Connected Sheets we have made the spreadsheet interface available on top of BigQuery, so they can use the power of BigQuery without having to learn anything else. Better yet, we recently launched BigQuery Q&A, and what Q&A allows you to do is use natural language on top of BigQuery data, right? So the goal, if you can do that, I think, is the nirvana where anyone, for example somebody working in a call center talking to a customer, can use a simple query to figure out what's going on with the bill, right? And we believe that if we can democratize the use of data, insight and analytics, that's not only going to accelerate the digital transformation of businesses, it's also going to grow consumption. And that's good for both the users and the business. >> Now, you bought Looker last year. What would you say is different about the way Google is coming at the data analytics market from the way other cloud vendors are doing it? >> So Looker is a great addition to an already strong portfolio of products that we have, but, you know, a lot of people think about Looker as a business intelligence platform. It's actually much more than that.
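Before going further on Looker, it may help to ground the BigQuery ML point made a moment ago with a minimal sketch: a model is trained and queried with nothing but SQL, here submitted through the BigQuery Python client. The dataset, table and column names are hypothetical placeholders, not an example taken from Google's documentation or from the interview.

```python
# Minimal BQML sketch: train a model and score new rows using only SQL.
# Dataset, table and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Train a logistic regression model directly over a table in BigQuery.
client.query("""
CREATE OR REPLACE MODEL `analytics.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT plan_type, monthly_spend, support_tickets, churned
FROM `analytics.customer_history`
""").result()

# Score current customers with the trained model, still in SQL.
predictions = client.query("""
SELECT customer_id, predicted_churned
FROM ML.PREDICT(MODEL `analytics.churn_model`,
                (SELECT customer_id, plan_type, monthly_spend, support_tickets
                 FROM `analytics.current_customers`))
""").result()

for row in predictions:
    print(row.customer_id, row.predicted_churned)
```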
What is unique about Looker is the governed semantic model that Looker can build on top of data assets, which may be in BigQuery, maybe in Cloud SQL, maybe, you know, in other clouds, for example in Redshift or SQL Data Warehouse. And once you have the data model, you can create a data API and essentially an IDE, or integrated development environment, on top of which you can build your custom workflows, you can build your custom dashboards, you can build your custom data applications. And that is, I think, where we are moving. I don't think people want the old dashboards anymore; they want their data experience to be immersive, within the workflow and within the context in which they are using the data. And that's where I see a lot of customers now using the power of Looker and BigQuery and the other platforms that we have, and building these custom data apps. And, like BigQuery, Looker is also multi-platform: it supports multiple data warehouses and databases, and that aligns very well with our philosophy of having an open platform that is multicloud as well as hybrid. >> Certainly, with Anthos and with BigQuery Omni you've demonstrated your commitment to multicloud, but not all cloud vendors have an interest in being multicloud. Do you see any change in that standoff, and are you really in a position to influence it? >> Absolutely. I think, more than us, it's the customers who are going to influence that, right? Almost every customer I talk to, they don't want to be in a walled garden. They want to be on an open platform where they have the choice, they have the flexibility, and I believe these customers are going to push the adoption of platforms which are open and multicloud. And, you know, I believe that over time the successful platforms have to be open platforms, and closed platforms, if you look at history, have never been very successful, right? I sincerely think that we are on the right path, and we are on the side of customers in this philosophy. >> Final question: what's your most important priority right now? >> You know, I wake up every day thinking about how we can make our customers successful, and the best way to make our customers successful is to make sure that they can get business outcomes out of the data that they have. And that's what we are trying to do. We want to accelerate time to value from data, you know, so that people can keep their data in a governed way and gain insight by using the tools that we provide them; a lot of those tools we have used internally for many years, and they are now available to our customers. We also believe we need to democratize the use of analytics and AI/ML, and that's why we are trying to give customers tools where they don't have to learn a lot of new things and new skills in order to use them. And if we can do that successfully, I think we are going to help our customers get more value out of their data and create businesses which can use that value. I'll give you a couple of quick examples. For example, Home Depot used our platform to improve the predictability of their inventory by 2x. HSBC has been able to use our platform to detect financial fraud 10x faster. And Juan Perez, who's the CIO of UPS, has used our AI/ML and analytics to do better logistics and route planning.
And they have been able to save 10 million gallons of fuel every year, which amounts to 400 million dollars in cost savings. Those are the kinds of business outcomes we would like to drive with the power of our platform. >> Powerful stuff: democratized data, multicloud, data in any cloud, who can argue with that? Debanjan Saha, General Manager and Vice President of Engineering for data analytics at Google Cloud, thanks so much for joining me today. >> Paul, thank you, thank you for inviting me. >> I'm Paul Gillen. This has been a Cube conversation. >> Debanjan: Thank you. (soft music)
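As a footnote to the Looker discussion above, the "data API on top of a governed semantic model" idea can be sketched with the Looker Python SDK. This is an assumption-laden illustration: it presumes the looker-sdk package is installed and credentials are configured in looker.ini or environment variables, and the model, explore and field names are hypothetical placeholders rather than anything referenced in the conversation.

```python
# Minimal sketch of querying a governed semantic model through the Looker API
# instead of hand-writing SQL. Model, explore and field names are hypothetical.
import looker_sdk
from looker_sdk import models40

sdk = looker_sdk.init40()  # authenticates using looker.ini or environment variables

# "Weekly revenue by region", expressed in terms of the semantic model.
query = models40.WriteQuery(
    model="ecommerce",
    view="orders",  # the explore defined in the LookML model
    fields=["orders.created_week", "orders.region", "orders.total_revenue"],
    sorts=["orders.created_week desc"],
    limit="52",
)

rows = sdk.run_inline_query(result_format="json", body=query)
print(rows)  # JSON results, ready to feed a custom data application
```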
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Gillen | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
Debanjan | PERSON | 0.99+ |
Juan Perez | PERSON | 0.99+ |
October 2020 | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
HSBC | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
UPS | ORGANIZATION | 0.99+ |
BigQuery | TITLE | 0.99+ |
Home Depot | ORGANIZATION | 0.99+ |
400 million | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
two ways | QUANTITY | 0.99+ |
Debanjan Saha | PERSON | 0.99+ |
more than 80% | QUANTITY | 0.99+ |
Nevada | LOCATION | 0.99+ |
Python | TITLE | 0.99+ |
today | DATE | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
third | QUANTITY | 0.99+ |
SQL | TITLE | 0.99+ |
BigQuery Omni | TITLE | 0.99+ |
Looker | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.98+ |
Redshift | TITLE | 0.98+ |
BigQuery Omni | TITLE | 0.98+ |
one database | QUANTITY | 0.98+ |
10 million gallons | QUANTITY | 0.98+ |
one set | QUANTITY | 0.98+ |
both | QUANTITY | 0.97+ |
first time | QUANTITY | 0.97+ |
Snowflake | ORGANIZATION | 0.97+ |
One | QUANTITY | 0.97+ |
one | QUANTITY | 0.97+ |
COVID-19 pandemic | EVENT | 0.96+ |
10 X | QUANTITY | 0.96+ |
Both | QUANTITY | 0.95+ |
one example | QUANTITY | 0.95+ |
GCP | TITLE | 0.95+ |
Anthos | ORGANIZATION | 0.93+ |
This summer | DATE | 0.92+ |
this summer | DATE | 0.92+ |
tens of thousands | QUANTITY | 0.91+ |
last seven months | DATE | 0.89+ |
COVID-19 | OTHER | 0.88+ |
CPU | QUANTITY | 0.86+ |
two X. | QUANTITY | 0.86+ |
one size | QUANTITY | 0.86+ |
Spark | TITLE | 0.82+ |
Google cloud | ORGANIZATION | 0.79+ |
Carolyn Guss, PagerDuty | PagerDuty Summit 2020
>> From around the globe, it's theCUBE, with digital coverage of PagerDuty Summit 2020, brought to you by PagerDuty. >> Hey, welcome back everybody, Jeff here with theCUBE in our Palo Alto studios today, and we're talking about an upcoming event, it's one of our favorites, this will be the fourth year that we've been doing it, and it's PagerDuty Summit. And we're excited to have, from the PagerDuty team, Carolyn Guss, the VP of Corporate Marketing at PagerDuty. Carolyn, great to see you. >> Hi, Jeff. Great to see you again. >> Absolutely. So, you know, I was thinking before we turned on the cameras, we've been doing PagerDuty Summit for, I think this will be, like I say, our fourth year. That first year was in the cool, um, cruise ship terminal, Pier 27, which was nice. And then the last two years you've been in the, you know, historic Westin St. Francis in downtown San Francisco, which is a cool old venue, but oh, my goodness, you guys were busting at the seams last year. So this year you had to go virtual. There's a whole bunch of new things that you can do in virtual that you couldn't do in physical space, at least when you're busting out of the seams. So first off, welcome, and talk a little bit about planning for a virtual versus planning for a physical event from, you know, a head of marketing perspective. >> Absolutely. I mean, the first thing that's changed for us is the number of people that can come. It's five X the number of people that were able to join us at the Westin last year, so we expect to have 10,000 people registered and attending PagerDuty Summit. The second thing is the sheer number of sessions that we can put on. Last year I think we had around 25 sessions; this year we have between 40 and 50, and again, that's because we're not constrained by space and physical meeting rooms, so it's been a really exciting process for us. We've built a fantastic agenda, and it's very much personalized. You know, developers come to our event, they love our event, for the opportunity to learn, mix with their peers, get best practices and hands-on experience. So we have many more of those types of sessions than we have done previously, and that's things like labs and Birds of a Feather sessions and AMAs. But we've also built a whole new track of content this year for executives. PagerDuty has, um, many of the Fortune 500 and Fortune 100 as customers, and we work very closely with CEOs and CTOs, so we have built sessions that are designed specifically for that audience, and I think for us it's really opened up the potential of this event, made it so much broader and more appealing than we were able to do when we were, as you say, you know, somewhat confined by the location in downtown San Francisco. >> I think it's such an interesting point, um, because before you were constrained, right? If you have X number of rooms over a couple of days, you know, you've got to make hard decisions on breakouts, and what can go in and what can't go in, and, you know, will there be enough demand for this session versus another session? Or, from the perspective of an attendee, you know, do they have to make hard tradeoffs? I could only attend one session at one o'clock on Tuesday, and I've got to make hard decisions. But this, you said, really opens up the opportunities. I think you said you doubled your sessions, and you got five X the number of registrations.
So I think, you know, way too many people think about what doesn't happen in digital versus talking about the things that you can do that are impossible in physical. >> Yeah, I think at the very beginning... well, first of all, we held our EMEA summit event in London in July, so that was great, because we got to go through this experience once already. And what we learned was the real removal of hurdles in this process. So, to your point about missing a session because you're attending another session, we were calling this sort of the Peloton version of events, where you have live sessions, and it's great to be there live and participate in the live Q&A, but equally you have an entire on-demand library. So if you weren't able to go because there was something else at the same time, it's available on demand for you. We are actually repeating live sessions on two consecutive days: on the Monday we run everything, and on the Tuesday our speakers show up again for live Q&A at the end of their sessions. But after that, it's available forever in an on-demand library. So for us it was really removing hurdles, in terms of the amount of content, the scheduling of the content, and also the number of people that can attend, no geographical boundaries anymore. It used to be that a customer of ours would think, well, I'll send one or two people to PagerDuty Summit, they can learn all the great innovation from PagerDuty, and they'll bring it back to the team. That's completely changed. You know, we have teams of 20 signing up, and all of them are able to get that experience firsthand. >> That's really interesting. I didn't even think about, you know, kind of whole teams being able to attend now instead of just certain individuals, because of budget constraints, or you can't send your whole team, you know, away for a conference in a particular area. But the piece to that, that we hear over and over, is that the net new registrants go up so dramatically in terms of the names and who those individuals are, because a lot of people just couldn't attend for various reasons, whether it's cost, whether it's geography, whether it's they just can't take time off from leaving their primary job. So it's a really interesting opportunity to open up, um, the participation to such a much bigger audience, like you said, a five X increase in the registration, that's a pretty good number. >> That's right. Yeah, I mean, the cost boundary has gone away, this event is free, and what that's actually meant is, as I say, you know, larger teams from the same company are attending. In addition, we have a number of attendees who are not actually PagerDuty customers right now, too. Previously this was very much a community event for, you know, our PagerDuty users, and now we actually have a large number of, I'd say, interested future customers that will be coming to the event. So that's really important for us, and also, I think, for our sponsor partners as well, because it's broadening out the audience for both of us. >> So let's talk about sponsors for a minute, because, um, one of the big things in virtual events that people are talking about quite often is, okay, I can do the keynotes, and I can do the sessions, and now I have all these breakout sessions for, um, you know, training and certification and customer stories, et cetera. But when it comes to sponsors, right, sponsors used to, you know, go to events to set up a booth and hand out swag and wand badges, right?
And it really was feeding kind of a top-of-funnel motion. That was really important. Well, now those physical events have gone away. So from the sponsor perspective, you know, what can they expect? What's the sponsor experience at PagerDuty Summit, since they don't have a little tiny booth at the Westin St. Francis giving out swag this year? >> Yeah, so one important thing is the agenda, and how we're involving our sponsors in our agenda this time. Something that we learned is we used to have very long keynotes; you know, a keynote could be an hour long and involve multiple components, and people would stay in that room for an hour and really stay and watch sessions all day. So we learned in the virtual format that we need to be shorter and more precise in our sessions, and that opened up the opportunity to bring in more of our partners, our sponsorship partners. So Zendesk, Salesforce and Microsoft are some examples. They actually get to have their piece of both our keynote sessions and our technical product sessions, and really explain both the partnership with PagerDuty and also their core technology and the value that they provide customers. So I think the presence of sponsors in the content is much higher than it was before, and we are still repeating the Expo format, so we actually do have an Expo Hall: any time there's a break between sessions, you can go over to the Expo Hall, and it actually runs throughout as well, and you can go in and you can talk to the teams, you can see product demos. So it's very much a virtual version of the Expo Hall, where you went and you wandered around and you picked up a bit of swag. >> So you mentioned keynotes, and Jennifer and the team have always had fantastic keynotes. I mean, I just saw Jennifer being interviewed, with Frank Slootman and Eric Yuan from Zoom, by Curry, which was pretty amazing, I felt kind of jealous that I didn't get to do that. But, um, tell us a little bit about some of the speakers. I know there'll be some, you know, kind of big rally-moment speakers as well as some that are more down in the technical track or another track. Give us some highlights on some of the people who will be sharing the stage with Jennifer. >> Absolutely. As I said, I think what's really unique about PagerDuty Summit is that we design types of content for different types of attendees. So if you're a developer or a practitioner, we have something like the session from Liz Fong-Jones of Honeycomb, who's talking about who builds the tools that we all rely on today, and how do they collaborate to build them together in this virtual world. Or we have J. Paul Reed from Netflix talking about how to handle the stress of being involved in incidents. So those are really sessions for our core audience of developers who are part of our community, and PagerDuty really helps them day to day with that job. And then we have the more aspirational, senior-level speakers who you can really learn from as a leader. So Bret Taylor, President and COO of Salesforce, will be joining us on the main stage, and he'll be talking about innovation and trust in today's world. Then we have Derrick Johnson, he is president of the NAACP, and he'll be talking about community engagement, and particularly voter engagement, which is such an important topic for us right now. And then we have leaders from within our customers who are really talking about the way they use PagerDuty to drive change in their organization. So an example would be Paul Cheesbrough.
He runs digital for Fox, and he's going to be talking about digital acceleration, how a large organization like Fox can really accelerate for this digital-first world that we find ourselves living in right now. >> Right? Well, you guys have such a developer focus, because PagerDuty, the product, the solution, has to integrate with so many other, um, infrastructure, you know, monitoring, and all of those different systems, because you guys are basically at the front line, you know, sending them the signals that go into those systems. So you have such a broad, you know, kind of ecosystem of technology partners. I don't know if people are familiar with all the integrations that you guys have built over the years, which is such a key piece of your go-to-market. >> That's right. I mean, we like to say we're at the center of the digital ecosystem. We have 370 integrations, and that's important because we want anyone to be able to use PagerDuty no matter what is in their technology stack. Technology stacks today are more complex than they've ever been before, particularly with businesses having to shift to this digital-first model since we all began shelter in place. You know, we are all living, working and learning through digital, and so the technology stacks that power that are more complicated than ever before. So by having 370 integrations, we really know that we can serve pretty much any set of services that your business is using. >> Yeah, we've all seen all the memes, right, about who's pushing your digital transformation, you know, the CEO, the CTO, or COVID, and we all know the answer to what's accelerated that whole process. So, okay, before I let you go, I don't even think we've mentioned the date. It's coming up Monday, September 21st through Thursday, September 24th, not at the Westin, online. And again, what are you hoping are kind of the key takeaways for the attendees after they come to the summit? >> Yeah, a couple of things. I mean, first of all, I think it will be a sense of belonging. The attendees, the users of PagerDuty, they are really the teams that are at the forefront of keeping our digital services working, and what that means is responding to incidents. We've actually seen a 38% increase in the volume of incidents on our platform since COVID and shelter in place began. >> Wait, a 38% increase in incidents since mid-March? >> That's correct, since the beginning of COVID, and bear in mind that in the six months prior, incident volume was pretty flat, there wasn't incident growth. But what we've also seen is a 20% improvement in the time that it takes to resolve an incident, from five minutes down to four minutes. So what that really means is that the PagerDuty community is working really hard, they're improving their practices, and hopefully our platform is a key part of how. But these are people under pressure, so I hope that people can come and they can experience a sense of belonging, they can learn from each other about their experiences, how do you manage the stress of that situation, and what are some of the great innovations that make your job easier in the year ahead. The second thing that we're doing for that community is that we are offering certification through PagerDuty University for free this year. It's a course with a value of $7,500. Last year you would attend PagerDuty Summit, you would sit through your sessions, and you would learn and you would get certified.
So this year it's offered for free. You take the course during Summit, but you can also carry on, if you miss anything, for 30 days after. So we're really feeling that, you know, we're giving back there, offering a great program for certification and improved skills completely free, to help our community in this time of pressure. >> Right, right. Well, it is a very passionate community, and, you know, we go to so many events, and you can really tell, it's palpable, you know, kind of where the tight communities are, and where people are excited to see each other and where they help each other, not necessarily only at the event, but, you know, throughout the year. And I think, you know, a huge shout-out to Jennifer and the culture that she's built there, because it is very warm, it's very inclusive, it's very positive, and that energy, you know, kind of goes throughout the whole company. And I always tease her, you know, this is something that's built around a device that most of the kids today don't even know what a pager is, and just the whole concept of carrying a pager and being on call, right, and being responsible. It's a very different way to kind of look at the world when you're the one that has that thing on your hip and it's buzzing, and someone's expecting a return call and you've got to fix something. So, you know, a huge shout-out to keeping a positive, smiling, nice, big culture in a job where you're basically fixing broken things most of the time. >> Yeah, absolutely. I mean, there's a joke that we make, you know, these things only break on Friday night or your wedding anniversary or Thanksgiving. But one of the announcements we're most excited about this year is the level of automation and artificial intelligence that we're building into our platform, which is really going to reduce the number of interruptions that developers get when they are on call. >> Yeah, I look forward to more conversations, because we're going to be doing a bunch of Cube interviews like normal, and, uh, you know, applied artificial intelligence, I think, is where all the excitement is. It's not a generic thing, it's where you apply it in a specific application to get great business outcomes. So I look forward to that conversation, and hopefully we'll be able to talk again, and good luck to you and the team in the last few weeks of preparation. >> Thanks so much, Jeff, I've enjoyed talking to you. Thanks for having me. >> Alright, you too, and we'll see you later. Alright, she is Carolyn, I'm Jeff, you're watching theCUBE. Thanks for watching, we'll see you next time.
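To ground the earlier point about PagerDuty sitting at the center of the digital ecosystem and receiving signals from monitoring systems, here is a minimal sketch of how an integration typically pushes an event into PagerDuty through the public Events API v2. The routing key and payload values are hypothetical placeholders; a real integration would supply the integration key from its own PagerDuty service.

```python
# Minimal sketch: a monitoring tool raising an alert in PagerDuty via the
# public Events API v2. Routing key and payload values are hypothetical.
import requests

EVENTS_API = "https://events.pagerduty.com/v2/enqueue"

def trigger_incident(routing_key: str, summary: str, source: str) -> str:
    """Trigger a PagerDuty alert and return its dedup key for later acknowledge/resolve."""
    event = {
        "routing_key": routing_key,   # integration key from a PagerDuty service
        "event_action": "trigger",    # "trigger", "acknowledge" or "resolve"
        "payload": {
            "summary": summary,       # human-readable description of the problem
            "source": source,         # the system that observed the problem
            "severity": "critical",   # one of: critical, error, warning, info
        },
    }
    response = requests.post(EVENTS_API, json=event, timeout=10)
    response.raise_for_status()
    return response.json()["dedup_key"]

# Example call from a hypothetical checkout-latency monitor:
# trigger_incident("YOUR_INTEGRATION_KEY", "Checkout latency above SLO", "checkout-monitor-01")
```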
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff | PERSON | 0.99+ |
Jennifer | PERSON | 0.99+ |
Derrick Johnson | PERSON | 0.99+ |
Caroline | PERSON | 0.99+ |
Fox | ORGANIZATION | 0.99+ |
J. Paul Reed | PERSON | 0.99+ |
five minutes | QUANTITY | 0.99+ |
London | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Jeffrey | PERSON | 0.99+ |
20% | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
Bret Taylor | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Caroline Gus | PERSON | 0.99+ |
last year | DATE | 0.99+ |
four minutes | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Monday | DATE | 0.99+ |
Carolyn Guss | PERSON | 0.99+ |
This year | DATE | 0.99+ |
mid March | DATE | 0.99+ |
Tuesday | DATE | 0.99+ |
10,000 people | QUANTITY | 0.99+ |
fourth year | QUANTITY | 0.99+ |
4100 customers | QUANTITY | 0.99+ |
Eric Juan | PERSON | 0.99+ |
370 integrations | QUANTITY | 0.99+ |
30 days | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Brady | PERSON | 0.99+ |
tens | QUANTITY | 0.99+ |
July | DATE | 0.99+ |
30 | QUANTITY | 0.99+ |
Friday night | DATE | 0.99+ |
one session | QUANTITY | 0.99+ |
38% | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
two people | QUANTITY | 0.99+ |
zendesk | ORGANIZATION | 0.98+ |
Salesforce | ORGANIZATION | 0.98+ |
$7500 | QUANTITY | 0.98+ |
50 | QUANTITY | 0.98+ |
Netflix | ORGANIZATION | 0.98+ |
this year | DATE | 0.98+ |
Three attendees | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Summit 2020 | EVENT | 0.98+ |
first year | QUANTITY | 0.98+ |
first model | QUANTITY | 0.98+ |
Thursday, September 24th | DATE | 0.97+ |
second thing | QUANTITY | 0.97+ |
40 | QUANTITY | 0.95+ |
27 | OTHER | 0.95+ |
Cube | ORGANIZATION | 0.95+ |
first | QUANTITY | 0.95+ |
P. D. | LOCATION | 0.94+ |
one important thing | QUANTITY | 0.94+ |
N A A. C P | ORGANIZATION | 0.94+ |
PagerDuty | ORGANIZATION | 0.94+ |
Westin ST | ORGANIZATION | 0.93+ |
around 25 sessions | QUANTITY | 0.93+ |
six months prior | DATE | 0.92+ |
two consecutive day | QUANTITY | 0.92+ |
first thing | QUANTITY | 0.91+ |
Thanksgiving | EVENT | 0.91+ |
Thio | PERSON | 0.91+ |
last two years | DATE | 0.9+ |
Amir summit | EVENT | 0.89+ |
Zoom By | TITLE | 0.89+ |
First | QUANTITY | 0.88+ |
one oclock | DATE | 0.88+ |