Juan Loaiza, Oracle | Building the Mission Critical Supercloud


 

(upbeat music) >> Welcome back to Supercloud 2, where we're gathering a number of industry luminaries to discuss the future of cloud services. We'll be focusing on various real-world practitioners today, their challenges, their opportunities, with an emphasis on data, self-service infrastructure, and how organizations are evolving their data and cloud strategies to prepare for that next era of digital innovation. We really believe that support for multiple cloud estates is a first step of any Supercloud, and in that regard Oracle surprised some folks with its Azure collaboration for the Oracle Database and Exadata database services. To discuss the challenges of developing a mission critical Supercloud, we welcome Juan Loaiza, executive vice president of Mission Critical Database Technologies at Oracle. Juan, you're a many-time CUBE alum, so welcome back to the show. Great to see you. >> Great to see you, and happy to be here with you. >> Yeah, thank you. So a lot of people felt that Oracle was resistant to multicloud strategies and preferred to really have everything run just on Oracle Cloud Infrastructure, OCI. Maybe that was a misperception, maybe you guys were misunderstood, or maybe you had a change of heart. Take us through the decision to support multiple cloud platforms. >> No, we've supported multiple cloud platforms for many years, so I think that was probably a misperception. With the Oracle database, we partnered up with Amazon very early on in their cloud, when they had kind of the first cloud out there, and we had the Oracle database running on their cloud. We have backup, we have a lot of stuff running there. So part of the philosophy of Oracle has always been that we partner with every platform. We're very open; we started with SQL and APIs, and as we develop new technologies, we push them into the SQL standard. So that's always been part of the ecosystem at Oracle. That's how we think we get an advantage, by being more open.
I think if we try to create this isolated little world, it actually hurts us and hurts customers. So for us it's a win-win to be open across the clouds. >> So Supercloud is this concept that we put forth to describe a platform, or some people think it's an architecture, and if you have an opinion I'd love to hear it, but it provides a programmatically consistent set of services hosted on heterogeneous cloud providers. And so we look at the Oracle database service for Azure as fitting within this definition. In your view, is this accurate? >> Yeah, I would broaden it a little bit more than that. We just think that services should be available from everywhere, right? It's a little bit like if you go back to the pre-internet world: there were things like AOL and CompuServe, and those were kind of islands. If you were on AOL, you really didn't have access to anything on CompuServe, and vice versa. The cloud world has evolved a little bit like that, and we just think that's the wrong model. These clouds are part of the world, and they need to be interconnected like all the rest of the world. It's been that way for a long time with telephones, the internet, everything; everything's interconnected, and everything should work seamlessly together. So that's how we believe it should be: if you're running, let's say, an application in one cloud and you want to use a service from another cloud, it should be completely simple to do that. It shouldn't be "I can only use what's in AOL or CompuServe or whatever else." It should not be isolated. >> Well, we've got a long way to go before that Nirvana exists, but one example is the Oracle database service with Azure. So what exactly does that service provide? I'm interested in how consistent the service experience is across clouds. Did you create a purpose-built PaaS layer to achieve this common experience? Or is it off-the-shelf Terraform? Is there unique value in the PaaS layer?
Let's dig into some of those questions. I know I just threw six at you. >> Yeah, I mean, what we're trying to do is very simple. For example, starting with the Oracle database, we want to make that seamless to use from anywhere you're running, whether it's on-prem, on some other cloud, anywhere else. You should be able to seamlessly use the Oracle database, and it should look like the internet: there's no friction, there's not a lot of hoops you've got to jump through just because you're trying to use a database that isn't local to you. So it's pretty straightforward. And in terms of things like Azure, it's not easy to do, because all these clouds have a lot of very unique technologies. So what we've done at Oracle is we've said, "Okay, we're going to make the Oracle database look exactly like it would if it was running on Azure." That means we'll use the Azure security systems, the identity management systems, the networking, things like monitoring and management. We'll push all these technologies together. For example, when we have a monitoring event or we have alerts, we'll push those into the Azure console. So as a user, it looks to you exactly as if that Oracle database was running inside Azure. Also, the networking is a big challenge across these clouds, so we've basically made that whole thing seamless. We create a super high bandwidth network between Azure and Oracle, and we make sure that's extremely low latency, under two milliseconds round trip, all within the local metro region. So it's very fast, very high bandwidth, very low latency, and we take care of establishing the links and making sure that it's secure and all that kind of stuff. So at a high level, it looks to you like the database is local, even down to the look and feel of the screens.
It's the Azure colors, it's the Azure buttons, it's the Azure layout of the screens, so it looks like you're running there, and we take care of all the technical details underlying that, which has taken a lot of work to make it work seamlessly. >> And the magic of that abstraction, Juan, does it happen at the PaaS layer? Could you take us inside that a little bit? Is there intelligence in there that helps you deal with latency, or are there any kind of purpose-built functions for this service? >> It happens at a lot of different layers: at the identity management layer, at the networking layer, at the database layer, at the monitoring layer, at the management layer. All those things have been integrated, so it's not one thing that you just go and do; you have to integrate all these different services together. You can access files in Azure from the Oracle database, and again, that's completely seamless. It's just like if it was local to our cloud; you get your Azure files in your kind of S3 equivalent. So it's not one thing; there's a whole lot of pieces to the ecosystem, and what we've done is we've worked on each piece separately to make sure that it's completely seamless and transparent, so you don't have to think about it, it just works. >> So you kind of answered my next question, which is about the technical hurdles. It sounds like the technical hurdle is that integration across the entire stack. That's the sort of architecture that you've built. What was the catalyst for this service? >> Yeah, the catalyst is just fulfilling our vision of an open cloud world. Like I said, Oracle from the very beginning has believed in open standards. Customers should be able to have choice; customers should be able to use whatever they want from wherever they want.
And we saw that in the new world of cloud, that had broken down. Everybody had their own authentication system, management system, monitoring system, networking system, configuration system, and it became very difficult. There was a lot of friction to using services across clouds. So we said, "Well, okay, we can fix that." It's work, it's a significant amount of work, but we know how to do it, so let's just go do it and make it easy for customers. >> So given that your main focus at Oracle is really on mission critical workloads, and you talked about this low latency network, you still have physical distances, so how are you managing that latency? What's the experience been for customers across Azure and OCI? >> Yeah, it's a good point. Latency can be an issue. The good thing about clouds is we have a lot of cloud data centers. We have dozens and dozens of cloud data centers around the world, and Azure has dozens and dozens of cloud data centers. And in most cases, they're in the same metro region, because there are kind of natural metro regions within each country where you want to put your cloud data centers. So most of our data centers are actually very close to the Azure data centers. There's northern Virginia, there's London, there's Tokyo, Seoul, et cetera; there are natural places where everybody puts their data centers. And that's the real key, because that allows us to put in a very high bandwidth and low latency network. The real problems with latency come when you're trying to go across a long physical distance. If you're trying to connect across the Pacific, or across the country, or something like that, then you can get in trouble with latency. Within the same metro region, it's extremely fast. It tends to be around one, at the highest two milliseconds round trip, through all the routers and connections and gateways and everything else.
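As an aside, the sub-two-millisecond round-trip figure is easy to sanity check from a client. A minimal sketch, using a local echo server as a stand-in for the remote database endpoint; the 2 ms threshold is the only number taken from the conversation, everything else is illustrative:

```python
import socket
import threading
import time

def measure_rtt_ms(host: str, port: int, payload: bytes = b"ping") -> float:
    """Measure one TCP round trip to an echo endpoint, in milliseconds."""
    with socket.create_connection((host, port), timeout=5) as sock:
        start = time.perf_counter()
        sock.sendall(payload)
        sock.recv(len(payload))
        return (time.perf_counter() - start) * 1000.0

def within_metro_sla(rtt_ms: float, sla_ms: float = 2.0) -> bool:
    """Check a measured round trip against the sub-2 ms metro guarantee."""
    return rtt_ms < sla_ms

def _echo_once(server_sock: socket.socket) -> None:
    # Accept one connection and echo its payload back, then exit.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(64))

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=_echo_once, args=(srv,), daemon=True).start()
    rtt = measure_rtt_ms("127.0.0.1", port)
    print(f"round trip: {rtt:.3f} ms, within SLA: {within_metro_sla(rtt)}")
```

In a real deployment the endpoint would be the database listener in the paired cloud region rather than a loopback socket.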
With everything taken into consideration, what we guarantee is that it's always less than two milliseconds, which is a very low latency time. So that tends to not be a problem. >> I was going to ask you about that less than two milliseconds. So, earlier in the program we had Jack Greenfield, who runs architecture for Walmart, and he was explaining what we call their Supercloud, and it runs across Azure, GCP, and their on-prem. They have this thing called the triplet model. So my question to you is, in the situations where you're guaranteeing that less than two milliseconds, do you have situations where you're bringing, you know, Exadata Cloud@Customer on-prem to achieve that? Or is this just across clouds? >> Yeah, in this case, we're talking public cloud data center to public cloud data center. >> Oh, okay. >> So, Azure public cloud data center to Oracle public cloud data center. They're in the same metro region. We set up the connections, we do all the technology to make it seamless, and from a customer point of view, they don't really see the network. Also, remember that SQL is actually designed to have very low bandwidth and latency requirements. It is a language. You don't go to the database and say, "Do this one little thing for me." You send it a SQL statement that can actually access lots of data within the database. So the real latency requirement of a SQL database is within the database: I need to access all that data fast, so I need very fast access to storage and very fast access across nodes. That's what Exadata gives you. But you send one request, and that request can do a huge amount of work and then return one answer. That's kind of the design point of SQL. So SQL inherently has low bandwidth requirements; it was used back in the eighties, when we used to have 10 megabit networks, and the biggest companies in the world ran on it back then. Right now we're talking hundreds of gigabits.
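The "one statement in, one answer out" design point Juan describes holds for any SQL engine; a tiny illustration with SQLite, where a single aggregate query scans every row inside the database and ships back only two result rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 10.0), ("east", 20.0), ("west", 5.0), ("west", 15.0)],
)

# One SQL statement does all the scanning and summing inside the database;
# the "network" carries only the question and the tiny answer, not the data.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 30.0), ('west', 20.0)]
```

The same shape holds whether the table has four rows or four billion: the result set, not the table, crosses the wire.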
So it's really not much of a challenge. When you're designed to run on 10 megabit and someone says, "Okay, I'm going to give you 10,000 times what you were designed for," it's a pretty low hurdle to jump. >> What about the deployment models? How do you handle this? Is it a single global instance across clouds, or do you sort of instantiate in each, so you've got Exadata in Azure and Exadata in OCI? What does the deployment model look like? >> It's pretty straightforward. The customer decides where they want to run their application and database, and there are natural places where people go. If you're in Tokyo, you're going to choose the local Tokyo data centers for both Microsoft and Oracle. If you're in London, you're going to do that. If you're in California, you're going to choose maybe San Jose, something like that. So a customer just chooses; we both have data centers in that metro region. They create their service on Azure, and then they go to our console, which looks just like an Azure console, and say, "All right, create me a database." And then we choose the closest Oracle data center, which is generally a few miles away, and it all gets created. So from a customer point of view, it's very straightforward. >> I'm always in awe about how simple you make things sound. All right, what about security? You talked a little bit before about identity access, how you're sort of abstracting the Azure capabilities away so that you've simplified it for your customers. But are there any other specific security things that you need to do? How much did you have to abstract the underlying primitives of Azure or OCI to present that common experience to customers? >> Yeah, there are really two big things. One is identity management: my name is X on Azure, and I have this set of privileges. Oracle has its own identity management system, right? What we didn't want is for you to have to kind of bridge these things yourself. It's a giant pain to do that.
So we do what we call federating across these identity management systems. You put your credentials into Azure, and then you automatically get to use the exact same credentials and identity in the Oracle cloud. So again, you don't have to think about it, it just works. And the second part is bridging the network. Within a cloud, you generally have a virtual network that's private to your company. At Oracle, we bridge the private network that you created in, for example, Azure to the private network that we create for you in Oracle. So it is still a private network, without you having to do a whole bunch of work. It's just like if you were in your own data center: other people can't get into your network. So it's secured at the network level, and it's secured at the identity management and encryption level. And again, we did a lot of work to make that seamless for customers, so they don't have to worry about it, because we did the work. That's really as simple as it gets. >> That's what Supercloud's supposed to be all about. All right, we were talking earlier about the misperception around multicloud and your view of open, which is that you run the Oracle database wherever the customer wants to run it. So you've got this database service across OCI and Azure; customers today run Oracle database in AWS; you've got MySQL HeatWave that you announced on AWS; Google touts a bare metal offering where you can run Oracle on GCP. Do you see a day when you extend an OCI and Azure like situation across multiple clouds? Would that bring benefits to customers, or will the world of database generally remain largely fenced, with maybe a few exceptions like what you're doing with OCI and Azure? I'm particularly interested in your thoughts on egress fees as maybe one of the reasons there's a barrier to this happening, and why these stovepipes exist today and in the future. What are your thoughts on that?
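The private-network bridging Juan describes has a standard precondition, whatever the clouds involved: the two virtual networks' address ranges must not overlap, or traffic can't be routed unambiguously between them. A minimal sketch of that check; the CIDR blocks are illustrative, not real deployments:

```python
import ipaddress

def can_bridge(cidr_a: str, cidr_b: str) -> bool:
    """Two private networks can only be peered if their ranges don't overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Disjoint ranges: bridging is routable.
print(can_bridge("10.0.0.0/16", "10.1.0.0/16"))  # True
# Nested ranges: addresses would be ambiguous across the bridge.
print(can_bridge("10.0.0.0/16", "10.0.1.0/24"))  # False
```

Part of "we did the work" is presumably validating and automating details like this so the customer never sees them.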
>> Yeah, we're very open to working with everyone else out there. Like I said, we've always been big believers that customers should have choice and should be able to run wherever they want. That's been kind of a founding principle of Oracle. We did the partnership with Azure, we're open to doing other partnerships, and you're going to see other things coming down the pipe. On the topic of egress, yeah, the large egress fees, it's pretty obvious what goes on with that. Various vendors like to have large egress fees because they want to keep things kind of locked into their cloud. It's not a very customer friendly thing to do, and I think everybody recognizes that it's really trying to coerce, or put a lot of friction on, moving data out of a particular cloud. That's not what we do. We have very, very low egress fees, so we don't really do that, and we don't think anybody else should do it. But I think customers, at the end of the day, will win that battle. They're going to go back to their vendor and say, "Well, I have choice in clouds, and if you're going to impose these limits on me, maybe I'll make a different choice." That's ultimately how these things get resolved. >> So do you think other cloud providers are going to take a page out of what you're doing with Azure and provide similar solutions? >> Yeah, well, I've talked to a lot of customers, and this is what they want, right? There's really no doubt: no customer wants to be locked into a single ecosystem. There's nobody out there who wants that. And as for the competition, when they start seeing an open ecosystem evolving, customers are going to say, "Okay, I'd rather go there than the closed ecosystem," and that's going to put pressure on the closed ecosystems. That's the nature of competition. That's what ultimately will tip the balance on these things.
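As a back-of-envelope illustration of how egress fees put friction on moving data between clouds; the per-gigabyte rates below are hypothetical round numbers, not any vendor's published pricing:

```python
def egress_cost_usd(gigabytes: float, rate_per_gb: float) -> float:
    """Cost to move a dataset out of a cloud at a flat per-GB egress rate."""
    return gigabytes * rate_per_gb

# Moving a 50 TB database out, at a hypothetical $0.09/GB versus $0.01/GB.
dataset_gb = 50_000
high_fee = egress_cost_usd(dataset_gb, 0.09)  # 4500.0
low_fee = egress_cost_usd(dataset_gb, 0.01)   # 500.0
print(f"high-fee cloud: ${high_fee:,.0f}, low-fee cloud: ${low_fee:,.0f}")
```

At realistic data sizes the gap between rates, not the rates themselves, is what turns a migration decision into a lock-in decision.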
>> So Juan, even though you have this capability of distributing a workload across multiple clouds, as in our Supercloud premise, it's still something that's relatively new. It's a big decision that many people might consider somewhat of a risk. So I'm curious, who's driving the decisions for your initial customers? What do they want to get out of it? What's the decision point there? >> Yeah, this is generally driven by customers that want a specific technology in a cloud. As for the risk, I haven't seen a lot of people worry too much about it. Everybody involved in this is a very well known, very reputable firm. Oracle's been around for 40 years; we run most of the world's largest companies. I think customers understand we're not going to build a solution that's going to put their technology and their business at risk, and the same goes for Azure and others. So I don't see customers too worried that this is a risky move, because it's really not. And everybody understands networking; at the end of the day, networking works. I mean, how does the internet work? It's a known quantity. It's not like it's some brand new invention. What we're really doing is breaking down the barriers to interconnecting things, automating 'em, making 'em easy. So there's not a whole lot of risk here for customers. And like I said, every single customer in the world loves an open ecosystem. It's just not a question. If you ask a customer, "Would you rather run your technology or your business on a closed ecosystem or an open ecosystem?", it's kind of not even worth asking the question. It's a no-brainer. >> All right, so we've got to go. My last question: what do you think of the term "Supercloud"? Do you think it'll stick? >> We'll see. There are a lot of terms out there, and it's always fun to see which terms stick. It's a cool term. I like it, but the public are actually the decision makers on what sticks and what doesn't. It's very hard to predict.
>> Yeah, well, it's been a lot of fun having you on, Juan. Really appreciate your time, and always good to see you. >> All right, Dave, thanks a lot. It's always fun to talk to you. >> You bet. All right, keep it right there. More Supercloud 2 content from theCUBE community. This is Dave Vellante for John Furrier. We'll be right back. (upbeat music)

Published Date : Jan 12 2023



Oracle Announces MySQL HeatWave on AWS


 

>> Oracle continues to enhance MySQL HeatWave at a very rapid pace. The company is now in its fourth major release since the original announcement in December 2020. One of the main criticisms of MySQL HeatWave is that it only runs on OCI, Oracle Cloud Infrastructure, and acts as a lock-in to Oracle's cloud. Oracle recently announced that HeatWave is now going to be available in the AWS cloud, and it announced its intent to bring MySQL HeatWave to Azure. So MySQL HeatWave on AWS is a significant TAM expansion move for Oracle, because of the momentum the AWS cloud continues to show. And evidently the HeatWave engineering team has taken the development effort from OCI and is bringing that to AWS with a number of enhancements that we're going to dig into today. Nipun Agarwal, senior vice president of MySQL HeatWave at Oracle, is back with me on a CUBE conversation to discuss the latest HeatWave news, and we're eager to hear any benchmarks relative to AWS or any others. Nipun has been leading the HeatWave engineering team for over 10 years and has over 185 patents in database technology. Welcome back to the show, and good to see you. >> Thank you. Very happy to be back. >> Now for those who might not have kept up with the news, to kick things off, give us an overview of MySQL HeatWave and its evolution so far. >> So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. Traditionally, MySQL has been designed and optimized for transaction processing. So when customers of MySQL had to run analytics, or when they had to run machine learning, they would extract the data out of MySQL into some other database for doing OLAP processing or machine learning processing. MySQL HeatWave provides all these capabilities built into a single database service, which is MySQL HeatWave. So customers of MySQL don't need to move the data out of the database.
With the same database, they can run transaction processing, analytics, mixed workloads, and machine learning, all with very good performance and very good price performance. Furthermore, one of the design points of HeatWave is a scale-out architecture, so the system continues to scale and perform very well even when customers have very large data sizes. >> So we've seen some interesting moves by Oracle lately. The collaboration with Azure, we've covered that pretty extensively. What was the impetus here for bringing MySQL HeatWave onto the AWS cloud? What were the drivers that you considered? >> So one of the observations is that a very large percentage of users of MySQL HeatWave are AWS users who are migrating off Aurora. So already we see that a good percentage of MySQL HeatWave customers are migrating from AWS. However, there are some AWS customers who are still not able to migrate to OCI and MySQL HeatWave. And the reason is because of exorbitant costs: in order to migrate the workload from AWS to OCI, the egress charges are very high, which becomes prohibitive for the customer. Or, the second example we have seen, the latency of accessing a database which is outside of AWS is very high. So there's a class of customers who would like to get the benefits of MySQL HeatWave but were unable to do so, and with this support for MySQL HeatWave inside of AWS, these customers can now get all the benefits of MySQL HeatWave without having to pay the high egress fees and without having to suffer the poor latency of a cross-cloud architecture. >> Okay, so you're basically meeting the customers where they are. So was this a straightforward lift and shift from Oracle Cloud Infrastructure to AWS? >> No, it is not, because one of the design goals we have with MySQL HeatWave is that we want to provide our customers with the best price performance regardless of the cloud.
So when we decided to offer MySQL HeatWave on AWS, we optimized MySQL HeatWave for it as well. One of the things to point out is that this is a service where the data plane, control plane, and console are natively running on AWS. And the benefit of doing so is that now we can optimize MySQL HeatWave for the AWS architecture. In addition to that, we have also announced a bunch of new capabilities as a part of the service, which will also be available to MySQL HeatWave customers on OCI, but we just announced them and we're offering them as a part of the MySQL HeatWave offering on AWS. >> So I just want to make sure I understand: it's not like you just wrapped your stack in a container and stuck it into AWS to be hosted. You're saying you're actually taking advantage of the capabilities of the AWS cloud natively? And I think you've made some other enhancements as well that you're alluding to. Can you maybe elucidate on those? >> Sure.
And finally, we have extended my secret autopilot for a number of old gpus cases. In the past, my secret autopilot had a lot of capabilities for Benedict, and now we have augmented my secret autopilot to offer capabilities for elderly people. Includes as well. >>But there was something in your press release called Auto thread. Pooling says it provides higher and sustained throughput. High concerns concerns concurrency by determining Apple number of transactions, which should be executed. Uh, what is that all about? The auto thread pool? It seems pretty interesting. How does it affect performance? Can you help us understand that? >>Yes, and this is one of the capabilities of alluding to which we have added in my secret autopilot for transaction processing. So here is the basic idea. If you have a system where there's a large number of old EP transactions coming into it at a high degrees of concurrency in many of the existing systems of my sequel based systems, it can lead to a state where there are few transactions executing, but a bunch of them can get blocked with or a pilot tried pulling. What we basically do is we do workload aware admission control and what this does is it figures out, what's the right scheduling or all of these algorithms, so that either the transactions are executing or as soon as something frees up, they can start executing, so there's no transaction which is blocked. The advantage to the customer of this capability is twofold. A get significantly better throughput compared to service like Aurora at high levels of concurrency. So at high concurrency, for instance, uh, my secret because of this capability Uh oh, thread pulling offers up to 10 times higher compared to Aurora, that's one first benefit better throughput. 
The second advantage is that the throughput of the system never drops, even at high levels of concurrency. In the case of Aurora, the throughput goes up, but then at high concurrency, let's say starting at a level of 500 or something, depending upon the underlying shape they're using, the throughput just drops, whereas with MySQL HeatWave the throughput never drops. Now, the ramification for the customer is that if the throughput is not going to drop, the user can start off with a small shape, get the performance, and be assured that even if the workload increases, they will never get performance which is worse than what they were getting at lower levels of concurrency. So this leads to customers provisioning a shape which is just right for them, and if they need to, they can go with a larger shape. But they don't, you know, overpay. So those are the two benefits: better performance, and sustained performance regardless of the level of concurrency. >> So how do we quantify that? I know you've got some benchmarks. How can you share comparisons with other cloud databases? I'm especially interested in Amazon's own databases, which are obviously very popular. And are you publishing those again on GitHub, as you have done in the past? Take us through the benchmarks. >> Sure. So benchmarks are important, because they give customers a sense of what performance and what price performance to expect. We have run a number of benchmarks, and yes, all these benchmarks are available on GitHub for customers to take a look at. So we have performance results on all the three classes of workloads: OLTP, analytics, and machine learning. Let's start with OLTP. For OLTP, primarily because of the auto thread pooling feature, we show that for the TPC-C workload on a 10 gigabyte dataset at high levels of concurrency, HeatWave offers up to 10 times better throughput, and this performance is sustained, whereas in the case of Aurora, the performance really drops.
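The auto thread pooling idea, workload-aware admission control that caps how many transactions execute at once so the rest queue instead of blocking mid-flight, can be sketched in a few lines. This is a toy illustration of the concept with a fixed cap; the actual feature picks the cap adaptively from the workload:

```python
import threading

class AdmissionController:
    """Cap concurrently executing transactions; the rest wait for a slot.

    A rough sketch of the admission-control idea: instead of letting every
    incoming transaction start (and risk a pile-up of blocked work), only
    `max_active` run at once, and a waiting transaction starts the moment
    a slot frees up.
    """

    def __init__(self, max_active: int):
        self._slots = threading.BoundedSemaphore(max_active)

    def run(self, txn):
        with self._slots:   # admitted: a slot is free
            return txn()    # execute, then release the slot on exit

ctl = AdmissionController(max_active=4)
results = []
results_lock = threading.Lock()

def txn():
    # Stand-in for a short transaction.
    with results_lock:
        results.append(1)

threads = [threading.Thread(target=ctl.run, args=(txn,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 100: every transaction completes, at most 4 at a time
```

The sustained-throughput claim corresponds to the cap: adding more concurrent clients lengthens the queue but never over-subscribes the execution slots.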
So that's the first one: the 10 gigabyte TPC-C, where at high concurrency the throughput is 10 times better than Aurora. For analytics, we have done a comparison of MySQL HeatWave in AWS against Redshift, Snowflake, and Google BigQuery, and we find that the price performance of MySQL HeatWave compared to Redshift is seven times better. So MySQL HeatWave in AWS provides seven times better price performance than Redshift. That's a very interesting result to us, which means that customers of Redshift are really going to take the service seriously, because they're going to get seven times better price performance, and this is all running in AWS. So compared... >> Okay, carry on. >> And then I was going to say, compared to Snowflake, HeatWave in AWS offers 10 times better price performance, and compared to Google BigQuery, it offers 12 times better price performance. And this is based on a four terabyte TPC-H workload. The results are available on GitHub. And then the third category is machine learning, and for machine learning training, the performance of MySQL HeatWave is 25 times faster compared to Redshift. So for all three workloads we have benchmark results, and all of these scripts are available on GitHub. >> Okay, so you're comparing MySQL HeatWave on AWS to Redshift and Snowflake on AWS, and you're comparing MySQL HeatWave on AWS to BigQuery, obviously running on Google. You know, one of the things Oracle has done in the past with price performance comparisons, and I've always tried to call fouls on it, is you'll, like, double the price for running the Oracle database (not HeatWave, but Oracle Database) on AWS, and then show how it's so much cheaper on Oracle, and we're like, okay, come on. But they're not doing that here. You're basically taking MySQL HeatWave on AWS, and I presume you're using the same pricing for whatever EC2 or whatever else you're using,
Storage, reserved instances; that's apples to apples on AWS. And you obviously have to do some kind of mapping for Google, for BigQuery. Can you just verify that for me? >>We are being more than fair, on two dimensions. The first thing is, when I'm talking about the price performance for analytics with MySQL HeatWave, the cost I'm quoting for MySQL HeatWave is the cost of running transaction processing, analytics, and machine learning. So it's a fully loaded cost in the case of MySQL HeatWave, whereas when I'm talking about Redshift, or when I'm talking about Snowflake, I'm just talking about the cost of those databases for running analytics. It's not including the source database, which may be Aurora or some other database, right? So that's the first aspect: for HeatWave it's the cost of running all three kinds of workloads, whereas for the competition it's only for running analytics. The second thing is that for those other services, whether it's Redshift or Snowflake, we're using the one-year, fully-paid-up-front cost, right? That's what most customers would pay; many customers will sign a one-year contract and pay all the costs ahead of time, because they get a discount. So we're using that price, and in the case of Snowflake, the cost we're using is their standard edition price, not the enterprise edition price. So yes, we are being more than fair in this comparison. >>Yeah, I think that's an important point. I saw an analysis by Marc Staimer on Wikibon where he was doing the TCO comparisons, and I mean, if you have to use two separate databases with two separate licenses, and you have to do ETL and all the labor associated with that, that's a big deal, and you're not even including that aspect in your comparison. So that's pretty impressive. To what do you attribute that?
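The price-performance arithmetic discussed above is simple to sketch. The dollar figures and runtimes below are made-up placeholders (only the structure of the calculation, and the 7x ratio quoted in the interview, come from the conversation):

```python
# Illustrative sketch of a price-performance comparison. All dollar
# figures here are hypothetical placeholders, not vendor pricing.

def price_performance(total_cost_usd: float, runtime_s: float) -> float:
    """Cost-time product: lower is better."""
    return total_cost_usd * runtime_s

# Hypothetical one-year, paid-up-front costs for the same workload.
# For the multi-workload service this would be the fully loaded cost
# (OLTP + analytics + ML); for the analytics-only service, the source
# OLTP database would be an extra, uncounted cost.
service_a = price_performance(total_cost_usd=100_000, runtime_s=120)  # multi-workload
service_b = price_performance(total_cost_usd=120_000, runtime_s=700)  # analytics-only

ratio = service_b / service_a
print(f"service A price performance is {ratio:.1f}x better")  # 7.0x with these inputs
```

The point of the "fully loaded" caveat is that the numerator for the competing service should, in fairness, also include the separate transactional database and the ETL labor, which would only widen the ratio.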
You know, given that, unlike OCI, within the AWS cloud you don't have as much control over the underlying hardware. >>So look, hardware is one aspect. Okay, there are three things which give us this advantage. The first thing is that we have designed HeatWave for a scale-out architecture, so we came up with new algorithms; one of the design points for HeatWave is a massively partitioned architecture, which leads to a very high degree of parallelism. That's how HeatWave was built, so that's the first part. The second thing is that, although we don't have control over the hardware, the second design point for HeatWave is that it is optimized for commodity cloud and commodity infrastructure. So we have analyzed what compute we get, how much network bandwidth we get, and how much object store bandwidth we get in AWS, and we have tuned HeatWave for that. That's the second point. And the third thing is MySQL Autopilot, which provides machine-learning-based automation. What it does is that, as the user's workload is running, it learns from it and improves various parameters in the system, so the system keeps getting better as it runs more and more queries. And that's the third thing, as a result of which we get a significant edge over the competition. >>Interesting. I mean, look, any ISV can go on any cloud and take advantage of it, and I love it; we live in a new world. How about machine learning workloads? What did you see there in terms of performance and benchmarks? >>Right. So for machine learning we offer three capabilities: training, which is fully automated, inference, and explanations. One of the things which many of our customers coming from the enterprise told us is that explanations are very important to them, because customers want to know why the system chose a certain prediction.
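The massively partitioned, scale-out design described above can be sketched in miniature: split the data into partitions, aggregate each partition independently (in a real engine, on separate cores or nodes), then merge the partial results. This toy version is plain Python and is not HeatWave-specific:

```python
# Toy sketch of partition-parallel aggregation: split the rows into
# partitions, compute a partial aggregate per partition, then merge.
# A real scale-out engine would run each partition on its own core/node.

def partition(data, n_parts):
    """Round-robin the rows into n_parts partitions."""
    parts = [[] for _ in range(n_parts)]
    for i, row in enumerate(data):
        parts[i % n_parts].append(row)
    return parts

def partial_sum(part):
    return sum(part)

def parallel_sum(data, n_parts=8):
    # map: one partial aggregate per partition; reduce: combine them
    return sum(partial_sum(p) for p in partition(data, n_parts))

rows = list(range(1_000))
assert parallel_sum(rows) == sum(rows)  # same answer, but the work is partitionable
```

Because each partial aggregate is independent, the degree of parallelism scales with the number of partitions, which is the property the answer above is pointing at.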
So we offer explanations for all models which have been trained by HeatWave. That's the first thing. Now, one of the interesting things about training is that training is usually the most expensive phase of machine learning, so we have spent a lot of time improving the performance of training. We have a bunch of techniques which we have developed inside of Oracle to improve the training process. For instance, we have meta-learning and proxy models, which really give us an advantage. We use adaptive sampling. We have invented techniques for parallelizing the hyperparameter search. So as a result of a lot of this work, our training is about 25 times faster compared to Redshift ML, and all the data stays inside the database; all of this processing is being done inside the database, so it's much faster, and it is inside the database. And I want to point out that there is no additional charge for HeatWave customers, because we're using the same cluster; you're not invoking a separate service. So all of these machine learning capabilities are being offered at no additional charge inside the database, and at a performance which is significantly faster than the competition. >>Are you taking advantage of, or is there any need... not need, but any advantage that you can get by exploiting things like Graviton? We've talked about that a little bit in the past. Or Trainium; you just mentioned training. The custom silicon that AWS is doing: are you taking advantage of that? Do you need to? Can you give us some insight there? >>So there are two things, right? We're always evaluating what choices we have from the hardware perspective, and obviously it's right for us to leverage them; all the things you mention, we have considered them. But there are two things to consider. One is that HeatWave is an in-memory system, so with HeatWave, memory is the dominant cost. The processor is a portion of the cost, but memory is the dominant cost.
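Adaptive sampling, one of the training optimizations mentioned above, can be illustrated in miniature: work on a random sample and grow it only until the estimate stabilizes, instead of scanning the full dataset. This sketch estimates a simple statistic rather than training a real model, and is an illustration of the idea, not Oracle's implementation:

```python
import random

# Miniature illustration of adaptive sampling: grow a random sample
# until the estimate stops changing much, then stop early.

def adaptive_mean(data, start=100, tol=0.01, seed=0):
    rng = random.Random(seed)
    size = start
    prev = None
    while True:
        sample = rng.sample(data, min(size, len(data)))
        est = sum(sample) / len(sample)
        if prev is not None and abs(est - prev) <= tol * max(abs(prev), 1e-9):
            return est, len(sample)  # converged before touching everything
        prev, size = est, size * 2

rng = random.Random(42)
data = [rng.gauss(10, 2) for _ in range(100_000)]
est, used = adaptive_mean(data)
print(f"estimate {est:.2f} using {used} of {len(data)} rows")
```

The same stop-early logic applied to model fitting is what makes sampled training cheaper than full-dataset training when the extra rows stop changing the result.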
So what we have evaluated and found is that the current shape which we are using is going to provide our customers with the best price performance. That's the first thing. The second thing is that there are opportunities at times when we could use a specialized processor for accelerating a workload a bit, but then it becomes a matter of the cost to the customer. The advantage of our current architecture is that on the same hardware, customers are getting very good transaction processing performance, very good analytics performance, and very good machine learning performance. If we went with a specialized processor, it might accelerate machine learning, but then it's an additional cost which the customers would need to pay. So we are very sensitive to the customers' request, which is usually to provide very good performance at a very low cost, and we feel that the current design provides customers very good performance and very good price performance. >>So part of that is architectural, the memory-intensive nature of HeatWave; the other part is AWS pricing. If AWS pricing were to flip, it might make more sense for you to take advantage of something like Trainium. Okay, great. Thank you. Now let me come back to benchmarks. Benchmarks are sometimes artificial, right? A car can go from 0 to 60 in two seconds, but I might not be able to experience that level of performance. Do you have any real-world numbers from customers that have used MySQL HeatWave on AWS, and how they look at performance? >>Yes, absolutely. So MySQL HeatWave on AWS has been in beta since, like, November, right? So we have a lot of customers who have tried the service, and what we have actually found is that many of these customers are planning to migrate from Aurora to MySQL HeatWave. And what they find is that the performance difference is actually much more pronounced than what I was talking about.
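The trade-off described above, where a specialized processor speeds up one workload but adds cost, reduces to a simple break-even check. A sketch with entirely hypothetical rates (these are not real cloud prices):

```python
# Hypothetical break-even check for adding a specialized accelerator:
# it only pays off if the speedup on the affected workload outweighs
# the higher hourly rate. Numbers are illustrative, not vendor pricing.

def cost_of_job(hourly_rate: float, hours: float) -> float:
    return hourly_rate * hours

baseline = cost_of_job(hourly_rate=10.0, hours=8.0)           # commodity shape
accelerated = cost_of_job(hourly_rate=18.0, hours=8.0 / 3.0)  # 3x faster, pricier

print("accelerator worth it:", accelerated < baseline)
```

The answer above adds a wrinkle to this arithmetic: if the same commodity shape already serves OLTP, analytics, and ML together, the accelerator's rate applies on top of hardware the customer must keep anyway, which tilts the break-even against it.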
Because with Aurora, the performance is actually much poorer compared to, like, what I've talked about. So in some of these cases the customers found improvements from 60 times to 240 times, right? So HeatWave was 100 to 240 times faster, and it was much less expensive. And the third thing, which is, you know, noteworthy, is that customers don't need to change their applications. So if you ask the top three reasons why customers are migrating, it's because of this: no change to the application, much faster, and it is cheaper. In some cases, one customer found that the performance of their application for complex queries was about 60 to 90 times faster, and another customer found that the performance of HeatWave compared to Aurora was 139 times faster. So yes, we do have many such examples from real workloads from customers who have tried it, and across all of them, what we find is that it offers better performance, lower cost, and a single database that is compatible with all existing MySQL-based applications and workloads. >>Really impressive. The analysts I talk to are all gaga over HeatWave, and I can see why. Okay, last question, maybe two in one. What's next in terms of new capabilities that customers are going to be able to leverage, and any other clouds that you're thinking about? We talked about that up front, but... >>So in terms of the capabilities you have seen, we have been, you know, nonstop attending to the feedback from the customers and reacting to it, and we have also been innovating organically. That's something which is going to continue, so yes, you can fully expect that HeatWave will not rest and will continue to innovate. And with respect to the other clouds, yes, we are planning to support MySQL HeatWave on Azure, and this is something that will be announced in the near future. Great. >>All right, thank you. Really appreciate the overview.
Congratulations on the work. Really exciting news that you're moving MySQL HeatWave into other clouds; it's something that we've been expecting for some time. So it's great to see you guys making that move, and as always, great to have you on theCube. >>Thank you for the opportunity. >>All right. And thank you for watching this special Cube conversation. I'm Dave Vellante, and we'll see you next time.
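For readers who want to sanity-check speedup claims like the ones discussed in this interview against their own workload, the measurement itself is straightforward. A minimal sketch, using SQLite purely as a stand-in for whichever engines you are actually comparing:

```python
import sqlite3
import time

# Minimal harness for timing the same query against a database.
# SQLite is used here only as a self-contained stand-in; point the
# connection at whichever engines you are actually comparing.

def timed_query(conn, sql, repeats=5):
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        conn.execute(sql).fetchall()
        best = min(best, time.perf_counter() - start)
    return best  # best-of-N reduces timing noise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(10_000)])

t = timed_query(conn, "SELECT COUNT(*), SUM(amount) FROM orders")
print(f"best of 5: {t * 1000:.3f} ms")
```

Running the identical query and schema on each candidate engine, at a realistic data size, is what turns a vendor's "N times faster" into a number you can trust for your own application.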

Published Date : Sep 14 2022

Doug Merritt, Splunk | Splunk .conf21


 

>>Welcome back to theCube's coverage of .conf21, Splunk's annual conference.
>>So uh the beauty of Splunk and the kind of culture and how we were born was we have this non structured backbone um what I would call the investigative lake where you just dump garbage into it and then get value out of it through the question asking which means you can traverse anywhere because you're not taking a point of view on the data it's usable all over the place. And that's how we went up in security. As we had the I. T. Systems administrators pinging that thing with with questions. And at that point in time the separate teams were almost always part of the I. T. Teams like hey can we ask questions that thing. It's like yeah go ahead. And also they got value. And then the product managers and the app dev guys started asking questions. And so a lot of our proliferation has been because of the underlying back bonus blank the ability for new people to come to the data and find value in the data. Um as you know and as our users know we have tried to stay very focused on the go to market basis on serving the technical triumphant the cyber teams, the infrastructure management, 90 ops teams and the abdomen devoPS teams and on the go to market basis and the solutions we package that is, we're trying to stay super pure to that. That's $90 billion of total addressable market. We're super excited will be well over three billion an error this year, which is amazing is 300 million when I started seven years ago so that 10 x and seven years is great. But three billion and 90 billion like we're all just getting going right now with those Corbyn centers. The were on top of what sean bison as we tell you about, hey, we've got to continue to focus on multi cloud and edge is really important. Machine learning is important. 
That the lever that we've been focused on for a long time that we'll continue to gain better traction on is making sure that we've got the right data plane and application platform layer so that the rest of the world can participate in building high quality reusable and recyclable applications so that operate operationalization that we have done officially around cyber it and devops and unofficially on a one off basis for marketing and supply chain and logistics and manufacturing that those other use cases can be packaged repeated, sold and supported by the people that really know those domains because we're not manufacturing experts. It's we're honored that portion BMW are using us to get operational insight into the manufacturing floor. But they lead that we just were there is the technical Splunk people to help bring that to life. But there are lots of firms out there, no manufacturing cold process versus the screed and they can create with these packages. They're appropriate for automotive, automotive versus paint versus wineries versus having that. I think the big Accelerant over the next 10 years response, we gotta keep penetrating our core use cases but it would be allowing our ecosystem and so happy Teresa Karlsson's here is just pounding the table and partners to take the other probably 90% of the market that is not covered by by our core market. >>Yeah, I think that's awesome. And the first time we get to the partner 1st and 2nd the rebranding of the ecosystem as it's growing. But you mentioned you didn't know manufacturing as an example where the value is being created. That's interesting because you guys are enabling that value, their adding that because they know their apps then they're experts. That's where the ecosystem is really gonna shine because if you can provide that enablement this control plane as you mentioned, that's going to feed the ecosystem. 
So the question I have for you is: as you guys have become essentially the de facto control plane for most companies, because they were using Splunk for a lot of other great reasons and now you have set them up that way, is the pattern to just keep building machine learning apps on top of it, or more querying? What are the next-level customer trends that you're seeing? >>So there are two core focus areas that we will stay on top of. One is enriching that data platform and ensuring that we continue to provide better APIs and better interfaces, so that when people want to build a really interesting automotive parts supply chain optimization app, they're able to do that: we've got the right APIs, we've got the right services, we've got the right separation between the application and the platform so they can get that done. We'll continue to advance that platform so that there are modernization capabilities and other pieces that they can build their business on. The other piece that we'll stay very focused on is, within the cyber realm, within ITOps, within DevOps, ensuring that we're leveraging that platform but baking ML and all the advanced edge and other capabilities into those solutions. Because the cyber teams, as where you started, you know, we really started reporting on cyber in 2015, those folks have got such a hard job. And while there are lots of people pretending like they're going to come in and serve them, the difficulty is that there are hundreds of tools and technologies that the average CISO deals with, and the rate of innovation is not slowing down. And those vendors that have a vested interest, I want to maintain my footprint in firewalls, I want to maintain my endpoint, I want to maintain whatever it is, it's really hard for them to say: you know what, there are 25 other categories of tools and there are 500 vendors.
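At the data layer, the tool-sprawl problem described above is largely a normalization problem: map each tool's event shape onto a common set of fields so one query spans all of them. A simplified sketch, in which the tool formats and field names are invented for illustration:

```python
# Simplified sketch of normalizing events from different security tools
# into one common shape so a single query can span all of them.
# The source formats and field names here are invented for illustration.

def normalize(source: str, event: dict) -> dict:
    if source == "firewall":
        return {"src": event["src_ip"], "action": event["verdict"], "tool": source}
    if source == "endpoint":
        return {"src": event["host_ip"], "action": event["status"], "tool": source}
    raise ValueError(f"no mapping for {source}")

raw = [
    ("firewall", {"src_ip": "10.0.0.5", "verdict": "blocked"}),
    ("endpoint", {"host_ip": "10.0.0.5", "status": "quarantined"}),
]
common = [normalize(s, e) for s, e in raw]

# One question across both tools: everything this host triggered.
hits = [e for e in common if e["src"] == "10.0.0.5"]
print(len(hits))  # both events, from two different tools
```

Each new tool category only requires one more mapping into the common shape, rather than a new pairwise integration with every other tool, which is why a shared data plane scales where point-to-point integrations do not.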
You've got to play nicely with your competitors and know all those folks if you really want to provide the ML, the detection, the remediation, and the investigation capabilities. And that's where I'm really excited about the competition, the fake competition in many cases, because it's like: yeah, bring it on. I've got 2,000 engineers; all they do all day long is focus on the data layer and making sure that we're effective there, and I'm not diverting my engineers with any other tasks. It's hard enough to do what we do in the data layer as it is. >>It's interesting. I just had some notes here: one, data-driven innovation, which you've been talking about since you've been here, and we've been talking about data-driven innovation and cybersecurity for many years. It's almost like a balance: you've got to have tools, but you've got to have the platform. If you have too many tools and no platform, then there's a mismatch, and you get hung up with tools and these blind spots. You can't have blind spots and you can't have silos; this is what pretty much everyone agrees on right now. It's not a debate. It's more like: okay, I've got silos and I've got blind spots, well, how do I solve
The problem is how do you create that non broken that end to end view. So you can handle your use cases effectively. Um and then the customer is still going to do with the fact that we're not a relational database engine company. We're not a data warehousing company where we were beginning to use graph DB capabilities within our our solution sets. We're gonna lean on open source other vendors use the tool for the job >>you need. But I think that what you're thinking hitting on my like is this control plane idea. I want to get back to that because if you think about what the modern application developers want is they want devops and deVOps kind of one infrastructures codes there. But if I'm a modern developer, I just want to code, >>I don't want to configure >>the data or the infrastructure. So the data value now is so much more important for the developer, whether that's policy based innovation, get options, some people call it A I ops, these are big trends. This is fairly new in the sense of being mainstream. It's been around for a couple of years, but this time, how do you see the data being much more of a developer input. >>People talk about deVOps is a new thing when I was running on the HR products at Peoplesoft in 2000 and four, we had a deVOPS teams. So that is, you know, there's always been a group of people whether Disney or not that are kind of managing the manufacturing floor for your developers, making sure they got the right tools and databases and what's new is because the ephemeral nature of cloud, that app dev work and devops and everyone that surrounds those or is now 100% data driven because you have ephemeral services, they're popping up and popping down. 
And if you're not able to trap the data that are each one of those services are admitting and do it on a real time basis and a thorough, complete basis, you can't sample then you are flying blind and that's not gonna work when you've got a critical code push for a feature your customers demanding and if you don't get it out, your competitors are, you need to have assurance that you've done the right things and that the quality and and the actual deployment actually works And that's where what lettuce tubes or ability Three years ago as we roughly started doing our string of acquisitions is we saw that transition from a state full world where it was all transaction engine driven. I've got to insert transaction and engines in a code. Very different engineering problem to I've got to grab data and it's convoluted data. It's chaotic data. It's changing all the time. Well, jeez that sounds and latency >>issues to they're gonna be doing fast. >>I've got to do it. You literally millisecond by millisecond. You've got are are bigger customers were honored because of how we operate. Splunk to serve some of the biggest web properties in the in the globe and they're dealing with hundreds of terabytes to petabytes of data per day that are traversing these pipes and you've got to be able to extract metrics that entire multi petabyte or traces that entire multi pedal extreme and you can't hope you're guessing right by only extracting from portions of it because again, if you missed that data you've missed it forever. So for us that was a data problem, which is why we stepped in and >>other things That data problem these days, it's almost it's the most fun to talk about if you love the problem statement that we're trying to solve. I want to get your reaction something if you don't mind. I was talking to a C. So in the C. I. O. We have a conversation kind of off camera at an event recently and I said what's the biggest challenge that you have? Just curious? 
I asked him, it's actually it's personnel people are mad at each other. Developers want to go faster because there are ci cd pipeline is devops their coding. They're having to wait for the security groups in some cases weeks and days when they could do it in minutes they want to do it on the in the pipelines, shifting left as some call it and it's kind of getting in the way. So it's kind of like it's not they're not getting along very well uh meaning they're slowing things down. I can say something what they really said, but they weren't getting along. What's your reaction? Because that seems to be a speed scale problem. That's developer centric, not organizational, you've got organizational challenges and being slowed down. >>So uh while we all talk about this converted landscape and how exciting is going to be. You do have diametrically opposed metrics and you're never going to have, it's very difficult to get a single person to have the same allegiance to those diametrically a virgin metrics as you want. So you've got checks and balances and the reality of what the cyber teams need to be doing to ensure that you aren't just coding effective functions with the right delivery timeframe. But that's also secure is I think going to make the security team is important forever and the same thing. You can't just write sloppy code that consumes, that blows your AWS budget or G. C. P budget within the first week of deploying it because you've still got to run a responsible business. So there are different dimensions that we all have to deal with quality time and feature functionality that different groups represent. So we, I believe a converged landscape is important. 
It's not that we're gonna blow it up and one person is going to do it all if you've got to get those groups talking better and you've got to reduce cycle times now we believe it's plunk is with a common data plane, which is the backbone and then solutions built from that common data plane to serve those groups. You're lessening the lack of understanding and you're reducing the cycle time. So now I can look when I'm publishing the code. If it's done properly, is it also secure And the cyber teams can kind of be flying in saying, hey, wait, wait, wait, we just saw something in the data says we're not quite ready. I'm sorry. I know you want to push, you can't push now, but there'll be a data driven conversation and not this, you shouldn't be waiting a week or two weeks, like we can't operate that scale and you've got to address people with facts and data and logic and that's what we're trying to get done. And you >>guys have a good policy engine, you can put up that up into the pipeline. So awesome. That's great, great insight there. Thanks for sharing. Final question. Um looking back in your time since you've been Ceo the culture kind of hasn't changed at Splunk, it's still they have fun, hard charging laid back a little bit and public company now, he's still got to meet the numbers, but your growing business is good, but there's a lot more coming as a big wave coming talk about the Splunk culture. >>So the core elements of culture that I love that. I think all of us agree you don't want to change one where curiosity driven culture, our tool is an investigative tool, so I never want to lose. 
I think that threat of grit, determination, tenacity and curiosity is paramount in life and I think literally what we push out represents that and I want our people represent that and I think the fun element is really the quirkiness of the fund, like that is one of the things I love about Splunk but we are a serious company, we are in the data plane of tens of thousands of organizations globally and what we do literally makes a difference on whether they're successful or not. As organizations, we're talking about walmart is example And how one second latency can have a, have a 10% drop off in fulfillment of transaction for wal mart that's like a billion dollars a week if you cannot get their system to perform at the level it needs to so what we do matters and the change that we've been driving that I think is a great enhancement to the culture is as we are now tip into the 50% cloud company, you have the opportunity to measure millisecond by millisecond, second by second, minute by minute, hour by hour and that's a different level of help that you get. You can literally see patterns happening over the course of minutes within customers and that's not something we were born with. We were an on premise solution, we had beautiful tools and it was the C E O. S problem, the CSS problem um and their opportunity to get that feedback. Now we get that feedback so we're trying to measure that crunchiness, the fun, the cool part about Splunk with. We also have got to be very operationally disciplined because we carry a heavy responsibility set from our customers and we're in the middle of that as well as the world knows, we're halfway through our transition to be a cloud first company but I'm excited with the results I'm seeing, so I think curiosity and tenacity go with that operational rigor. Like we should all be growth mindset oriented and very excited about, Hey, can I improve? 
I guess there's some information that I need that I'm not getting that will make me serve my customers better and that is the tone and tenor. I want to cross all the Splunk of whether in HR legal or engineering or sales or we serve customers and we've got to be so excited every day about getting better feedback and how to serve them better. >>Doug. Thanks for coming on the Cuban, sharing that inside. I know you had to cancel your physical event, pulled off an exceptionally strong virtual event here in person. Thanks for having the Cuban. Thanks for coming on. >>Thank you for being here and I can't wait to do this in person. Next >>to mary the ceo of Splunk here inside the cube cube coverage continues stay with us for more. We've got more interviews all the rest of the day, Stay with us. I'm john for your host. Thanks for watching. Mm >>mm mhm >>mhm >>Yeah

Published Date : Oct 20 2021



Juan Loaiza, Oracle | CUBE Conversation, September 2021


 

(bright music) >> Hello, everyone, and welcome to this CUBE video exclusive. This is Dave Vellante, and as I've said many times, what people sometimes forget is Oracle's chairman is also its CTO, and he understands and appreciates the importance of engineering. It's the lifeblood of tech innovation, and Oracle continues to spend money on R&D. Over the past decade, the company has evolved its Exadata platform by investing in core infrastructure technology. For example, Oracle initially used InfiniBand, which in and of itself was a technical challenge to exploit for higher performance. That was an engineering innovation, and now it's moving to RoCE to try and deliver best-of-breed performance by today's standards. We've seen Oracle invest in machine intelligence for analytics. It's converged OLTP and mixed workloads. It's driving automation functions into its Exadata platform for things like indexing. The point is we've seen a consistent cadence of improvements with each generation of Exadata, and it's no secret that Oracle likes to brag about the results of its investments. At its heart, Oracle develops database software, and databases have to run fast and be rock solid. So Oracle loves to throw around impressive numbers, like 27 million 8K IOPS and analytics scans running at more than a terabyte per second. Look, Oracle's objective is to build the best database platform and convince its customers to run on Oracle, instead of doing it themselves or in some other cloud. And because the company owns the full stack, Oracle has a high degree of control over how to optimize the stack for its database. So this is how Oracle intends to compete with Exadata, Exadata Cloud@Customer and other products, like ZDLRA, against AWS Outposts, Azure Arc and do-it-yourself solutions.
And with me to talk about Oracle's latest innovation with its Exadata X9M announcement is Juan Loaiza, who's the Executive Vice President of Mission Critical Database Technologies at Oracle. Juan, thanks for coming on theCUBE, always good to see you, man. >> Thanks for having me, Dave. It's great to be here. >> All right, let's get right into it and start with the news. Can you give us a quick overview of the X9M announcement today? >> Yeah, glad to. So, we've had Exadata on the market for a little over a dozen years, and every year, as you mentioned, we make it better and better. And so this year we're introducing our X9M family of products, and as usual, we're making it better. We're making it better across all the different dimensions for OLTP, for analytics, lower costs, higher IOPS, higher throughput, more capacity, so it's better all around, and we're introducing a lot of new software features as well that make it easier to use, more manageable, more highly available, more options for customers, more isolation, more workload consolidation, so it's our usual better and better every year. We're already way ahead of the competition in pretty much every metric you can name, but we're not sitting back. We have the pedal to the metal and we're keeping it there. >> Okay, so as always, you announced some big numbers. You're referencing them. I did in my upfront narrative. You've claimed double to triple digit performance improvements. Tell us, what's the secret sauce that allows you to achieve that magnitude of performance gain? >> Yeah, there's a lot of secret sauce in Exadata. First of all, we have custom designed hardware, so we design the systems from the top down, so it's not a generic system. It's designed with a specific and sole focus of running database, and so we have a lot of technologies in there. Persistent memory is a really big one that we've introduced that enables super low response times for OLTP.
RoCE, that is, RDMA over Converged Ethernet with a hundred-gigabit network, is a big thing; offload to storage servers is a big thing. The columnar processing in the storage is a huge thing, so there's a lot of secret sauce, most of it software and hardware related, and what's interesting about it is that it's very unique. So we've been introducing more and more technologies and actually advancing our lead by introducing very unique, very effective technologies, like the ones I mentioned, and we're continuing that with our X9 generation. >> So that persistent memory allows you to do a write directly, an atomic write directly to memory, and then what, you update asynchronously to the backend at some point? Can you double click on that a little bit? >> Yeah, so we use persistent memory as kind of the first tier of storage. And the thing about persistent memory is it's persistent. Unlike normal memory, it doesn't lose its contents when you lose power, so it's just as good as flash or traditional spinning disks in terms of storing data. And the integration that we do is what's called remote direct memory access; that means the hardware sends the new data directly into persistent memory in storage with no software, getting rid of all the software layers in between, and that's what enables us to achieve this extremely low latency. Once it's in persistent memory, it's stored. It's as good as being in flash or disk. So there's nothing else that we need to do. We do age things out of persistent memory to keep only hot data in there. That's one of the tricks that we do, because persistent memory is more expensive than flash or disk, so we tier it. We age data in as it becomes hot and age it out as it becomes cold, but once it's in persistent memory, it's as good as being stored. It is stored. >> I love it. Flash is a slow tier now. So, (laughs) let's talk about what this-- >> Right, I mean persistent memory is about an order of magnitude faster.
Flash is more than an order of magnitude faster than disk drives, so it is a new technology that provides big benefits, particularly for latency on OLTP. >> Great, thank you for that, okay, we'll get out of the plumbing. Let's talk about what this announcement means to customers. How does all this performance, and you've got a lot of scale here, how does it translate into tangible results, say, for a bank? >> Yeah, so there's a lot of ways. So, I mentioned performance is a big thing, always with Exadata. We're increasing the performance significantly for OLTP and analytics, so OLTP, 50 to 60% performance improvements; analytics, 80% performance improvements; and in terms of cost effectiveness, 30 to 60% improvement, so all of these things are big benefits. You know, one of the differences between a server product like Exadata and a consumer product is that performance translates into cost also. If I get a new smartphone that's faster, it doesn't actually reduce my costs, it just makes my experience a little better. But with a server product like Exadata, if it's 50% faster, I can translate that into serving 50% more users, 50% more workload, 50% more data, or I can buy a 50% smaller system to run the same workload. So, when we talk about performance, it also means lower costs, so big customers of ours, like banks, telecoms, retailers, et cetera, can take that performance and turn it into better response times. They can also take that performance and turn it into lower costs, and everybody loves both of those things, so both of those are big benefits for our customers. >> Got it, thank you. Now in a move that was maybe a little bit controversial, you stated flat out that you're not going to bother to compare Exadata Cloud@Customer performance against AWS Outposts and Azure Stack; rather you chose to compare to RDS, Redshift, Azure SQL. Why, why was that? >> Yeah, so our Exadata runs in the public cloud.
We have Exadata that runs in Cloud@Customer, and we have Exadata that runs on-prem. And AWS Outposts and Azure Stack, they have something a little more similar to Cloud@Customer; they're where they take their cloud solutions and put them in the customer data center. So when we came out with our new X9M Cloud@Customer, we looked at those technologies, and honestly, we couldn't even come up with a good comparison with their equivalent, for example, AWS Outposts, because those products really just don't run there. For example, the two database products that Amazon promotes are Aurora for OLTP and Redshift for analytics. Well, those two can't even run at all on their Outposts product. So, it's kind of like beating up on a child or something. (laughs) It doesn't make sense. They're out of our weight class, so we're not even going to compare against them. So we compared what we run, both in public cloud and Cloud@Customer, against their best products, which are the Redshifts and the Auroras in their public cloud, their most scalable, available products. With their equivalent Cloud@Customer, not only does it not perform, it doesn't run at all. Their premier products don't run at all on those platforms. >> Okay, but RDS does, right? I think, and Redshift and Azure SQL, right, will run their versions, so you compared against those. What were the results of the benchmarks when you made those comparisons? >> Yeah, so compared against their public cloud or Cloud@Customer, we generally get results that are something like 50 times lower latency and close to a hundred times higher analytic throughput, so it's orders of magnitude. We're not talking 50%, we're talking 50 times, so compared to those products, we're really in a different league. It's kind of like they're the middle school little league and we're the professional team, so it's really dramatically different. It's not even in the same league.
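Juan's earlier description of persistent memory tiering, where new data lands in a small, durable fast tier and cold data ages out to flash, behaves like a persistent LRU cache in front of slower storage. Below is a minimal sketch of that idea; the class, tier names, and capacities are hypothetical illustrations, not Oracle's implementation:

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of hot/cold tiering: a small persistent-memory tier
    in front of a larger flash tier. A write is durable as soon as it
    lands in either tier; eviction moves cold data to the slower tier
    but never loses it."""

    def __init__(self, pmem_capacity):
        self.pmem_capacity = pmem_capacity
        self.pmem = OrderedDict()  # fast, persistent, small (LRU order)
        self.flash = {}            # slower, persistent, large

    def write(self, key, value):
        # New and updated data lands in the fast tier first.
        self.pmem[key] = value
        self.pmem.move_to_end(key)  # mark as most recently used
        self._age_out_cold()

    def read(self, key):
        if key in self.pmem:
            self.pmem.move_to_end(key)  # keep hot data hot
            return self.pmem[key]
        value = self.flash[key]
        self.write(key, value)  # promote newly hot data into the fast tier
        return value

    def _age_out_cold(self):
        # Once the fast tier is full, demote the least recently used entries.
        while len(self.pmem) > self.pmem_capacity:
            cold_key, cold_value = self.pmem.popitem(last=False)
            self.flash[cold_key] = cold_value  # still stored, just slower

store = TieredStore(pmem_capacity=2)
store.write("a", 1)
store.write("b", 2)
store.write("c", 3)  # "a" is now the coldest entry and ages out to flash
```

The point mirrored from the interview is that there is no separate flush step: once a write reaches the persistent tier it is already stored, and the tiering is purely a performance optimization.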
>> All right, now you also chose to compare the X9M performance against on-premises storage systems. Why and what were those results? >> Yeah, so with the on-premises, traditionally customers bought conventional storage and that kind of stuff, and those products have advanced quite a bit. And again, those aren't optimized. Those aren't designed to run database, but some customers have traditionally deployed those, you know, there's less and less these days, but we do get many times faster both on OLTP and analytic performance there, I mean, with analytics that can be up to 80 times faster, so again, dramatically better, but yeah, there's still a lot of on-premise systems, so we didn't want to ignore that fact and compare only to cloud products. >> So these are like to like in the sense that they're running the same level of database. You're not playing games in terms of the versioning, obviously, right? >> Actually, we're giving them a lot of the benefit. So we're taking their published numbers that aren't even running a database, and they use these low-level benchmarking tools to generate these numbers. So, we're comparing our full end-to-end database to storage numbers against their low-level IO tool that they've published in their data sheets, so again, we're trying to give them the benefit of the doubt, but we're still orders of magnitude better. >> Okay, now another claim that caught our attention was you said that 87% of the Fortune 100 organizations run Exadata, and you're claiming many thousands of other organizations globally. Can you paint a picture of the ICP, the Ideal Customer Profile for Exadata? What's a typical customer look like, and why do they use Exadata, Juan? >> Yeah, so the ideal customer is pretty straightforward, customers that care about data. That's pretty much it. 
(Dave laughs) If you care about data, if you care about performance of data, if you care about availability of data, if you care about manageability, if you care about security, those are the customers that should be looking strongly at Exadata, and those are the customers that are adopting Exadata. That's why, as you mentioned, 87% of the global Fortune 100 have already adopted Exadata. If you look at a lot of industries, for example, pretty much every major bank in the entire world is running Exadata, and they're running it for their mission critical workloads, things like financial trading, regulatory compliance, user interfaces, the stuff that really matters. But in addition to the biggest companies, we also have thousands of smaller companies that run it for the same reason, because their data matters to them, and it's frankly the best platform, which is why we get chosen by these very, very sophisticated customers over and over again, and why this product has grown to encompass most of the major corporations in the world and governments also. >> Now, I know Deutsche Bank is a customer, and I guess now an engineering partner from the announcement that I saw earlier this summer. They're using Cloud@Customer, and they're collaborating on things like security, blockchain, machine intelligence, and my inference is Deutsche Bank is looking to build new products and services that are powered by your platforms. What can you tell us about that? Can you share any insights? Are they going to be using X9M, for example? >> Yes, Deutsche Bank is a partnership that we announced a few months ago. It's a major partnership. Deutsche Bank is one of the biggest banks in the world.
They traditionally are an on-premises customer, and what they've announced is they're going to move almost their entire database estate to our Exadata Cloud@Customer platform, so they want to go with a cloud platform, but they're big enough that they want to run it in their own data center for certain regulatory reasons. And so, the announcement that we made with them is they're moving the vast bulk of their data estate to this platform, including their core banking and regulatory applications, so their most critical applications. So, obviously they've done a lot of testing. They've done a lot of trials, and they have the confidence to make this major transition to a cloud model with the Exadata Cloud@Customer solution, and we're also working with them to enhance that product and to work in various other fields, like you mentioned, machine learning, blockchain, that kind of project also. So it's a big deal when one of the biggest, most conservative, best respected financial institutions in the world says, "We're going all in on this product," that's a big deal. >> Now outside of banking, I know a number of years ago, I stumbled upon an installation, or a series of installations, at Samsung and found out they were a customer. I believe it's now public, but they have something like 300 Exadatas. So help us understand, is it common that customers are building these kinds of Exadata farms? Is this an outlier? >> Yeah, so we have many large customers that have dozens to hundreds of Exadatas, and it's pretty simple: they start with one or two, and then they see the benefits themselves, and then it grows. And Samsung is probably the biggest, most successful, and most respected electronics company in the world. They are a giant company. They have a lot of different sub-units. They do their own manufacturing, so manufacturing is one of their most critical applications, but they have lots of other things they run their Exadata for.
So we're very happy to have them as one of our major customers that run Exadata, and by the way, Exadata again, very huge in electronics, in manufacturing. It's not just banking and that kind of stuff. I mean, manufacturing is incredibly critical. If you're a company like Samsung, that's your bread and butter. If your factory stops working, you have huge problems. You can't produce products, and you will want to improve the quality. You want to improve the tracking. You want to improve the customer service, all that requires a huge amount of data. Customers like Samsung are generating terabytes and terabytes of data per day from their manufacturing system. They track every single piece, everything that happens, so again, big deal, they care about data. They care deeply about data. They're a huge Exadata customer. That's kind of the way it works. And they've used it for many years, and their use is growing and growing and growing, and now they're moving to the cloud model as well. >> All right, so we talked about some big customers and Juan, as you know, we've covered Exadata since its inception. We were there at the announcement. We've always stressed the fit in our research with mission critical workloads, which especially resonates with these big customers. My question is how does Exadata resonate with the smaller customer base? >> Yeah, so we talk a lot about the biggest customers, because honestly they have the most critical requirements. But, at some level they have worldwide requirements, so if one of the major financial institutions goes down, it's not just them that's affected, that reverberates through the entire world. There's many other customers that use Exadata. Maybe their application doesn't stop the world, but it stops them, so it's very important to them. 
And so one of the things that we've introduced in our Cloud@Customer and public cloud Exadata platforms is the ability for Oracle to manage all the infrastructure, which enables smaller customers that don't have as much IT sophistication to adopt this very mission critical technology, so that's one of the big advancements. Now, we've always had smaller customers, but now we're getting more and more. We're getting universities, governments, smaller businesses adopting Exadata, because the cloud model for adoption is dramatically simpler. Oracle does all the administration, all the low-level stuff. They don't have to get involved in it at all. They can just use the data. And, on top of that comes our autonomous database, which makes it even easier for smaller customers to adopt. So Exadata, which some people think of as a very high-end platform, in this cloud model, and particularly with autonomous databases, is very accessible and very useful for customers of any size, really. >> Yeah, by all accounts, I wouldn't debate Exadata has been a tremendous success. But you know, a lot of customers still prefer to roll their own, do it themselves, and when I talk to them and ask them, "Okay, why is that?" they feel it limits their reliance on a single vendor, and it gives them better ability to build what I call a horizontal infrastructure that can support, say, non-Oracle workloads, so what do you tell those customers? Why should those customers run Oracle database on Exadata instead of a DIY infrastructure? >> Yeah, so that debate has gone on for a lot of years. And actually, what I see is there's less and less of that debate these days. You know, initially, many customers were used to building their own. That's kind of what they did. They were pretty good at it. What we have shown customers, and when we talk about these major banks, those are the kinds of people that are really good at it. They have giant IT departments.
If you look at a major bank in the world, they have tens of thousands of people in their IT departments. These are gigantic multi-billion dollar organizations, so they were pretty good at this kind of stuff. And what we've shown them is you can't build this yourself. There's so much software that we've written to integrate with the database that you just can't build it yourself; it's not possible. It's kind of like trying to build your own smartphone. You really can't do it, given the scale and the complexity of the problem. And now as the cloud model comes in, customers are realizing, hey, all this attention to building my own infrastructure, it's kind of last decade, last century. We need to move on to more of an as-a-service model, so we can focus on our business. Let enterprises that specialize in infrastructure, like Oracle, that are really, really good at it, take care of the low-level details, and let me focus on things that differentiate me as a business. It's not going to differentiate them to stand up their own storage for a database. That's not a differentiator, and they can't do it nearly as well as we can, and a lot of that is because we write a lot of special technology and software that they just can't build themselves. It's just like you can't build your own smartphone. It's just really not possible. >> Now, another area that we've covered extensively, we were there at the unveiling as well, is ZDLRA, Zero Data Loss Recovery Appliance. We've always liked this product, especially for mission critical workloads with near-zero data loss, where you can justify that. But we always saw it as somewhat of a niche market. First of all, is that fair, and what's new with ZDLRA? >> Yeah, ZDLRA has been in the market for a number of years. We have some of the biggest corporations in the world running on that, and one of the big benefits has been zero data loss, so again, if you care about data, you can't lose data.
You can't restore to last night's backup if something happens. So if you're a bank, you can't restore everybody's data to last night. Suppose you made a deposit during the day. They're like, "Hey, sorry, Mr. Customer, your deposit, well, we don't have any record of it anymore, 'cause we had to restore to last night's backup," you know, that doesn't work. It doesn't work for airlines. It doesn't work for manufacturing. That whole model is obsolete, so you need zero data loss, and that's why we introduced Zero Data Loss Recovery Appliance, and it's been very successful in the market. In addition to zero data loss, it actually provides much faster restores, much more reliable restores. It's more scalable, so it has a lot of advantages. With our X9M generation, we're introducing several new capabilities. First of all, it has higher capacity, so we can store more backups and keep data for longer. Another thing is we're actually dropping the price of the entry-level configuration of ZDLRA, so it makes it more affordable and more usable by smaller businesses, so that's a big deal. And then the other thing that we're hearing a lot about, if you read the news at all, is ransomware. This is a major problem for the world, cyber criminals breaking into your network and holding data for ransom. And so we've introduced what we call cyber vault capabilities in ZDLRA. They help address this ransomware issue that's kind of rampant throughout the world, so everybody's worried about that. There's now regulatory compliance around ransomware that financial institutions in particular have to conform to, and so we're introducing new capabilities in that area as well, which is a big deal. In addition, we now have the ability to have multiple ZDLRAs in a large enterprise, and if something happens to one, we automatically fail over backups to another.
We can replicate across them, so it makes it, again, much more resilient, with replication across different recovery appliances, so a lot of new improvements there as well. >> Now, is an air gap part of that solution for ransomware? >> No; if you're continuously streaming changes to it, you really can't have an air gap there, but you can protect the data. There are a number of technologies to protect the data. For example, one of the things that a cyber criminal wants to do is take control of your data and then get rid of your backups, so you can't restore. So a simple example of one thing we're doing is we're saying, "Hey, once we have the data, you can't delete it for a certain number of days." So you might say, "For 30 days, I don't care who you are. I don't care what privileges you have. I don't care anything, I'm holding onto that data for at least 30 days," so, for example, a cyber criminal can't come in and say, "Hey, I'm going to get into the system and delete that stuff or encrypt it," or something like that. So that's a simple example of one of the things that the cyber vault does. >> So, even as an administrator, I can't change that policy? >> That's right, that's one of the goals: it doesn't matter what privileges you have, you can't change that policy. >> Does that eliminate the need for an air gap, or would you not necessarily recommend that and just have another layer of protection? What's your recommendation on that to customers? >> We always recommend multiple layers of protection, so for example, in our ZDLRA, we offload tape backups directly from the appliance, so a great way to protect the data from any kind of threat is to put it on tape, and guess what, once that tape is filed away, I don't care what cyber criminal you are, if you're remote, you can't access that data.
So, we always promote multiple layers, multiple technologies to protect the data, and tape is a great way to do that. We can also now archive. In addition to tape, we can now archive to the public cloud, to our object storage servers. We can archive to what we call our ZFS appliance, which is a very low cost storage appliance, so there's a number of secondary archive copies that we offload and implement for customers. We make it very easy to do that. So, yeah, you want multiple layers of protection. >> Got it, okay, your tape is your ultimate air gap. ZDLRA is your low RPO device. You've got cloud kind of in the middle, maybe that's your cheap and deep solution, so you have some options. >> Juan: Yes. >> Okay, last question. Summarize the announcement, if you had to mention two or three takeaways from the X9M announcement for our audience today, what would you choose to share? >> I mean, it's pretty straightforward. It's the new generation. It's significantly faster for OLTP, for analytics, significantly better consolidation, more cost-effective. That's the big picture. Also there's a lot of software enhancements to make it better, improve the management, make it more usable, make it better disaster recovery. I talked about some of these cyber vault capabilities, so it's improved across all the dimensions and not in small ways, in big ways. We're talking 50% improvement, 80% improvements. That's a big change, and also we're keeping the price the same, so when you get a 50 or 80% improvement, we're not increasing the price to match that, so you're getting much better value as well. And that's pretty much what it is. It's the same product, even better. >> Well, I love this cadence that we're on. We love having you on these video exclusives. We have a lot of Oracle customers in our community, so we appreciate you giving us the inside scope on these announcements. Always a pleasure having you on theCUBE. >> Thanks for having me. It's always fun to be with you, Dave. 
>> All right, and thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (bright music)
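The cyber vault behavior described above, where a backup cannot be deleted for a minimum number of days no matter what privileges the caller holds, amounts to a retention-lock policy. The sketch below is a purely illustrative model with hypothetical names; it is not the ZDLRA interface:

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # minimum hold period once a backup is stored

class RetentionLockedBackup:
    """Toy retention lock: once stored, a backup cannot be deleted
    until its retention window expires, regardless of the caller's
    privileges (hypothetical model, not the ZDLRA API)."""

    def __init__(self, backup_id, stored_at):
        self.backup_id = backup_id
        self.locked_until = stored_at + timedelta(days=RETENTION_DAYS)
        self.deleted = False

    def delete(self, now, is_admin=False):
        # Privileges are deliberately ignored inside the window:
        # even an administrator cannot shorten the lock.
        if now < self.locked_until:
            raise PermissionError(
                f"{self.backup_id} is retention-locked until {self.locked_until}"
            )
        self.deleted = True

stored = datetime(2021, 9, 1)
backup = RetentionLockedBackup("nightly-001", stored)

blocked = False
try:
    backup.delete(now=stored + timedelta(days=5), is_admin=True)
except PermissionError:
    blocked = True  # admin privileges did not help inside the window

backup.delete(now=stored + timedelta(days=31))  # allowed once the window passes
```

The design point this mirrors from the interview is that the enforcement lives below the privilege system: there is no role that can bypass the check inside the window.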

Published Date : Sep 28 2021


SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

SamsungORGANIZATION

0.99+

Deutsche BankORGANIZATION

0.99+

JuanPERSON

0.99+

twoQUANTITY

0.99+

Juan LoaizaPERSON

0.99+

Deutsche bankORGANIZATION

0.99+

DavePERSON

0.99+

September 2021DATE

0.99+

OracleORGANIZATION

0.99+

50 timesQUANTITY

0.99+

thousandsQUANTITY

0.99+

30 daysQUANTITY

0.99+

Deutsch BankORGANIZATION

0.99+

50%QUANTITY

0.99+

30QUANTITY

0.99+

AmazonORGANIZATION

0.99+

oneQUANTITY

0.99+

50QUANTITY

0.99+

80%QUANTITY

0.99+

87%QUANTITY

0.99+

ZDLRAORGANIZATION

0.99+

60%QUANTITY

0.99+

bothQUANTITY

0.99+

AWSORGANIZATION

0.99+

last nightDATE

0.99+

last centuryDATE

0.99+

first tierQUANTITY

0.99+

dozensQUANTITY

0.98+

this yearDATE

0.98+

more than a terabyte per secondQUANTITY

0.98+

RedshiftTITLE

0.97+

ExadataORGANIZATION

0.97+

FirstQUANTITY

0.97+

hundredsQUANTITY

0.97+

X9MTITLE

0.97+

more than a terabyte per secondQUANTITY

0.97+

OutpostsORGANIZATION

0.96+

Azure SQLTITLE

0.96+

Azure StackTITLE

0.96+

zero dataQUANTITY

0.96+

over a dozen yearsQUANTITY

0.96+

Andy Mendelsohn, Oracle | CUBE Conversation, March 2021


 

>> The cloud has dramatically changed the way providers think about delivering database technologies. Not only has cloud first become a mandate for many, if not most, but customers are demanding more capabilities from their technology vendors. Examples include a substantially similar experience for cloud and on-prem workloads, increased automation, and a never-ending quest for more secure platforms. Broadly, there are two prevailing models that have emerged. One is to provide highly specialized database products that focus on optimizing for a specific workload signature. The other end of the spectrum combines technologies in a converged platform to satisfy the needs of a much broader set of use cases. And with me to get a perspective on these and other issues is Andy Mendelsohn, the executive vice president of Oracle, the world's leading database company. Andy leads database server technologies. Hello, Andy, thanks for coming on. >> Hey Dave, glad to be here. >> Okay, so we saw the recent announcements. This is kind of your baby, around next generation Autonomous Data Warehouse. Maybe you could take us through the path you took from the original cloud data warehouses to where we are today. >> Yeah, when we first brought Autonomous Database out, we were basically a second generation technology. At that point, we decided that what customers wanted was, at the push of a button, to provision the really powerful Oracle database technology that they've been using for years. And we did that with Autonomous Database, and beyond that, we provided a very unique capability around self-tuning, self-driving of the database, which is something the first generation vendors didn't provide. And this is really important, because customers today, developers and data analysts, can at the push of a button build out their data warehouses, but they're not experts in tuning. And so what we thought was really important is that customers get great performance out of the box, and that's one of the really unique things about Autonomous Data Warehouse, Autonomous Database in particular. And then this latest generation that we just came out with also answers the questions we got from the data analysts and developers. They said, it's really great that I can press a button and provision this very powerful data warehouse infrastructure, or database infrastructure, from Oracle, but if I'm an analyst, I want data. It's still hard for me to go and get data from various data sources, transform them, clean them up, and get them to a place where I can start querying the data. I still need data engineers to help me do that. And so in the new release we said, okay, we want to give data analysts and data engineers, data scientists, and developers a true self-service experience, where they can do their job completely without bringing in any engineers from their IT organization. And so that's what this new version is all about. >> Awesome. I mean, look, years ago you guys identified the IT labor problem, and you've been focused on R&D, putting it into your R&D, to solve that problem for customers, so we're really starting to see that hit now. Now, Gartner recently did some analysis. They ranked and rated some of the more popular cloud databases, and Oracle did very well, particularly in operational categories. I mean, on the operational side and the mission critical stuff, you smoked everybody. We had Marc Staimer and David Floyer on, and our big takeaways were that you're again dominating in mission critical workloads, that that dominance continues, but your approach of converging functionality really differs from some others that we saw. I mean, obviously when you get high ratings from Gartner you're pretty stoked about that, but what do you think contributed to those rankings, and what are you finding specifically in customer interactions?
>> Yeah, so Gartner does a lot of its analysis based on talking to customers, finding out how these products that sound great on paper actually work in practice, and I think that's one of the places where Oracle database technology really shines. It solves real-world problems. It's been doing it for a long time, and as we've moved that technology into the cloud, that continues. The differentiation we've built up over the years really stands out. You look at, like, Amazon's databases. They generally take some open source technology that isn't that new, it could be 30 years old, 25 years old, and they put it up on the cloud and they say, oh, it's cloud native, it's great. But in fact it's the same old technology, a decade behind Oracle's database technology, and it doesn't really compete. So I think the Gartner analysis really showed that sort of thing quite clearly. >> Yeah, so let's talk about that a little bit, because one of the things I've learned over the last many years of following this business is there are a lot of ways to skin a cat. The cloud database vendors, you mentioned AWS, or look at Snowflake, take kind of a right tool for the right job approach. They're going to say that their specialty databases, where they're focused, are better than your converged approach, which they would have you think of as a Swiss Army knife. What's your take on that? >> Yeah, well, the converged approach is something of course we've been working on for a long time, and the idea is pretty simple. Think about your smartphone. If you can think back over 10 years ago, you used to have a camcorder and a camera and a messaging device and also a dumb phone device. All those different devices got converged into what we now call the smartphone. Why did the smartphone win? It's just simply much more productive for you to carry one device around that is actually best of breed in all the different categories, instead of lots of separate devices. And that's what we're doing with the converged database. Over the years we've been able to build out technologies that are really good at transaction processing, at analytics for data warehousing, and now we're working on JSON technologies, graph technologies. The other vendors basically can't do this. I mean, it's much easier to build a specialty database that does one thing than to build out a converged database that does N things really well, and that's what we've been doing for years. And again, it's based on technology that we've invested in for quite a long time, and it's something that I think customers and developers and analysts find to be a much more productive way of doing their jobs. >> It's very unique, and not common at all, to see a technology that's been around as long as Oracle database morph into a more modern platform. I mean, you mentioned AWS leverages open source a lot. Snowflake would say, hey, we are born in the cloud, and they are. I think Google BigQuery would be another good example. But the born in the cloud folks would say, well, we're superior to Oracle, because they started decades ago with services that weren't necessarily cloud native. How have you been able to address that? Cloud first is kind of the buzzword, but how have you made that sort of transparent to users, or irrelevant to users? Maybe you could talk about how you've been able to achieve that, and convince us that you actually really are cloud native now. >> You know, one of the things we like pointing out is that Oracle very uniquely has had this scale out technology for running all kinds of workloads, not just analytic workloads, which is what you see out in the cloud there. We can also scale out transaction processing workloads. Now, that was another one of the reasons we do so well in, for example, the Gartner analysis for operational workloads, and that technology is really valuable as we went to cloud. It lets us do some really unique things, and the most obvious unique thing we have is something we like to call cloud native instant elasticity. With our technology, if you want to provision some amount of compute to run your workloads, you can provision exactly what you need. If you need 17 CPUs to get your job done, you do 17 CPUs when you provision your Autonomous Database. Our competitors who claim to be born in the cloud, like Snowflake and Amazon, still use this archaic way of provisioning servers based on shapes. Snowflake says, which shape cluster do you want? Do you want 16, do you want 32, do you want 64? It goes up by powers of two, which means, if you compare that to what Oracle does, you have to provision up to twice as much CPU as you really need. So if you really need 17, they make you provision 32. If you really need 33, they make you provision 64. So this is not a cloud native experience at all. It's an archaic way of doing things. And we like to point out that with our instant elasticity we can go from 17 to 18 to 19, whatever you want. Plus we have something called auto scale, so you can set your baseline to be 17, let's say, but we will automatically, based on your workload, scale you up to three times that, so in this case 51. And because of that true elasticity, we are really the only ones that can deliver a true pay as you go, just pay for what you need, kind of capability, which is certainly what Amazon was talking about when they first called their cloud elastic. But it turns out for database services these guys still do this archaic thing with shapes. So that's a really good example of where we're quite a bit better than the other guys, and much more cloud native than the other guys. >> I want to follow up on that, just stay here for a second, because you're basically saying you have better granularity than the so-called cloud native guys. Now, you mentioned Snowflake, you've got the shapes, you've got to choose which shape you want, and it sounds like Redshift is the same. And of course I know the way in which Amazon separates compute from storage is largely a tiering exercise, so it's not as smooth as you might expect, but nonetheless it's good. How is it that you were able to achieve this with a database that was born many decades ago? I mean, what is it, from a technical standpoint, an R&D standpoint, that you were able to do? Did you design that in the 1980s? How did you get here? >> Yeah, well, it's a combination of interesting technologies. So Autonomous Database has the Oracle database software, and that software is running on a very powerful infrastructure optimized for database, based on the Exadata technology that we've had on prem for many years. We brought that to the cloud, and that technology is a scale-out infrastructure that supports thousands of CPUs. And then we use our multitenant technology, which is a way of sharing large infrastructures amongst separate clients, and we divide it up dynamically, on the fly. So if there's thousands of CPUs, and this guy wants 20 and this one wants 30, we divide it up and give them exactly what they need, and if they want to grow, we just take some extra CPUs that are in reserve and we give them to them instantly. And so that's a very different way of doing things than a shape based approach, where what Snowflake and Amazon do under the covers is give you a real physical server, or a cluster, and that's how they provision. If you want to grow, they give you another big physical cluster, which takes a long time to get the data populated, to get it working. We just have that one infrastructure that we're sharing among lots of users, and we just give you a little extra capacity. It's done instantly. There's no need for data to be moved to populate the new clusters that Snowflake or Amazon are provisioning for you. So it's a very different way of doing things. >> And you're able to do that because of the tight integration, you mentioned Exadata, the tight integration between the hardware and software. David Floyer calls it the iPhone of enterprise. Sometimes you get some grief for that, but it's not a bad metaphor. Is that really the sort of secret? >> Well, the big secret under the covers is this Exadata technology, our Real Application Clusters scale out technologies, our multitenant technologies. These are things we've been working on for a long time, and they are very mature, very powerful technologies, and they really provide very unique benefits in a cloud world where people want things to happen instantly and they want them to work well for any kind of workload. That's why we talk about being converged. We can do mixed workloads. You can do transactions and analytics all on the same data. The other guys can't do that. They're really good at, like you said, a narrow workload, I can do analytics, or I can do graph, or I can do JSON, but they can't really do the combination, which is what real world applications are like. They're not pure one thing versus another. >> Right, thank you for that. So one of the questions people want to know is, can Oracle attract new customers that aren't existing Oracle customers? So maybe you could talk about that. Why should somebody who's not an existing Oracle customer think about using Autonomous Database? >> Yeah, that's a really good question. Oracle, if you look at our customer base, has a lot of really large enterprises, the biggest banks and the biggest telcos. They run Oracle, they run their businesses on Oracle, and these guys are sort of the most conservative of the bunch out there, and they are moving to cloud at a somewhat slower rate than the smaller companies. And so if you look at who's using Autonomous Database now, it's actually the smaller companies, the same type of people that first decided Amazon was an interesting cloud 10 years ago. They're also using our technologies, and it's for the same reason. They don't have large IT organizations, they don't have large numbers of engineers to engineer their infrastructure, and that's why cloud is so attractive to them. And Autonomous Database on top of cloud is really attractive as well, because information is the lifeblood of every organization, and if they can empower their analysts to get their job done without lots of help from IT organizations, they're going to do it. And that's really what's made Autonomous Database really interesting. The whole self-driving nature is very attractive to the smaller shops that don't have a lot of sophisticated IT expertise. >> All right, let's talk about developers. You guys are the stewards of the Java community, so obviously probably the biggest, most popular programming language out there. But when I think of developers, I think of guys in hoodies pounding away, and when I think of Oracle developers, I might think of maybe an app dev team inside of some of those large customers that you talked about. So why would developers and/or analysts be interested in using Oracle, as opposed to some of those more focused, narrow use databases that we were talking about earlier? >> Yeah, so if you're a developer, you want to get your job done as fast as possible, and so having a database that gives you the most productive application development experience is important to you. We've been talking about converged database off and on, so if I'm a developer with a given job to do, a converged database that lets me do a combination of analytics and transactions, and do a little JSON and a little graph, all in one, is a much more productive place to go. Because if I don't have something like that, then I'm stuck taking my application and breaking it up into pieces. This piece I'm going to run on, say, Aurora on Amazon, and this piece I have to run on the graph database, and here's some JSON, I've got to run that on some document database. And then I have to move the data around. The data gets fragmented between these databases, and I have to do all this data integration. With a converged database, I have a much simpler world where I can just use one technology stack, I can get my job done, and then I'm future proofed against change. Requirements change all the time. You build the initial version of the application, and your users say, you know, this is not what I want, I want something else, and that something else often is, I want analytics. And if you used something like a document store technology that has really poor analytic capabilities, then you have to take that data and move it to another database. With our converged approach you don't have to do that. You're already in a place where everything works. Everything that you could possibly need in the future is going to be there as well. So for developers, I think converged is the right way to go. Plus, for people who are what we call citizen developers, like the data analysts, they write a little code occasionally, but they're really after getting value out of the data. We have this really fabulous no code, low code tool called APEX, and APEX is, again, a very mature technology. It's been around for years, and it lets somebody who's just a data analyst, who knows a little SQL but doesn't want to write code, get their job done really fast. And we've published some benchmarks on our website showing that you can get the job done 20 to 40 times faster using a no code, low code tool like APEX, versus writing lots of traditional code. >> I'm glad you brought up APEX. We recently interviewed one of your former colleagues, Amit Zavery, and all he would talk about is low code, no code, and then in the APEX announcement you said something to the effect of, coding should be the exception, not the rule. Did you mean that? What do you mean by that? >> Yeah, so APEX is a tool that people use with our database technology for building what we call data driven applications. So if you've got a bunch of data and you want to get some value out of it, you want to build maybe dashboards or more sophisticated reports, APEX is an incredible tool for doing that. And it's modern. It builds applications that look great on your smartphone, and it automatically renders that same user interface on a bigger device, like a laptop or desktop device, as well. And it's one of these things where the people that use it just go bonkers with it. It's a viral technology. They get really excited about how productive they've been using it, and they tell all their friends. And I think we decided, I guess about a year ago when we came up with this APEX service, that we really wanted to start going bigger on the marketing around it, because it's very unique. Nobody else has anything quite like it, and it just adds value to the whole developer productivity story around the Oracle database. So that's why we have the APEX service now, and we also have APEX available with every Oracle database on the cloud. >> I want to ask you about some of the features around 21c. There are a lot of them you announced earlier this year. Maybe you could tease out some of the top things that we should be paying attention to in 21c. >> Yeah, sure. So one way to look at 21c is that we're continuing down this path of a converged database, and so one of the marquee features in 21c is something we call blockchain tables. So what is blockchain? Well, blockchain was the technology under the covers behind bitcoin. It's a way of creating a tamper-proof data store that was used by the original bitcoin algorithms. Well, developers actually like having tamper-proof data objects in databases too. And so what we decided to do was say, well, if I create a SQL table in an Oracle database, what if there's a new option that just says, I want that table implemented using blockchain technology, to make the table tamper-proof and fully audited, et cetera? And so we just did that. In 21c you can now get basically another feature of the converged database that says, give me a SQL table, and I can do everything, I can query it, I can insert rows into it, but it's tamper-proof. I can't ever update it, I can't delete rows from it. Amazon did their usual thing. They took, again, some open source technology, and they said, hey, we've got this great thing called Quantum Ledger Database, and it does blockchain tables. But if you want to do blockchain tables in any of their other databases, you're out of luck. They don't have it. You have to go move the data into this new thing, and it's again showing the problem with their proprietary approach of having specialty databases, versus just having one converged database that does it all. So that's the blockchain table feature. We did a bunch of other things, and the one I think is worth mentioning the most is support for persistent memory. A lot of people out there haven't noticed this very interesting technology that Intel shipped a couple of years ago called Optane data center memory. What it is, it's basically a hybrid of flash memory, which is persistent, and standard DRAM, which is not persistent, meaning you can't store a database in DRAM. And so with this persistent memory, you can basically have a database stored persistently in memory all the time. It's a very innovative new technology from a database standpoint, and a very disruptive technology for the database market, because now you can have an in-memory database, period, all the time, 24/7.
And so 21c is the first database out there that has native support for this new kind of persistent memory technology, and we think it's really important, so we're actually making it available to our 19c customers as well. That's another technology I'd call out that we think is very unique. We're way ahead of the game there, and we're going to continue investing in that space moving forward as well. >> Yeah, so that layer in between DRAM and persistent flash, that's a great innovation, and game changing from a performance standpoint, and actually for the way you write applications. But I've got to ask you, I was with Juan recently, Juan Loaiza, and listening to that introduction of blockchain, everybody wants to know, is Safra going to start putting bitcoin on the Oracle balance sheet? Or maybe I'm making too big a leap. >> Yeah, that's a good question. Who knows? I can't comment on speculation. >> Ah, that would be interesting. Okay, last question, then we've got to go. Look, the narrative on Oracle is, you're expensive and you're mean, you know, hard to do business with. Do you care? Are you doing things to maybe change that perception in the cloud? >> Yeah, I think we've made a very conscious decision that as we move to the cloud, we're offering a totally new business model on the cloud that is a cloud native model. You pay for what you use, you have everyday low prices, you don't have to negotiate with some salesman for months to get a good price. So yeah, we really like the message to get out there that, for those of you who think you know what Oracle's all about, and how it might be to work with Oracle from your on-premises days, you should really check out how Oracle is now on the cloud. We have this Autonomous Database technology, really easy to use, really simple. Any analyst can get value out of the data without any help from other engineers. It's very unique. It's the same technology you're used to, but now it's delivered in a way that's much easier to consume and at a much lower cost. So yeah, you should definitely take a look at what we've got out there on the cloud, and it's all free to try out. We've got this free tier. You can provision free VMs, free databases, free APEX, whatever you want, and try it out and see what you think. >> Well, thanks for that. I was kidding about the mean part, I have a lot of friends at Oracle, some relatives as well. And thanks, Andy, for coming on theCUBE today. It's really great to talk to you. >> Yeah, it's my pleasure. >> And thanks for watching. This is Dave Vellante. We'll see you next time.
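The provisioning arithmetic Mendelsohn walks through in the interview above (needing 17 CPUs but being forced onto a 32-CPU shape, and a baseline of 17 auto-scaling to 51) can be checked with a small sketch. This is a minimal illustration that assumes the power-of-two shape ladder and the three-times auto-scale multiplier exactly as described in the conversation, not as documented by any vendor; the function names are mine:

```python
def shape_provisioned(cpus_needed: int) -> int:
    """CPUs you must provision under a power-of-two shape model,
    the 16/32/64-style cluster sizes described in the interview."""
    shape = 16  # smallest shape mentioned in the conversation
    while shape < cpus_needed:
        shape *= 2  # shapes only grow by doubling
    return shape

def exact_provisioned(cpus_needed: int) -> int:
    """CPUs provisioned under per-CPU granularity: exactly what you ask for."""
    return cpus_needed

def autoscale_ceiling(baseline: int, multiplier: int = 3) -> int:
    """Auto-scale upper bound; the interview cites three times the baseline."""
    return baseline * multiplier

# The interview's own examples:
print(shape_provisioned(17))   # 32: need 17, forced to provision 32
print(shape_provisioned(33))   # 64: need 33, forced to provision 64
print(exact_provisioned(17))   # 17: provision exactly what you need
print(autoscale_ceiling(17))   # 51: a baseline of 17 can burst to 51
```

The worst case of the doubling ladder is just under a factor of two of waste, which is the "up to twice as much CPU" claim in the transcript.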

Published Date : Mar 29 2021

Bob Ward & Jeff Woolsey, Microsoft | Dell Technologies World 2019


 

(energetic music) >> Live from Las Vegas. It's theCUBE. Covering Dell Technologies World 2019. Brought to you by Dell Technologies and it's Ecosystem Partners. >> Welcome back to theCUBE, the ESPN of tech. I'm your host, Rebecca Knight along with my co-host Stu Miniman. We are here live in Las Vegas at Dell Technologies World, the 10th anniversary of theCUBE being here at this conference. We have two guests for this segment. We have Jeff Woolsey, the Principal Program Manager Windows Server/Hybrid Cloud, Microsoft. Welcome, Jeff. >> Thank you very much. >> And Bob Ward, the principal architect at Microsoft. Thank you both so much for coming on theCUBE. >> Thanks, glad to be here. >> It's a pleasure. Honor to be here on the 10th anniversary, by the way. >> Oh is that right? >> Well, it's a big milestone. >> Congratulations. >> Thank you very much. >> I've never been to theCUBE. I didn't even know what it was. >> (laughs) >> Like what is this thing? >> So it is now been a couple of days since Tatiana Dellis stood up on that stage and talked about the partnership. Now that we're sort of a few days past that announcement, what are you hearing? What's the feedback you're getting from customers? Give us some flavor there. >> Well, I've been spending some time in the Microsoft booth and, in fact, I was just chatting with a bunch of the guys that have been talking with a lot of customers as well and we all came to the consensus that everyone's telling us the same thing. They're very excited to be able to use Azure, to be able to use VMware, to be able to use these in the Azure Cloud together. They feel like it's the best of both worlds. I already have my VMware, I'm using my Office 365, I'm interested in doing more and now they're both collocated and I can do everything I need together. >> Yeah it was pretty interesting for me 'cause VMware and Microsoft have had an interesting relationship. I mean, the number one application that always lived on a VM was Microsoft stuff. 
From the operating system standpoint and everything, but especially in the end-user computing space, Microsoft and VMware weren't necessarily on the same page. To see both CEOs, also both CUBE alums, up there talking about that really had most of us sit up and take notice. Congratulations on the progress. >> For me, being in the SQL Server space, it's a hugely popular workload on VMware, as you know, and on virtualization, so everybody's coming up to me saying, when can I start running SQL Server in this environment? So we're excited to kind of see the possibilities there. >> Customers, they live in a heterogeneous environment. Multicloud has only amplified that. It's like, I want to be able to choose my infrastructure, my Cloud, and my application of choice, and know that my vendors are going to rally around me and make this easy to use. >> This is about meeting our customers where they are, giving them the ability to do everything they need to do, and making our customers just super productive. >> Yeah, absolutely. >> So, Jeff, on some of the new specifics, give us the update as to the pieces of the puzzle and the various options that Microsoft has in this ecosystem.
And you're going to see a lot more. I think really this is just the beginning of a long road map together. >> I want to ask you about SQL 19. I know that's your value, so-- >> That's what I do, I'm the SQL guy. >> Yeah, so tell us what's new. >> Well, you know, we launched SQL 19 last year at Ignite with our preview of SQL 19. And it'll be, by the way, it'll be generally available in the second half of this calendar year. We did something really radical with SQL 19. We did something called data virtualization polybase. Imagine as a SQL customer you connecting with SQL and then getting access to Oracle, MongoDB, Hadoop data sources, all sorts of different data in your environment, but you don't move the data. You just connect to SQL Server and get access to everything in your corporate environment now. We realize you're not just going to have SQL Server now in your environment. You're going to have everything. But we think SQL can become like your new data hub to put that together. And then we built something called big data clusters where we just deploy all that for you automatically. We even actually built a Hadoop cluster for you with SQL. It's kind of radical stuff for the normal database people, right? >> Bob, it's fascinating times. We know it used to be like you know I have one database and now when I talk to customers no, I have a dozen databases and my sources of data are everywhere and it's an opportunity of leveraging the data, but boy are there some challenges. How are customers getting their arms around this. >> I mean, it's really difficult. We have a lot of people that are SQL Server customers that realize they have those other data sources in their environment, but they have skills called TSQL, it's a programming language. And they don't want to lose it, they want to learn, like, 10 other languages, but they have to access that data source. Let me give you an example. 
You got Oracle in a Linux environment as your accounting system, and you can't move it to SQL Server. No problem. Just use SQL with your T-SQL language to query that data, get the results, and join it with your structured data in SQL Server itself. So that's a radical new thing for us to do, and it's all coming in SQL 19. >> And what it helps-- what it really helps break down is, when you have all of these disparate sources and disparate databases, everything gets siloed. And one of the things I have to remind people of is, when I talk to people about their data center modernization, very often they'll talk about, you know, I've had servers and data that's 20, 30, even, you know, decades old, and they talk about it almost like it's baggage, it's luggage. I'm like, no, that's your company, that's your history. That data is all those customer interactions. Wouldn't it be great if you could actually take better advantage of it? With this new version of SQL, you can bring all of these together and then start to leverage things like ML and AI to actually better harvest and data-mine that, rather than keeping those in disparate silos that you can't access. >> How ready would you say are your customers to take advantage of AI and ML and all the other-- >> It's interesting you say that, because we actually launched the ability to run R and Python with SQL Server two years ago. And so we've got a whole new class of customers, like data scientists now, that are working together with DBAs to start to put those workloads together with SQL Server, so it's actually starting to become a really big deal for a lot of our community. >> All right, so, Jeff, we had theCUBE at Microsoft Ignite last year, first time we'd done a Microsoft show. As you mentioned, our 10th year here, at what used to be EMC World. It was interesting for me to dig in. There are so many different stack options, like we heard this week with Dell Technologies.
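The Oracle accounting example Bob walks through above can be made concrete with a small sketch. PolyBase is real SQL Server 2019 functionality, but every identifier here (the OracleFinance data source, the table names, the column list) is hypothetical, and the exact WITH options vary by source; this Python snippet just composes the T-SQL so the shape of the external-table pattern is visible.

```python
def external_table_ddl(name: str, columns: str, location: str, data_source: str) -> str:
    """Compose PolyBase-style T-SQL declaring an external table.

    The external table is only metadata in SQL Server; at query time
    PolyBase reaches out to the remote system, so the data never moves.
    """
    return (
        f"CREATE EXTERNAL TABLE {name} ({columns})\n"
        f"WITH (LOCATION = '{location}', DATA_SOURCE = {data_source});"
    )


def cross_source_join(local: str, external: str, key: str) -> str:
    """Compose a plain T-SQL join between a local SQL Server table and
    the external (e.g. Oracle-backed) table declared above."""
    return (
        f"SELECT l.*, e.*\n"
        f"FROM {local} AS l\n"
        f"JOIN {external} AS e ON l.{key} = e.{key};"
    )


# All identifiers below are hypothetical; the DATA_SOURCE would be
# declared separately with CREATE EXTERNAL DATA SOURCE.
ddl = external_table_ddl(
    name="dbo.oracle_orders",
    columns="order_id INT, total DECIMAL(12, 2)",
    location="FINANCE.ORDERS",
    data_source="OracleFinance",
)
query = cross_source_join("dbo.customers", "dbo.oracle_orders", "order_id")
print(ddl)
print(query)
```

A DBA would run the generated DDL once, then query `dbo.oracle_orders` with ordinary T-SQL; the join executes against both systems without ETL-ing the Oracle data into SQL Server.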
Azure, I understood things a lot from the infrastructure side. I talked to a lot of your partners, who talked to me about how many nodes and how many cores and all that stuff. But very clearly at the show, Azure Stack is an extension of Azure, and therefore, for the applications that live on it and how I manage that, I should think Azure first, not infrastructure first. There are other solutions that extend the infrastructure side, things like WSSD I heard a lot about. But give us the update on Azure Stack, always interested in the cloud, watching where that fits and some of the other adjacent pieces of the portfolio. >> So the Azure Stack is really becoming a rich portfolio now. So we launched with Azure Stack, which is, again, to give you that cloud consistency. So you can literally write applications that you can run on premises, and you can move them to the cloud. And you can do this without any code change. At the same time, a bunch of customers came to us and they said, this is really awesome, but we have other environments where we just simply need to run traditional workloads. We want to run traditional VMs and containers and stuff like that. But we really want to make it easy to connect to the cloud. And so what we have actually launched is Azure Stack HCI. It's been out about a month, month and a half. And, in fact, here at Dell Technologies World, we actually have Azure Stack HCI solutions that are shipping, that are on the marketplace right now here at the show as well, and I was just demoing one to someone who was blown away at just how easy it is, with our Admin Center integration, to actually manage the hyperconverged cluster and very quickly and easily configure it to Azure, so that I can replicate a virtual machine to Azure with one click. So I can back up to Azure in just a couple clicks. I can set up easy network connectivity, and all of these things.
And best yet, Dell just announced their integration for their servers into Admin Center here at Dell Technologies World. So there's a lot that we're doing together on premises as well. >> Okay, so if I understand right, is that one of Dell's, what they call Ready Nodes, or something in the VxFlex family? >> Yes. >> From that standpoint, the HCI market is something that, when we wrote about it when it was first coming out, it made sense that, really, the operating system and hypervisor companies take a lead in that space. We saw VMware do it aggressively, and Microsoft had a number of different offerings, but maybe explain why this offering today versus where we were five years ago with HCI. >> Well, one of the things that we've been seeing, so as people move to the cloud and they start to modernize their applications and their portfolio, we see two things happen. Generally, there are some apps that people say, hey, I'm obviously going to move that stuff to Azure. For example, Exchange. Office 365, Microsoft, you manage my mail for me. But then there are a bunch of apps that people say are going to stay on-prem. So, for example, in the case of SQL, SQL is actually an example of one I see going in both places. Some people want to run SQL up in the cloud, 'cause they want to take advantage of some of the services there. And then there are people who say, I have SQL that is never, ever, ever, ever, ever going to the cloud, because of latency or for governance and compliance. So I want to run that on modern hardware that's super fast. So these new Dell solutions that have Intel Optane DC Persistent Memory have lots of cores. >> I'm excited about that stuff, man. >> Oh my gosh, yes. Optane Persistent Memory and lots of cores, lots of fast networking. So it's modern, but it's also secure. Because a lot of servers are still very old, five, seven, ten years old, and those don't have things like TPM, Secure Boot, UEFI.
And so you're running on a very insecure platform. So we want people to modernize on new hardware with a new OS and platform that's secure, take advantage of the latest and greatest, and then make it easy to connect up to Azure for hybrid cloud. >> Persistent Memory's pretty exciting stuff. >> Yes. >> Actually, Dell EMC and Intel just published a paper using SQL Server to take advantage of that technology. SQL can be an I/O-bound application. You got to have data and storage, right? So now Dell EMC partnered together with SQL 19 to access Persistent Memory and bypass the I/O part of the kernel itself. And I think they achieved something like 170% faster performance versus even a fast NVMe. It's a great example of just using a new technology, but putting the code in SQL to have that intelligence to figure out how fast Persistent Memory can be for your application. >> I want to ask about the cultural implications of the Dell-Microsoft relationship, partnership, because, you know, these two companies are tech giants and really of the same generation. They're sort of the Gen Xers, in their 30s and 40s; they're not the startups, they've been around the block. So can you talk a little bit about what it's like to work so closely with Dell, and sort of the similarities and maybe the differences? >> Sure. >> Well, first of all, we've been doing it for, like you said, we've been doing this for a while. So it's not like we're strangers to this. And we've always had very close collaboration in a lot of different ways. Whether it was in the client, whether it's tablets, whether it's devices, whether it's servers, whether it's networking. Now, what we're doing is upping our cloud game. Essentially what we're doing is, we're saying there is an area here in the cloud where we can both work a lot closer together and take advantage of the work that we've done traditionally at the hardware level.
Let's take that engineering investment and let's do that in the cloud together to benefit our mutual customers. >> Well, SQL Server is just a primary application that people like to run on Dell servers. And I've been here for 26 years at Microsoft, and I've seen a lot of folks run SQL Server on Dell, but lately I've been talking to Dell, and it's not just about running SQL on hardware, it's about solutions. I was even having discussions yesterday with Dell about taking our ML and AI services with SQL, and how Dell could even package ready solutions with their offerings using our software stack, but in addition, how would you bring machine learning and SQL and AI together with a whole Dell comp-- So it's not just about talking about the servers anymore as much, even though that's great; it's all about solutions, and I'm starting to see that conversation happen a lot lately. >> And it's generally not a server conversation. That's one of the reasons why Azure Stack HCI is important. Because its customers-- customers don't come to me and say, Jeff, I want to buy a server. No, I want to buy a solution. I want something that's pre-configured, pre-validated, pre-certified. That's why, when I talk about Azure Stack HCI, invariably I'm going to get the question: Can I build my own? Yes, you can build your own. Do I recommend it? No, I would actually recommend you take a look at our Azure Stack HCI catalog. Like I said, we've got Dell EMC solutions here, because not only is the hardware certified for Windows Server, but then we go above and beyond: we actually run a whole bunch of burn-in tests, a bunch of stress tests. We actually configure and tune these things for the best possible performance and security, so it's ready to go. Dell EMC can ship it to you and you're up and running, versus, hey, I'm trying to configure and make all this thing work and then test it for the next few months.
No, you're able to consume cloud very quickly, connect right up, and, boom, you've got hybrid in the house. >> Exactly. >> Jeff and Bob, thank you both so much for coming on theCUBE. It was great to have you. >> Our pleasure. Thanks for having us. Enjoyed it, thank you. >> I'm Rebecca Knight for Stu Miniman. We will have more of theCUBE's live coverage of Dell Technologies World coming up in just a little bit.

Published Date : May 2 2019

Dominic Preuss, Google | Google Cloud Next 2019


 

>> Announcer: Live from San Francisco, it's theCUBE. Covering Google Cloud Next '19. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back to the Moscone Center in San Francisco, everybody. This is theCUBE, the leader in live tech coverage. This is day two of our coverage of Google Cloud Next, #GoogleNext19. I'm here with my co-host Stuart Miniman, and I'm Dave Vellante; John Furrier is also here. Dominic Preuss is here, he's the Director of Product Management, Storage and Databases at Google. Dominic, good to see you. Thanks for coming on. >> Great, thanks to be here. >> Gosh, 15, 20 years ago there were like three databases, and now there's like, I feel like there's 300. It's exploding, all this innovation. You guys made some announcements yesterday, which we're gonna get into, but let's start with, I mean, data, we were just talking at the open, is the critical part of any IT transformation, business value, it's at the heart of it. Your job is at the heart of it, and it's important to Google. >> Yes. Yeah, you know, Google has a long history of building businesses based on data. We understand the importance of it, we understand how critical it is. And so, really, that ethos has carried over into Google Cloud Platform. We think about it very much as a data platform, and we have a very strong responsibility to our customers to make sure that we provide the most secure, the most reliable, the most available data platform for their data. And it's a key part of any decision when a customer chooses a hyper cloud vendor. >> So summarize your strategy. You guys had some announcements yesterday really embracing open source. There's certainly been a lot of discussion in the software industry about other cloud service providers who were sort of bogarting open source and not giving back, et cetera, et cetera, et cetera.
How would you characterize Google's strategy with regard to open source, data storage, data management, and how do you differentiate from other cloud service providers? >> Yeah, Google has always been the open cloud. We have a long history in our commitment to open source. Whether it be Kubernetes, TensorFlow, Angular, Golang, pick any one of these, we've been contributing heavily back to open source. Google's entire history is built on the success of open source. So we believe very strongly that it's an important part of our success. We also believe that we can take a different approach to open source. We're at a very pivotal point in the open source industry, as these companies are understanding and deciding how to monetize in a hyper cloud world. So we think we can take a fundamentally different approach and be very collaborative and support the open source community without taking advantage or not giving back. >> So, somebody might say, okay, but Google's got its own operational databases, you got analytic databases, relational, non-relational. I guess Google Spanner kind of fits in between those. It was an amazing product. I remember when that first came out, it was making my eyes bleed reading the white paper on it, but awesome tech. You certainly own a lot of your own database technology and do a lot of innovation there. So, square that circle with regard to partnerships with open source vendors. >> Yeah, I think you alluded to it a little bit earlier: there are hundreds of database technologies out there today. And there's really been a proliferation of new technology, specifically databases, for very specific use cases, whether it be graph or time series, all these other things. As a hyper cloud vendor, we're gonna try to do the most common things that people need. We're gonna do managed MySQL, and Postgres and SQL Server. But for other databases that people wanna run, we want to make sure that those solutions are first-class opportunities on the platform.
So we've engaged with seven of the top and leading open source companies to make sure that they can provide a managed service on Google Cloud Platform that is first class. What that means is that as a GCP customer, I can choose a Google-offered service or a third-party-offered service, and I'm gonna have the same seamless, frictionless, integrated experience. So I'm gonna get unified billing; I'm gonna get one bill at the end of the day. I'm gonna have unified support; I'm gonna reach out to Google support and they're going to figure out what the problem is, without blaming the third party or saying that isn't our problem. We take ownership of the issue, and we'll go and figure out what's happening to make sure you get an answer. Then thirdly, a unified experience, so that the GCP customer can manage that experience inside the cloud console, just like they would their Google-offered services. >> A fully managed database as a service, essentially. >> Yes, so of the seven vendors, a number of them are databases. But also for Kafka, to manage Kafka or any other solutions that are out there as well. >> All right, so we could spend the whole time talking about databases. I wanna spend a couple minutes talking about the other piece of your business, which is storage. >> Dominic: Absolutely. >> Dave and I have a long history in what we'd call traditional storage. And the dialog over the last few years has been that we're actually talking about data more than the storing of information. A few years back, I called cloud the silent killer of the old storage market. Because, you know, I'm not looking at buying a storage array when I'm building something in the cloud. I use storage as one of the many services that I leverage. Can you just give us some of the latest updates as to what's new and interesting in your world, as well as, when customers come to Google, where does storage fit in that overall discussion?
>> I think the amazing opportunity that we see for large enterprises right now is that today, a lot of the data that they have in their company is in silos. It's not properly documented; they don't necessarily know where it is, who owns it, or the data lineage. When we pick all that data up across the enterprise and bring it into Google Cloud Platform, what's so great about it is that regardless of which storage solution you choose to put your data in, it's in a centralized place. It's all integrated, and then you can really start to understand what data you have: how do I make connections across it, how do I try to drive value by correlating it? For us, we're trying to make sure that whatever data comes across, customers can choose whatever storage solution they want, whichever is most appropriate for their workload. Then once the data's in the platform, we help them take advantage of it. We are very proud of the fact that when you bring data into object storage, we have a single unified API. There's only one product to use. Whether you have really cold data or really fast data, you don't have to wait hours to get the data; it's all available within milliseconds. Now, we're really excited that we announced today a new storage class. So, in Google Cloud Storage, which is our object storage product, we're now gonna have a very cold, archival storage option that's going to start at $0.12 per gigabyte, per month. We think that that's really going to change the game in terms of customers that are trying to retire their old tape backup systems or are really looking for the most cost-efficient, long-term storage option for their data. >> The other thing that we've heard a lot about this week is that hybrid and multi-cloud environment. Google laid out a lot of the partnerships. I think you had VMware up on stage. You had Cisco up on stage, I see Nutanix is here. How does that storage, the hybrid multi-cloud, fit together for your world?
>> I think the way that we view hybrid is that every customer, at some point, is hybrid. Like, no one ever picks up all their data on day one and, on day two, it's all on the cloud. It's gonna be a journey of bringing that data across. So, it's always going to be hybrid for that period of time. So for us, it's making sure that all of our storage solutions support open standards. So if you're using an S3-compliant storage solution on-premises, you can use Google Cloud Storage with our S3-compatible API. If you are doing block, we work with all the large vendors, whether it be NetApp or EMC or any of the other vendors you're used to having on-premises, making sure we can support those. I'm personally very excited about the work that we've done with NetApp around NetApp Cloud Volumes for Google Cloud Platform. If you're a NetApp shop and you've been leveraging that technology and you're really comfortable and really like it on-premises, we make it really easy to bring that data to the cloud and have the same exact experience. You get all the wonderful features that NetApp offers you on-premises, in a cloud-native service where you're paying on a consumption basis. So, it really takes, kind of, the decision away for the customers. You like NetApp on-premises, but you want cloud-native features and pricing? Great, we'll give you NetApp in the cloud. It really makes it an easy transition. So, for us, it's making sure that we're engaged and that we have a story with all the storage vendors that you're used to using on-premises today. >> Let me ask you a question, going back to the very cold, ice-cold storage. You said $0.12 per gigabyte per month, which is kinda in between your other two major competitors. What was your thinking on the pricing strategy there? >> Yeah, basically everything we do is based on customer demand.
So after talking to a bunch of customers, understanding the workloads, understanding the cost structure that they need, we think that that's the right price to meet all of those needs and allow us to basically compete for all the deals. We think that that's a really great price point for our customers. And it really unlocks all those workloads for the cloud. >> It's dirt cheap, it's easy to store, and then it takes a while to get it back, right, that's the concept? >> No, not at all. We are very different from other storage vendors or other public cloud offerings. When you drop your data into our system, basically, the trade-off that you're making is saying, I will give you a cheaper price in exchange for agreeing to leave the data in the platform for a longer time. So, basically, you're making a time-based commitment to us, at which point we're giving you a cheaper price. But what's fundamentally different about Google Cloud Storage is that, regardless of which storage class you use, everything is available within milliseconds. You don't have to wait hours or any amount of time to be able to get that data. It's all available to you. So, this is really important: if you have long-term archival data and then, let's say, you get a compliance request or regulatory request and you need to analyze all the data and get to all your data, you're not waiting hours to get access to that data. We're actually giving you access to that data within milliseconds, so that you can get the answers you need. >> And the quid pro quo is, I commit to storing it there for some period of time, is that what you said? >> Correct. So, we have four storage classes. We have our Standard, our Nearline, our Coldline, and this new Archival. Each of them has a lower price point in exchange for a longer committed time that you'll leave the data in the product. >> That's cool. I think that adds real business value there. So, obviously, it's not sitting on tape somewhere.
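The time-based commitment Dominic describes can be sketched with a toy calculator. The minimum storage durations below (none for Standard, 30, 90, and 365 days for Nearline, Coldline, and Archive) match what Google has published for these classes; the billing behavior modeled here, where deleting early still bills as if the object had stayed the full minimum, is the documented early-deletion rule. Treat this as an illustration of the pricing model, not a rate card.

```python
# Minimum storage duration per Google Cloud Storage class, in days.
# Colder classes are cheaper per GB-month but carry a longer commitment.
MIN_DAYS = {"standard": 0, "nearline": 30, "coldline": 90, "archive": 365}


def billed_gb_months(storage_class: str, gb: float, days_stored: float) -> float:
    """GB-months billed for an object, honoring the class's minimum
    storage duration: early deletion is charged as if the object had
    remained stored for the full minimum."""
    effective_days = max(days_stored, MIN_DAYS[storage_class])
    return gb * effective_days / 30.0


# Keeping 1000 GB for only 10 days: Standard bills 10 days' worth,
# while Archive bills the full 365-day minimum despite early deletion.
standard = billed_gb_months("standard", 1000, 10)
archive = billed_gb_months("archive", 1000, 10)
print(standard, archive)
```

Multiplying each result by a per-GB-month rate gives the charge, which is why the archive class only wins for data that genuinely stays put.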
>> We have a number of solutions for how we store the data. For us, it doesn't matter how we store the data. It's all about how long you're willing to tell us it'll be there, and that allows us to plan for those resources long term. >> That's a great story. Now, you also have these pay-as-you-go pricing tiers, can you talk about that a little bit? >> For which, for Google Cloud Storage? >> Dave: Yes. >> Yeah, everything is pay-as-you-go, and so basically you write data to us, and there's a charge for the operations you do, and then you're charged for however long you leave the data in the system. So, if you're using our Standard class, you're just paying our standard price. You can use either Regional or Multi-Regional, depending on the disaster recovery and the durability and availability requirements that you have. Then you're just paying us for that for however long you leave the data in the system. Once you delete it, you stop paying. >> So it must be, I'm not sure what kind of customer discussions are going on in terms of storage optionality. It used to be just, okay, I got block and I got file, but now you've got all different kinds. You just mentioned several different tiers of performance. What's the customer conversation like, specifically in terms of optionality, and what are they asking you to deliver? >> I think within the storage space, there are really three things: object, block, and file. So, on the block side, we have our Persistent Disk product. Customers are asking for better price performance, more performance, more IOPS, more throughput. We're continuing to deliver a higher-performance block device for them, and that's going very, very well. For those that need file, we have our first-party service, which is Cloud Filestore, which is our managed NFS. So if you need managed NFS, we can provide that for you at a really low price point. We also partner with, you mentioned Elastifile earlier.
We partner with NetApp, we're partnering with EMC. So all those options are also available for file. Then on the object side, if you can accept the object API, which is not POSIX-compliant, it's a very different model. If your workloads can support that model, then we give you a bunch of options with the object model API. >> So, data management is another hot topic, and it means a lot of things to a lot of people. You hear the backup guys talking about data management. The database guys talk about data management. What is data management to Google, and what's your philosophy and strategy there? >> I think for us, again, I spend a lot of time making sure that the solutions are unified and consistent across the board. So, for us, the idea is that if you bring data into the platform, you're gonna get a consistent experience. So you're gonna have consistent backup options, you're gonna have consistent pricing models. Everything should be very similar across the various products. So, number one, we're just making sure that it's not confusing, by making everything very simple and very consistent. Then over time, we're providing additional features that help you manage that. I'm really excited about all the work we're doing on the security side. So, you heard Orr's talk about access transparency and access approvals, right. So basically, we have a unified way to let you know whether anyone, either Google or a third party that has made a request, has had to access your data for any reason. So we're giving you full transparency as to what's going on with your data. And that's across the data platform. That's not on a per-product basis. We can basically layer in all these amazing security features on top of your data. The way that we view our business is that we are stewards of your data. You've given us your data and asked us to take care of it, right: don't lose it, give it back to me when I want it, and let me know when anything's happening to it.
We take that very seriously, and we see all the things we're able to bring to bear on the security side to really help us be good stewards of that data. >> The other thing you said is that I get those access logs in near real time, which is, again, nuanced but very important. Dominic, great story, really. Clear thinking, and you've obviously delivered some value for the customers there. So thanks very much for coming on theCUBE and sharing that with us. >> Absolutely, happy to be here. >> All right, keep it right there, everybody, we'll be back with our next guest right after this. You're watching theCUBE live from Google Cloud Next from Moscone. Dave Vellante, Stu Miniman, John Furrier. We'll be right back. (upbeat music)

Published Date : Apr 10 2019
