Mai Lan Tomsen Bukovec & Wayne Duso, AWS | AWS re:Invent 2021


 

>>Hi, everybody. Welcome back to theCUBE's coverage of AWS re:Invent 2021. You're watching theCUBE and I'm really excited. We're going to go outside the storage box, I like to say, with Mai Lan Tomsen Bukovec, who's the vice president of block and object storage, and Wayne Duso, who's the VP of storage, edge and data governance. Guys, great to see you again. We saw you at Storage Day, the 15 year anniversary of AWS, of course, the first product service ever. So awesome to be here, isn't it? Wow. >>So much energy in the room. It's so great to see customers learning from each other, learning from AWS, learning from the things that you're observing as well. >>A lot of companies decided not to do physical events. I think you guys are on the right side of history. You weren't exactly positive how many people were going to show up. Everybody showed. I mean, it's a packed house here. >>Number 10. Yeah. >>All right. So let's get right into it. Uh, news of the week. >>So much to say. Wayne, you want to kick this off? >>We had a great set of announcements that Mai Lan, uh, talked about yesterday, uh, in her talk, and a couple of them in the file space, specifically a new, uh, member of the FSx family. And if you remember, Amazon FSx is, uh, for customers who want to run fully managed versions of third party and open source file systems on AWS. And so yesterday we announced a new member, it's FSx for OpenZFS. >>Okay, cool. And there's more. >>Well, there's more. I mean, one of the great things about the new managed file service, OpenZFS, is it's powered by Graviton. >>It is powered by Graviton and all of the capabilities that AWS brings in terms of networking, storage, and compute, uh, to our customers. >>So this is really important. I want the audience to understand this. So I've talked on theCUBE about how a large proportion, let's call it
30% of the CPU cycles are kind of wasted, really, on things like offloads, and we could be much more efficient. So Graviton, much more efficient, lower power and better price performance, lower cost. Amazon is now on a new curve, uh, cycles are faster for processors, and you can take advantage of that in storage. Storage uses compute. >>That's right. In fact, you had that big launch as well for Lustre, with Graviton. >>We did, in fact. Uh, so with, uh, the announcement of FSx for OpenZFS, we also announced the next gen Lustre offering. And both of these offerings, uh, provide a five X improvement in performance. For example, now with Lustre, uh, customers can drive up to one terabyte per second of throughput, which is simply amazing. And with OpenZFS, right out of the box at GA, a million IOPS at sub-millisecond latencies, taking advantage of Graviton, taking advantage of our storage and networking capabilities. >>Well, I guess it's for HPC workloads, but what's the difference between, these days, HPC, big data, data intensive, a lot of AI stuff? >>You know, there's a lot of intersection between all of those different types of workloads, as you said, and you know, it all, it all depends and it all matters. And this is the reason why having the suite of capabilities, the, if you would, members of the family, is so important to our customers. >>We've talked a lot about how you really can't think about traditional storage as traditional storage anymore. And certainly your world's not a box. It's really a data platform, but maybe you could give us your point of view on that. >>Yeah, I think, you know, if we take a step back and we think about how does AWS do storage? Uh, we think along multiple dimensions. We have the dimension that Wayne's talking about, where you bring together the power of compute and storage for these managed file services that are so popular. You and I talked about, um, NetApp ONTAP.
Uh, we went into some detail on that with you as well, and that's been enormously popular. And so that whole dimension of these managed file services is all about where is the customer today and how can we help them get to the cloud? But then you think about the other things that we're also imagining, and we're re-imagining how customers want to grow those applications and scale them. And so a great example here at re:Invent is, let's just take the concept of archive. >>So many people, when they think about archive, they think about taking that piece of data and putting it away on tape, putting it away in a closet somewhere, never pulling it out. We don't think about archive like that. Archive just happens to be data that you just aren't using at the moment, but when you need it, you need it right away. And that's why we built a new storage class that we launched just yesterday, Dave, and it's called Glacier Instant Retrieval. It has retrieval in milliseconds, just like an S3 standard storage class, and it has the same pricing, four tenths of a cent, as Glacier archive. >>So what's interesting, at the analyst event today, Adam got a question about, and somebody was poking at him, you know, analysts can be snarky sometimes about, you know, price declines and so forth. And he said, you know, one of the things that doesn't always show up, and we don't always get credit for, is lowering prices, but we might lower costs. And the archive and deep archive is an example of that. Maybe you could explain that point of view. >>Yeah. The way we look at it is that our customers, when they talk to us about the cost of storage, they talk to us about the total cost of the storage, and it's not just storing the data, it's retrieving it and using it. And so we have done an amazing amount across all the portfolio around reducing costs. We have Glacier Instant Retrieval, which is 68% cheaper than Standard-Infrequent Access. That's a big cost reduction.
We have EBS Snapshots Archive, which we introduced yesterday, 75% cheaper to archive a snapshot. And these are the types of things that just transform the total cost. And in some cases we just eliminate costs. And so for the Glacier storage class, all bulk retrievals of data from the Glacier storage class, five to 12 hours, it's now free of charge. You don't even have to think about it. We didn't just reduce it. We eliminated the cost of that data retrieval. >>And additive to what Mai Lan said around, uh, archiving, if you look at what we've done throughout the entire year, you know, an interesting statistic that was brought up yesterday is, over the course of 2021, between our respective teams, we've launched over 105 capabilities for our customers throughout this year. And in some of them, for instance, on the file side, for EFS, we launched One Zone, which reduced, uh, customer costs by 47%. Uh, you can now achieve on EFS, uh, a cost of roughly 4.30 cents per gigabyte month. On, uh, FSx, we've reduced costs up to 92%, uh, on Lustre and FSx for Windows, and with the introduction of ONTAP and OpenZFS, we continue those forward, including customers' ability to compress and dedupe against those costs. So they end up seeing considerable savings, even over what our standard low prices are. >>100 plus, what can I call them, releases? And how can you categorize those? Are they features? Do they fall into... >>They range from major services, like what we've launched with OpenZFS, to major features, and really 95 of those were launched before re:Invent. And so really what you have between the different teams that work in storage is this relentless drive to improve all the storage platforms. And we do it all across the course of the year, all across the course of the year. And in some cases, the benefit shows up at no cost at all to a customer. >>Uh, how did this, it seems like you're on an accelerated pace, S3, EBS, and then like hundreds of services.
I guess the question is how come it took so long, and how is it accelerating now? Is it just like, there was so much focus on compute before, you had to get that in place, but now it's just rapidly accelerating? >>I'll tell you, Dave, we took the time to count this year. And so we came to you with this number of 106. Uh, that acceleration has been in place for many years. We just didn't take the time to count it. So this has been happening for years and years. Wayne and I have been with AWS for a long time now, for 10 plus years. And really that velocity that we're talking about right now, that has been happening every single year, which is where you have storage today. And I got to tell you, innovation is in our DNA and we are not going to stop now. >>So 10 years. Okay. So it was really, the first five years was kind of slow? And then... >>I don't think that's true at all. You know, if you look at, uh, the services that we have, we have the most complete portfolio of any cloud provider when it comes to storage and data. And so over the years, we've added to the foundation, which is S3, and the foundation, which is EBS. We've come out with a number of storage services in the file space. Now you have an entire suite of persistent data stores within AWS, and the teams behind those that are able to accelerate that pace. Just to give you an example, when I joined 10 years ago, AWS launched within that year roughly a hundred and twenty, a hundred and twenty eight services or features. Our teams together this year have launched almost that many, just in this space. So AWS continues to accelerate, the storage teams continue to accelerate. And as Mai Lan said, we just started counting.
But really, I think you're just going to see an even faster acceleration. That number's going up. >>No, that's what I'm saying. It does appear that way. And you had to build a team and put teams in place. And so that's, you know, part of the equation. But again, I come back to, it's not even, I don't even think of it as storage anymore. It's data. The data lake is here to stay. You might not like the term. We always make the joke about a data ocean, but data lake is here to stay. 200,000 data lakes now, we heard Adam talk about, uh, this morning. I think it was Adam. No, it was Swami. You've got thousands of data lakes in your customer base now. And people are adding value to that data in new ways, injecting machine intelligence, you know, SageMaker is a big piece of that, tying it in. I know a lot of customers are using Glue as catalogs, and I'm like, wow, is Glue a catalog? Or, I mean, it's just so flexible. So what are you seeing customers do with that base of data now, and driving new business value? Because I've said the last decade plus has been about IT transformation. And now we're seeing business transformation. Maybe you could talk about that a little bit. >>Well, the base of every data lake is going to be S3. S3, as of yesterday, has over 200 trillion objects now, Dave, and if you think about that, if you took every person on the planet, each of those people would have 26,000 S3 objects. It's gotten that big. And you know, if you think about the base of data with 200 trillion plus objects, really the opportunity for innovation is limitless. And you know, a great example for that is, it's not just business value, it's really the new customer experiences that our customers are inventing. The NFL, uh, they, you know, they have that application called Digital Athlete where, you know, they started off with 10,000 labeled images, or up to 20,000 labeled images now.
And they're all using it to drive machine learning models that help predict and support the players on the field when they start to see things unfold that might cause injury. That is a brand new experience, and it's only possible with vast amounts of data. >>Additive to what Mai Lan said, you talk about business transformation, we are in the age of data. And we represent storage services, but what we really represent is what our customers hold as one of their most valuable assets, which is their data. And that set of data is only growing, and the ability to use that data, to leverage that data for value, whether it's ML training, whether it's analytics, that's only accelerating. This is the feedback we get from our customers. This is where these features and new capabilities come from. So that's what's really accelerating our pace. >>Guys, I wish we had more time. I'd have to have you back, because we're on a tight clock here, but so great to see you both, especially live. I hope we get to do more of this in 2022. I'm an optimist. Okay. And keep it right there, everybody. This is Dave Vellante for theCUBE, your leader in live tech coverage. We'll be right back.
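The storage-class arithmetic Tomsen Bukovec walks through above (four tenths of a cent per gigabyte-month for Glacier Instant Retrieval, 68% cheaper than Standard-Infrequent Access) reduces to a simple cost model. Here is a back-of-the-envelope sketch in Python; the prices are the figures quoted in the conversation, not current AWS list prices, so check the S3 pricing page before relying on them:

```python
# Cost model using only the figures quoted in the interview:
# Glacier Instant Retrieval at $0.004/GB-month ("four tenths of a cent"),
# stated to be 68% cheaper than S3 Standard-Infrequent Access.

def monthly_storage_cost(gb, price_per_gb_month):
    """Storage cost for one month, ignoring request and retrieval fees."""
    return gb * price_per_gb_month

GLACIER_INSTANT_RETRIEVAL = 0.004                       # $/GB-month
STANDARD_IA = GLACIER_INSTANT_RETRIEVAL / (1 - 0.68)    # implied by "68% cheaper"

archive_gb = 100_000  # 100 TB of rarely-read data

ia_cost = monthly_storage_cost(archive_gb, STANDARD_IA)
gir_cost = monthly_storage_cost(archive_gb, GLACIER_INSTANT_RETRIEVAL)
savings = 1 - gir_cost / ia_cost

print(f"Standard-IA:               ${ia_cost:,.0f}/month")
print(f"Glacier Instant Retrieval: ${gir_cost:,.0f}/month")
print(f"Savings:                   {savings:.0%}")
```

Note that the Standard-IA price implied by the quoted 68% figure works out to $0.0125/GB-month, and the model deliberately ignores the retrieval and request fees that the interview emphasizes as the other half of total cost.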

Published Date : Dec 2 2021



Juan Loaiza, Oracle | CUBE Conversation, September 2021


 

(bright music) >> Hello, everyone, and welcome to this CUBE video exclusive. This is Dave Vellante, and as I've said many times, what people sometimes forget is Oracle's chairman is also its CTO, and he understands and appreciates the importance of engineering. It's the lifeblood of tech innovation, and Oracle continues to spend money on R and D. Over the past decade, the company has evolved its Exadata platform by investing in core infrastructure technology. For example, Oracle initially used InfiniBand, which in and of itself was a technical challenge to exploit for higher performance. That was an engineering innovation, and now it's moving to RoCE to try and deliver best of breed performance by today's standards. We've seen Oracle invest in machine intelligence for analytics. It's converged OLTP and mixed workloads. It's driving automation functions into its Exadata platform for things like indexing. The point is we've seen a consistent cadence of improvements with each generation of Exadata, and it's no secret that Oracle likes to brag about the results of its investments. At its heart, Oracle develops database software, and databases have to run fast and be rock solid. So Oracle loves to throw around impressive numbers, like 27 million 8K IOPS, or analytics scans running at more than a terabyte per second. Look, Oracle's objective is to build the best database platform and convince its customers to run on Oracle, instead of doing it themselves or in some other cloud. And because the company owns the full stack, Oracle has a high degree of control over how to optimize the stack for its database. So this is how Oracle intends to compete with Exadata, Exadata Cloud@Customer and other products, like ZDLRA, against AWS Outposts, Azure Arc and do it yourself solutions.
And with me, to talk about Oracle's latest innovation with its Exadata X9M announcement is Juan Loaiza, who's the Executive Vice President of Mission Critical Database Technologies at Oracle. Juan, thanks for coming on theCUBE, always good to see you, man. >> Thanks for having me, Dave. It's great to be here. >> All right, let's get right into it and start with the news. Can you give us a quick overview of the X9M announcement today? >> Yeah, glad to. So, we've had Exadata on the market for a little over a dozen years, and every year, as you mentioned, we make it better and better. And so this year we're introducing our X9M family of products, and as usual, we're making it better. We're making it better across all the different dimensions for OLTP, for analytics, lower costs, higher IOPs, higher throughputs, more capacity, so it's better all around, and we're introducing a lot of new software features as well that make it easier to use, more manageable, more highly available, more options for customers, more isolation, more workload consolidation, so it's our usual better and better every year. We're already way ahead of the competition in pretty much every metric you can name, but we're not sitting back. We have the pedal to the metal and we're keeping it there. >> Okay, so as always, you announced some big numbers. You're referencing them. I did in my upfront narrative. You've claimed double to triple digit performance improvements. Tell us, what's the secret sauce that allows you to achieve that magnitude of performance gain? >> Yeah, there's a lot of secret sauce in Exadata. First of all, we have custom designed hardware, so we design the systems from the top down, so it's not a generic system. It's designed to run database with a specific and sole focus of running database, and so we have a lot of technologies in there. Persistent memory is a really big one that we've introduced that enables super low response times for OLTP. 
RoCE, remote direct memory access over Converged Ethernet, with a hundred gigabit network, is a big thing. Offload to storage servers is a big thing. The columnar processing of the storage is a huge thing, so there's a lot of secret sauce. Most of it is software and hardware related, and the interesting thing about it is it's very unique. So we've been introducing more and more technologies and actually advancing our lead by introducing very unique, very effective technologies, like the ones I mentioned, and we're continuing that with our X9 generation. >>So that persistent memory allows you to do a write directly, an atomic write directly to memory, and then what, you update asynchronously to the backend at some point? Can you double click on that a little bit? >>Yeah, so we use persistent memory as kind of the first tier of storage. And the thing about persistent memory is it's persistent. Unlike normal memory, it doesn't lose its contents when you lose power, so it's just as good as flash or traditional spinning disks in terms of storing data. And the integration that we do is we do what's called remote direct memory access. That means the hardware sends the new data directly into persistent memory in storage with no software, getting rid of all the software layers in between, and that's what enables us to achieve this extremely low latency. Once it's in persistent memory, it's stored. It's as good as being in flash or disk. So there's nothing else that we need to do. We do age things out of persistent memory to keep only hot data in there. That's one of the tricks that we do, because persistent memory is more expensive than flash or disk, so we tier it. So we age data in as it becomes hot, age it out as it becomes cold, but once it's in persistent memory, it's as good as being stored. It is stored.
Flash is more than an order of magnitude faster than disk drive, so it is a new technology that provides big benefits, particularly for latency on OLTP. >> Great, thank you for that, okay, we'll get out of the plumbing. Let's talk about what this announcement means to customers. How does all this performance, and you got a lot of scale here, how does it translate into tangible results say, for a bank? >> Yeah, so there's a lot of ways. So, I mentioned performance is a big thing, always with Exadata. We're increasing the performance significantly for OLTP, analytics, so OLTP, 50, 60% performance improvements, analytics, 80% performance improvements in terms of costs, effectiveness, 30 to 60% improvement, so all of these things are big benefits. You know, one of the differences between a server product like Exadata and a consumer product is performance translates in the cost also. If I get a new smartphone that's faster, it doesn't actually reduce my costs, it just makes my experience a little better. But with a server product like Exadata, if I have 50% faster, I can translate that into I can serve 50% more users, 50% more workload, 50% more data, or I can buy a 50% smaller system to run the same workload. So, when we talk about performance, it also means lower costs, so if big customers of ours, like banks, telecoms, retailers, et cetera, they can take that performance and turn it into better response times. They can also take that performance and turn it into lower costs, and everybody loves both of those things, so both of those are big benefits for our customers. >> Got it, thank you. Now in a move that was maybe a little bit controversial, you stated flat out that you're not going to bother to compare Exadata cloud and customer performance against AWS Outposts and Azure Stack, rather you chose to compare to RDS, Redshift, Azure SQL. Why, why was that? >> Yeah, so our Exadata runs in the public cloud. 
We have Exadata that runs in Cloud@Customer, and we have Exadata that runs on prem. And AWS and Azure Stack, they have something a little more similar to Cloud@Customer, where they take their cloud solutions and put them in the customer data center. So when we came out with our new X9M Cloud@Customer, we looked at those technologies and honestly, we couldn't even come up with a good comparison with their equivalent, for example, AWS Outposts, because those products really just don't run there. For example, the two database products that Amazon promotes are Aurora for OLTP and Redshift for analytics. Well, those two can't even run at all on their Outposts product. So, it's kind of like beating up on a child or something. (laughs) It doesn't make sense. They're out of our weight class, so we're not even going to compare against them. So we compared what we run, both in public cloud and Cloud@Customer, against their best product, which is the Redshifts and the Auroras in their public cloud, which are their most scalable, available products. With their equivalent Cloud@Customer, not only does it not perform, it doesn't run at all. Their premier products don't run at all on those platforms.
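The persistent memory tiering Loaiza describes earlier (writes are durable the instant they land, hot data is aged in, cold data is aged out to flash or disk) can be sketched as a small LRU-managed tier. This is an illustrative model only; the class below and its eviction policy are hypothetical stand-ins for the idea, not Exadata's actual algorithms:

```python
from collections import OrderedDict

class PersistentMemoryTier:
    """Illustrative hot-data tier in front of a flash/disk backing store.

    Writes land in the tier and are durable immediately (persistent
    memory keeps its contents across power loss, so there is no
    separate flush step). When the tier fills, the least-recently-used
    entry is demoted to the backing store rather than discarded.
    """

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # e.g. a dict standing in for flash
        self.pmem = OrderedDict()      # access order doubles as LRU order

    def write(self, key, value):
        # Stored the moment it lands in pmem; nothing else to do.
        self.pmem[key] = value
        self.pmem.move_to_end(key)
        self._evict_cold()

    def read(self, key):
        if key in self.pmem:           # hot hit: lowest-latency path
            self.pmem.move_to_end(key)
            return self.pmem[key]
        value = self.backing[key]      # cold miss: fetch from flash/disk
        self.write(key, value)         # age in: it just became hot
        return value

    def _evict_cold(self):
        while len(self.pmem) > self.capacity:
            key, value = self.pmem.popitem(last=False)  # coldest entry
            self.backing[key] = value  # age out; data is never lost
```

In the sketch, a read miss promotes data from the backing store into the tier and the coldest entry is demoted to make room, mirroring the age-in/age-out behavior described in the conversation.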
>> All right, now you also chose to compare the X9M performance against on-premises storage systems. Why and what were those results? >> Yeah, so with the on-premises, traditionally customers bought conventional storage and that kind of stuff, and those products have advanced quite a bit. And again, those aren't optimized. Those aren't designed to run database, but some customers have traditionally deployed those, you know, there's less and less these days, but we do get many times faster both on OLTP and analytic performance there, I mean, with analytics that can be up to 80 times faster, so again, dramatically better, but yeah, there's still a lot of on-premise systems, so we didn't want to ignore that fact and compare only to cloud products. >> So these are like to like in the sense that they're running the same level of database. You're not playing games in terms of the versioning, obviously, right? >> Actually, we're giving them a lot of the benefit. So we're taking their published numbers that aren't even running a database, and they use these low-level benchmarking tools to generate these numbers. So, we're comparing our full end-to-end database to storage numbers against their low-level IO tool that they've published in their data sheets, so again, we're trying to give them the benefit of the doubt, but we're still orders of magnitude better. >> Okay, now another claim that caught our attention was you said that 87% of the Fortune 100 organizations run Exadata, and you're claiming many thousands of other organizations globally. Can you paint a picture of the ICP, the Ideal Customer Profile for Exadata? What's a typical customer look like, and why do they use Exadata, Juan? >> Yeah, so the ideal customer is pretty straightforward, customers that care about data. That's pretty much it. 
(Dave laughs) If you care about data, if you care about performance of data, if you care about availability of data, if you care about manageability, if you care about security, those are the customers that should be looking strongly at Exadata, and those are the customers that are adopting Exadata. That's why, as you mentioned, 87% of the global Fortune 100 have already adopted Exadata. If you look at a lot of industries, for example, pretty much every major bank in the entire world is running Exadata, and they're running it for their mission critical workloads, things like financial trading, regulatory compliance, user interfaces, the stuff that really matters. But in addition to the biggest companies, we also have thousands of smaller companies that run it for the same reason, because their data matters to them, and it's frankly the best platform, which is why we get chosen by these very, very sophisticated customers over and over again, and why this product has grown to encompass most of the major corporations in the world, and governments also. >>Now, I know Deutsche Bank is a customer, and I guess now an engineering partner, from the announcement that I saw earlier this summer. They're using Cloud@Customer, and they're collaborating on things like security, blockchain, machine intelligence, and my inference is Deutsche Bank is looking to build new products and services that are powered by your platforms. What can you tell us about that? Can you share any insights? Are they going to be using X9M, for example? >>Yes, Deutsche Bank is a partnership that we announced a few months ago. It's a major partnership. Deutsche Bank is one of the biggest banks in the world.
They traditionally are an on-premises customer, and what they've announced is they're going to move almost the entire database estate to our Exadata Cloud@Customer platform, so they want to go with a cloud platform, but they're big enough that they want to run it in their own data center for certain regulatory reasons. And so, the announcement that we made with them is they're moving the vast bulk of their data estate to this platform, including their core banking, regulatory applications, so their most critical applications. So, obviously they've done a lot of testing. They've done a lot of trials and they have the confidence to make this major transition to a cloud model with the Exadata Cloud@Customer solution, and we're also working with them to enhance that product and to work in various other fields, like you mentioned, machine learning, blockchain, that kind of project also. So it's a big deal when one of the biggest, most conservative, best respected financial institution in the world says, "We're going all in on this product," that's a big deal. >> Now outside of banking, I know a number of years ago, I stumbled upon an installation or a series of installations that Samsung found out about them as a customer. I believe it's now public, but they've something like 300 Exadatas. So help us understand, is it common that customers are building these kinds of Exadata farms? Is this an outlier? >> Yeah, so we have many large customers that have dozens to hundreds of Exadatas, and it's pretty simple, they start with one or two, and then they see the benefits, themselves, and then it grows. And Samsung is probably the biggest, most successful and most respected electronics company in the world. They are a giant company. They have a lot of different sub units. They do their own manufacturing, so manufacturing's one of their most critical applications, but they have lots of other things they run their Exadata for. 
So we're very happy to have them as one of our major customers that run Exadata, and by the way, Exadata again, very huge in electronics, in manufacturing. It's not just banking and that kind of stuff. I mean, manufacturing is incredibly critical. If you're a company like Samsung, that's your bread and butter. If your factory stops working, you have huge problems. You can't produce products, and you will want to improve the quality. You want to improve the tracking. You want to improve the customer service, all that requires a huge amount of data. Customers like Samsung are generating terabytes and terabytes of data per day from their manufacturing system. They track every single piece, everything that happens, so again, big deal, they care about data. They care deeply about data. They're a huge Exadata customer. That's kind of the way it works. And they've used it for many years, and their use is growing and growing and growing, and now they're moving to the cloud model as well. >> All right, so we talked about some big customers and Juan, as you know, we've covered Exadata since its inception. We were there at the announcement. We've always stressed the fit in our research with mission critical workloads, which especially resonates with these big customers. My question is how does Exadata resonate with the smaller customer base? >> Yeah, so we talk a lot about the biggest customers, because honestly they have the most critical requirements. But, at some level they have worldwide requirements, so if one of the major financial institutions goes down, it's not just them that's affected, that reverberates through the entire world. There's many other customers that use Exadata. Maybe their application doesn't stop the world, but it stops them, so it's very important to them. 
And so one of the things that we've introduced in our Cloud@Customer and public cloud Exadata platforms is the ability for Oracle to manage all the infrastructure, which enables smaller customers that don't have as much IT sophistication to adopt these very mission critical technologies, so that's one of the big advancements. Now, we've always had smaller customers, but now we're getting more and more. We're getting universities, governments, smaller businesses adopting Exadata, because the cloud model for adoption is dramatically simpler. Oracle does all the administration, all the low-level stuff. They don't have to get involved in it at all. They can just use the data. And, on top of that comes our Autonomous Database, which makes it even easier for smaller customers to adopt. So Exadata, which some people think of as a very high-end platform, in this cloud model, and particularly with Autonomous Database, is very accessible and very useful for any size customer, really. >> Yeah, by all accounts, I wouldn't debate that Exadata has been a tremendous success. But you know, a lot of customers still prefer to roll their own, do it themselves, and when I talk to them and ask them, "Okay, why is that?" They feel it limits their reliance on a single vendor, and it gives them better ability to build what I call a horizontal infrastructure that can support, say, non-Oracle workloads, so what do you tell those customers? Why should those customers run Oracle Database on Exadata instead of a DIY infrastructure? >> Yeah, so that debate has gone on for a lot of years. And actually, what I see, there's less and less of that debate these days. You know, initially, many customers were used to building their own. That's kind of what they did. They were pretty good at it. And when we talk about these major banks, those are the kinds of people that are really good at it. They have giant IT departments.
If you look at a major bank in the world, they have tens of thousands of people in their IT departments. These are gigantic multi-billion dollar organizations, so they were pretty good at this kind of stuff. And what we've shown them is you can't build this yourself. There's so much software that we've written to integrate with the database that you just can't build yourself, it's not possible. It's kind of like trying to build your own smartphone. You really can't do it, the scale, the complexity of the problem. And now as the cloud model comes in, customers are realizing, hey, all this attention to building my own infrastructure, it's kind of last decade, last century. We need to move on to more of an as a service model, so we can focus on our business. Let enterprises that are specialized in infrastructure, like Oracle that are really, really good at it, take care of the low-level details, and let me focus on things that differentiate me as a business. It's not going to differentiate them to establish their own storage for database. That's not a differentiator, and they can't do it nearly as well as we can, and a lot of that is because we write a lot of special technology and software that they just can't do themselves, it's not possible. It's just like you can't build your own smartphone. It's just really not possible. >> Now, another area that we've covered extensively, we were there at the unveiling, as well is ZDLRA, Zero Data Loss Recovery Appliance. We've always liked this product, especially for mission critical workloads, we're near zero data loss, where you can justify that. But while we always saw it as somewhat of a niche market, first of all, is that fair, and what's new with ZDLRA? >> Yeah ZDLRA has been in the market for a number of years. We have some of the biggest corporations in the world running on that, and one of the big benefits has been zero data loss, so again, if you care about data, you can't lose data. 
You can't restore to last night's backup if something happens. So if you're a bank, you can't restore everybody's data to last night. Suppose you made a deposit during the day. They're like, "Hey, sorry, Mr. Customer, your deposit, well, we don't have any record of it anymore, 'cause we had to restore to last night's backup," you know, that doesn't work. It doesn't work for airlines. It doesn't work for manufacturing. That whole model is obsolete, so you need zero data loss, and that's why we introduced Zero Data Loss Recovery Appliance, and it's been very successful in the market. In addition to zero data loss, it actually provides much faster restores, much more reliable restores. It's more scalable, so it has a lot of advantages. With our X9M generation, we're introducing several new capabilities. First of all, it has higher capacity, so we can store more backups, keep data for longer. Another thing is we're actually dropping the price of the entry-level configuration of ZDLRA, so it makes it more affordable and more usable by smaller businesses, so that's a big deal. And then the other thing that we're hearing a lot about, and if you read the news at all, you hear a lot about ransomware. This is a major problem for the world, cyber criminals breaking into your network and holding your data for ransom. And so we've introduced what we call cyber vault capabilities in ZDLRA. They help address this ransomware issue that's kind of rampant throughout the world, so everybody's worried about that. There's now regulatory compliance for ransomware that particularly financial institutions have to conform to, and so we're introducing new capabilities in that area as well, which is a big deal. In addition, we now have the ability to have multiple ZDLRAs in a large enterprise, and if something happens to one, we automatically fail over backups to another.
We can replicate across them, so it makes it, again, much more resilient, with replication across different recovery appliances, so a lot of new improvements there as well. >> Now, is an air gap part of that solution for ransomware? >> No, with an air gap, if you're continuously streaming changes to it, you really can't have an air gap there, but you can protect the data. There's a number of technologies to protect the data. For example, one of the things that a cyber criminal wants to do is take control of your data and then get rid of your backups, so you can't restore them. So a simple example of one thing we're doing is we're saying, "Hey, once we have the data, you can't delete it for a certain number of days." So you might say, "For 30 days, I don't care who you are. I don't care what privileges you have. I don't care anything, I'm holding onto that data for at least 30 days," so for example, a cyber criminal can't come in and say, "Hey, I'm going to get into the system and delete that stuff or encrypt it," or something like that. So that's a simple example of one of the things that the cyber vault does. >> So, even as an administrator, I can't change that policy? >> That's right, that's one of the goals: it doesn't matter what privileges you have, you can't change that policy. >> Does that eliminate the need for an air gap, or would you not necessarily recommend that, would you just have another layer of protection? What's your recommendation on that to customers? >> We always recommend multiple layers of protection, so for example, in our ZDLRA, we offload tape backups directly from the appliance, so a great way to protect the data from any kind of thing is you put it on a tape, and guess what, once that tape is filed away, I don't care what cyber criminal you are, if you're remote, you can't access that data.
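The retention-lock idea Juan describes, where nobody, administrators included, can delete a backup inside its retention window, can be modeled in a few lines. This is an illustrative sketch only, not Oracle's implementation; the class, function names, and the 30-day window are all hypothetical.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # hypothetical policy window


class Backup:
    def __init__(self, name, created_at=None):
        self.name = name
        self.created_at = created_at or datetime.now(timezone.utc)


def can_delete(backup, now=None):
    """A backup is immutable until its retention window expires.

    There is deliberately no privilege check and no override path:
    the answer is the same no matter who asks.
    """
    now = now or datetime.now(timezone.utc)
    return now >= backup.created_at + timedelta(days=RETENTION_DAYS)


fresh = Backup("core-banking-full")
aged = Backup("old-archive",
              created_at=datetime.now(timezone.utc) - timedelta(days=45))

print(can_delete(fresh))  # False: still inside the 30-day window
print(can_delete(aged))   # True: the window has expired
```

The point of the design is that `can_delete` takes no caller identity at all, so a compromised administrator account gains nothing, which is what the "even as an administrator, I can't change that policy" exchange is getting at.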
So, we always promote multiple layers, multiple technologies to protect the data, and tape is a great way to do that. We can also now archive. In addition to tape, we can now archive to the public cloud, to our object storage service. We can archive to what we call our ZFS appliance, which is a very low-cost storage appliance, so there's a number of secondary archive copies that we offload and implement for customers. We make it very easy to do that. So, yeah, you want multiple layers of protection. >> Got it, okay, your tape is your ultimate air gap. ZDLRA is your low RPO device. You've got cloud kind of in the middle, maybe that's your cheap and deep solution, so you have some options. >> Juan: Yes. >> Okay, last question. Summarize the announcement, if you had to mention two or three takeaways from the X9M announcement for our audience today, what would you choose to share?
>> All right, and thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (bright music)

Published Date : Sep 28 2021

Nelson Nahum, Zadara Storage - Google Next 2017 - #GoogleNext17 - #theCUBE


 

>> Announcer: Congratulations, Reggie Jackson. You are Cube alumni. (gentle music) Live from Silicon Valley, it's theCube, covering Google Cloud Next '17. (techno music) >> Hi, and welcome back to theCube's coverage of Google Next 2017. We're at the heart of Silicon Valley in our great studio in Palo Alto. We've got a team of reporters and analysts up in San Francisco with the 10,000 people attending, really, Google's enterprise show. It's got Cloud, it's got G Suite. It's got a little bit of the devices. I'm happy to welcome back to the program, someone we've had on many times, Nelson Nahum, the CEO of Zadara Storage. Really a company that's in there, understanding this cloud transition, kind of, how the enterprise gets into multicloud and all those pieces. Nelson, welcome back to the program. >> Thank you, sir, how are you? >> All right, it's good. So, you know, you and I, last time we talked was at another cloud show, a little bigger cloud show, one that's been going on for a number of years, but let's start talking about Google. What's, you know, we think a lot of progress over the last couple of years. I mean, Diane Green definitely has put her stamp on this company. A ton of people have been brought in, many of them, those of us in the industry, we know these people. We've seen them grow these businesses, understand how to talk to the enterprise, so, what's your take on the show so far and how Google's doing as a company? And we'll get into how they are as a partner soon, too. >> So this is our first Google Cloud show for us, for Zadara. We've been at Amazon's prime event since, I think, 2011, and we like it. This is going very well. We have a lot of conversations. We saw that there are many, many customers looking to have a multicloud strategy.
That actually works very well for us, because we can provide storage that can be accessed at the same time, concurrently, from Amazon, and from Google and Azure and others. So, yeah, it's a good trend. I think most of the people we found, they're either already on Amazon and looking to expand to Google, or moving from on-premise to Google, and so on. >> Can you help unpack that multicloud a bit for us? So you know, maybe some of our audience might not know, you guys sit in some of those mega-datacenters, like the Equinixes of the world, and direct connect to the public cloud. >> Exactly, exactly. >> So, I've got, kind of, my storage being Zadara, and there actually is, you know, physical storage there, and then that, you know, plugs into the Cloud resources, of course. We know AWS's Direct Connect. Azure has, you know, their equivalent, and Google has the same, so, can you have, you know, it's a single solution that, does it just get fibers to all three of them? Is there software that takes care of it? >> Yeah, actually- >> How does that work? >> Yeah, great question. So we sit in Equinix data centers with our cloud, and from there, in many cases, we use Equinix Cloud Exchange, which is, basically, like a network inside Equinix, and can be connected to many different potential targets, so, currently we are cross-connected to Amazon, Google and Azure. So our customer, at first, can create storage and mount it with NFS or ZFS, or block, and can mount the storage, especially if it is file storage, then you can share data. You can mount the same storage to virtual machines in Google, and to virtual machines in Amazon, and at the same time, they see the same files. >> Yeah, and what's the use case, why are they doing that? Is that for redundancy or certain features? You know, there was a certain cloud outage a week ago, were your customers riding through that, based on what they're doing? >> Yeah, so, the massive cloud outage that Amazon had last week, S3 ...
it caused many people to rethink (laughs), I guess. Fortunately for us, all our customers that sit on our storage weren't impacted, because they sit on our storage and they don't use, we don't use Amazon infrastructure, so they could continue- >> Stu: No S3 for your customers, right? >> Right, so we are doing the block and the file storage for our customers. I think that what is important here is not the outage, but what is important is people start recognizing that you need to have the data in two locations in order to be safe. >> Yeah, it's ... People that have done architecture and understand infrastructure, it's, you know, I need to be thoughtful as to how I architect things, so either I need to make sure I have the availability zones and the services and can take care of that, or perhaps even multicloud to be able to take care of that. >> So, multicloud, and you're completely independent of each other. So, we have an array of many customers using us and Amazon and, again, because our storage can be cross-connected to multiple clouds, it's very easy to access from virtual machines in any cloud at the same time. So, people that are using that, it's either for, kind of, fault-tolerant solutions or more robust solutions. As well, in some cases, for migration. Each cloud provider has the places or the attributes that it has. You can run applications better in that particular cloud, so, for example, in Microsoft Azure, anything that is related to Windows, they are the best, and, Oracle Cloud, if you run Oracle, probably is the best way to start. So, I will say that the multicloud is not only the disaster recovery type, but people want to use the best cloud for the particular application they have, and they have multiple applications, so use multiple clouds. >> I'm curious, do you get visibility, as to, you know, why are customers choosing Google?
Are there, do you have customers that are using Google that aren't using the other public clouds? Is it primarily your customers are using it as a secondary source? Any data you've got or anecdotes would be helpful. >> So, we have two types of customers, the ones that are multicloud and the others that are going from on-premise to the cloud. As you know, we have an on-premise business, and we make it very easy, from on-premise, to move to the cloud. We just launched, here at Google Cloud, a service called cloud hydration, where basically we allow a customer to move their entire infrastructure from on-premise to the cloud with zero, or minimal, downtime. So, we will ship the storage to the on-premise facility, the customer will pay per use, we will start doing replication to the cloud, or in some cases, if it is multiple petabytes, we will ship the equipment to the cloud, and, in the meantime, we can do replication and, at the end, we can switch and fail over, and the customer can continue from the cloud. >> Cloud hydration, >> Cloud hydration, yes. >> is that service. Does that support all the services, all the clouds? >> Yeah, so today, we are doing this for many customers, and the good use case is when a customer wants to move a lot of data to the cloud, but they don't want to have downtime. Because, with Amazon Snowball and all these boxes, you need to copy the data, and then ship, and then restore. >> Stu: So, it's not a truck that takes three months >> Yeah, exactly. >> sitting on your location. >> This is what we do: we ship double the amount of equipment to the customer, they start doing the copy, and then, half of it, we ship to the cloud. We connect to the cloud, and resume the connection, and, all the time, the customer continues to run, okay? And, at the last moment, they do the fail over. So, it's minimum downtime, even if you need to ship one petabyte of storage. >> Yeah.
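The hydration flow Nelson walks through, a bulk seed copy, then incremental catch-up syncs while the application keeps writing, then a short final cutover, is a generic seed-then-sync pattern. The sketch below is illustrative only, not Zadara's service or API: the dictionaries stand in for block storage, and every name is made up for illustration.

```python
def seed_copy(source, target):
    """Initial bulk copy while the staging array is still on site
    (or in transit) -- the application keeps writing to `source`."""
    target.update(source)


def incremental_sync(source, target):
    """Copy only the blocks that changed since the last pass."""
    delta = {k: v for k, v in source.items() if target.get(k) != v}
    target.update(delta)
    return delta


primary = {"block1": "v1", "block2": "v1"}  # stands in for the on-prem volume
replica = {}                                 # stands in for the cloud-side copy

seed_copy(primary, replica)        # 1. bulk seed (or physically shipped gear)
primary["block2"] = "v2"           # writes continue during the copy...
primary["block3"] = "v1"           # ...including brand-new blocks
delta = incremental_sync(primary, replica)   # 2. catch-up pass
# 3. cutover: quiesce writers, run one final (now tiny) sync, switch over
incremental_sync(primary, replica)
print(replica == primary)  # True
```

Each catch-up pass shrinks the delta, so the final sync under a write freeze, the only actual downtime, is brief; that is the "zero, or minimal, downtime" Nelson refers to.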
I'm curious, we've been going through such tremendous changes in the storage industry, do you guys sell, you know, is it the storage person, who do you sell to, and where is their mind at when they think about storage today? >> Yeah. Yeah, we sell storage, so the storage person is the one that's buying- >> Yeah, you know, a lot of people, if they're buying Google, or even if they're putting in AWS services, the storage person is, a lot of time, kind of shoved out of the mix, you're a little bit- >> Shoved out of the mix until they have a problem that they need to bring back the storage- >> Wait, are you saying that could be a problem? (laughs) >> So, what happen is that, and this is, I think, how the cloud started, is cloud storage is, "Ah, storage is just storage," until you start running real applications and you need the performance, and you need the reliability, and so on. So, this is why you need the storage guy to architect the solution, and this is where we, we come in and actually act as a really good outsourcing team of storage experts to the customer, and we help them with this transition from on-premise to the cloud and, in many cases, back and forth, if the customer wants to have a leg in the cloud, a leg on-premise, and move data easily back and forth. >> So, Google made a good push at the show, talking about building the ecosystem, how they want to work with partners. They had companies, like you know, PwC's all over the place, SAP, very strong partnership. How have you found it to work with Google? Any things you'd say to them as to how they can accelerate and move things faster to, you know, build up the ecosystem? >> Yeah, so far, our experience with Google was extremely good. The people are very dynamic, they have the Google dynamism that is very good, and, for us, it was really good to have a close relationship with the Google product managers and sales people, and so on, so we enjoy, have a really good relationship with Google Cloud. 
>> All right, well, Nelson, I want to give you the final word. You know, things you've learned this week, any cool customer conversations you've had, give us the final takeaway. >> Yeah, so, the ... I guess my summary is that, here in Google Cloud, we have a big advantage, because we have NAS, NFS, ZFS; we've architected the integration and all the snapshot capabilities that enterprises need. And, you know that Google doesn't have an EFS type of functionality, and our functionality's actually higher than EFS. So, this is what we are telling customers here in Google Cloud: anybody that needs NFS and ZFS, and NAS and multicloud, and on-premise to the cloud, they talk to us and we are ready to go. >> All right, Nelson Nahum, really appreciate you coming to the studio here to share what's happening at the Google event. Be sure to check out wikibon.com for our cloud research and, of course, siliconangle.tv to see all the shows we're going to be at, as well as the replays from this and lots of other cloud infrastructure, IoT and big data shows. We'll be back with lots more coverage here, day two of two, covering Google Cloud. From the SiliconANGLE media studio in Palo Alto, you're watching The Cube. (techno music)

Published Date : Mar 9 2017
