Image Title

Search Results for storage day:

Ashish Palekar & Cami Tavares, AWS | AWS Storage Day 2022


 

(upbeat music) >> Okay, we're back covering AWS Storage Day 2022 with Ashish Palekar. Who's the general manager of AWS EBS Snapshot and Edge and Cami Tavares. Who's the head of product at Amazon EBS. Thanks for coming back in theCube guys. Great to see you again. >> Great to see you as well, Dave. >> Great to see you, Dave. Ashish, we've been hearing a lot today about companies all kinds of applications to the cloud and AWS and using their data in new ways. Resiliency is always top of mind for companies when they think about just generally their workloads and specifically the clouds. How should they think about customers think about data resiliency? >> Yeah, when we think about data resiliency it's all about making sure that your application data, the data that your application needs is available when it needs it. It's really the ability for your workload to mitigate disruptions or recover from them. And to build that resilient architecture you really need to understand what kinds of disruptions your applications can experience. How broad the impact of those disruptions is, and then how quickly you need to recover. And a lot of this is a function of what the application does, how critical it is. And the thing that we constantly tell customers is, this works differently in the cloud than it does in a traditional on-premises environment. >> What's different about the cloud versus on-prem? Can you explain how it's different? >> Yeah, let me start with a video on-premises one. And in the on-premises one, building resilient architectures is really the customer's responsibility, and it's very challenging. You'll start thinking about what your single points of failure are. To avoid those, you have to build in redundancy, you might build in replication as an example for storage and doing this now means you have to have provision more hardware. And depending on what your availability requirements are, you may even have to start looking for multiple data centers, some in the same regions, some in different geographical locations. And you have to ensure that you're fully automated, so that your recovery processes can take place. And as you can see that's a lot of owners being placed on the customer. One other thing that we hear about is really elasticity and how elasticity plays into the resiliency for applications. As an example, if you experience a sudden spike in workloads, in a on-premises environment, that can lead to resource saturation. And so really you have two choices. One is to sort of throttle the workload and experience resiliency, or your second option becomes buying additional hardware and securing more capacity and keeping it fair low in case of experiencing such a spike. And so your two propositions that are either experiencing resiliency, challenges or paying really to have infrastructure that's lying around. And both of those are different really when you start thinking about the cloud. >> Yeah, there's a third option too, which is lose data, which is not an option. Go ahead- >> Which is not, yeah, I pretty much as a storage person, that is not an option. The reason about that that we think is reasonable for customers to take. The big contrast in the cloud really comes with how we think about capacity. And fundamentally the the cloud gives you that access to capacity so you are not managing that capacity. The infrastructure complexity and the cost associated with that are also just a function of how infrastructure is built really in the cloud. But all of that really starts with the bedrock of how we design for avoiding single points of failure. The best way to explain this is really to start thinking about our availability zones. Typically these availability zones consist of multiple data centers, located in the same regional area to enable high throughput and low latency for applications. But the availability zones themselves are physically independent. They have independent connections to utility power, standalone backup power resources, independent mechanical services and independent network connectivity. We take availability zone independence extremely seriously, so that when customers are building the availability of their workload, they can architect using these multiple zones. And that is something that when I'm talking to customers or Tami is talking to customers, we highly encourage customers to keep in mind as they're building resiliency for their applications. >> Right, so you can have within an availability zone, you can have, you know, instantaneous, you know when you're doing it right. You've got, you've captured that data and you can asynchronously move to outside of that in case there's, the very low probability, but it does happen, you get some disasters. You're minimizing that RPO. And I don't have to worry about that as a customer and figuring out how to do three site data centers. >> That's right. Like that even further, now imagine if you're expanding globally. All those things that we described about like creating new footprint and creating a new region and finding new data centers. As a customer in an on-premises environment, you take that on yourself. Whereas with AWS, because of our global presence, you can expand to a region and bring those same operational characteristics to those environments. And so again, bringing resiliency as you're thinking about expanding your workload, that's another benefit that you get from using the availability zone region architecture that AWS has. >> And as Charles Phillips, former CEO of Infor said, "Friends, don't let friends build data center," so I don't have to worry about building the data center. Let's bring Cami into the discussion here. Cami, think about elastic block storage, it gives, you know customers, you get persistent block storage for EC2 instances. So it's foundational for any mission critical or business critical application that you're building on AWS. How do you think about data resiliency in EBS specifically? I always ask the question, what happens if something goes wrong? So how should we think about data resiliency in EBS specifically? >> Yeah, you're right Dave, block storage is a really foundational piece. When we talk to customers about building in the cloud or moving an application to the cloud, and data resiliency is something that comes up all the time. And with EBS, you know EBS is a very large distributed system with many components. And we put a lot of thought and effort to build resiliency into EBS. So we design those components to operate and fail independently. So when customers create an EBS volume for example, we'll automatically choose the best storage nodes to address the failure domain and the data protection strategy for each of our different volume types. And part of our resiliency strategy also includes separating what we call a volume life cycle control plane. Which are things like creating a volume, or attaching a volume to an EC2 instance. So we separate that control plane, from the storage data plane, which includes all the components that are responsible for serving IO to your instance, and then persisting it to durable media. So what that means is once a volume is created and attached to the instance, the operations on that volume they're independent from the control point function. So even in the case of an infrastructure event, like a power issue, for example, you can recreate an EBS volume from a snapshot. And speaking of snapshots, that's the other core pillar of resiliency in EBS. Snapshots are point in time copies of EBS volumes that would store in S3. And snapshots are actually a regional service. And that means internally we use multiple of the availability zones that Ashish was talking about to replicate your data so that the snapshots can withstand the failure of an availability zone. And so thanks to that availability zone independence, and then this builtin component independence, customers can use that snapshot and recreate an EBS following another AZO or even in another region if they need to. >> Great so, okay, so you touched on some of the things EBS does to build resiliency into the service. Now thinking about over your right shoulders, you know, Joan Deviva, so what can organizations do to build more resilience into their applications on EBS so they can enjoy life without anxiety? >> (laughs) That is a great question. Also something that we love to talk to customers about. And the core thing to think about here is that we don't believe in a one size fits all approach. And so what we are doing in EBS is we give customers different tools so that they can design a resiliency strategy that is custom tailored for their data. And so to do this, this resiliency assessment, you have to think about the context of this specific workload and ask questions like what other critical services depend on this data and what will break if this data's not available and how long can can those systems withstand that, for example. And so the most important step I'll mention it again, snapshots, that is a very important step in a recovery plan. Make sure you have a backup of your data. And so we actually recommend that customers take the snapshots at least daily. And we have features that make that easier for you. For example, Data Lifecycle Manager which is a feature that is entirely free. It allows you to create backup policies, and then you can automate the process of creating the snapshot, so it's very low effort. And then when you want to use that backup to recreate a volume, we have a feature called Fast Snapshot Restore, that can expedite the creation of the volume. So if you have a more, you know a shorter recovery time objective you can use that feature to expedite the recovery process. So that's backup. And then the other pillar we talked to customers about is data replication. Just another very important step when you're thinking about your resiliency and your recovery plans. So with EBS, you can use replication tools that work at the level of the operating system. So that's something like DRBD for example. Or you can use AWS Elastic Disaster Recovery, and that will replicate your data across availability zones or nearby regions too. So we talked about backup and replication, and then the last topic that we recommend customers think about is having a workload monitoring solution in place. And you can do that in EBS, using cloud watch metrics. So you can monitor the health of your EBS volume using those metrics. We have a lot of tips in our documentation on how to measure that performance. And then you can use those performance metrics as triggers for automated recovery workflows that you can build using tools like auto scaling groups for example. >> Great, thank you for that advice. Just quick follow up. So you mentioned your recommendation, at least daily, what kind of granularity, if I want to compress my RPO can I go at a more granular level? >> Yes, you can go more granular and you can use again the daily lifecycle manager to define those policies. >> Great, thank you. Before we go, I want to just quickly cover what's new with EBS. Ashish, maybe you could talk about, I understand you've got something new today. You've got an announcement, take us through that. >> Yeah, thanks for checking in and I'm so glad you asked. We talked about how snapshots help resilience and are a critical part of building resilient architectures. So customers like the simplicity of backing up their EC2 instances, using multi volume snapshots. And what they're looking for is the ability to back up only to exclude specific volumes from the backup, especially those that don't need backup. So think of applications that have cash data, or applications that have temporary data that really doesn't need backup. So today we are adding a new parameter to the create snapshots API, which creates a crash consistent set of snapshots for volumes attached to an EC2 instance. Where customers can now exclude specific volumes from an instance backup. So customers using data life cycle manager that can be touched on, can automate their backups. And again they also get to exclude these specific volumes. So really the feature is not just about convenience, but it's also to help customers save on cost. As many of these customers are managing tens of thousands of snapshots. And so we want to make sure they can take it at the granularity that they need it. So super happy to bring that into the hands of customers as well. >> Yeah, that's a nice option. Okay, Ashish, Cami thank you so much for coming back in theCube, helping us learn about what's new and what's cool and EBS, appreciate your time. >> Thank you for having us Dave. >> Thank you for having us Dave. >> You're very welcome now, if you want to learn more about EBS resilience, stay right here because coming up, we've got a session which is a deep dive on protecting mission critical workloads with Amazon EBS. Stay right there, you're watching theCube's coverage of AWS Storage Day 2022. (calm music)

Published Date : Aug 12 2022

SUMMARY :

Great to see you again. and specifically the clouds. And the thing that we And so really you have two choices. option too, which is lose data, to capacity so you are not and you can asynchronously that you get from using so I don't have to worry about And with EBS, you know EBS is a very large of the things EBS does And the core thing to So you mentioned your and you can use again the Ashish, maybe you could is the ability to back up only you so much for coming back if you want to learn more

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AshishPERSON

0.99+

Ashish PalekarPERSON

0.99+

AWSORGANIZATION

0.99+

Charles PhillipsPERSON

0.99+

Joan DevivaPERSON

0.99+

DavePERSON

0.99+

CamiPERSON

0.99+

third optionQUANTITY

0.99+

EBSORGANIZATION

0.99+

two propositionsQUANTITY

0.99+

second optionQUANTITY

0.99+

OneQUANTITY

0.99+

InforORGANIZATION

0.99+

Cami TavaresPERSON

0.99+

bothQUANTITY

0.99+

todayDATE

0.98+

two choicesQUANTITY

0.98+

EBSTITLE

0.97+

EC2TITLE

0.97+

TamiPERSON

0.96+

tens of thousands of snapshotsQUANTITY

0.95+

eachQUANTITY

0.95+

AZOTITLE

0.93+

Amazon EBSORGANIZATION

0.91+

theCubeORGANIZATION

0.89+

AshishORGANIZATION

0.89+

single pointsQUANTITY

0.86+

three siteQUANTITY

0.83+

single pointsQUANTITY

0.82+

DRBDTITLE

0.8+

Storage Day 2022EVENT

0.78+

one sizeQUANTITY

0.76+

Elastic DisasterTITLE

0.7+

EdgeORGANIZATION

0.68+

CEOPERSON

0.63+

LifecycleTITLE

0.59+

thingQUANTITY

0.57+

SnapshotTITLE

0.49+

S3TITLE

0.46+

Edward Naim, AWS | AWS Storage Day 2022


 

[Music] welcome back to aws storage day 2022 i'm dave vellante and we're pleased to have back on thecube edname the gm of aws file storage ed how you doing good to see you i'm good dave good good to see you as well you know we've been tracking aws storage for a lot of years 16 years actually we we've seen the evolution of services of course we started with s3 and object and saw that expand the block and file and and now the pace is actually accelerating and we're seeing aws make more moves again today and block an object but what about file uh it's one format in the world and the day wouldn't really be complete without talking about file storage so what are you seeing from customers in terms of let's start with data growth how are they dealing with the challenges what are those challenges if you could address you know specifically some of the issues that they're having that would be great and then later we're going to get into the role that cloud file storage plays take it away well dave i'm definitely increasingly hearing customers talk about the challenges in managing ever-growing data sets and they're especially challenged in doing that on-premises when we look at the data that's stored on premises zettabytes of data the fastest growing data sets consist of unstructured data that are stored as files and many cups have tens of petabytes or hundreds of petabytes or even exabytes of file data and this data is typically growing 20 30 percent a year and in reality on-premises models really designed to handle this amount of data in this type of growth and i'm not just talking about keeping up with hardware purchases and hardware floor space but a big part of the challenge is labor and talent to keep up with the growth they're seeing companies managing storage on-prem they really need an unprecedented number of skilled resources to manage the storage and these skill sets are in really high demand and they're in short supply and then another big part of the challenge that customers tell me all the time is that that operating at scale dealing with these ever-growing data sets at scale is really hard and it's not just hard in terms of the the people you need and the skill sets that you need but operating at scale presents net new challenges so for example it becomes increasingly hard to know what data you have and what storage media your data stored on when you have a massive amount of data that's spanning hundreds of thousands or uh thousands of applications and users and it's growing super fast each year and at scale you start seeing edge technical issues get triggered more commonly impacting your availability or your resiliency or your security and you start seeing processes that used to work when you were a much smaller scale no longer work it's just scale is hard it's really hard and then finally companies are wanting to do more with their fast growing data sets to get insights from it and they look at the machine learning and the analytics and the processing services and the compute power that they have at their fingertips on the cloud and having that data be in silos on-prem can really limit how they get the most out of their data you know i've been covering glad you brought up the skills gap i've been covering that quite extensively with my colleagues at etr you know our survey partner so that's a really important topic and we're seeing it across the board i mean really acute in cyber security but for sure just generally in i.t and frankly ceos they don't want to invest in training people to manage storage i mean it wasn't that long ago that managing loans was a was a talent and that's of course nobody does that anymore but they'd executives would much rather apply skills to get value from data so my specific question is what can be done what is aws doing to address this problem well with the growth of data that that we're seeing it it's just it's really hard for a lot of it teams to keep up with just the infrastructure management part that's needed so things like deploying capacity and provisioning resources and patching and conducting compliance reviews and that stuff is just table stakes the asks on these teams to your point are growing to be much bigger than than those pieces so we're really seeing fast uptake of our amazon fsx service because it's such an easy path for helping customers with these scaling challenges fsx enables customers to launch and to run and to scale feature rich and highly performant network attached file systems on aws and it provides fully managed file storage which means that we handle all of the infrastructure so all of that provisioning and that patching and ensuring high availability and customers simply make api calls to do things like scale up their storage or change their performance level at any point or change a backup policy and a big part of why fsx has been so feeling able to customers is it really enables them to to choose the file system technology that powers their storage so we provide four of the most popular file system technologies we provide windows file server netapp ontap open zfs and luster so that storage and application admins can use what they're familiar with so they essentially get the full capabilities and even the management clis that they're used to and that they've built workflows and applications around on-premises but they get along with that of course the benefits of fully managed elastic cloud storage that can be spin up and spun spin down and scaled on demand and performance changed on demand etc and what storage and application admins are seeing is that fsx not only helps them keep up with their scale and growth but it gives them the bandwidth to do more of what they want to do supporting strategic decision making helping their end customers figure out how they can get more value from their data identifying opportunities to reduce cost and what we realize is that for for a number of storage and application admins the cloud is is a different environment from what they're used to and we're making it a priority to help educate and train folks on cloud storage earlier today we talked about aws storage digital badges and we announced a dedicated file badge that helps storage admins and professionals to learn and demonstrate their aws skills in our aws storage badges you can think of them as credentials that represent cloud computing learning that customers can add to their repertoire add to their resume as they're embarking on this cloud journey and we'll be talking more in depth on this later today especially around the file badge which i'm very excited about so a couple things there that i wanted to comment on i mean i was there for the netapp you know your announcement we've covered that quite extensively this is just shows that it's not a zero-sum game necessarily right it's a win-win-win for customers you've got your you know specific aws services you've got partner services you know customers want want choice and then the managed service model you know to me is a no-brainer for most customers we learned this in the hadoop years i mean it just got so complicated then you saw what happened with the managed services around you know data lakes and lake houses it's just really simplified things for customers i mean there's still some customers that want to do it yourself but a managed service for the file storage sounds like a really easy decision especially for those it teams that are overburdened as we were talking about before and i also like you know the education component is nice touch too you get the badge thing so that's kind of cool so i'm hearing that if the fully managed file storage service is a catalyst for cloud adoption so the question is which workloads should people choose to move into the cloud where's the low friction low risk sweet spot ed well that's one of the first questions that customers ask when they're about to embark on their cloud journey and i wish i could give a simple or a single answer but the answer is really it varies and it varies per customer and i'll give you an example for some customers the cloud journey begins with what we call extending on-premises workloads into the cloud so an example of that is compute bursting workloads where customers have data on premises and they have some compute on premises but they want to burst their processing of that data to the cloud because they really want to take advantage of the massive amount of compute that they get on aws and that's common with workloads like visual effects ringer chip design simulation genomics analysis and so that's an example of extending to the cloud really leveraging the cloud first for your workloads another example is disaster recovery and that's a really common example customers will use a cloud for their secondary or their failover site rather than maintaining their their second on-prem location and so that's a lot of customers start with some of those workloads by extending to the cloud and then there's there's a lot of other customers where they've made the decision to migrate most or all of their workloads and they're not they're skipping the whole extending step they aren't starting there they're instead focused on going all in as fast as possible because they really want to get to the full benefits of the cloud as fast as possible and for them the migration journey is really it's a matter of sequencing sequencing which specific workloads to move and when and what's interesting is we're increasingly seeing customers prioritizing their most important and their most mission-critical applications ahead of their other workloads in terms of timing and they're they're doing that to get their workloads to benefit from the added resilience they get from running on the cloud so really it really does uh depend dave yeah thank you i mean that's pretty pretty good description of the options there and i i just come something you know bursting obviously i love those examples you gave around genomics chip design visual effects rendering the dr piece is again very common sort of cloud you know historical you know sweet spots for cloud but then the point about mission critical is interesting because i hear a lot of customers especially with the digital transformation push wanting to change their operating model i mean on the one hand not changing things put it in the cloud the lift and shift you have to change things low friction but then once they get there they're like wow we can do a lot more with the cloud so that was really helpful those those examples now last year at storage day you released a new file service and then you followed that up at re-event with another file service introduction sometimes i can admit i get lost in the array of services so help us understand when a customer comes to aws with like an nfs or an smb workload how do you steer them to the right managed service you know the right horse for the right course yeah well i'll start by saying uh you know a big part of our focus has been in providing choice to customers and what customers tell us is that the spectrum of options that we provide to them really helps them in their cloud journey because there really isn't a one-size-fits-all file system for all workloads and so having these options actually really helps them to to be able to move pretty easily to the cloud um and so my answer to your question about uh where do we steer a customer when they have a file workload is um it really depends on what the customer is trying to do and uh in many cases where they're coming from so i'll walk you through a little bit of of of how we think about this with customers so for storage and application admins who are extending existing workloads to the cloud or migrating workloads to aws the easiest path generally is to move to an fsx file system that provides the same or really similar underlying file system engine that they use on premises so for example if you're running a netapp appliance on premises or a windows file server on premises choosing that option within fsx provides the least effort for a customer to lift their application and their data set and they'll get the full safe set of capabilities that they're used to they'll get the performance profiles that they're used to but of course they'll get all the benefits of the cloud that i was talking about earlier like spin up and spin down and fully managed and elastic capacity then we also provide open source file systems within the fsx family so if you're a customer and you're used to those or if you aren't really wedded to a particular file system technology these are really good options and they're built on top of aws's latest infrastructure innovations which really allows them to provide pretty significant price and performance benefits to customers so for example the file system file servers for these offerings are powered by aws's graviton family of processors and under the hood we use storage technology that's built on top of aws's scalable reliable datagram transport protocol which really optimizes for for speed on the cloud and so for those two open source file systems we have open zfs and that provides a really powerful highly performant nfs v3 and v4 and 4.1 and 4.2 file system built on a fast and resilient open source linux file system it has a pretty rich set of capabilities it has things like point-to-time snapshots and in-place data cloning and our customers are really using it because of these capabilities and because of its performance for a pretty broad set of enterprise i.t workloads and vertically focused workloads like within the financial services space and the healthcare life sciences space and then luster is a scale-out file system that's built on the world's most popular high-performance file system which is the luster open source file system and customers are using it for compute intensive workloads where they're throwing tons of compute at massive data sets and they need to drive tens or hundreds of gigabytes per second of throughput it's really popular for things like machine learning training and high performance computing big data analytics video rendering and transcoding so really those scale out compute intensive workloads and then we have a very different type of customer very different persona and this is the individual that we call the aws builder and these are folks who are running cloud native workloads they leverage a broad spectrum of aws's compute and analytic services and they have really no history of on-prem examples are data scientists who require a file share for training sets research scientists who are performing analysis on lab data developers who are building containerized or serverless workloads and cloud practitioners who need a simple solution for storing assets for their cloud workflows and and these these folks are building and running a wide range of data focused workloads and they've grown up using services like lambda and building containerized workloads so most of these individuals generally are not storage experts and they look for storage that just works s3 and consumer file shares uh like dropbox are their reference point for how cloud storage works and they're indifferent to or unaware of bio protocols like smb or nfs and performing typical nas administrative tasks is just not it's not a natural experience for them it's not something they they do and we built amazon efs to meet the needs of that group it's fully elastic it's fully serverless spreads data across multiple availability zones by default it scales infinitely it works very much like s3 so for example you get the same durability and availability profile of s3 you get intelligent tiering of colder data just like you do on s3 so that service just clicks with cloud native practitioners it's it's intuitive and it just works there's mind-boggling the number of use cases you just went through and this is where it's so you know it's you know a lot of times people roll their eyes oh here's amazon talking about you know customer obsession again but if you don't stay close to your customers there's no way you could have predicted when you're building these services how they were going to be put to use the only way you can understand it is watch what customers do with it i loved the conversation about graviton we've written about that a lot i mean nitro we've written about that how it's you've completely rethought virtualization the security components in there the hpc luster piece and and the efs for data scientists so really helpful there thank you i'm going to change uh topics a little bit because there's been this theme that you've been banging on at storage day putting data to work and i tell you it's a bit of a passion of mine ed because frankly customers have been frustrated with the return on data initiatives it's been historically complicated very time consuming and expensive to really get value from data and often the business lines end up frustrated so let's talk more about that concept and i understand you have an announcement that fits with this scene can you tell us more about that absolutely today we're announcing a new service called amazon file cache and it's a service on aws that accelerates and simplifies hybrid workflows and specifically amazon file cache provides a high speed cache on aws that makes it easier to process file data regardless of where the data is stored and amazon file cache serves as a temporary high performance storage location and it's for data that's stored in on-premise file servers or in file systems or object stores in aws and what it does is it enables enterprises to make these dispersed data sets available to file based applications on aws with a unified view and at high speeds so think of sub millisecond latencies and and tens or hundreds of gigabytes per second of throughput and so a really common use case it supports is if you have data stored on premises and you want to burst the processing workload to the cloud you can set up this cache on aws and it allows you to have the working set for your compute workload be cached near your aws compute so what you would do as a customer when you want to use this is you spin up this cache you link it to one or more on-prem nfs file servers and then you mount this cache to your compute instances on aws and when you do this all of your on-prem data will appear up automatically as folders and files on the cache and when your aws compute instances access a file for the first time the cache downloads the data that makes up that file in real time and that data then would reside on the cache as you work with it and when it's in the cache your application has access to that data at those sub millisecond latencies and at up to hundreds of gigabytes per second of throughput and all of this data movement is done automatically and in the background completely transparent to your application that's running on the compute instances and then when you're done with your workload with your data processing job you can export the changes and all the new data back to your on-premises file servers and then tear down the cache another common use case is if you have a compute intensive file-based application and you want to process a data set that's in one or more s3 buckets you can have this cache serve as a really high speed layer that your compute instances mount as a network file system you can also place this cache in front of a mix of on-prem file servers and s3 buckets and even fsx file systems that are on aws all of the data from these locations will appear within a single name space that clients that mount the cache have access to and those clients get all the performance benefits of the cache and also get a unified view of their data sets and and to your point about listening to customers and really paying attention to customers dave we built this service because customers asked us to a lot of customers asked us to actually it's a really helpful enable enabler for a pretty wide variety of cloud bursting workloads and hybrid workflows ranging from media rendering and transcoding to engineering design simulation to big data analytics and it really aligns with that theme of extend that we were talking about earlier you know i often joke that uh aws has the best people working on solving the speed of light problem so okay but so this idea of bursting as i said has been a great cloud use case from the early days and and bringing it to file storage is very sound and approach with file cache looks really practical um when is the service available how can i get started you know bursting to aws give us the details there yeah well stay tuned we we announced it today at storage day and it will be generally available later this year and once it becomes available you can create a cache via the the aws management console or through the sdks or the cli and then within minutes of creating the cache it'll be available to your linux instances and your instances will be able to access it using standard file system mount commands and the pricing model is going to be a pretty familiar one to cloud customers customers will only pay for the cash storage and the performance they need and they can spin a cash up and use it for the duration of their compute burst workload and then tear it down so i'm really excited that amazon file cache will make it easier for customers to leverage the agility and the performance and the cost efficiency of aws for processing data no matter where the data is stored yeah cool really interested to see how that gets adopted ed always great to catch up with you as i said the pace is mind-boggling it's accelerating in the cloud overall but storage specifically so by asking us can we take a little breather here can we just relax for a bit and chill out uh not as long as customers are asking us for more things so there's there's more to come for sure all right ed thanks again great to see you i really appreciate your time thanks dave great catching up okay and thanks for watching our coverage of aws storage day 2022 keep it right there for more in-depth conversations on thecube your leader in enterprise and emerging tech coverage [Music] you

Published Date : Aug 12 2022

SUMMARY :

and then you mount this cache to your

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Edward NaimPERSON

0.99+

tensQUANTITY

0.99+

tens of petabytesQUANTITY

0.99+

hundreds of petabytesQUANTITY

0.99+

AWSORGANIZATION

0.99+

amazonORGANIZATION

0.99+

awsORGANIZATION

0.99+

hundreds of thousandsQUANTITY

0.99+

last yearDATE

0.98+

16 yearsQUANTITY

0.98+

todayDATE

0.98+

first timeQUANTITY

0.97+

each yearQUANTITY

0.97+

davePERSON

0.97+

secondQUANTITY

0.97+

dave vellantePERSON

0.97+

20 30 percent a yearQUANTITY

0.97+

later this yearDATE

0.96+

oneQUANTITY

0.96+

awsTITLE

0.95+

windowsTITLE

0.94+

thousands of applicationsQUANTITY

0.94+

later todayDATE

0.93+

one formatQUANTITY

0.93+

hundreds of gigabytes per secondQUANTITY

0.93+

first questionsQUANTITY

0.93+

hundreds of gigabytes per secondQUANTITY

0.92+

two open sourceQUANTITY

0.92+

s3TITLE

0.92+

fsxTITLE

0.89+

4.1TITLE

0.88+

firstQUANTITY

0.88+

a lot of yearsQUANTITY

0.87+

earlier todayDATE

0.84+

linuxTITLE

0.84+

four of the most popular fileQUANTITY

0.79+

nitroORGANIZATION

0.79+

netappTITLE

0.78+

4.2TITLE

0.74+

single answerQUANTITY

0.74+

gravitonTITLE

0.74+

zettabytesQUANTITY

0.73+

dayEVENT

0.73+

lot of customersQUANTITY

0.73+

exabytesQUANTITY

0.72+

a lot of other customersQUANTITY

0.71+

2022DATE

0.71+

v4TITLE

0.71+

single nameQUANTITY

0.68+

tons of computeQUANTITY

0.64+

couple thingsQUANTITY

0.63+

minutesQUANTITY

0.56+

DayEVENT

0.54+

Mat Mathews & Randy Boutin, AWS | AWS Storage Day 2022


 

(upbeat music) >> Welcome to theCube's coverage of AWS Storage Day. We're here with a couple of AWS product experts. Covering AWS's migration and transfer services, Randy Boutin is the general manager of AWS DataSync, and Mat Matthews, GM of AWS Transfer Family. Guys, good to see you again. Thanks for coming on. >> Dave, thanks. >> So look, we saw during the pandemic, the acceleration to cloud migration. We've tracked that, we've quantified that. What's driving that today? >> Yeah, so Dave, great to be back here. Saw you last year at Storage Day. >> Nice to be in studio too, isn't it? Thanks, guys, for coming in. >> We've conquered COVID. >> So yeah, I mean, this is a great question. I think digital transformation is really what's driving a lot of the focus right now from companies, and it's really not about just driving down costs. It's also about what are the opportunities available once you get into the cloud in terms of, what does that unlock in terms of innovation? So companies are focused on the usual things, optimizing costs, but ensuring they have the right security and agility. You know, a lot has happened over the last year, and companies need to be able to react, right? They need to be able to react quickly, so cloud gives them a lot of these capabilities, but the real benefit that we see is that once your data's in the cloud, it opens up the power of the cloud for analytics, for new application development, and things of that sort, so what we're seeing is that companies are really just focused on understanding cloud migration strategy, and how they can get their data there, and then use that to unlock that data for the value. >> I mean, if I've said it once, I've said it 100 times, if you weren't a digital business during the pandemic, you were out of business. You know, migration historically is a bad word in IT. Your CIOs see it and go, "Ugh." So what's the playbook for taking years of data on-prem, and moving it into the cloud? What are you seeing as best practice there? >> Yeah, so as you said, the migration historically has been painful, right? And it's a daunting task for any business or any IT executive, but fortunately, AWS has a broad suite of capabilities to help enable these migrations. And by that, I mean, we have tools to help you understand your existing on-prem workloads, understand what services in the AWS offering align to those needs, but also help you estimate the cost, right? Cost is a big part of this move. We can help you estimate that cost, and predict that cost, and then use tools like DataSync to help you move that data when that time comes. >> So you're saying you help predict the cost of the migration, or the cost of running in the cloud? >> Running in the cloud, right. Yeah, we can help estimate the run time. Based on the performance that we assess on-prem, we can then project that into a cloud service, and estimate that cost. >> So can you guys explain DataSync? Sometimes I get confused, DataSync, what's the difference between DataSync and Storage Gateway? And I want to get into when we should use each, but let's start there if we could. >> Yeah, sure, I'll take that. So Storage Gateway is primarily a means for a customer to access their data in the cloud from on-prem. All right, so if you have an application that you want to keep on-prem, you're not ready yet to migrate that application to the cloud, Gateway is a strong solution, because you can move a lot of that data, a lot of your cold or long tail data into something like S3 or EFS, but still access it from your on-prem location. DataSync's all about data movement, so if you need to move your data from A to B, DataSync is your optimized solution to do that. >> Are you finding that people, that's ideally a one time move, or is it actually, sometimes you're seeing customers do it more? Again, moving data, if I don't- Move as much data as you need to, but no more, to paraphrase Einstein. >> What we're seeing in DataSync is that customers do use DataSync for their initial migration. They'll also, as Matt was mentioning earlier, once you get your data into the cloud, that flywheel of potential starts to take hold, and customers want to ultimately move that data within the cloud to optimize its value. So you might move from service to service. You might move from EFS to S3, et cetera, to enable the cloud flywheel to benefit you. DataSync does that as well, so customers use us to initially migrate, they use us to move within the cloud, and also we just recently announced service for other clouds, so you can actually bring data in now from Google and Azure as well. >> Oh, how convenient. So okay, so that's cool. So you helped us understand the use cases, but can we dig one more layer, like what protocols are supported? I'm trying to understand really the right fit for the right job. >> Yeah, so that's really important. So for transfer specifically, one of the things that we see with customers is you've got obviously a lot of internal data within your company, but today it's a very highly interconnected world, so companies deal with lots of business partners, and historically they've used, there's a big prevalence of using file transfer to exchange data with business partners, and as you can imagine, there's a lot of value in that data, right? Sometimes it's purchase orders, inventory data from suppliers, or things like that. So historically customers have had protocols like SFTP or FTP to help them interface with or exchange data or files with external partners. So for transfer, that's what we focus on is helping customers exchange data over those existing protocols that they've used for many years. And the real focus is it's one thing to migrate your own data into the cloud, but you can't force thousands or tens of thousands sometimes of partners to also work in a different way to get you their data, so we want to make that very seamless for customers using the same exact protocols like SFTP that they've used for years. We just announced AS2 protocol, which is very heavily used in supply chains to exchange inventory and information across multi-tiers of partners, and things of that nature. So we're really focused on letting customers not have to impact their partners, and how they work and how they exchange, but also take advantage of the data, so get that data into the cloud so they can immediately unlock the value with analytics. >> So AS2 is specifically in the context of supply chain, and I'm presuming it's secure, and kind of governed, and safe. Can you explain that a little bit? >> Yeah, so AS2 has a lot of really interesting features for transactional type of exchanges, so it has signing and encryption built in, and also has notification so you can basically say, "Hey, I sent you this purchase order," and to prove that you received it, it has capability called non-repudiation, which means it's actually a legal transaction. So those things are very important in transactional type of exchanges, and allows customers in supply chains, whether it's vendors dealing with their suppliers, or transportation partners, or things like that to leverage file transfer for those types of exchanges. >> So encryption, providence of transactions, am I correct, without having to use the blockchain, and all the overhead associated with that? >> It's got some built in capabilities. >> I mean, I love blockchain, but there's drawbacks. >> Exactly, and that's why it's been popular. >> That's really interesting, 'cause Andy Jassy one day, I was on a phone call with him and John Furrier, and we were talking up crypto and blockchain. He said, "Well, why do, explain to me." You know Jassy, right? He always wants to go deeper. "Explain why I can't do this with some other approach." And so I think he was recognizing some of the drawbacks. So that's kind of a cool thing, and it leads me- We're running this obviously today, August 10th. Yesterday we had our Supercloud event in Palo Alto on August 9th, and it's all about the ecosystem. One of the observations we made about the 2020s is the cloud is totally different now. People are building value on top of the infrastructure that you guys have built out over the last 15 years. And so once an organization's data gets into the cloud, how does it affect, and it relates to AS2 somewhat, how does it affect the workflows in terms of interacting with external partners, and other ecosystem players that are also in the cloud? >> Yeah, great, yeah, again, we want to try and not have to affect those workflows, take them as they are as much as possible, get the data exchange working. One of the things that we focus on a lot is, how do you process this data once it comes in? Every company has governance requirements, security requirements, and things like that, so they usually have a set of things that they need to automate and orchestrate for the data as it's coming in, and a lot of these companies use something called Managed File Transfer Solutions that allow them to automate and orchestrate those things. We also see that many times this is very customer specific, so a bank might have a certain set of processes they have to follow, and it needs to be customized. As you know, AWS is a great solution for building custom solutions, and actually today, we're just announcing a new set of of partners in a program called the Service Delivery Program with AWS Transfer Family that allows customers to work with partners that are very well versed in transfer family and related services to help build a very specific solution that allows them to build that automation orchestration, and keep their partners kind of unaware that they're interfacing in a different way. >> And once this data is in the cloud, or actually, maybe stays on-prem in some cases, but it basically plugs in to the AWS services portfolio, the whole security model, the governance model, shared responsibility comes in, is that right? It's all, sort of all in there? >> Yeah, that's right, that's exactly right, and we're working with it's all about the customer's needs, and making sure that their investment in AWS doesn't disrupt their existing workflows and their relationships with their customers and their partners, and that's exactly what Matt's been describing is we're taking a close look at how we can extend the value of AWS, integrate into our customer's workflows, and bring that value to them with minimal investment or disruption. >> So follow up on that. So I love that, because less disruption means it's easier, less friction, and I think of like, trying to think of examples. Think about data de-duplication like purpose-built backup appliances, right? Data domain won that battle, because they could just plug right in. Avamar, they were trying to get you to redo everything, okay, and so we saw that movie play out. At the same time, I've talked to CIOs that say, "I love that, but the cloud opens up all these cool new opportunities for me to change my operating model." So are you seeing that as well? Where okay, we make it easy to get in. We're not disrupting workflows, and then once they get in, they say, "Well if we did it this way, we'd take out a bunch of costs. We'd accelerate our business." What's that dynamic like? >> Exactly that, right. So that moved to the Cloud Continuum. We don't think it's going to be binary. There's always going to be something on-prem. We accept that, but there's a continuum there, so day one, they'll migrate a portion of that workload into the cloud, start to extract and see value there, but then they'll continue, as you said, they'll continue to see opportunities. With all of the various capabilities that AWS has to offer, all the value that represents, they'll start to see that opportunity, and then start to engage and consume more of those features over time. >> Great, all right, give us the bumper sticker. What's next in transfer services from your perspectives? >> Yeah, so we're obviously always going to listen to our customers, that's our focus. >> You guys say that a lot. (all laughing) We say it a lot. But yeah, so we're focused on helping customers again increase that level of automation orchestration, again that suite of capability, generally, in our industry, known as managed file transfer, when a file comes in, it needs to get maybe encrypted, or decrypted, or compressed, or decompressed, scanned for viruses, those kind of capabilities, make that easier for customers. If you remember last year at Storage Day, we announced a low code workflow framework that allows customers to kind of build those steps. We're continuing to add built-in capabilities to that so customers can easily just say, "Okay, I want these set of activities to happen when files come in and out." So that's really what's next for us. >> All right, Randy, we'll give you the last word. Bring us home. >> I'm going to surprise you with the customer theme. >> Oh, great, love it. >> Yeah, so we're listening to customers, and what they're asking for our support for more sources, so we'll be adding support for more cloud sources, more on-prem sources, and giving the customers more options, also performance and usability, right? So we want to make it easier, as the enterprise continues to consume the cloud, we want to make DataSync and the movement of their data as easy as possible. >> I've always said it starts with the data. S3, that was the first service, and the other thing I've said a lot is the cloud is expanding. We're seeing connections to on-prem. We're seeing connections out to the edge. It's just becoming this massive global system, as Werner Vogels talks about all the time. Thanks, guys, really appreciate it. >> Dave, thank you very much. >> Thanks, Dave. >> All right, keep it right there for more coverage of AWS Storage Day 2022. You're watching theCube. (upbeat music)

Published Date : Aug 12 2022

SUMMARY :

Guys, good to see you again. the acceleration to cloud migration. Yeah, so Dave, great to be back here. Nice to be in studio too, isn't it? and companies need to and moving it into the cloud? in the AWS offering align to those needs, Running in the cloud, right. So can you guys explain DataSync? All right, so if you have an application but no more, to paraphrase Einstein. for other clouds, so you can for the right job. so get that data into the cloud and kind of governed, and safe. and to prove that you received it, but there's drawbacks. Exactly, and that's One of the observations we made that they need to automate and orchestrate and making sure that their investment for me to change my operating model." So that moved to the Cloud Continuum. services from your perspectives? always going to listen that allows customers to give you the last word. I'm going to surprise the movement of their data We're seeing connections out to the edge. of AWS Storage Day 2022.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Randy BoutinPERSON

0.99+

AWSORGANIZATION

0.99+

JassyPERSON

0.99+

Mat MatthewsPERSON

0.99+

MattPERSON

0.99+

Palo AltoLOCATION

0.99+

thousandsQUANTITY

0.99+

DavePERSON

0.99+

100 timesQUANTITY

0.99+

Andy JassyPERSON

0.99+

EinsteinPERSON

0.99+

August 9thDATE

0.99+

first serviceQUANTITY

0.99+

RandyPERSON

0.99+

last yearDATE

0.99+

John FurrierPERSON

0.99+

GoogleORGANIZATION

0.99+

todayDATE

0.99+

Werner VogelsPERSON

0.99+

YesterdayDATE

0.99+

tens of thousandsQUANTITY

0.99+

S3TITLE

0.98+

one timeQUANTITY

0.98+

one more layerQUANTITY

0.98+

Mat MathewsPERSON

0.98+

OneQUANTITY

0.97+

2020sDATE

0.97+

AWS DataSyncORGANIZATION

0.97+

Storage DayEVENT

0.97+

pandemicEVENT

0.97+

AWS Transfer FamilyORGANIZATION

0.97+

one thingQUANTITY

0.97+

eachQUANTITY

0.96+

oneQUANTITY

0.94+

EFSTITLE

0.94+

DataSyncTITLE

0.93+

SupercloudEVENT

0.89+

COVIDOTHER

0.85+

AzureORGANIZATION

0.82+

August 10thDATE

0.79+

last 15 yearsDATE

0.79+

today,DATE

0.77+

AS2OTHER

0.75+

yearsQUANTITY

0.7+

Transfer FamilyTITLE

0.69+

Storage Day 2022EVENT

0.66+

Service Delivery ProgramOTHER

0.64+

AS2ORGANIZATION

0.63+

thingsQUANTITY

0.58+

onceQUANTITY

0.57+

DataSyncORGANIZATION

0.57+

AWSEVENT

0.56+

GatewayORGANIZATION

0.52+

theCubePERSON

0.52+

AS2TITLE

0.5+

Storage GatewayORGANIZATION

0.48+

AvamarORGANIZATION

0.47+

dayQUANTITY

0.47+

SFTPOTHER

0.45+

SFTPTITLE

0.44+

2022DATE

0.43+

Storage GatewayTITLE

0.31+

Kevin Miller, AWS | Modernize, unify, and innovate with data | AWS Storage Day 2022


 

(upbeat music) >> We're here on theCube covering AWS Storage Day 2022. Kevin Miller joins us. He's the vice president and general manager of Amazon S3. Hello, Kevin, good to see you again. >> Hey Dave, it's great to see you as always. >> It seems like just yesterday we were celebrating the 15th anniversary of S3, and of course the launch of the modern public cloud, which started there. You know, when you think back Kevin, over the past year, what are some of the trends that you're seeing and hearing from customers? What do they want to see AWS focus more on? What's the direction that you're setting? >> Yeah, well Dave, really I think there's probably three trends that we're seeing really pop this year. I think one just given the kind of macroeconomic situation right now is cost optimization. That's not a surprise. Everyone's just taking a closer look at what they're using, and where they might be able to pair back. And you know, I think that's a place that obviously S3 has a long history of helping customers save money. Whether it's through our new storage classes, things like our Glacier Instant Retrieval, storage class that we launched to reinvent last year. Or things like our S3 storage lens capability to really dig in and help customers identify where their costs are are being spent. But so certainly every, you know, a lot of customers are focused on that right now, and for obvious reasons. I think the second thing that we're seeing is, just a real focus on simplicity. And it kind of goes hand in hand with cost optimization, because what a lot of customers are looking for is, how do I take the staff that I have, and do more this year. Right, continue to innovate, continue to bring new applications or top line generating revenue applications to the market, but not have to add a lot of extra headcount to do that. And so, what they're looking for is management and simplicity. How do I have all of this IT infrastructure, and not have to have people spending a lot of their time going into kind of routine maintenance and operations. And so that's an area that we're spending a lot of time. We think we have a lot of capability today, but looking at ways that we can continue to simplify, make it easier for customers to manage their infrastructure. Things like our S3 intelligent tiering storage class, which just automatically gives cost savings for data that's not routinely accessed. And so that's a big focus for us this year as well. And then I think the last and probably third thing I would highlight is an emerging theme or it's been a theme, but really continuing to increase in volume, is all around sustainability. And you know, our customers are looking for us to give them the data and the assurances for them, for their own reports and their own understanding of how sustainable is my infrastructure. And so within AWS, of course, you know we're on a path towards operating with 100% renewable energy by 2025. As well as helping the overall Amazon goal of achieving net zero carbon by 2040. So those are some big lofty goals. We've been giving customers greater insights with our carbon footprint tool. And we think that, you know the cloud continues to be just a great place to run and reduce customer's carbon footprint for the similar you know, storage capacity or similar compute capacity. But that's just going to continue to be a trend and a theme that we're looking at ways that we can continue to help customers do more to aggressively drive down their carbon footprint. >> I mean, it makes sense. It's like you're partnering up with the cloud, you know, you did same thing on security, you know, there's that shared responsibility model, same thing now with ESG. And on the macro it's interesting Kevin, this is the first time I can remember where, you know it used to be, if there's a downturn it's cost optimization, you go to simplicity. But at the same time with digital, you know, the rush to digital, people still are thinking about, okay how do I invest in the future? So but let's focus on cost for a moment then we'll come back to sort of the data value. Can you tell us how AWS helps customers save on storage, you know, beyond just the price per terabyte actions that you could take. I mean I love that, you guys should keep doing that. >> Absolutely. >> But what other knobs are you turning? >> Yeah, right and we've had obviously something like 15 cost reductions or price reductions over the years, and we're just going to continue to use that lever where we can, but it's things like the launch of our Glacier Instant Retrieval storage class that we did last year at Reinvent, where that's now you know, 4/10ths of a cent per gigabyte month. For data that customers access pretty infrequently maybe a few times a year, but they can now access that data immediately and just pay a small retrieval fee when they access that data. And so that's an example of a new capability that reduces customer's total cost of ownership, but is not just a straight up price reduction. I mentioned S3 Intelligent-Tiering, that's another case where, you know, when we launch Glacier Instant Retrieval, we integrated that with Intelligent-Tiering as well. So we have the archive instant access tier within Intelligent-Tiering. And so now data that's not accessed for 90 days is just automatically put into AIA and and then results in a reduced storage cost to customers. So again, leaning into this idea that customers are telling us, "Just do, you know what should be done "for my data to help me reduce cost, can you just do it, "and sort of give me the right defaults." And that's what we're trying to do with things like Intelligent-Tiering. We've also, you know, outside of the S3 part of our portfolio, we've been adding similar kinds of capabilities within some of our file services. So things like our, you know elastic file service launched a one zone storage class as well as an intelligent tiering capability to just automatically help customers save money. I think in some cases up to 92% on their their EFS storage costs with this automatic intelligent tiering capability. And then the last thing I would say is that we also are just continuing to help customers in other ways, like I said, our storage lens is a great way for customers to really dig in and figure out. 'Cause you know, often customers will find that they may have, you know, certain data sets that someone's forgotten about or, they're capturing more data than they expected perhaps in a logging application or something that ends up generating a lot more data than they expected. And so storage lens helps them really zoom in very quickly on, you know this is the data, here's how frequently it's being accessed and then they can make decisions about use that data I keep, how long do I keep it? Maybe that's good candidates to move down into one of our very cold storage classes like Glacier Deep Archive, where they they still have the data, but they don't expect to need to actively retrieve it on a regular basis. >> SDL bromide, if you can measure it, you can manage it. So if I can see it, visualize it, that I can take actions. When you think about S3- >> That's right. it's always been great for archival workloads but you made some updates to Glacier that changed the way that we maybe think about archive data. Can you talk about those changes specifically, what it means for how customers should leverage AWS services going forward? >> Yeah, and actually, you know, Glacier's coming up on its 10 year anniversary in August, so we're pretty excited about that. And you know, but there's just been a real increase in the pace of innovation, I think over the last three or four years there. So we launched the Glacier Deep Archive capability in 2019, 2018, I guess it was. And then we launched Glacier Instant Retrieval of course last year. So really what we're seeing is we now have three storage classes that cover are part of the Glacier family. So everything from millisecond retrieval for that data, that needs to be accessed quickly when it is accessed, but isn't being accessed, you know, regularly. So maybe a few times a year. And there's a lot of use cases that we're seeing really quickly emerge for that. Everything from, you know, user generated content like photos and videos, to big broadcaster archives and particularly in media and entertainment segment. Seeing a lot of interest in Glaciers Instant Retrieval because that data is pretty cold on a regular basis. But when they want to access it, they want a huge amount of data, petabytes of data potentially back within seconds, and that's the capability we can provide with Glacier Instant Retrieval. And then on the other end of the spectrum, with Glacier Deep Archive, again we have customers that have huge archives of data that they be looking to have that 3-AZ durability that we provide with Glacier, and make sure that data is protected. But really, you know expect to access it once a year if ever. Now it could be a backup copy of data or secondary or tertiary copy of data, could be data that they just don't have an active use for it. And I think that's one of the things we're starting to see grow a lot, is customers that have shared data sets where they may not need that data right now but they do want to keep it because as they think about, again these like new applications that can drive top line growth, they're finding that they may go back to that data six months or nine months from now and start to really actively use it. So if they want that option value to keep that data so they can use it down the road, Glacier Deep Archive, or Glacier Flexible Retrieval, which is kind of our storage class right in the middle of the road. Those are great options for customers to keep the data, keep it safe and secure, but then have it, you know pretty accessible when they're ready to get it back. >> Got it, thank you for that. So, okay, so customers have choices. I want to get into some of the competitive differentiators. And of course we were talking earlier about cost optimization, which is obviously an important topic given the macro environment you know, but there's more. And so help us understand what's different about AWS in terms of helping customers get value from their data, cost reduction as a component of value, part of the TCO, for sure. But just beyond being a cloud bit bucket, you know just a storage container in the cloud, what are some of the differentiators that you can talk to? >> Yeah, well Dave, I mean, I think that when it comes to value, I think there's tremendous benefits in AWS, well beyond just cost reduction. I think, you know, part of it is S3 now has built, I think, an earned reputation for being resilient, for storing, you know, at massive scale giving customers that confidence that they will be able to scale up. You know, we store more than 200 trillion objects. We regularly peak at over 100 million requests per second. So customers can build on S3 and Glacier with the confidence that we're going to be there to help their applications grow and scale over time. And then I think that in all of the applications both first party and third party, the customers can use, and services that they can use to build modern applications is an incredible benefit. So whether it's all of our serverless offerings, things like Lambda or containers and everything we have to manage that. Or whether it's the deep analytics and machine learning capabilities we have to help really extract, you know value and insight from data in near real time. You know, we're just seeing an incredible number of customers build those kinds of applications where they're processing data and feeding their results right back into their business right away. So I'm just going to briefly mention a couple, like, you know one example is ADP that really helps their customers measure, compare and sort of analyze their workforce. They have a couple petabytes of data, something like 25 billion individual data points and they're just processing that data continuously through their analytics and machine learning applications to then again, give those insights back to their customers. Another good example is AstraZeneca. You know, they are processing petabytes and petabytes of genomic sequencing data. And they have a goal to analyze 2 million genomes over the next four years. And so they're just really scaling up on AWS, both from a pure storage point of view, but more importantly, from all of the compute and analytics capability on top that is really critical to achieving that goal. And then, you know, beyond the first party services we have as I mentioned, it's really our third party, right? The AWS partner network provides customers an incredible range of choice in off the shelf applications that they can quickly provision and make use of the data to drive those business insights. And I think today the APN has something like 100,000 partners over in 150 countries. And we specifically have a storage competency partner where customers can go to get those applications that directly work, you know, on top of their data. And really, like I said, drive some of that insight. So, you know, I think it's that overall benefit of being able to really do a lot more with their data than just have it sit idle. You know, that's where I think we see a lot of customers interested in driving additional value. >> I'm glad you mentioned the ecosystem, and I'm glad you mentioned the storage competency as well. So there are other storage partners that you have, even though you're a head of a big storage division. And then I think there's some other under the cover things too. I've recently wrote, actually have written about this a lot. Things like nitro and rethinking virtualization and how to do, you know offloads. The security that comes, you know fundamentally as part of the platform is, I think architecturally is something that leads the way in the industry for sure. So there's a lot we could unpack, but you've fundamentally changed the storage market over the last 16 years. And again, I've written about this extensively. We used to think about storage in blocks or you got, you know, somebody who's really good in files, there were companies that dominated each space with legacy on-prem storage. You know, when you think about object storage Kevin, it was a niche, right? It was something used for archival, it was known for its simple, get put syntax, great for cheap and deep storage, and S3 changed that. Why do you think that's happened and S3 has evolved, the object has evolved the way it has, and what's the future hold for S3? >> Yeah I mean, you know, Dave, I think that probably the biggest overall trend there is that customers are looking to build cloud native applications. Where as much of that application is managed as they can have. They don't want to have to spend time managing the underlying infrastructure, the compute and storage and everything that goes around it. And so a fully managed service like S3, where there's no provisioning storage capacity, there's, you know we provide the resiliency and the durability that just really resonates with customers. And I think that increasingly, customers are seeing that they want to innovate across the entire range of business. So it's not about a central IT team anymore, it's about engineers that are embedded within lines of business, innovating around what is critical to achieve their business results. So, you know, if they're in a manufacturing segment, how can we pull data from sensors and other instrumentation off of our equipment and then make better decisions about when we need to do predictive maintenance, how quickly we can run our manufacturing line, looking for inefficiencies. And so we've developed around our managed offerings like S3, we've just developed, you know, customers who are investing and executing on plans and you know transformations. That really give them, you know put digital technology directly into the line of business that they're looking for. And I think that trend is just going to continue. People sometimes ask me, well "I mean, 16 years, you know, isn't S3 done?" And I would say, "By no stretcher are we done." We have plenty of feedback from customers on ways that we can continue to simplify, reduce the kinds of things they need to do, when they're looking for example and rolling out new security policies and parameters across their entire organization. So raising the bar there, finding, you know, raising the bar on how they can efficiently manage their storage and reduce costs. So I think we have plenty of innovation ahead of us to continue to help customers provide that fully managed capability. >> Yeah I often say Kevin, the next 10 years ain't going to be like the last in cloud. So I really thank you for coming on theCube and sharing your insights, really appreciate it. >> Absolutely Dave, thanks for having me. >> You're welcome. Okay keep it right there for more coverage of AWS Storage Day 2022 in theCube. (calm bright music)

Published Date : Aug 10 2022

SUMMARY :

Hello, Kevin, good to see you again. to see you as always. and of course the launch And we think that, you know that you could take. that they may have, you When you think about S3- Glacier that changed the way And you know, but there's that you can talk to? And then, you know, beyond the and how to do, you know offloads. and you know transformations. So I really thank you of AWS Storage Day 2022 in theCube.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Kevin MillerPERSON

0.99+

KevinPERSON

0.99+

AWSORGANIZATION

0.99+

DavePERSON

0.99+

AmazonORGANIZATION

0.99+

2019DATE

0.99+

90 daysQUANTITY

0.99+

2025DATE

0.99+

AugustDATE

0.99+

2018DATE

0.99+

2040DATE

0.99+

last yearDATE

0.99+

16 yearsQUANTITY

0.99+

100,000 partnersQUANTITY

0.99+

2 million genomesQUANTITY

0.99+

S3TITLE

0.99+

yesterdayDATE

0.99+

more than 200 trillion objectsQUANTITY

0.99+

todayDATE

0.99+

GlacierORGANIZATION

0.99+

ReinventORGANIZATION

0.98+

bothQUANTITY

0.98+

15th anniversaryQUANTITY

0.98+

first timeQUANTITY

0.98+

second thingQUANTITY

0.98+

nine monthsQUANTITY

0.98+

this yearDATE

0.98+

APNORGANIZATION

0.97+

LambdaTITLE

0.97+

oneQUANTITY

0.97+

150 countriesQUANTITY

0.96+

AstraZenecaORGANIZATION

0.96+

each spaceQUANTITY

0.95+

15 cost reductionsQUANTITY

0.95+

S3COMMERCIAL_ITEM

0.95+

three storage classesQUANTITY

0.95+

25 billion individual data pointsQUANTITY

0.94+

one exampleQUANTITY

0.94+

three trendsQUANTITY

0.93+

10 year anniversaryQUANTITY

0.93+

up to 92%QUANTITY

0.91+

Instant RetrievalCOMMERCIAL_ITEM

0.9+

six monthsQUANTITY

0.9+

over 100 million requests per secondQUANTITY

0.89+

past yearDATE

0.87+

four yearsQUANTITY

0.87+

last 16 yearsDATE

0.87+

firstQUANTITY

0.87+

third thingQUANTITY

0.86+

Glacier InstantCOMMERCIAL_ITEM

0.85+

100% renewable energyQUANTITY

0.83+

one zoneQUANTITY

0.83+

gigabyteQUANTITY

0.83+

Glacier Deep ArchiveORGANIZATION

0.83+

Glacier Flexible RetrievalORGANIZATION

0.81+

Glacier Instant RetrievalCOMMERCIAL_ITEM

0.79+

Storage Day 2022EVENT

0.78+

few times a yearQUANTITY

0.75+

Wayne Duso & Nancy Wang | AWS Storage Day 2022


 

>>Okay, we're back. My name is Dave Valante and this is the Cube's coverage of AWS storage day. You know, coming off of reinforc I wrote the, the cloud was a new layer of defense. In fact, the first line of defense in a cyber security strategy. And that brings new thinking and models for protecting data, data protection, specifically, traditionally thought of as backup and recovery, it's become a critical adjacency to security and a component of a comprehensive cybersecurity strategy. We're here in our studios outside of Boston with two cube alums, and we're gonna discuss this in other topics. Wayne do so is the vice president for AWS storage edge and data services, and Nancy Wong as general manager of AWS backup and data protection services, guys. Welcome. Great to see you again. Thanks for coming on. Of >>Course, always a pleasure, Dave. Good to >>See you, Dave. All right. So Wayne, let's talk about how organizations should be thinking about this term data protection. It's an expanding definition, isn't >>It? It is an expanding definition. They, last year we talked about data and the importance of data to companies. Every company is becoming a data company, you know, da the amount of data they generate, the amount of data they can use to create models, to do predictive analytics. And frankly, to find ways of innovating is, is grown rapidly. And, you know, there's this tension between access to all that data, right? Getting the value out of that data. And how do you secure that data? And so this is something we think about with customers all the time. So data durability, data protection, data resiliency, and, you know, trust in their data. If you think about running your organization on your data, trust in your data is so important. So, you know, you gotta trust where you're putting your data. You know, people who are putting their data on a platform need to trust that platform will in fact, ensure it's durability, security, resiliency. >>And, you know, we see ourselves AWS as a partner in securing their data, making their data dur durable, making their data resilient, right? So some of that responsibility is on us. Some of that is on so shared responsibility around data protection, data resiliency. And, you know, we think about forever, you know, the notion of, you know, compromise of your infrastructure, but more and more people think about the compromise of their data as data becomes more valuable. And in fact, data is a company's most valuable asset. We've talked about this before. Only second to their people. You know, the people that are most valuable asset, but right next to that is their data. So really important stuff. >>So Nancy, you talked to a lot of customers, but by the way, it always comes back to the data. We've saying this for years, haven't we? So you've got this expanding definition of data protection, you know, governance is in there. You, you think about access cetera. When you talk to customers, what are you hearing from them? How are they thinking about data protection? >>Yeah. So a lot of the customers that Wayne and I have spoken to often come to us seeking thought leadership about, you know, how do I solve this data challenge? How do I solve this data sprawl challenge, but also more importantly, tying it back to data protection and data resiliency is how do I make sure that data is secure, that it's protected against, let's say ransomware events, right. And continuously protected. So there's a lot of mental frameworks that come to mind and a very popular one that comes up in quite a few conversations is this cybersecurity framework, right? And from a data protection perspective is just as important to protect and recover your data as it is to be able to detect different events or be able to respond to those events. Right? So recently I was just having a conversation with a regulatory body of financial institutions in Europe, where we're designing a architecture that could help them make their data immutable, but also continuously protected. So taking a step back, that's really where I see AWS's role in that we provide a wide breadth of primitives to help customers build secure platforms and scaffolding so that they can focus on building the data protection, the data governance controls, and guardrails on top of that platform. >>And, and that's always been AWS's philosophy, you know, make sure that developers have access to those primitives and APIs so that they can move fast and, and essentially build their own if that that's in fact what they wanna do. And as you're saying, when data protection is now this adjacency to cyber security, but there's disaster recoveries in there, business continuance, cyber resilience, et cetera. So, so maybe you could pick up on that and sort of extend how you see AWS, helping customers build out those resilient services. >>Yeah. So, you know, two core pillars to a data protection strategy is around their data durability, which is really an infrastructure element. You know, it's, it's, it's, it's by and large the responsibility of the provider of that infrastructure to make sure that data's durable, cuz if it's not durable, everything else doesn't matter. And then the second pillar is really about data resiliency. So in terms of security, controls and governance, like these are really important, but these are shared responsibility. Like the customers working with us with the services that we provide are there to architect the design, it's really human factors and design factors that get them resiliency, >>Nancy, anything you would add to what Wayne just said. >>Yeah, absolutely. So customers tell us that they want always on data resiliency and data durability, right? So oftentimes in those conversations, three common themes come up, which is they want a centralized solution. They want to be able to transcribe their intent into what they end up doing with their data. And number three, they want something that's policy driven because once you centralize your policies, it's much better and easier to establish control and governance at an organizational level. So keeping that in mind with policy as our interface, there's two managed AWS solutions that I recommend you all check out in terms of data resiliency and data durability. Those are AWS backup, which is our centralized solution for managing protection recovery, and also provides an audit audit capability of how you protect your data across 15 different AWS services, as well as on-premises VMware and for customers whose mission critical data is contained entirely on disk. We also offer AWS elastic disaster recovery services, especially for customers who want to fail over their workloads from on premises to the cloud. >>So you can essentially centralize as a quick follow up, centralize the policy. And like I said, the intent, but you can support a federated data model cuz you're building out this massive, you know, global system, but you can take that policy and essentially bring it anywhere on the AWS cloud. Is that >>Right? Exactly. And actually one powerful integration I want to touch upon is that AWS backup is natively integrated with AWS organizations, which is our defacto multi account federated organization model for how AWS services work with customers, both in the cloud, on the edge, at the edge and on premises. >>So that's really important because as, as we talk about all the time on the cube, this notion of a, a decentralized data architecture data mesh, but the problem is how do you ensure governance and a federated model? So we're clearly moving in that direction. Wayne, I want to ask you about cyber as a board level discussion years ago, I interviewed Dr. Robert Gates, you know, former defense secretary and he sat on a number of boards and I asked him, you know, how important and prominent is security at the board level? Is it really a board level discussion? He said, absolutely. Every time we meet, we talk about cyber security, but not every company at the time, this was kind of early last decade was doing that. That's changed now. Ransomware is front and center. Hear about it all the time. What's AWS. What's your thinking on cyber as a board level discussion and specifically what are you guys doing around ran ransomware? >>Yeah. So, you know, malware in general, ransomware being a particular type of malware. Sure. It's a hot topic and it continues to be a hot topic. And whether at the board level, the C-suite level, I had a chance to listen to Dr. Gates a couple months ago and super motivational, but we think about ransomware and the same way that our customers do. Right? Cause all of us are subject to an incident. Nobody is immune to a ransomware incident. So we think very much the same way. And you, as Nancy said, along the lines of the, this framework, we really think about, you know, how do customers identify their critical access? How do they plan for protecting those assets, right? How do they make sure that they are in fact protected? And if they do detect the ransomware event and ransomware events come from a lot of different places, like there's not one signature, there's not one thumbprint, if you would for ransomware. >>So it's, it's, there's really a lot of vigilance that needs to be put in place, but a lot of planning that needs to be put in place. And once that's detected and a, a, we have to recover, you know, we know that we have to take an action and recover having that plan in place, making sure that your assets are fully protected and can be restored. As you know, ransomware is a insidious type of malware. You know, it sits in your system for a long time. It figures out what's going on, including your backup policies, your protection policies, and figures out how to get around those with some of the things that Nancy talked about in terms of air gaping, your capabilities, being able to, if you would scan your secondary, your backup storage for malware, knowing that it's a good copy. And then being able to restore from that known good copy in the event of an incident is critical. So we think about this for ourselves and the same way that we think about these for our customers. You gotta have a great plan. You gotta have great protection and you gotta be ready to restore in the case of an incident. And we wanna make sure we provide all the capabilities to do >>That. Yeah. So I'll glad you mentioned air gaping. So at the recent re reinforce, I think it was Kurt kufeld was speaking about ransomware and he didn't specifically mention air gaping. I had to leave. So I might have, I might have missed it cause I was doing the cube, but that's a, that's a key aspect. I'm sure there were, were things on the, on the deep dives that addressed air gaping, but Nancy look, AWS has the skills. It has the resources, you know, necessary to apply all these best practices and, you know, share those with customers. But, but what specific investments is AWS making to make the CISO's life easier? Maybe you could talk about that. >>Sure. So following on to your point about the reinforced keynote, Dave, right? CJ Boes talked about how the events of a ransomware, for example, incident or event can take place right on stage where you go from detect to respond and to recover. And specifically on the recovery piece, you mentioned AWS backup, the managed service that protects across 15 different AWS services, as well as on-premises VMware as automated recovery. And that's in part why we've decided to continue that investment and deliver AWS backup audit manager, which helps customers actually prove their posture against how their protection policies are actually mapping back to their organizational controls based on, for example, how they TA tag their data for mission criticality or how sensitive that data is. Right. And so turning to best practices, especially for ransomware events. Since this is very top of mind for a lot of customers these days is I will, will always try to encourage customers to go through game day simulations, for example, identifying which are those most critical applications in their environment that they need up and running for their business to function properly, for example, and actually going through the recovery plan and making sure that their staff is well trained or that they're able to go through, for example, a security orchestration automation, recovery solution, to make sure that all of their mission critical applications are back up and running in case of a ransomware event. >>Yeah. So I love the game day thing. I mean, we know, well just the, in the history of it, you couldn't even test things like disaster recovery, right? Because it was too dangerous with the cloud. You can test these things safely and actually plan out, develop a blueprint, test your blueprint. I love the, the, the game day >>Analogy. Yeah. And actually one thing I'd love to add is, you know, we talked about air gaping. I just wanna kind of tie up that statement is, you know, one thing that's really interesting about the way that the AWS cloud is architected is the identity access and management platform actually allows us to create identity constructs, that air gap, your data perimeter. So that way, when attackers, for example, are able to gain a foothold in your environment, you're still able to air gap your most mission critical and also crown jewels from being infiltrated. >>Mm that's key. Yeah. We've learned, you know, when paying the ransom is not a good strategy, right? Cuz most of the time, many times you don't even get your data back. Okay. So we, we're kind of data geeks here. We love data and we're passionate about it on the cube AWS and you guys specifically are passionate about it. So what excites you, Wayne, you start and then Nancy, you bring us home. What excites you about data and data protection and why? >>You know, we are data nerds. So at the end of the day, you know, there's this expressions we use all the time, but data is such a rich asset for all of us. And some of the greatest innovations that come out of AWS comes out of our analysis of our own data. Like we collect a lot of data on our operations and some of our most critical features for our customers come out of our analysis, that data. So we are data nerds and we understand how businesses view their data cuz we view our data the same way. So, you know, Dave security really started in the data center. It started with the enterprises. And if we think about security, often we talk about securing compute and securing network. And you know, if you, if you secured your compute, you secured your data generally, but we've separated data from compute so that people can get the value from their data no matter how they want to use it. And in doing that, we have to make sure that their data is durable and it's resilient to any sort of incident and event. So this is really, really important to us. And what do I get excited about? You know, again, thinking back to this framework, I know that we as thought leaders alongside our customers who also thought leaders in their space can provide them with the capabilities. They need to protect their data, to secure their data, to make sure it's compliant and always, always, always durable. >>You know, it's funny, you'd say funny it's it's serious actually. Steven Schmidt at reinforc he's the, the, the chief security officer at Amazon used to be the C C ISO of AWS. He said that Amazon sees quadrillions of data points a month. That's 15 zeros. Okay. So that's a lot of data. Nancy bring us home. What's what excites you about data and data protection? >>Yeah, so specifically, and this is actually drawing from conversations that I had with multiple ISV partners at AWS reinforc is the ability to derive value from secondary data, right? Because traditionally organizations have really seen that as a call center, right? You're producing secondary data because most likely you're creating backups of your mission critical workloads. But what if you're able to run analytics and insights and derive insights from that, that secondary data, right? Then you're actually able to let AWS do the undifferentiated heavy lifting of analyzing that secondary data state. So that way us customers or ISV partners can build value on the security layers above. And that is how we see turning cost into value. >>I love it. As you're taking the original premise of the cloud, taking away the under heavy lifting for, you know, D deploying, compute, storage, and networking now bringing up to the data level, the analytics level. So it continues. The cloud continues to expand. Thank you for watching the cubes coverage of AWS storage day 2022.

Published Date : Aug 10 2022

SUMMARY :

Great to see you again. So Wayne, let's talk about how organizations should be thinking about this term data So data durability, data protection, data resiliency, and, you know, And, you know, we think about forever, you know, the notion of, you know, So Nancy, you talked to a lot of customers, but by the way, it always comes back to the data. about, you know, how do I solve this data challenge? And, and that's always been AWS's philosophy, you know, make sure that developers have access it's, it's, it's by and large the responsibility of the provider of that infrastructure to make sure that data's durable, how you protect your data across 15 different AWS services, as well as on-premises VMware And like I said, the intent, but you can support a federated data model cuz you're building both in the cloud, on the edge, at the edge and on premises. data mesh, but the problem is how do you ensure governance and a federated model? along the lines of the, this framework, we really think about, you know, how do customers identify you know, we know that we have to take an action and recover having that plan in place, you know, necessary to apply all these best practices and, And specifically on the recovery piece, you mentioned AWS backup, you couldn't even test things like disaster recovery, right? I just wanna kind of tie up that statement is, you know, one thing that's really interesting Cuz most of the time, many times you don't even get your data back. So at the end of the day, you know, there's this expressions we use What's what excites you about data and data protection? at AWS reinforc is the ability to derive value from secondary data, you know, D deploying, compute, storage, and networking now bringing up to the data level,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NancyPERSON

0.99+

Nancy WongPERSON

0.99+

DavePERSON

0.99+

Steven SchmidtPERSON

0.99+

AWSORGANIZATION

0.99+

Dave ValantePERSON

0.99+

AmazonORGANIZATION

0.99+

EuropeLOCATION

0.99+

WaynePERSON

0.99+

BostonLOCATION

0.99+

15QUANTITY

0.99+

Kurt kufeldPERSON

0.99+

CJ BoesPERSON

0.99+

Nancy WangPERSON

0.99+

Robert GatesPERSON

0.99+

twoQUANTITY

0.99+

last yearDATE

0.99+

GatesPERSON

0.99+

first lineQUANTITY

0.99+

second pillarQUANTITY

0.99+

oneQUANTITY

0.99+

Wayne DusoPERSON

0.99+

bothQUANTITY

0.98+

15 zerosQUANTITY

0.98+

one thumbprintQUANTITY

0.98+

one signatureQUANTITY

0.97+

two core pillarsQUANTITY

0.96+

early last decadeDATE

0.96+

three common themesQUANTITY

0.95+

a monthQUANTITY

0.9+

secondQUANTITY

0.88+

couple months agoDATE

0.85+

Dr.PERSON

0.84+

two cubeQUANTITY

0.77+

VMwareTITLE

0.71+

Day 2022EVENT

0.71+

threeQUANTITY

0.66+

yearsDATE

0.65+

gameEVENT

0.57+

dayEVENT

0.52+

2022DATE

0.45+

CubeORGANIZATION

0.35+

Wayne Durso & Nancy Wang | AWS Storage Day 2022


 

[Music] okay we're back my name is dave vellante and this is thecube's coverage of aws storage day you know coming off of reinforce i wrote that the cloud was a new layer of defense in fact the first line of defense in a cyber security strategy that brings new thinking and models for protecting data data protection specifically traditionally thought of as backup and recovery it's become a critical adjacency to security and a component of a comprehensive cyber security strategy we're here in our studios outside of boston with two cube alums and we're going to discuss this and other topics wayne dusso is the vice president for aws storage edge and data services and nancy wong as general manager of aws backup and data protection services guys welcome great to see you again thanks for coming on of course always a pleasure dave good to see you dave all right so wayne let's talk about how organizations should be thinking about this term data protection it's an expanding definition isn't it it is an expanded definition dave last year we talked about uh data and the importance of data to companies every company um is becoming a data company uh you know the amount of data they generate uh the amount of data they can use to uh create models to do predictive analytics and frankly uh to find ways of innovating uh is is growing uh rapidly and you know there's this tension between access to all that data right getting the value out of that data and how do you secure that data and so this is something we think about with customers all the time so data durability data protection data resiliency and you know trust in their data if you think about running your organization on your data trust in your data is so important so you know you got to trust where you're putting your data you know people who are putting their data on a platform need to trust that platform will in fact ensure its durability security resiliency and you know we see ourselves uh aws as a partner uh in securing their data making their data they're built durable making their data resilient all right so some of that responsibility is on us some of that is on amazon responsibility around data protection data resiliency and you know um we think about forever you know the notion of um you know compromise of your infrastructure but more and more people think about the compromise of their data as data becomes more valuable in fact data is a company's most valuable asset we've talked about this before only second to their people you know the people who are the most valuable asset but right next to that is their data so really important stuff so nancy you talk to a lot of customers but by the way it always comes back to the data we've been saying this for years haven't we so you've got this expanding definition of data protection you know governance is in there you think about access etc when you talk to customers what are you hearing from them how are they thinking about data protection yeah so a lot of the customers that wayne and i have spoken to often come to us seeking thought leadership about you know how do i solve this data challenge how do i solve this data sprawl challenge but also more importantly tying it back to data protection and data resiliency is how do i make sure that data is secure that it's protected against let's say ransomware events right and continuously protected so there's a lot of mental frameworks that come to mind and a very popular one that comes up in quite a few conversations is in this cyber security framework right and from a data protection perspective it's just as important to protect and recover your data as it is to be able to detect different events or be able to respond to those events right so recently i was just having a conversation with a regulatory body of financial institutions in europe where we're designing a architecture that could help them make their data immutable but also continuously protected so taking a step back that's really where i see aws's role in that we provide a wide breadth of primitives to help customers build secure platforms and scaffolding so that they can focus on building the data protection the data governance controls and guardrails on top of that platform and that's always been aws philosophy make sure that developers have access to those primitives and apis so that they can move fast and essentially build their own if that that's in fact what they want to do and as you're saying when data protection is now this adjacency to cyber security but there's disaster recoveries in there business continuance cyber resilience etc so so maybe you could pick up on that and sort of extend how you see aws helping customers build out those resilient services yeah so you know two uh core pillars to a data protection strategy is around their data durability which is really an infrastructural element you know it's it's it's by and large the responsibility of the provided that infrastructure to make sure that data is durable because if it's not durable and everything else doesn't matter um and the second pillar is really about data resiliency so in terms of security controls and governance like these are really important but these are a shared responsibility like the customers working with us with the services that we provide are there to architect the design it's really human factors and design factors that get them resiliency nancy anything you would add to what wayne just said yeah absolutely so customers tell us that they want always on data resiliency and data durability right so oftentimes in those conversations three common themes come up which is they want a centralized solution they want to be able to transcribe their intent into what they end up doing with their data and number three they want something that's policy driven because once you centralize your policies it's much better and easier to establish control and governance at an organizational level so keeping that in mind with policy as our interface there's two managed aws solutions that i recommend you all check out in terms of data resiliency and data durability those are aws backup which is our centralized solution for managing protection recovery and also provides an audit audit capability of how you protect your data across 15 different aws services as well as on-premises vmware and for customers whose mission-critical data is contained entirely on disk we also offer aws elastic disaster recovery services especially for customers who want to fail over their workloads from on-premises to the cloud so you can essentially centralize as a quick follow-up centralize the policy and as you said the intent but you can support a federated data model because you're building out this massive you know global system but you can take that policy and essentially bring it anywhere on the aws cloud is that right exactly and actually one powerful integration i want to touch upon is that aws backup is natively integrated with aws organizations which is our de facto multi-account federated organization model for how aws services work with customers both in the cloud on the edge at the edge and on premises so that's really important because as we talk about all the time on the cube this notion of a decentralized data architecture data mesh but the problem is how do you ensure governance in a federated model so we're clearly moving in that direction when i want to ask you about cyber as a board level discussion years ago i interviewed dr robert gates you know former defense secretary and he sat on a number of boards and i asked him you know how important and prominent is security at the board level is it really a board level discussion he said absolutely every time we meet we talk about cyber security but not every company at the time this was kind of early last decade was doing that that's changed um now ransomware is front and center hear about it all the time what's aws what's your thinking on cyber as a board level discussion and specifically what are you guys doing around ransomware yeah so you know malware in general ransomware being a particular type of malware um it's a hot topic and it continues to be a hot topic and whether at the board level the c-suite level um i had a chance to listen to uh dr gates a couple months ago and uh it was super motivational um but we think about ransomware in the same way that our customers do right because all of us are subject to an incident nobody is uh uh immune to a ransomware incident so we think very much the same way and as nancy said along the lines of the nist framework we really think about you know how do customers identify their critical access how do they plan for protecting those assets right how do they make sure that they are in fact protected and if they do detect a ransomware event and ransomware events come from a lot of different places like there's not one signature there's not one thumb print if you would for ransomware so it's it's there's really a lot of vigilance uh that needs to be put in place but a lot of planning that needs to be put in place and once that's detected and a we have to recover you know we know that we have to take an action and recover having that plan in place making sure that your assets are fully protected and can be restored as you know ransomware is a insidious uh type of malware you know it sits in your system for a long time it figures out what's going on including your backup policies your protection policies and figures out how to get around those with some of the things that nancy talked about in terms of air gapping your capabilities being able to if you would scan your secondary your backup storage for malware knowing that it's a good copy and then being able to restore from that known good copy in the event of an incident is critical so we think about this for ourselves in the same way that we think about these for our customers you've got to have a great plan you've got to have great protection and you've got to be ready to restore in the case of an incident and we want to make sure we provide all the capabilities to do that yeah so i'm glad you mentioned air gapping so at the recent reinforce i think it was kurt kufeld was speaking about ransomware and he didn't specifically mention air gapping i had to leave so i might i might have missed it because i'm doing the cube but that's a that's a key aspect i'm sure there were things in the on the deep dives that addressed air gapping but nancy look aws has the skills it has the resources you know necessary to apply all these best practices and you know share those as customers but but what specific investments is aws making to make the cso's life easier maybe you could talk about that sure so following on to your point about the reinforced keynote dave right cj moses talked about how the events of a ransomware for example incident or event can take place right on stage where you go from detect to respond and to recover and specifically on the recover piece he mentioned aws backup the managed service that protects across 15 different aws services as well as on-premises vmware as automated recovery and that's in part why we've decided to continue that investment and deliver aws backup audit manager which helps customers actually prove their posture against how their protection policies are actually mapping back to their organizational controls based on for example how they tag their data for mission criticality or how sensitive that data is right and so turning to best practices especially for ransomware events since this is very top of mind for a lot of customers these days is i will always try to encourage customers to go through game day simulations for example identifying which are those most critical applications in their environment that they need up and running for their business to function properly for example and actually going through the recovery plan and making sure that their staff is well trained or that they're able to go through for example a security orchestration automation recovery solution to make sure that all of their mission critical applications are back up and running in case of a ransomware event yeah so i love the game date thing i mean we know well just in the history of it you couldn't even test things like disaster recovery be right because it was too dangerous with the cloud you can test these things safely and actually plan out develop a blueprint test your blueprint i love the the game day analogy yeah and actually one thing i love to add is you know we talked about air gapping i just want to kind of tie up that statement is you know one thing that's really interesting about the way that the aws cloud is architected is the identity access and management platform actually allows us to create identity constructs that air gap your data perimeter so that way when attackers for example are able to gain a foothold in your environment you're still able to air gap your most mission critical and also crown jewels from being infiltrated that's key yeah we've learned you know when paying the ransom is not a good strategy right because most of the time many times you don't even get your data back okay so we we're kind of data geeks here we love data um and we're passionate about it on the cube aws and you guys specifically are passionate about it so what excites you wayne you start and then nancy you bring us home what excites you about data and data protection and why you know we are data nerds uh so at the end of the day um you know there's there's expressions we use all the time but data is such a rich asset for all of us some of the greatest innovations that come out of aws comes out of our analysis of our own data like we collect a lot of data on our operations and some of our most critical features for our customers come out of our analysis that data so we are data nerds and we understand how businesses uh view their data because we view our data the same way so you know dave security really started in the data center it started with the enterprises and if we think about security often we talk about securing compute and securing network and you know if you if you secured your compute you secured your data generally but we've separated data from compute so that people can get the value from their data no matter how they want to use it and in doing that we have to make sure that their data is durable and it's resilient to any sort of incident event so this is really really important to us and what do i get excited about um you know again thinking back to this framework i know that we as thought leaders alongside our customers who also thought leaders in their space can provide them with the capabilities they need to protect their data to secure their data to make sure it's compliant and always always always durable you know it's funny you'd say it's not funny it's serious actually steven schmidt uh at reinforce he's the the chief security officer at amazon used to be the c c iso of aws he said that amazon sees quadrillions of data points a month that's 15 zeros okay so that's a lot of data nancy bring us home what's what excites you about data and data protection yeah so specifically and this is actually drawing from conversations that i had with multiple isv partners at aws reinforce is the ability to derive value from secondary data right because traditionally organizations have really seen that as a cost center right you're producing secondary data because most likely you're creating backups of your mission critical workloads but what if you're able to run analytics and insights and derive insights from that secondary data right then you're actually able to let aws do the undifferentiated heavy lifting of analyzing that secondary data as state so that way you as customers or isv partners can build value on the security layers above and that is how we see turning cost into value i love it you're taking the original premise of the cloud taking away the undifferentiated heavy lifting for you know deploying compute storage and networking now bringing up to the data level the analytics level so it continues the cloud continues to expand thank you for watching thecube's coverage of aws storage day 2022

Published Date : Aug 5 2022

**Summary and Sentiment Analysis are not been shown because of improper transcript**

ENTITIES

EntityCategoryConfidence
amazonORGANIZATION

0.99+

awsORGANIZATION

0.99+

kurt kufeldPERSON

0.99+

europeLOCATION

0.99+

last yearDATE

0.99+

bostonLOCATION

0.99+

wayne dussoPERSON

0.99+

steven schmidtPERSON

0.99+

Nancy WangPERSON

0.99+

twoQUANTITY

0.98+

Wayne DursoPERSON

0.98+

uh awsORGANIZATION

0.98+

first lineQUANTITY

0.97+

AWSORGANIZATION

0.97+

dave vellantePERSON

0.97+

davePERSON

0.97+

one signatureQUANTITY

0.97+

three common themesQUANTITY

0.96+

one thumbQUANTITY

0.96+

waynePERSON

0.96+

nancyPERSON

0.95+

second pillarQUANTITY

0.94+

15 zerosQUANTITY

0.94+

oneQUANTITY

0.92+

15 differentQUANTITY

0.92+

bothQUANTITY

0.92+

dr robert gatesPERSON

0.91+

secondQUANTITY

0.91+

a monthQUANTITY

0.9+

one thingQUANTITY

0.88+

vmwareTITLE

0.81+

a couple months agoDATE

0.81+

early last decadeDATE

0.8+

years agoDATE

0.78+

lot of customersQUANTITY

0.76+

lotQUANTITY

0.76+

15 differentQUANTITY

0.74+

a lot of customersQUANTITY

0.74+

dr gatesPERSON

0.67+

day 2022EVENT

0.65+

dataQUANTITY

0.63+

cubeORGANIZATION

0.63+

ransomwareTITLE

0.62+

nancyORGANIZATION

0.59+

threeQUANTITY

0.54+

Day 2022EVENT

0.53+

yearsQUANTITY

0.48+

coreQUANTITY

0.48+

nancy wongPERSON

0.47+

thecubePERSON

0.47+

cloudTITLE

0.36+

AWS Storage Day 2022 Intro


 

(upbeat music) >> Welcome to theCUBE's coverage of AWS Storage Day 2022. My name is Dave Vellante. In 2021, theCUBE team was in Seattle covering Storage Day. And after that event, I wrote a breaking analysis piece on Wikibon and SiliconANGLE called "Thinking Outside The Box, "AWS Signals A New Era For Storage." And the point of that post was that the cloud's impact was clearly moving into the storage realm in a big way. And the days of consuming storage as a box were numbered. And I projected, AWS doesn't share these numbers but I projected that AWS's storage business was on track to hit $10 billion, making it the second largest purveyor of storage with a gross trajectory that by mid-decade would make AWS the number one storage player in the market. Now, a lot of people didn't like that post, particularly the fact that I was mixing AWS storage service, OpEx, with what generally were CapEx purchases. But I didn't really care to argue the nuance of CapEx versus OpEx. Rather, the point I was really making was, and I was looking at the spending data from ETR and estimating the revenue for the players, and the message was clear. Data was moving to and being created in the cloud much faster than on-prem and the spending patterns were following data growth. Now, fast forward almost 12 months and the picture is even more clear to me. The number of cloud storage services from AWS is expanding as is their consequent adoption. The pace of delivery is accelerating. And very importantly, the optionality of the ecosystem is exploding. Virtually every storage company, primary, secondary, data protection, archival, is partnering with AWS to run their services in the cloud and in many cases connect to their on-prem installations, expanding the cloud as we've talked about and written about extensively. Despite the narrative from some about repatriation and people moving out of the cloud back on-prem, such activity is a rounding error in the grand scheme of enterprise tech spending. The data is clear, cloud and cloud storage spending continues to grow at 30% plus per year, far ahead of any other markets. Now, the edge presents new opportunities and likely will bring novel architectures as we've predicted many times covering what AWS is doing with the Arm-based Graviton and others. Now, this is especially important at the far edge, like real-time AI inferencing and new workloads. You know, there's questions that remain about how much storage is going to persist at the edge, how much is going to go back into the cloud, and what requirements exist across the board. But in many respects, the edge is all incremental in terms of data growth and data creation. So the challenge is how do we harness the power of that data? So what can we expect going forward in storage? Well, the pace of service delivery from hyperscale providers generally and AWS specifically is going to continue to accelerate. AWS is likely going to lead the way. We've seen this, started with S3, expand storage portfolio into block and file, and then bringing cohort services like new compute architectures, we've talked about Nitro and Graviton and others, and a portfolio of database options and new machine intelligence, machine learning, and AI solutions. Storage in the cloud is moving from being a bit bucket to being a platform that is evolving as part of an emerging data mesh architecture where business users, those with context, gain secure, governed, and facile self-service access to data that they need when they need it so they can make better decisions and importantly create new data products and services. This is the vision for data generally in the 2020s and cloud storage specifically will be an underpinning of this new era. Thanks for watching theCUBE's coverage of AWS Storage Day. This is Dave Vellante. (upbeat music)

Published Date : Aug 5 2022

SUMMARY :

and the message was clear.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

AWSORGANIZATION

0.99+

$10 billionQUANTITY

0.99+

SeattleLOCATION

0.99+

2021DATE

0.99+

2020sDATE

0.99+

Storage DayEVENT

0.97+

theCUBEORGANIZATION

0.96+

NitroORGANIZATION

0.95+

S3TITLE

0.94+

ETRORGANIZATION

0.93+

GravitonORGANIZATION

0.92+

almost 12 monthsQUANTITY

0.91+

SiliconANGLEORGANIZATION

0.9+

WikibonORGANIZATION

0.85+

Storage Day 2022EVENT

0.84+

30% plusQUANTITY

0.83+

number oneQUANTITY

0.82+

CapExTITLE

0.82+

second largest purveyorQUANTITY

0.81+

AWSEVENT

0.8+

Thinking Outside The BoxTITLE

0.78+

OpExTITLE

0.76+

AWS Storage DayEVENT

0.69+

OpExORGANIZATION

0.62+

midQUANTITY

0.55+

decadeQUANTITY

0.47+

GravitonCOMMERCIAL_ITEM

0.42+

Joe Fitzgerald, AWS | AWS Storage Day


 

(joyful music) >> According to storage guru, Fred Moore, 60 to 80% of all stored data is archival data, leading to the need for what he calls the infinite archive. And in this world, digital customers require inexpensive access to archive data that's protected, it's got to be available, durable, it's got to be able to scale and also has to support the governance and compliance edicts of the organizations. Welcome to this next session of the AWS Storage Day with theCUBE. I'm your host, Dave Vellante. We're going to dig into the topic of archiving and digitally preserving data and we're joined by Joe Fitzgerald, who's the general manager of Amazon S3 Glacier. Joe, welcome to the program. >> Hey, Dave. It's great to be here. Thanks for having me. >> Okay, I remember early last decade, AWS announced Glacier, it got a lot of buzz. And since then you've evolved your archival storage services, strategy and offerings. First question: why should customers archive their data in AWS? >> That's a great question. I think Amazon S3 Glacier is a great place for customers to archive data. And I think the preface that you gave, I think, covers a lot of the reasons why customers are looking to archive data on the cloud. We're finding a lot of customers have a lot of data. And if you think about it, most of the world's data is cold by nature. It's not data that you're accessing all the time. So if you don't have an archival story as part of your data strategy, I think you're missing out on a cost savings opportunity. So one of the reasons we're finding customers looking to move data to S3 Glacier is because of cost. With Glacier Deep Archive, we have an industry-leading price point of a dollar per terabyte per month. I think another reason that we're finding customers wanting to move data to the cloud, into Glacier, is because of the security, durability and availability that we offer. Instead of having to worry about some of the most valuable data that your company has and worrying about that being in a tape library that doesn't get access very often on premises or offsite in a data locker that you don't really have access to, and we offer the best story in terms of the durability and security and availability of that data. And I think the other reason that we're finding customers wanting to move data to S3 Glacier is just the flexibility and agility that having your data in the cloud offers. A lot of the data, you can put it in Deep Archive and have it sit there and not access it but then if you have some sort of event that you want to access that data, you can get that back very quickly, as well as put the power the rest of the AWS offerings, whether that's our compute offerings, our machine learning and analytics offerings. So you just have unmatched flexibility, cost, and durability of your data. So we're finding a lot of customers looking to optimize their business by moving their archive data to the cloud. >> So let's stick on the business case for a minute. You nailed the cost side of the equation. Clearly, you mentioned several of the benefits, but for those customers that may not be leaning in to archive data, how do they think about the cost-benefit analysis when you talk to customers, what are you hearing from them, the ones that have used your services to archive data, what are the benefits that they're getting? >> It's a great question. I think we find customers fall into a few different camps and use cases and one thing that we recommend as a starting point is if you have a lot of data and you're not really familiar with your access patterns, like what part of the data is warm, what part is cold? We offer a storage class called S3 intelligent tiering. And what that storage class does is it optimizes the placement of that data and the cost of that data based on the access patterns. So if it's data that is accessed very regularly, it'll sit in one of the warmer storage tiers. If it's accessed infrequently, it'll move down into the infrequent access tier or to the archive or deep archive access tiers. So it's a great way for customers who are struggling to think about archive, because it's not something that every customer thinks about everyday, to get automatic cost savings. And then for customers who have either larger amounts of data or better understand the access patterns, like some of the industries that we're seeing, like autonomous vehicles, they might generate tons of training data from running the autonomous vehicles. And they know, okay, this data, we're not actively using it, but it's also very valuable. They don't want to throw it away. They'll choose to move that data into an archive tier. So a lot of it comes down to the degree to which you're able to easily understand the access pattern of the data to figure out which storage class and which archive storage class maps best to your use case. >> I get it, so if you add that deep archive tier, you automagically get the benefit, thanks to the intelligent tiering. What about industry patterns? I mean, obviously, highly regulated industries have compliance issues and you have data intensive industries are going to potentially have this because they want to lower costs, but do you see any patterns emerging? I mean every industry needs this, but are there any industries that are getting more bang from the buck that you see? >> I would say every industry definitely has archived data. So we have customers in every vertical segment. I think some of the ones that we're definitely seeing more activity from would be media and entertainment customers are a great fit for archive. If you think about even digital native studios who are generating very high definition footage and they take all that footage, they produce the movie, but they have a lot of original data that they might reuse, that you remaster, director's cut, to use later, they're finding archive is a great fit for that. So they're able to use S3 Standard for their active production, but when they're done finishing a movie or production, they can save all that valuable original footage and move it in deep archive and just know that it's going to be there whenever they might need to use it. Another use case, we're staying in media, entertainment, similar to that and this is a good use case for S3 Glacier is if you have sports footage from like the '60s and then there's some sort of breaking news event about some athlete that you want to be able to cut a shot for the six o'clock news, with S3 Glacier and expedited retrievals, you're able to get that data back in a couple of minutes and that way you have the benefit of very low cost archive storage, but being able to get the immediacy of having some of that data back when you need it. So that's just some of the examples that we're seeing in terms of how customers are using archives. >> I love that example because the prevailing wisdom is the older data is, the less valuable it is, but if you can pull a clip up of Babe Ruth at the right time, even though it's a little grainy, wow, that's huge value for the-- >> We're finding like lots of customers that they've retained this data, they haven't known why they're going to need it, they just intrinsically know this data is really valuable, we might need it. And then as they look for new opportunities and they're like, hey, we're going to remaster this. And they've gone through a lot of digital transformation. So we're seeing companies have decades of original material moving into the cloud. We're also seeing fairly nascent startups who are also just generating lots of archive data. So it's just one of the many use cases we see from our customers love Glacier. >> Data hoarders heaven. I love it. Okay, Joe. Let's wrap up. Give us your closing thoughts, how you see the future of this business, where you want to take your business for your customers. >> Mostly, we just really want to help customers optimize their storage and realize the potential of their data. So for a lot of customers, that really just comes down to knowing that S3 glacier is a great and trusted place for their data, and that they're able to meet their compliance and regulatory needs, but for a lot of other customers, they're looking to transform their business and reinvent themselves as they move to the cloud. And I think we're just excited by a lot of emerging use cases and being able to find that flexibility of having very low cost storage, as well as being able to get access to that data and hook it up into the other AWS services and really realize the potential of their data. >> 100%, we've seen it over the decades, cost drops and use cases explode. Thank you, Joe. Thanks so much for coming on theCUBE. >> Thanks a lot, Dave. It's been great being here. >> All right, keep it right there for more storage and data insights. You're watching AWS Storage Day on theCUBE. (tranquil music)

Published Date : Sep 7 2021

SUMMARY :

and also has to support Thanks for having me. it got a lot of buzz. A lot of the data, you the ones that have used your So a lot of it comes down to the degree from the buck that you see? and just know that it's going to be there So it's just one of the many use cases where you want to take your and being able to find that flexibility cost drops and use cases explode. Thanks a lot, Dave. and data insights.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Fred MoorePERSON

0.99+

AWSORGANIZATION

0.99+

Joe FitzgeraldPERSON

0.99+

60QUANTITY

0.99+

JoePERSON

0.99+

DavePERSON

0.99+

AmazonORGANIZATION

0.99+

six o'clockDATE

0.99+

First questionQUANTITY

0.98+

S3 GlacierCOMMERCIAL_ITEM

0.98+

80%QUANTITY

0.98+

oneQUANTITY

0.98+

one thingQUANTITY

0.98+

100%QUANTITY

0.97+

GlacierORGANIZATION

0.96+

early last decadeDATE

0.94+

AWSEVENT

0.93+

S3 StandardTITLE

0.93+

'60sDATE

0.92+

terabyteQUANTITY

0.84+

couple of minutesQUANTITY

0.83+

Storage DayEVENT

0.74+

S3OTHER

0.74+

S3 glacierLOCATION

0.73+

a minuteQUANTITY

0.68+

DeepCOMMERCIAL_ITEM

0.68+

tons of training dataQUANTITY

0.66+

decadesQUANTITY

0.65+

casesQUANTITY

0.63+

ArchiveORGANIZATION

0.6+

S3 GlacierORGANIZATION

0.57+

Babe RuthTITLE

0.57+

theCUBEORGANIZATION

0.49+

dollarQUANTITY

0.41+

Satish Lakshmanan & Nancy Wang | AWS Storage Day 2021


 

(upbeat music) >> Hi everybody, we're here in downtown Seattle covering AWS storage day. My name is Dave Vellante with the Cube, and we're really excited. We're going to talk about rethinking data protection in the 2020s. I'm here with Nancy Wong, who is the general manager of AWS backup, and Satish Lakshmanan, the director of storage business development at AWS. Folks, welcome. Good to see you again. So let's talk about the evolution of data protection. You've got three major disruptors going on. There's obviously the data explosion. We talk about that all the time, but there's cloud has changed the way people are thinking about data protection and now you've got cyber. What's AWS's point of view on all this. >> Great question, Dave. You know, in my role as the global head of storage business development and solution architecture for storage, I have the privilege of working with customers all around the globe, in every geography and every segment. And we recently talked to thousands of customers and we did a survey for about 5,000 customers. And many of them told us that they expect to see a ransomware attack once every 11 seconds. So it's top of mind for almost every customer so much so that if you remember earlier this year, the white house issued an executive order, you know, making the nation aware of across public and private sector about cybersecurity and the need for, for, for us to be prepared. Customers as a result, largely think of not only ransomware protection, but also recovery. And they have largely allocated budgets across every geography to make sure that they're well protected. And in the, in the event of an attack, they can recover from it. That's where Nancy's, you know, data protection services and backup services come into play. And maybe she'll add a few comments about how she approaches it from a technology perspective. >> Yeah, sure. Thanks, Satish yeah, as a general manager of AWS backup and our data protection services, it's really my team and my charter to help our customers centralize, automate, and also protect themselves from attacks like ransomware. Right? And so for example, you know, across our many services today we offer AWS backup as a secondary data collection and management across our many AWS regions and also across the aid of many AWS accounts that a single customer must manage, right. And if you recall having multiple copies of your data exist in backups is a core part of any customers ransomware protection strategy. And lastly, I just want to say something that we just launched recently called AWS backup audit manager also helps you operationalize and monitor your backups against any ransomware attack. >> So, the adversary, obviously, as we know, was well-equipped and they're quite sophisticated. And anybody who has inside access can become a ransomware attacker because of things like ransomware as a service. So, what are you specifically doing to address ransomware? >> Yeah. So, in talking to several thousand of our customers, what we have learned is customers are typically vulnerable in one or more of three scenarios, right? The first scenario is when they're not technically ready. What that means is either their software patches are not up to date, or they have too many manual processes that really prevent them from being prepared for defending against an attack. The second is typically around a lack of awareness. These are situations where IT administrators leveraging cloud-based services are recognizing that, or not recognizing per se, that they're easy to instances, Lambda instances have public access and same applies to S3 buckets. And the third is lack of governance and governance based practices. The way we are educating our customers training in enabling them and empowering them, because it's a shared security model, is really through our well-architected framework. That's the way we shared best practices that we have learned across all our customers, across our industries. And we enable it and empower them to not only identify areas of vulnerability, but also be able to recover in the event of an attack. Nancy. >> Yeah, and to add to that right, our team, and now my team and I, for example, watch every ransomware incident and because it really informs the way that we plan our product roadmap and deliver features that help our customers protect, detect, and also recover from ransomware. So there's an ebook out there, suggest you go check it out, of securing your cloud environment against ransomware attacks. And aside from the technical maintenance suggestions that Satish provided, as well as the security awareness suggestions, there's really two things that I usually tell customers who come to me with ransomware questions. Which is one, right, don't rely on the good will of your ransomware attacker to restore your data. Because I mean, just studies show over 90% of ransom payers actually don't successfully recover all of their data because, hey, what if they don't give you the full decryption utility? Or what if your backups are not restorable? Right? So, rather than relying on that good will, make sure that you have a plan in place where you can recover from backups in case you get ransomed. Right? And two, is make sure that in addition to just taking backups, which obviously, you know, as a GM of AWS backup, I would highly recommend you do, right. Is make sure that those backups are actually restorable, right? Do game day testing, make sure that it's configured properly because you'd be surprised at the, just the number and the sheer percentage of customers who when, let's say the attack happens, actually find that they don't have a good set of data to recover their businesses from. >> I believe it. Backup is, one thing as they say, recovery is everything. So you've got the AWS well-architected framework. How does that fit in, along with the AWS data protection services into this whole ransomware discussion? >> Yeah, absolutely. You know, the AWS wall architected framework actually has four design approaches that I usually share with customers that are very relevant to the ransomware conversation. And one is, you know, anticipate where that ransomware attack may come from. Right? And two, make sure that you write down your approaches whereby you can solve for that ransomware attack, right? Three, just like I advocate my teams and customers to do, right. Then look back on what you've written down as your approach and reflect back on what are the best practices or lessons learned that you can gain from that exercise. And make sure as part four, is you consistently plan game days where you can go through these various scenario tests or ransomware game day attacks. And lastly, just as a best practice is ransomware recovery and protection isn't just the role of IT Professionals like us, right. It's really important to also include HR, professional, legal professionals. Frankly, anyone in a business who might come and be compromised by ransomware attack, and make sure that they're involved in your response. And so Satish, I'd love to hear as well, how you communicate to customers and what best practices you offer them. >> Yeah, thanks Nancy. I think in addition to the fantastic points you made, Nancy, Dave, the well architected framework has been built on eight to 10 years worth of customer engagements across all segments and verticals. And essentially it's a set of shared best practices, tools, training, and methodology that we, you know, exchange with customers in order to help them be more prepared to fight ransomware attacks and be able to recover from them. Recently, there've been some enhancements made where we have put industry or use case specific lenses to the well architected framework. For example, for customers looking to build IOT applications, customers who are trying to use server less and Lambda functions, customers who may be within the financial services or healthcare life sciences, where to go, looking to understand best practices from other people who've implemented, you know, some of the technologies that Nancy talked about. In addition, as I talked about earlier, training and enablement is extremely critical to make sure that if companies don't have the skillset, we are basically giving them the skillset to be able to defend. So we do a lot of hands-on labs. Lastly, the well architected framework tool has been integrated into the console, and it gives customers who are essentially managing the workloads, the ability to look at access permissions, ability to look at what risks they have through malware and ransomware detection techniques. Machine learning capability is built into all the services that are native to AWS that allow them to then react to them. If companies don't have the skills, we have a vast network of partners who can help them basically implement the right technologies. And they can always reach out to our technical account manager for additional information as well. >> I love the best practice discussion. For customers, it's a journey. I mean, CSOs tell us their one problem is lack of talent and so they need help. So, last question is what can people expect from AWS? You're the experts. In particular, how you can help them recover from ransomware? >> Yeah, and that conversation is ever evolving, right? As hackers get more sophisticated then clearly we have to get more sophisticated as well. And so one of our mental models that we often share with customers is defense in depth, right? So if you consider all of the layers, including all of the constructs that exist natively on AWS, right? The first layer is through identity access management constructs. So building a trust radius around your workloads, around your applications, whereby you can deny permissions or access permissions to individuals who are not authorized to access your mission critical applications, right. Then beyond that first layer of defense, the second layer should be automated monitoring or observability. For example, if individuals were to penetrate within your security perimeter, and often times I, you know, that could be done through a delayed response where it gives your CSO or your security operations team, the ability to react to such a unauthorized access, for example. And so the third line of defense is if someone were to penetrate both first layer, as well as the second layer, is actually through backups. And this is where it goes back to what I was mentioning earlier is make sure that your backups are ready and able to be restored and have the RTO and SLA guarantees that help your business remain functional even after an attack. >> Excellent. Guys, we got to go. I love that, zero trust layer defenses, got to have the observability in the analytics and then the last resort RTO, and of course, RPO. Guys, thanks so much, really appreciate your insights. >> Good to see you. >> Thank you for watching. Keep it right there for more great content from AWS storage day. (upbeat music)

Published Date : Sep 2 2021

SUMMARY :

We talk about that all the time, that they expect to see and also across the aid So, the adversary, that they're easy to instances, make sure that you have a plan in place How does that fit in, and make sure that they're the ability to look at access permissions, I love the best practice discussion. the ability to react to in the analytics Thank you for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
NancyPERSON

0.99+

AWSORGANIZATION

0.99+

Nancy WongPERSON

0.99+

Dave VellantePERSON

0.99+

SatishPERSON

0.99+

Satish LakshmananPERSON

0.99+

DavePERSON

0.99+

eightQUANTITY

0.99+

Nancy WangPERSON

0.99+

first layerQUANTITY

0.99+

2020sDATE

0.99+

oneQUANTITY

0.99+

third lineQUANTITY

0.99+

thirdQUANTITY

0.99+

bothQUANTITY

0.99+

second layerQUANTITY

0.99+

first scenarioQUANTITY

0.99+

ThreeQUANTITY

0.99+

twoQUANTITY

0.99+

two thingsQUANTITY

0.99+

10 yearsQUANTITY

0.99+

secondQUANTITY

0.98+

over 90%QUANTITY

0.98+

about 5,000 customersQUANTITY

0.98+

one problemQUANTITY

0.97+

LambdaTITLE

0.97+

three scenariosQUANTITY

0.96+

todayDATE

0.95+

earlier this yearDATE

0.93+

thousands of customersQUANTITY

0.9+

one thingQUANTITY

0.89+

downtown SeattleLOCATION

0.87+

four design approachesQUANTITY

0.84+

S3TITLE

0.82+

single customerQUANTITY

0.79+

part fourOTHER

0.75+

zero trust layerQUANTITY

0.74+

thousandQUANTITY

0.7+

white houseORGANIZATION

0.7+

CubeORGANIZATION

0.7+

three major disruptorsQUANTITY

0.69+

once every 11 secondsQUANTITY

0.66+

Storage DayEVENT

0.61+

2021DATE

0.5+

Siddhartha Roy, Mat Mathews, Randy Boutin | AWS Storage Day 2021


 

>>We'll go back to the queue. It's continuous coverage of AWS storage day. We're here in Seattle home with the Mariners home, with the Seahawks home of the Seattle storm. If you're a w NBA fan your cloud migration, according to our surveys and the ETR data that we use last year was number two initiative for it. Practitioners behind security. Welcome to this power panel on migration and transfer services. And I'm joined now by Matt Matthews. Who's the general manager of AWS transfer a family of services sitting. Roy is the GM of the snow family. And Randy boudin is the general manager of AWS data sync, gents. Welcome to good to see you. Thank you. So, Matt, you heard my narrative upfront, obviously it's top of mind for it. Pros, what are you seeing in the marketplace? >>Yeah, uh, certainly, um, many customers are currently executing on data migration strategies, uh, to the cloud. And AWS has been a primary choice for cloud storage for 15 years. Right. Um, but we still see many customers are evaluating, um, how to do their cloud migration strategies. And they're looking for, you know, um, uh, understanding what services can help them with those migrations. >>So said, well, why now? I mean, a lot of people might be feeling, you know, you got, you've got a hesitancy of taking a vaccine. What about hesitancy making a move? Maybe the best move is no movable. W why now? Why does it make sense? >>So AWS offers compelling, uh, cost savings to customers. I think with our global footprint that our 11 nines of durability are fully managed services. You're really getting the centralization benefits for the cloud, like all the resiliency and durability. And then besides that you are unlocking the on-prem data center and data store costs as well. So it's like a dual prong cost saving on both ends >>Follow up on that. If I may, I mean, again, the data was very clear cloud migration, top priority F for a lot of reasons, but at the same time migration, as you know, it's almost like a dirty word sometimes in it. So, so where do people even start? I mean, they've got so much data to migrate. How can they even handle >>That? Yeah. I'd recommend, uh, customers look at their cool and cold data. Like if they look at their backups and archives and they have not been used for long, I mean, it doesn't make sense to kind of keep them on prem, look at how you can move those and migrate those first and then slowly work your way up into like warm data and then hot data. >>Okay, great. Uh, so Randy, we know about the snow family of products. Of course, everybody's familiar with that, but what about online data migration? What can you tell us there? What's the, what are customers thinking >>About? Sure. So as you know, for many their journey to the cloud starts with data migration, right? That's right. So if you're, if you're starting that journey with, uh, an offline movement, you look to the snow family of products. If you, if you're looking for online, that's when you turn to data, sync data thinks that online data, movement, service data is it makes it fast and easy to move your data into AWS. The customers >>Figure out which services to use. Do you, how do you advise them on that? Or is it sort of word of mouth, peer to peer? How do they figure it out that that's squint through that? Yeah, >>So it comes down to a combination of things. So first is the amount of available bandwidth that you have, the amount of data that you're looking to move and the timeframe you have in which to do that. Right. So if you have a, high-speed say gigabit, uh, uh, network, uh, you can move data very quickly using data sync. If, if you have a slower network or perhaps you don't want to utilize your existing network for this purpose, then the snow family of products makes a lot of sense. Call said, that's it? Call center. That's >>My answer. Yeah, there you go. Oh, you'll >>Joke. Right. See Tam that's Chevy truck access method. You put it right on there and break it over. How about, you know, Matt, I wonder if we could talk maybe about some, some customer examples, any, any favorites that you see are ones that stand out in various industries? >>Yeah. So one of the things we're seeing is certainly getting your data to the cloud is, is important, but also customers want to migrate their applications to the cloud. And when they, when they do that, they, uh, the many applications still need ongoing data transfers from third parties, from ex partners and customers and, and whatnot. So, great example of this is, uh, FINRA and their partnership with AWS. So a FINRA is the single largest, um, uh, regulatory body for securities in the U S and they take in 335 billion market events per day, over 600,000 of their member brokers, registered brokers. So, uh, they use, um, AWS transfer family, uh, secure file transfers, uh, to get that data in an aggregated in, in S3, so they can, um, analyze it and, and, uh, really kind of, uh, understand that data so they can protect investors. So that's, that's a great example. >>So it's not just seeding the cloud, right? It's the ongoing population of it. How about, I mean, how do you guys see this shaping up the future? We all talk about storage silos. I see this as, you know, the cloud is in some ways a silo Buster. Okay. We've got all this data in the cloud now, but you know, you can not apply machine learning. There are other tooling, so what's the north star here. >>Yeah. It's really the north star of getting, you know, we want to unlock, uh, not only get the data in the cloud, but actually use it to unlock the benefits of the cloud has to offer. Right. That's really what you're getting at, aggregating all that data, uh, and using the power of the cloud to really, um, you know, harness that power to analyze the data. It's >>A big, big challenge that customers have. I mean, you guys are obsessed listening to customers, you know, w what kinds of things do you see in the future? Sid and Randy, maybe, maybe see if you can start, >>Uh, I'll start with the I'll kind of dovetail, on example, a Matthews, uh, I'll talk about a customer join, who moved 3.4 petabytes of data to the cloud joined was a streaming service provider out of Germany. They had prohibitive on-prem costs. They saved 500 K per year by moving to the cloud. And by moving to the cloud, they get much more of the data by being able to fine tune their content to local audiences and be more reactive and quicker, a reaction to business changes. So centralizing in the cloud had its benefits of access, flexibility, agility, and faster innovation, and faster time to market. Anything you'd add, right. >>Yeah, sure. So we have a customer Takara bio they're a biotech company. Uh, they're working with genome sequencing, right? So data rich information coming out of those sequencers, they're collecting and analyzing this data daily and sending it up into AWS for analysis, um, and, uh, by using data sync in order to do that, they've improved their data transfer rate by three times. And they've reduced their, uh, overhead six by 66% in terms of their process. >>Guys get, must be blown away by this. I mean, we've all sort of lived in this, so I'm prem world and you sort of lay it out infrastructure, and then you go onto the next one, but the use cases are so diverse. The industry, examples. Matt will give you the last >>Word here. Yeah, no, w w what are we looking to do? You know, we, we always want to listen to our customers, uh, but you know, collectively our, our services and working across other services, AWS, we really, uh, want to help customers not only move their data in the crowd, but also unlock the power of that data. And really, um, you know, uh, we think there's a big opportunity across their migration and transfer services to help customers choose, choose the right service, uh, based on their, where they are in their cloud migration, uh, and, and all the different things they're dealing with. >>I've said a number of times the next 10 years is not going to be like the last 10 years. It's like the cloud is growing up. You know, it's out of the infancy stage. Maybe it's an adolescent. So I don't really know exactly, but guys, thanks so much for coming to the cube and sharing your insights and information. Appreciate it. And thank you for watching everybody keep it right there. More great content from AWS storage day in Seattle.

Published Date : Sep 2 2021

SUMMARY :

what are you seeing in the marketplace? And they're looking for, you know, um, uh, understanding what services can help them with those I mean, a lot of people might be feeling, you know, you got, you've got a hesitancy of that you are unlocking the on-prem data center and data store costs as well. a lot of reasons, but at the same time migration, as you know, it's almost like a dirty word sometimes I mean, it doesn't make sense to kind of keep them on prem, look at how you can move those and migrate those first and What can you tell us there? you look to the snow family of products. Or is it sort of word of mouth, peer to peer? So first is the amount of available bandwidth that you have, Yeah, there you go. How about, you know, Matt, I wonder if we could talk maybe about some, some customer examples, any, any favorites that you see So a FINRA is the single largest, I see this as, you know, the cloud is in some ways a silo Buster. aggregating all that data, uh, and using the power of the cloud to really, um, you know, you know, w what kinds of things do you see in the future? So centralizing in the cloud had its benefits of access, flexibility, And they've reduced their, uh, overhead six by 66% in terms of their process. I mean, we've all sort of lived in this, so I'm prem world and you sort of lay it out infrastructure, uh, but you know, collectively our, our services and working across other services, And thank you for

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
AWSORGANIZATION

0.99+

Randy boudinPERSON

0.99+

SidPERSON

0.99+

Matt MatthewsPERSON

0.99+

Randy BoutinPERSON

0.99+

Siddhartha RoyPERSON

0.99+

SeattleLOCATION

0.99+

Mat MathewsPERSON

0.99+

GermanyLOCATION

0.99+

MattPERSON

0.99+

FINRAORGANIZATION

0.99+

15 yearsQUANTITY

0.99+

RoyPERSON

0.99+

SeahawksORGANIZATION

0.99+

RandyPERSON

0.99+

3.4 petabytesQUANTITY

0.99+

66%QUANTITY

0.99+

U SLOCATION

0.99+

three timesQUANTITY

0.99+

last yearDATE

0.99+

sixQUANTITY

0.99+

over 600,000QUANTITY

0.99+

MarinersORGANIZATION

0.98+

both endsQUANTITY

0.98+

335 billion market eventsQUANTITY

0.98+

firstQUANTITY

0.97+

11 ninesQUANTITY

0.97+

TamPERSON

0.95+

oneQUANTITY

0.94+

singleQUANTITY

0.92+

MatthewsPERSON

0.88+

AWSEVENT

0.85+

ChevyORGANIZATION

0.83+

500 K per yearQUANTITY

0.83+

S3TITLE

0.8+

number two initiativeQUANTITY

0.78+

dayEVENT

0.77+

Takara bioORGANIZATION

0.68+

last 10 yearsDATE

0.68+

next 10 yearsDATE

0.64+

NBAORGANIZATION

0.61+

Storage Day 2021EVENT

0.59+

Kevin Miller, AWS | AWS Storage Day 2021


 

(bright music) >> Welcome to this next session of AWS Storage Day. I'm your host, Dave Vellante of theCUBE. And right now we're going to explore how to simplify and evolve your data lake backup disaster recovery and analytics in the cloud. And we're joined by Kevin Miller who's the general manager of Amazon S3. Kevin, welcome. >> Thanks Dave. Great to see you again. >> Good to see you too. So listen, S3 started as like a small ripple in the pond and over the last 15 years, I mean, it's fundamentally changed the storage market. We used to think about storage as, you know, a box of disc drives that either store data in blocks or file formats and then object storage at the time it was, kind of used in archival storage, it needed specialized application interfaces, S3 changed all that. Why do you think that happened? >> Well, I think first and foremost, it's really just, the customers appreciated the value of S3 and being fully managed where, you know, we manage capacity. Capacity is always available for our customers to bring new data into S3 and really therefore to remove a lot of the constraints around building their applications and deploying new workloads and testing new workloads where they know that if something works great, it can scale up by a 100x or a 1000x. And if it doesn't work, they can remove the data and move on to the next application or next experiment they want to try. And so, you know, really, it's exciting to me. Really exciting when I see businesses across essentially every industry, every geography, you know, innovate and really use data in new and really interesting ways within their business to really drive actual business results. So it's not just about building data, having data to build a report and have a human look at a report, but actually really drive the day-to-day operations of their business. So that can include things like personalization or doing deeper analytics in industrial and manufacturing. A customer like Georgia-Pacific for example, I think is one of the great examples where they use a big data lake and collect a lot of sensor data, IoT sensor data off of their paper manufacturing machines. So they can run them at just the right speed to avoid tearing the paper as it's going through, which really just keeps their machines running more and therefore, you know, just reduce their downtime and costs associated with it. So you know, it's just that transformation again, across many industries, almost every industry that I can think of. That's really what's been exciting to see and continue to see. I think we're still in the really early days of what we're going to see as far as that innovation goes. >> Yeah, I got to agree. I mean, it's been pretty remarkable. Maybe you could talk about the pace of innovation for S3. I mean, if anything, it seems to be accelerating. How Kevin, does AWS, how has it thought about innovation over the past decade plus and where do you see it headed? >> Yeah, that's a great question Dave, really innovation is at our core as part of our core DNA. S3 launched more than 15 years ago, almost 16 years old. We're going to get a learner's permit for it next year. But, you know, as it's grown to exabytes of storage and trillions of objects, we've seen almost every use case you can imagine. I'm sure there's a new one coming that we haven't seen yet, but we've learned a lot from those use cases. And every year we just think about what can we do next to further simplify. And so you've seen that as we've launched over the last few years, things like S3 Intelligent Tiering, which was really the clouds first storage class to automatically optimize and reduce customer's costs for storage, for data that they were storing for a long time. And based on, you know, variable access patterns. We launched S3 Access Points to provide a simpler way to have different applications operating on shared data sets. And we launched earlier this year S3 Object Lambda, which really is, I think, cool technology. We're just starting to see how it can be applied to simplify serverless application development. Really the next wave, I think, of application development that doesn't need, not only is the storage fully managed, but the compute is fully managed as well. Really just simplify that whole end to end application development. >> Okay, so we heard this morning in the keynote, some exciting news. What can you tell us, Kevin? >> Yeah, so this morning we launched S3 Multi-Region Access Points and these are access points that give you a single global endpoint to access data sets that can span multiple S3 buckets in different AWS regions around the world. And so this allows you to build these multi-region applications and multi-region architectures with, you know, with the same approach that you use in a single region and then run these applications anywhere around the world. >> Okay. So if I interpret this correctly, it's a good fit for organizations with clients or operations around the globe. So for instance, gaming, news outlets, think of content delivery types of customers. Should we think about this as multi-region storage and why is that so important in your view? >> Absolutely. Yeah, that is multi-region storage. And what we're hearing is seeing as customers grow and we have multinational customers who have operations all around the world. And so as they've grown and their data needs grow around the world, they need to be using multiple AWS regions to store and access that data. Sometimes it's for low latency so that it can be closer to their end users or their customers, other times it's for regions where they just have a particular need to have data in a particular geography. But this is really a simple way of having one endpoint in front of data, across multiple buckets. So for applications it's quite easy, they just have that one end point and then the data, the requests are automatically routed to the nearest region. >> Now earlier this year, S3 turned 15. What makes S3 different, Kevin in your view? >> Yeah, it turned 15. It'll be 16 soon, you know, S3 really, I think part of the difference is it just operates at really an unprecedented scale with, you know, more than a hundred trillion objects and regularly peaking to tens of millions of requests per second. But it's really about the resiliency and availability and durability that are our responsibility and we focus every single day on protecting those characteristics for customers so that they don't have to. So that they can focus on building the businesses and applications that they need to really run their business and not worry about the details of running highly available storage. And so I think that's really one of the key differences with S3. >> You know, I first heard the term data lake, it was early last decade. I think it was around 2011, 2012 and obviously the phrase has stuck. How are S3 and data lakes simpatico, and how have data lakes on S3 changed or evolved over the years? >> Yeah. You know, the idea of data lakes, obviously, as you say, came around nine or 10 years ago, but I actually still think it's really early days for data lakes. And just from the standpoint of, you know, originally nine or 10 years ago, when we talked about data lakes, we were looking at maybe tens of terabytes, hundreds of terabytes, or a low number of petabytes and for a lot of data lakes, we're still seeing that that's the kind of scale that currently they're operating at, but I'm also seeing a class of data lakes where you're talking about tens or hundreds of petabytes or even more, and really just being used to drive critical aspects of customer's businesses. And so I really think S3, it's been a great place to run data lakes and continues to be. We've added a lot of capability over the last several years, you know, specifically for that data lake use case. And we're going to continue to do that and grow the feature set for data lakes, you know, over the next many years as well. But really, it goes back to the fundamentals of S3 providing that 11 9s of durability, the resiliency of having three independent data centers within regions. So the customers can use that storage knowing their data is protected. And again, just focus on the applications on top of that data lake and also run multiple applications, right? The idea of a data lake is you're not limited to one access pattern or one set of applications. If you want to try out a new machine learning application or something, do some advanced analytics, that's all possible while running the in-flight operational tools that you also have against that data. So it allows for that experimentation and for transforming businesses through new ideas. >> Yeah. I mean, to your point, if you go back to the early days of cloud, we were talking about storing, you know, gigabytes, maybe tens of terabytes that was big. Today, we're talking about hundreds and hundreds of terabytes, petabytes. And so you've got huge amount of information customers that are of that size and that scale, they have to optimize costs. Really that's top of mind, how are you helping customers save on storage costs? >> Absolutely. Dave, I mean, cost optimization is one of the key things we look at every single year to help customers reduce their costs for storage. And so that led to things like the introduction of S3 Intelligent Tiering, 10 years ago. And that's really the only cloud storage class that just delivers the automatic storage cost savings, as data access patterns change. And, you know, we deliver this without performance impact or any kind of operational overhead. It's really intended to be, you know, intelligent where customers put the data in. And then we optimize the storage cost. Or for example, last year we launched S3 Storage Lens, which is really the first and only service in the cloud that provides organization-wide visibility into where customers are storing their data, what the request rates are and so forth against their storage. So when you talk about these data lakes of hundreds of petabytes or even smaller, these tools are just really invaluable to help customers reduce their storage costs year after year. And actually, Dave I'm pleased, you know, today we're also announcing the launch of some improvements to S3 Intelligent Tiering, to actually further automate the cost savings. And what we're doing is we're actually removing the minimum storage duration. Previously, Intelligent Tiering had a 30 day minimum storage duration, and we're also eliminating our monitoring and automation charge for small objects. So previously there was that monitoring and automation charge applied to all objects independent of size. And now any object less than 120 kilobytes is not charged at that charge. So, and I think some pretty critical innovations on Intelligent Tiering that will help customers use that for an even wider set of data lake and other applications. >> That's three, it's ubiquitous. The innovation continues. You can learn more by attending the Storage Day S3 deep dive right after this interview. Thank you, Kevin Miller. Great to have you on the program. >> Yeah, Dave, thanks for having me. Great to see you. >> You're welcome, this is Dave Vellante and you're watching theCUBE's coverage of AWS Storage Day. Keep it right there. (bright music)

Published Date : Sep 2 2021

SUMMARY :

and analytics in the cloud. and over the last 15 years, I mean, and therefore, you know, over the past decade plus and And based on, you know, in the keynote, some exciting news. And so this allows you to build around the globe. they need to be using multiple AWS regions Kevin in your view? and applications that they need and obviously the phrase has stuck. And just from the standpoint of, you know, storing, you know, gigabytes, And so that led to things Great to have you on the program. Great to see you. Vellante and you're watching

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Kevin MillerPERSON

0.99+

Dave VellantePERSON

0.99+

DavePERSON

0.99+

Dave VellantePERSON

0.99+

KevinPERSON

0.99+

30 dayQUANTITY

0.99+

firstQUANTITY

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

last yearDATE

0.99+

hundreds of terabytesQUANTITY

0.99+

TodayDATE

0.99+

tens of terabytesQUANTITY

0.99+

next yearDATE

0.99+

hundredsQUANTITY

0.99+

oneQUANTITY

0.99+

nineDATE

0.99+

100xQUANTITY

0.99+

less than 120 kilobytesQUANTITY

0.99+

threeQUANTITY

0.98+

more than a hundred trillion objectsQUANTITY

0.98+

2012DATE

0.98+

S3TITLE

0.98+

1000xQUANTITY

0.98+

one setQUANTITY

0.98+

Storage Day S3EVENT

0.98+

10 years agoDATE

0.97+

todayDATE

0.97+

11 9sQUANTITY

0.97+

hundreds of petabytesQUANTITY

0.97+

tens of millionsQUANTITY

0.96+

15QUANTITY

0.96+

first storage classQUANTITY

0.95+

single regionQUANTITY

0.95+

2011DATE

0.95+

hundreds of terabytesQUANTITY

0.95+

this morningDATE

0.94+

S3COMMERCIAL_ITEM

0.94+

earlier this yearDATE

0.94+

singleQUANTITY

0.94+

earlier this yearDATE

0.93+

S3 Object LambdaTITLE

0.93+

past decadeDATE

0.9+

one endpointQUANTITY

0.9+

16QUANTITY

0.9+

almost 16 years oldQUANTITY

0.89+

theCUBEORGANIZATION

0.86+

Storage Day 2021EVENT

0.85+

three independent data centersQUANTITY

0.83+

one end pointQUANTITY

0.83+

trillions of objectsQUANTITY

0.8+

petabytesQUANTITY

0.8+

Storage DayEVENT

0.78+

single yearQUANTITY

0.77+

last 15 yearsDATE

0.75+

S3 Storage LensCOMMERCIAL_ITEM

0.74+

last decadeDATE

0.74+

secondQUANTITY

0.73+

tensQUANTITY

0.73+

more than 15 years agoDATE

0.73+

one accessQUANTITY

0.7+

Ashish Palekar & Cami Tavares | AWS Storage Day 2021


 

(upbeat music) >> Welcome back to theCUBE's continuous coverage of AWS storage day. My name is Dave Vellante and we're here from Seattle. And we're going to look at the really hard workloads, those business and mission critical workloads, the most sensitive data. They're harder to move to the cloud. They're hardened. They have a lot of technical debt. And the blocker in some cases has been storage. Ashish Palekar is here. He's the general manager of EBS snapshots, and he's joined by Cami Tavares who's a senior manager of product management for Amazon EBS. Folks, good to see you. >> Ashish: Good to see you again Dave. >> Dave: Okay, nice to see you again Ashish So first of all, let's start with EBS. People might not be familiar. Everybody knows about S3 is famous, but how are customers using EBS? What do we need to know? >> Yeah, it's super important to get the basics, right? Right, yeah. We have a pretty broad storage portfolio. You talked about S3 and S3 glacier, which are an object and object and archival storage. We have EFS and FSX that cover the file site, and then you have a whole host of data transfer services. Now, when we think about block, we think of a really four things. We think about EBS, which is the system storage for EC2 volumes. When we think about snapshots, which is backups for EBS volumes. Then we think about instant storage, which is really a storage that's directly attached to an instance and manages and then its life cycle is similar to that of an instance. Last but not the least, data services. So things like our elastic volumes capability of fast snapshot restore. So the answer to your question really is EBS is persistent storage for EC2 volumes. So if you've used EC2 instances, you'll likely use EBS volumes. They service boot volumes and they service data volumes, and really cover a wide gamut of workloads from relational databases, no SQL databases, file streaming, media and coding. It really covers the gamut of workloads. >> Dave: So when I heard SAN in the cloud, I laughed out loud. I said, oh, because I could think about a box, a bunch of switches and this complicated network, and then you're turning it into an API. I was like, okay. So you've made some announcements that support SAN in the cloud. What, what can you tell us about? >> Ashish: Yeah, So SANs and for customers and storage, those are storage area networks, really our external arrays that customers buy and connect their performance critical and mission critical workloads. With block storage and with EBS, we got a bunch of customers that came to us and said, I'm thinking about moving those kinds of workloads to the cloud. What do you have? And really what they're looking for and what they were looking for is performance availability and durability characteristics that they would get from their traditional SANs on premises. And so that's what the team embarked on and what we launched at reinvent and then at GEd in July is IO2 block express. And what IO2 block express does is it's a complete ground app, really the invention of our storage product offering and gives customers the same availability, durability, and performance characteristics that can, we'll go into little later about that they're used to in their on premises. The other thing that we realized is that it's not just enough to have a volume. You need an instance that can drive that kind of throughput and IOPS. And so coupled with our trends in EC2 we launched our R5b that now triples the amount of IOPS and throughput that you can get from a single instance to EBS storage. So when you couple the sub millisecond latency, the capacity and the performance that you get from IO2 block express with R5b, what we hear from customers is that gives them enough of the performance availability characteristics and durability characteristics to move their workloads from on premises, into the cloud, for the mission critical and business critical apps. >> Dave: Thank you for that. So Cami when I, if I think about how the prevailing way in which storage works, I drop off a box at the loading dock and then I really don't know what happens. There may be a service organization that's maybe more intimate with the customer, but I don't really see the innovations and the use cases that are applied clouds, different. You know, you live it every day. So you guys always talk about customer inspired innovation. So what are you seeing in terms of how people are using this capability and what innovations they're driving? >> Cami: Yeah, so I think when we look at the EBS portfolio and this, the evolution over the years, you can really see that it was driven by customer need and we have different volume types and they have very specific performance characteristics, and they're built to meet these unique needs of customer workloads. So I'll tell you a little bit about some of our specific volume types to kind of illustrate this evolution over the years. So starting with our general purpose volumes, we have many customers that are using these volumes today. They really are looking for high performance at a low cost, and you have all kinds of transactional workloads and low-latency interactive applications and boot volumes, as Ashish mentioned. And they tell us, the customer is using these general purpose volumes, they tell us that they really like this balanced cost and performance. And customers also told us, listen, I have these more demanding applications that need higher performance. I need more IOPS, more throughput. And so looking at that customer need, we were really talking about these IO intensive applications like SAP HANA and Oracle and databases that require just higher durability. And so we looked at that customer feedback and we launched our provisioned IOPS IO2 volume. And with that volume, you get five nines of durability and four times the IOPS that you would get with general purpose volumes. So it's a really compelling offering. Again, customers came to us and said, this is great. I need more performance, I need more IOPS, more throughput, more storage than I can get with a single IO2 volume. And so these were talking about, you mentioned mission critical applications, SAP HANA, Oracle, and what we saw customers doing often is they were striping together multiple IO2 volumes to get the maximum performance, but very quickly with the most demanding applications, it got to a point where we have more IO2 volumes that you want to manage. And so we took that feedback to heart and we completely reinvented the underlying EBS hardware and the software and networking stacks. And we'll launched block express. With block express, you can get four times the IOPS throughput and storage that you would get with a single io2 volume. So it's a really compelling offering for customers. >> Dave: If I had to go back and ask you, what was the catalyst, what was the sort of business climate that really drove the decision here. Was that people were just sort of fed up with you know, I'll use the phrase, the undifferentiated, heavy lifting around SAN, what was it, was it COVID driven? What was the climate? >> You know, it's important to recognize when we are talking about business climate today, every business is a data business and block storage is really a foundational part of that. And so with SAN in the cloud specifically, we have seen enterprises for several years, buying these traditional hardware arrays for on premises SANs. And it's a very expensive investment. Just this year alone, they're spending over $22 billion on SANs. And with this old model on premises SANs, you would probably spend a lot of time doing this upfront capacity planning, trying to figure out how much storage you might need. And in the end, you'd probably end up overbuying for peak demand because you really don't want to get stuck, not having what you need to scale your business. And so now with block express, you don't have to do that anymore. You pay for what you need today, and then you can increase your storage as your business needs change. So that's cost and cost is a very important factor. But really when we're talking to customers and enterprises that are looking for SAN in the cloud, the number one reason that they want to move to the cloud with their SANs and these mission, critical workloads is agility and speed. And it's really transformational for businesses to be able to change the customer experience for their customers and innovate at a much faster pace. And so with the block express product, you get to do that much faster. You can go from an idea to an implementation orders of magnitude faster. Whereas before if you had these workloads on premises, it would take you several weeks just to get the hardware. And then you have to build all this surrounding infrastructure to get it up and running. Now, you don't have to do that anymore. You get your storage in minutes, and if you change your mind, if your business needs change, if your workloads change, you can modify your EBS volume types without interrupting your workload. >> Dave: Thank you for that. So Cami kind of addressed some of this, but I know store admins say, don't touch my SAN, I'm not moving it. This is a big decision for a lot of people. So kind of a two-part question, you know, why now, what do people need to know? And give us the north star close it out with, with where you see the future. >> Ashish: Yeah, so let's, I'll kick things off and then Cami, do jump in. So first of the volume is one part of the story, right? And with IO2 block express, I think we've given customers an extremely compelling offering to go build their mission critical and business critical applications on. We talked about the instance type R5b in terms of giving that instance level performance, but all this is on the foundation of AWS in terms of availability zones and regions. So you think about the constructs and we talk them in terms of building blocks, but our building blocks are really availability zones and regions. And that gives you that core availability infrastructure that you need to build your mission critical and business critical applications. You then take layer on top of that our regional footprint, right. And now you can spin up those workloads globally, if you need to. And then last but not the least, once you're in AWS, you have access to other services. Be it AI, be it ML, be it our relational database services that you can start to think about undifferentiated, heavy lifting. So really you get the smorgasbord really from the availability footprint to global footprint and all the way up to sort of our service stack that you get access to. >> Dave: So that's really thinking out of the box. We're out of time. Cami we'll give you the last word. >> Cami: I just want to say, if you want to learn more about EBS, there's a deep dive session with our principal engineer, Marc Olson later today. So definitely join that. >> Dave: Folks, thanks so much for coming to theCUBE. (in chorus )Thank you. >> Thank you for watching. Keep it right there for more great content from AWS storage day from Seattle.

Published Date : Sep 2 2021

SUMMARY :

And the blocker in some So first of all, let's start with EBS. and then you have a whole host What, what can you tell us about? that you can get from a single So what are you seeing in And with that volume, you that really drove the decision here. and then you can increase your storage So kind of a two-part question, you know, And that gives you that core Cami we'll give you the last word. if you want to learn more about EBS, much for coming to theCUBE. Thank you for watching.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Ashish PalekarPERSON

0.99+

Dave VellantePERSON

0.99+

AshishPERSON

0.99+

Cami TavaresPERSON

0.99+

Marc OlsonPERSON

0.99+

SeattleLOCATION

0.99+

CamiPERSON

0.99+

AWSORGANIZATION

0.99+

EBSORGANIZATION

0.99+

two-partQUANTITY

0.99+

one partQUANTITY

0.99+

JulyDATE

0.99+

over $22 billionQUANTITY

0.99+

EC2TITLE

0.99+

FSXTITLE

0.99+

EFSTITLE

0.99+

firstQUANTITY

0.99+

EBSTITLE

0.98+

four timesQUANTITY

0.98+

IO2 block expressTITLE

0.97+

OracleORGANIZATION

0.96+

todayDATE

0.94+

five ninesQUANTITY

0.93+

this yearDATE

0.92+

SQLTITLE

0.92+

theCUBEORGANIZATION

0.92+

singleQUANTITY

0.91+

later todayDATE

0.87+

SAP HANATITLE

0.86+

four thingsQUANTITY

0.86+

single instanceQUANTITY

0.85+

R5bOTHER

0.85+

block expressTITLE

0.84+

block expressORGANIZATION

0.76+

S3TITLE

0.75+

Amazon EBSORGANIZATION

0.74+

oneQUANTITY

0.71+

AWS Storage Day 2021EVENT

0.69+

GEdORGANIZATION

0.63+

storage dayEVENT

0.59+

starLOCATION

0.58+

several weeksQUANTITY

0.56+

COVIDOTHER

0.53+

S3COMMERCIAL_ITEM

0.51+

IO2TITLE

0.44+

Duncan Lennox | AWS Storage Day 2021


 

>>Welcome back to the cubes, continuous coverage of AWS storage day. We're in beautiful downtown Seattle in the great Northwest. My name is Dave Vellante and we're going to talk about file systems. File systems are really tricky and making those file systems elastic is even harder. They've got a long history of serving a variety of use cases as with me as Duncan Lennox. Who's the general manager of Amazon elastic file system. Dunkin. Good to see you again, Dave. Good to see you. So tell me more around the specifically, uh, Amazon's elastic file system EFS you, you know, broad file portfolio, but, but let's narrow in on that. What do we need to know? >>Yeah, well, Amazon elastic file system or EFS as we call it is our simple serverless set and forget elastic file system service. So what we mean by that is we deliver something that's extremely simple for customers to use. There's not a lot of knobs and levers. They need to turn or pull to make it work or manage it on an ongoing basis. The serverless part of it is there's absolutely no infrastructure for customers to manage. We handled that entirely for them. The elastic part then is the file system automatically grows and shrinks as they add and delete data. So they never have to provision storage or risk running out of storage and they pay only for the storage they're actually using. >>What are the sort of use cases and workloads that you see EFS supporting? >>Yeah. Yeah. It has to support a broad set of customer workloads. So it's everything from, you know, serial, highly latency, sensitive applications that customers might be running on-prem today and want to move to the AWS cloud up to massively parallel scale-out workloads that they have as well. >>So. Okay. Are there any industry patterns that you see around that? Are there other industries that sort of lean in more or is it more across the board? We >>See it across the board, although I'd have to say that we see a lot of adoption within compliance and regulated industries. And a lot of that is because of not only our simplicity, but the high levels of availability and durability that we bring to the file system as well. The data is designed for 11 nines of durability. So essentially you don't need to be worrying about your anything happening into your data. And it's a regional service meaning that your file system is available from all availability zones in a particular region for high availability. >>So as part of storage data, we, we saw some, some new tiering announcements. W w w what can you tell us about those >>Super excited to be announcing EFS intelligent tiering? And this is a capability that we're bringing to EFS that allows customers to automatically get the best of both worlds and get cost optimization for their workloads and how it works is the customer can select, uh, using our lifecycle management capability, a policy for how long they want their data to remain active in one of our active storage classes, seven days, for example, or 30 days. And what we do is we automatically monitor every access to every file they have. And if we see no access to a file for their policy period, like seven days or 30 days, we automatically and transparently move that file to one of our cost optimized, optimized storage classes. So they can save up to 92% on their storage costs. Um, one of the really cool things about intelligent tiering then is if that data ever becomes active again and their workload or their application, or their users need to access it, it's automatically moved back to a performance optimized storage class, and this is all completely transparent to their applications and users. >>So, so how, how does that work? Are you using some kind of machine intelligence to sort of monitor things and just learn over time? And like, what if I policy, what if I don't get it quite right? Or maybe I have some quarter end or maybe twice a year, you know, I need access to that. Can you, can the system help me figure >>That out? Yeah. The beauty of it is you don't need to know how your application or workload is accessing the file system or worry about those access patterns changing. So we'll take care of monitoring every access to every file and move the file either to the cost optimized storage class or back to the performance optimized class as needed by your application. >>And then optimized storage classes is again, selected by the system. I don't have to >>It that's right. It's completely transparent. So we will take care of that for you. So you'll set the policy by which you want active data to be moved to the infrequent access cost optimized storage class, like 30 or seven days. And then you can set a policy that says if that data is ever touched again, to move it back to the performance optimized storage class. So that's then all happened automatically by the service on our side. You don't need to do anything >>It's, it's it's serverless, which means what I don't have to provision any, any compute infrastructure. >>That's right. What you get is an end point, the ability to Mount your file system using NFS, or you can also manage your file system from any of our compute services in AWS. So not only directly on an instance, but also from our serverless compute models like AWS Lambda and far gays, and from our container services like ECS and EKS, and all of the infrastructure is completely managed by us. You don't see it, you don't need to worry about it. We scale it automatically for you. >>What was the catalyst for all this? I mean, you know, you got to tell me it's customers, but maybe you could give me some, some insight and add some, some color. Like, what would you decoded sort of what the customers were saying? Did you get inputs from a lot of different places, you know, and you had to put that together and shape it. Uh, tell us, uh, take us inside that sort of how you came to where you are >>Today. Well, you know, I guess at the end of the day, when you think about storage and particularly file system storage, customers always want more performance and they want lower costs. So we're constantly optimizing on both of those dimensions. How can we find a way to deliver more value and lower cost to customers, but also meet the performance needs that their workloads have. And what we found in talking to customers, particularly the customers that EFS targets, they are application administrators, their dev ops practitioners, their data scientists, they have a job they want to do. They're not typically storage specialists. They don't want to have know or learn a lot about the bowels of storage architecture, and how to optimize for what their applications need. They want to focus on solving the business problems. They're focused on whatever those are >>You meaning, for instance. So you took tiering is obvious. You're tiering to lower cost storage, serverless. I'm not provisioning, you know, servers, myself, the system I'm just paying for what I use. The elasticity is a factor. So I'm not having to over provision. And I think I'm hearing, I don't have to spend my time turning knobs. You've talked about that before, because I don't know how much time is spent, you know, tuning systems, but it's gotta be at least 15 to 20% of the storage admins time. You're eliminating that as well. Is that what you mean by sort of cost optimum? Absolutely. >>So we're, we're providing the scale of capacity of performance that customer applications need as they needed without the customer needing to know exactly how to configure the service, to get what they need. We're dealing with changing workloads and changing access patterns. And we're optimizing their storage costs. As at the same time, >>When you guys step back, you get to the whiteboard out, say, okay, what's the north star that you're working because you know, you set the north star. You don't want to keep revisiting that, right? This is we're moving in this direction. How do we get there might change, but what's your north star? Where do you see the future? >>Yeah, it's really all about delivering simple file system storage that just works. And that sounds really easy, but there's a lot of nuance and complexity behind it, but customers don't want to have to worry about how it works. They just need it to work. And we, our goal is to deliver that for a super broad cross section of applications so that customers don't need to worry about how they performance tune or how they cost optimize. We deliver that value for them. >>Yeah. So I'm going to actually follow up on that because I feel like, you know, when you listen to Werner Vogels talk, he gives takes you inside. It's a plumbing sometimes. So what is the, what is that because you're right. That it, it sounds simple, but it's not. And as I said up front file systems, getting that right is really, really challenging. So technically what's the challenges, is it doing this at scale? And, and, and, and, and, and having some, a consistent experience for customers, there's >>Always a challenge to doing what we do at scale. I mean, the elasticity is something that we provide to our customers, but ultimately we have to take their data as bits and put them into Adams at some point. So we're managing infrastructure on the backend to support that. And we also have to do that in a way that delivers something that's cost-effective for customers. So there's a balance and a natural tension there between things like elasticity and simplicity, performance, cost, availability, and durability, and getting that balance right. And being able to cover the maximum cross section of all those things. So for the widest set of workloads, we see that as our job and we're delivering value, and we're doing that >>For our customers. Then of course, it was a big part of that. And of course, when we talk about, you know, the taking away the, the need for tuning, but, but you got to get it right. I mean, you, you, you can't, you can't optimize for every single use case. Right. But you can give great granularity to allow those use cases to be supported. And that seems to be sort of the balancing act that you guys so >>Well, absolutely. It's focused on being a general purpose file system. That's going to work for a broad cross section of, of applications and workloads. >>Right. Right. And that's, that's what customers want. You know, generally speaking, you go after that, that metal Dunkin, I'll give you the last word. >>I just encourage people to come and try out EFS it's as simple as a single click in our console to create a file system and get started. So come give it a, try the >>Button Duncan. Thanks so much for coming back to the cube. It's great to see you again. Thanks, Dave. All right. And keep it right there for more great content from AWS storage day from Seattle.

Published Date : Sep 2 2021

SUMMARY :

Good to see you again, Dave. So they never have to provision storage or risk running out of storage and they pay only for the storage they're actually you know, serial, highly latency, sensitive applications that customers might be running on-prem today Are there other industries that sort of lean in more or is it more across the board? So essentially you don't need to be worrying can you tell us about those And if we see no access to a file for their policy period, like seven days or 30 days, twice a year, you know, I need access to that. access to every file and move the file either to the cost optimized storage class or back I don't have to And then you can set a policy that says if that data is ever touched What you get is an end point, the ability to Mount your file system using NFS, I mean, you know, you got to tell me it's customers, but maybe you could give me some, of storage architecture, and how to optimize for what their applications need. Is that what you mean by sort of cost optimum? to get what they need. When you guys step back, you get to the whiteboard out, say, okay, what's the north star that you're working because you know, a super broad cross section of applications so that customers don't need to worry about how they performance So what is the, what is that because you're right. And being able to cover the maximum cross section And that seems to be sort of the balancing act that you guys so That's going to work for a broad cross section that metal Dunkin, I'll give you the last word. I just encourage people to come and try out EFS it's as simple as a single click in our console to create a file It's great to see you again.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DavePERSON

0.99+

SeattleLOCATION

0.99+

Duncan LennoxPERSON

0.99+

seven daysQUANTITY

0.99+

30 daysQUANTITY

0.99+

AmazonORGANIZATION

0.99+

30QUANTITY

0.99+

Werner VogelsPERSON

0.99+

AWSORGANIZATION

0.99+

TodayDATE

0.99+

11 ninesQUANTITY

0.98+

bothQUANTITY

0.98+

up to 92%QUANTITY

0.97+

both worldsQUANTITY

0.97+

DunkinPERSON

0.97+

oneQUANTITY

0.97+

todayDATE

0.97+

twice a yearQUANTITY

0.94+

20%QUANTITY

0.93+

AWSEVENT

0.92+

single clickQUANTITY

0.88+

single use caseQUANTITY

0.85+

LambdaTITLE

0.77+

ECSTITLE

0.71+

dayEVENT

0.7+

EFSTITLE

0.65+

Storage Day 2021EVENT

0.61+

northORGANIZATION

0.6+

DuncanPERSON

0.59+

north starORGANIZATION

0.57+

at least 15QUANTITY

0.56+

EKSTITLE

0.53+

AdamsPERSON

0.43+

Wayne Duso | AWS Storage Day 2021


 

(Upbeat intro music) >> Thanks guys. Hi everybody. Welcome back to The Spheres. My name is Dave Vellante and you're watching theCubes continuous coverage of AWS storage day. I'm really excited to bring on Wayne Duso. Wayne is the vice-president of AWS Storage Edge and Data Governance Services. Wayne, two Boston boys got to come to Seattle to see each other. You know. Good to see you, man. >> Good to see you too. >> I mean, I'm not really from Boston. The guys from East Boston give me crap for saying that. [Wayne laughs] That my city, right? You're a city too. >> It's my city as well I'm from Charlestown so right across the ocean. >> Charlestown is actually legit Boston, you know I grew up in a town outside, but that's my city. So all the sports fan. So, hey great keynote today. We're going to unpack the keynote and, and really try to dig into it a little bit. You know, last 18 months has been a pretty bizarre, you know, who could have predicted this. We were just talking to my line about, you know, some of the permanent changes and, and even now it's like day to day, you're trying to figure out, okay, you know, what's next, you know, our business, your business. But, but clearly this has been an interesting time to say the least and the tailwind for the Cloud, but let's face it. How are customers responding? How are they changing their strategies as a result? >> Yeah. Well, first off, let me say it's good to see you. It's been years since we've been in chairs across from one another. >> Yeah. A couple of years ago in Boston, >> A couple of years ago in Boston. I'm glad to see you're doing well. >> Yeah. Thanks. You too. >> You look great. (Wayne Laughs) >> We get the Sox going. >> We'll be all set. >> Mm Dave you know, the last 18 months have been challenging. There's been a lot of change, but it's also been inspiring. What we've seen is our customers engaging the agility of the Cloud and appreciating the cost benefits of the Cloud. You know, during this time we've had to be there for our partners, our clients, our customers, and our people, whether it's work from home, whether it's expanding your capability, because it's surging say a company like zoom, where they're surging and they need more capability. Our cloud capabilities have allowed them to function, grow and thrive. In these challenging times. It's really a privilege that we have the services and we have the capability to enable people to behave and, execute and operate as normally as you possibly can in something that's never happened before in our lifetimes. It's unprecedented. It's a privilege. >> Yeah. I mean, I agree. You think about it. There's a lot of negative narrative, in the press about, about big tech and, and, and, you know, the reality is, is big tech has, has stood and small tech has stepped up big time and we were really think about it, Wayne, where would we be without, without tech? And I know it sounds bizarre, but we're kind of lucky. This pandemic actually occurred when it did, because had it occurred, you know, 10 years ago it would have been a lot tougher. I mean, who knows the state of vaccines, but certainly from a tech standpoint, the Cloud has been a savior. You've mentioned Zoom. I mean, you know, we, productivity continues. So that's been, been pretty key. I want to ask you, in you keynote, you talked about two paths to, to move to the Cloud, you know, Vector one was go and kind of lift and shift if I got it right. And then vector two was modernized first and then go, first of all, did I get that right? And >> Super close and >> So help me course correct. And what are those, what are those two paths mean for customers? How should we think about that? >> Yeah. So we want to make sure that customers can appreciate the value of the Cloud as quickly as they need to. And so there's, there's two paths and with not launches and, we'll talk about them in a minute, like our FSX for NetApp ONTAP, it allows customers to quickly move from like to like, so they can move from on-prem and what they're using in terms of the storage services, the processes they use to administer the data and manage the data straight onto AWS, without any conversion, without any change to their application. So I don't change to anything. So storage administrators can be really confident that they can move. Application Administrators know it will work as well, if not better with the Cloud. So moving onto AWS quickly to value that's one path. Now, once they move on to AWS, some customers will choose to modernize. So they will, they will modernize by containerizing their applications, or they will modernize by moving to server-less using Lambda, right? So that gives them the opportunity at the pace they want as quickly or as cautiously as they need to modernize their application, because they're already executing, they're already operating already getting value. Now within that context, then they can continue that modernization process by integrating with even more capabilities, whether it's ML capabilities or IOT capabilities, depending on their needs. So it's really about speed agility, the ability to innovate, and then the ability to get that flywheel going with cost optimization, feed those savings back into betterment for their customers. >> So how did the launches that you guys have made today and even, even previously, do they map into those two paths? >> Yeah, they do very well. >> How so? Help us understand that. >> So if we look, let's just run down through some of the launches today, >> Great. >> And we can, we can map those two, those two paths. So like we talked about FSX for NetApp ONTAP, or we just like to say FSX for ONTAP because it's so much easier to say. [Dave laughs] >> So FSX for ONTAP is a clear case of move. >> Right >> EBS io2 Block Express for Sand, a clear case of move. It allows customers to quickly move their sand workloads to AWS, with the launch of EBS direct API, supporting 64 terabyte volumes. Now you can snapshot your 64 terabyte volumes on-prem to already be in AWS, and you can restore them to an EBS io2 Block Express volume, allowing you to quickly move an ERP application or an Oracle application. Some enterprise application that requires the speed, the durability and the capability of VBS super quickly. So that's, those are good examples of, of that. In terms of the modernization path, our launch of AWS transfer managed workflows is a good example of that. Manage workflows have been around forever. >> Dave: Yeah. >> And, and customers rely on those workflows to run their business, but they really want to be able to take advantage of cloud capabilities. They want to be able to, for instance, apply ML to those workflows because it really kind of makes sense that their workloads are people related. You can apply artificial intelligence to them, >> Right >> This is an example of a service that allows them to modify those workflows, to modernize them and to build additional value into them. >> Well. I like that example. I got a couple of followup questions, if I may. Sticking on the machine learning and machine intelligence for a minute. That to me is a big one because when I was talking to my line about this is this, it's not just you sticking storage in a bucket anymore, right? You're invoking other services: machine intelligence, machine learning, might be database services, whatever it is, you know, streaming services. And it's a service, you know, there it is. It's not a real complicated integration. So that to me is big. I want to ask you about the block side of things >> Wayne: Sure >> You built in your day, a lot of boxes. >> Wayne: I've built a lot of boxes. >> And you know, the Sand space really well. >> Yeah. >> And you know, a lot of people probably more than I do storage admins that say you're not touching my Sand, right? And they just build a brick wall around it. Okay. And now eventually it ages out. And I think, you know, that whole cumbersome model it's understood, but nonetheless, their workloads and our apps are running on that. How do you see that movement from those and they're the toughest ones to move. The Oracle, the SAP they're really, you know, mission critical Microsoft apps, the database apps, hardcore stuff. How do you see that moving into the Cloud? Give us a sense as to what customers are telling you. >> Storage administrators have a hard job >> Dave: Yeah >> And trying to navigate how they move from on-prem to in Cloud is challenging. So we listened to the storage administrators, even when they tell us, No. we want to understand why no. And when you look at EBS io2 Block Express, this is in part our initial response to moving their saying into the Cloud super easily. Right? Because what do they need? They need performance. They need their ability. They need availability. They need the services to be able to snap and to be able to replicate their Capa- their storage. They need to know that they can move their applications without having to redo all they know to re-plan all they work on each and every day. They want to be able to move quickly and confidently. EBS io2 Block Express is the beginning of that. They can move confidently to sand in the Cloud using EBS. >> Well, so why do they say 'no'? Is it just like the inherent fear? Like a lawyer would say, don't do that, you know, don't or is it just, is it, is it a technical issue? Is it a cultural issue? And what are you seeing there? >> It's a cultural issue. It's a mindset issue, but it's a responsibility. I mean, these folks are responsible for the, one of the most important assets that you have. Most important asset for any company is people. Second most important asset is data. These folks are responsible for a very important asset. And if they don't get it right, if they don't get security, right. They don't get performance right. They don't get durability right. They don't get availability right. It's on them. So it's on us to make sure they're okay. >> Do you see it similar to the security discussion? Because early on, I was just talking to Sandy Carter about this and we were saying, you remember the CIA deal? Right? So I remember talking to the financial services people said, we'll never put any data in the Cloud. Okay they got to be one of your biggest industries, if not your biggest, you know customer base today. But there was fear and, and the CIA deal changed that. They're like, wow CIA is going to the Cloud They're really security conscious. And that was an example of maybe public sector informing commercial. Do you see it as similar? I mean there's obviously differences, but is it a sort of similar dynamic? >> I do. I do. You know, all of these ilities right. Whether it's, you know, durability, availability, security, we'll put ility at the end of that somehow. All of these are not jargon words. They mean something to each persona, to each customer. So we have to make sure that we address each of them. So like security. And we've been addressing the security concern since the beginning of AWS, because security is job number one. And operational excellence job number two. So, a lot of things we're talking about here is operational excellence, durability, availability, likeness are all operational concerns. And we have to make sure we deliver against those for our customers. >> I get it. I mean, the storage admins job is thankless, but the same time, you know, if your main expertise is managing LUNs, your growth path is limited. So they, they want to transform. They want to modernize their own careers. >> I love that. >> It's true. Right? I mean it's- >> Yeah. Yeah. So, you know, if you're a storage administrator today, understanding the storage portfolio that AWS delivers will allow you, and it will enable you empower you to be a cloud storage administrator. So you have no worry because you're, let's take FSX for ONTAP. You will take the skills that you've developed and honed over years and directly apply them to the workloads that you will bring to the Cloud. Using the same CLIs, The same APIs, the same consoles, the same capabilities. >> Plus you mentioned you guys announced, you talked about AWS backup services today, announced some stuff there. I see security governance, backup, identity access management, and governance. These are all adjacency. So if you're a, if you're a cloud storage administrator, you now are going to expand your scope of operations. You, you know, you're not going to be a security, Wiz overnight by any means, but you're now part of that, that rubric. And you're going to participate in that opportunity and learn some things and advance your career. I want to ask you, before we run out of time, you talked about agility and cost optimization, and it's kind of the yin and the yang of Cloud, if you will. But how are these seemingly conflicting forces in sync in your view. >> Like many things in life, right? [Wayne Laughs] >> We're going to get a little spiritually. >> We might get a little philosophical here. [Dave Laughs] >> You know, cloud announced, we've talked about two paths and in part of the two paths is enabling you to move quickly and be agile in how you move to the Cloud. Once you are on the Cloud, we have the ability through all of the service integrations that we have. In your ability to see exactly what's happening at every moment, to then cost optimize, to modernize, to cost optimize, to improve on the applications and workloads and data sets that you've brought. So this becomes a flywheel cost optimization allows you to reinvest, reinvest, be more agile, more innovative, which again, returns a value to your business and value to your customers. It's a flywheel effect. >> Yeah. It's kind of that gain sharing. Right? >> It is. >> And, you know, it's harder to do that in a, in an on-prem world, which everything is kind of, okay, it's working. Now boom, make it static. Oh, I want to bring in this capability or this, you know, AI. And then there's an integration challenge >> That's true. >> Going on. Not, not that there's, you know, there's differences in, APIs. But that's, to me is the opportunity to build on top of it. I just, again, talking to my line, I remember Andy Jassy saying, Hey, we purposefully have created our services at a really atomic level so that we can get down to the primitives and change as the market changes. To me, that's an opportunity for builders to create abstraction layers on top of that, you know, you've kind of, Amazon has kind of resisted that over the years, but, but almost on purpose. There's some of that now going on specialization and maybe certain industry solutions, but in general, your philosophy is to maintain that agility at the really granular level. >> It is, you know, we go back a long way. And as you said, I've built a lot of boxes and I'm proud of a lot of the boxes I've built, but a box is still a box, right? You have constraints. And when you innovate and build on the Cloud, when you move to the Cloud, you do not have those constraints, right? You have the agility, you can stand up a file system in three seconds, you can grow it and shrink it whenever you want. And you can delete it, get rid of it whenever you want back it up and then delete it. You don't have to worry about your infrastructure. You don't have to worry about is it going to be there in three months? It will be there in three seconds. So the agility of each of these services, the unique elements of all of these services allow you to capitalize on their value, use what you need and stop using it when you don't, and you don't have the same capabilities when you use more traditional products. >> So when you're designing a box, how is your mindset different than when you're designing a service? >> Well. You have physical constraints. You have to worry about the physical resources on that device for the life of that device, which is years. Think about what changes in three or five years. Think about the last two years alone and what's changed. Can you imagine having been constrained by only having boxes available to you during this last two years versus having the Cloud and being able to expand or contract based on your business needs, that would be really tough, right? And it has been tough. And that's why we've seen customers for every industry accelerate their use of the Cloud during these last two years. >> So I get that. So what's your mindset when you're building storage services and data services. >> So. Each of the surfaces that we have in object block file, movement services, data services, each of them provides very specific customer value and each are deeply integrated with the rest of AWS, so that when you need object services, you start using them. The integrations come along with you. When, if you're using traditional block, we talked about EBS io2 Block Express. When you're using file, just the example alone today with ONTAP, you know, you get to use what you need when you need it, and the way that you're used to using it without any concerns. >> (Dave mumbles) So your mindset is how do I exploit all these other services? You're like the chef and these are ingredients that you can tap and give a path to your customers to explore it over time. >> Yeah. Traditionally, for instance, if you were to have a filer, you would run multiple applications on that filer you're worried about. Cause you should, as a storage administrator, will each of those applications have the right amount of resources to run at peak. When you're on the Cloud, each of those applications will just spin up in seconds, their own file system. And those file systems can grow and shrink at whatever, however they need to do so. And you don't have to worry about one application interfering with the other application. It's not your concern anymore. And it's not really that fun to do. Anyway. It's kind of the hard work that nobody really you know, really wants to reward you for. So you can take your time and apply it to more business generate, you know, value for your business. >> That's great. Thank you for that. Okay. I'll I'll give you the last word. Give us the bumper sticker on AWS Storage day. Exciting day. The third AWS storage day. You guys keep getting bigger, raising the bar. >> And we're happy to keep doing it with you. >> Awesome. >> So thank you for flying out from Boston to see me. >> Pleasure, >> As they say. >> So, you know, this is a great opportunity for us to talk to customers, to thank them. It's a privilege to build what we build for customers. You know, our customers are leaders in their organizations and their businesses for their customers. And what we want to do is help them continue to be leaders and help them to continue to build and deliver we're here for them. >> Wayne. It's great to see you again. Thanks so much. >> Thanks. >> Maybe see you back at home. >> All right. Go Sox. All right. Yeah, go Sox. [Wayne Laughs] All right. Thank you for watching everybody. Back to Jenna Canal and Darko in the studio. Its Dave Volante. You're watching theCube. [Outro Music]

Published Date : Sep 2 2021

SUMMARY :

I'm really excited to bring on Wayne Duso. I mean, I'm not really from Boston. right across the ocean. you know, our business, your business. it's good to see you. I'm glad to see you're doing well. You too. You look great. have the capability to I mean, you know, we, And what are those, the ability to innovate, How so? because it's so much easier to say. So FSX for ONTAP is and you can restore them to for instance, apply ML to those workflows that allows them to And it's a service, you know, And you know, the And I think, you know, They need the services to be able to that you have. I remember talking to the Whether it's, you know, but the same time, you know, I mean it's- to the workloads that you and it's kind of the yin and the yang We're going to get We might get a little and in part of the two paths is that gain sharing. or this, you know, AI. Not, not that there's, you know, and you don't have the same capabilities having boxes available to you So what's your mindset so that when you need object services, and give a path to your have the right amount of resources to run I'll I'll give you the last word. And we're happy to So thank you for flying out and help them to continue to build It's great to see you again. Thank you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

WaynePERSON

0.99+

SeattleLOCATION

0.99+

DavePERSON

0.99+

AmazonORGANIZATION

0.99+

Andy JassyPERSON

0.99+

AWSORGANIZATION

0.99+

BostonLOCATION

0.99+

threeQUANTITY

0.99+

Wayne LaughsPERSON

0.99+

Dave VolantePERSON

0.99+

Sandy CarterPERSON

0.99+

Wayne DusoPERSON

0.99+

CIAORGANIZATION

0.99+

Dave LaughsPERSON

0.99+

two pathsQUANTITY

0.99+

one pathQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

LambdaTITLE

0.99+

CharlestownLOCATION

0.99+

eachQUANTITY

0.99+

East BostonLOCATION

0.99+

FSXTITLE

0.99+

twoQUANTITY

0.99+

OracleORGANIZATION

0.99+

three secondsQUANTITY

0.99+

DarkoPERSON

0.99+

five yearsQUANTITY

0.99+

SecondQUANTITY

0.99+

each customerQUANTITY

0.99+

64 terabyteQUANTITY

0.99+

EBSORGANIZATION

0.99+

EachQUANTITY

0.99+

each personaQUANTITY

0.99+

three monthsQUANTITY

0.98+

10 years agoDATE

0.98+

todayDATE

0.98+

Dave mumblesPERSON

0.98+

oneQUANTITY

0.98+

one applicationQUANTITY

0.97+

Jenna CanalPERSON

0.97+

SAPORGANIZATION

0.97+

SoxORGANIZATION

0.96+

pandemicEVENT

0.95+

firstQUANTITY

0.95+

thirdQUANTITY

0.95+

Mai Lan Tomsen Bukovec | AWS Storage Day 2021


 

(pensive music) >> Thank you, Jenna, it's great to see you guys and thank you for watching theCUBE's continuous coverage of AWS Storage Day. We're here at The Spheres, it's amazing venue. My name is Dave Vellante. I'm here with Mai-Lan Tomsen Bukovec who's Vice President of Block and Object Storage. Mai-Lan, always a pleasure to see you. Thanks for coming on. >> Nice to see you, Dave. >> It's pretty crazy, you know, this is kind of a hybrid event. We were in Barcelona a while ago, big hybrid event. And now it's, you know, it's hard to tell. It's almost like day-to-day what's happening with COVID and some things are permanent. I think a lot of things are becoming permanent. What are you seeing out there in terms of when you talk to customers, how are they thinking about their business, building resiliency and agility into their business in the context of COVID and beyond? >> Well, Dave, I think what we've learned today is that this is a new normal. These fluctuations that companies are having and supply and demand, in all industries all over the world. That's the new normal. And that has what, is what has driven so much more adoption of cloud in the last 12 to 18 months. And we're going to continue to see that rapid migration to the cloud because companies now know that in the course of days and months, you're, the whole world of your expectations of where your business is going and where, what your customers are going to do, that can change. And that can change not just for a year, but maybe longer than that. That's the new normal. And I think companies are realizing it and our AWS customers are seeing how important it is to accelerate moving everything to the cloud, to continue to adapt to this new normal. >> So storage historically has been, I'm going to drop a box off at the loading dock and, you know, have a nice day. And then maybe the services team is involved in, in a more intimate way, but you're involved every day. So I'm curious as to what that permanence, that new normal, some people call it the new abnormal, but it's the new normal now, what does that mean for storage? >> Dave, in the course of us sitting here over the next few minutes, we're going to have dozens of deployments go out all across our AWS storage services. That means our customers that are using our file services, our transfer services, block and object services, they're all getting improvements as we sit here and talk. That is such a fundamentally different model than the one that you talked about, which is the appliance gets dropped off at the loading dock. It takes a couple months for it to get scheduled for setup and then you have to do data migration to get the data on the new appliance. Meanwhile, we're sitting here and customers storage is just improving, under the hood and in major announcements, like what we're doing today. >> So take us through the sort of, let's go back, 'cause I remember vividly when, when S3 was announced that launched this cloud era and people would, you know, they would do a lot of experimentation of, we were storing, you know, maybe gigabytes, maybe even some terabytes back then. And, and that's evolved. What are you seeing in terms of how people are using data? What are the patterns that you're seeing today? How is that different than maybe 10 years ago? >> I think what's really unique about AWS is that we are the only provider that has been operating at scale for 15 years. And what that means is that we have customers of all sizes, terabytes, petabytes, exabytes, that are running their storage on AWS and running their applications using that storage. And so we have this really unique position of being able to observe and work with customers to develop what they need for storage. And it really breaks down to three main patterns. The first one is what I call the crown jewels, the crown jewels in the cloud. And that pattern is adopted by customers who are looking at the core mission of their business and they're saying to themselves, I actually can't scale this core mission on on-premises. And they're choosing to go to the cloud on the most important thing that their business does because they must, they have to. And so, a great example of that is FINRA, the regulatory body of the US stock exchanges, where, you know, a number of years ago, they took a look at all the data silos that were popping up across their data centers. They were looking at the rate of stock transactions going up and they're saying, we just can't keep up. Not if we want to follow the mission of being the watchdog for consumers, for transactions, for stock transactions. And so they moved that crown jewel of their application to AWS. And what's really interesting Dave, is, as you know, 'cause you've talked to many different companies, it's not technology that stops people from moving to the cloud as quick as they want to, it's culture, it's people, it's processes, it's how businesses work. And when you move the crown jewels into the cloud, you are accelerating that cultural change and that's certainly what FINRA saw. Second thing we see, is where a company will pick a few cloud pilots. We'll take a couple of applications, maybe one or a several across the organization and they'll move that as sort of a reference implementation to the cloud. And then the goal is to try to get the people who did that to generalize all the learning across the company. That is actually a really slow way to change culture. Because, as many of us know, in large organizations, you know, you have, you have some resistance to other organizations changing culture. And so that cloud pilot, while it seems like it would work, it seems logical, it's actually counter-productive to a lot of companies that want to move quickly to the cloud. And the third example is what I think of as new applications or cloud first, net new. And that pattern is where a company or a startup says all new technology initiatives are on the cloud. And we see that for companies like McDonald's, which has transformed their drive up experience by dynamically looking at location orders and providing recommendations. And we see it for the Digital Athlete, which is what the NFL has put together to dynamically take data sources and build these models that help them programmatically simulate risks to player health and put in place some ways to predict and prevent that. But those are the three patterns that we see so many customers falling into depending on what their business wants. >> I like that term, Digital Athlete, my business partner, John Furrier, coined the term tech athlete, you know, years ago on theCUBE. That third pattern seems to me, because you're right, you almost have to shock the system. If you just put your toe in the water, it's going to take too long. But it seems like that third pattern really actually de-risks it in a lot of cases, it's so it's said, people, who's going to argue, oh, the new stuff should be in the cloud. And so, that seems to me to be a very sensible way to approach that, that blocker, if you will, what are your thoughts on that? >> I think you're right, Dave. I think what it does is it allows a company to be able to see the ideas and the technology and the cultural change of cloud in different parts of the organization. And so rather than having a, one group that's supposed to generalize it across an organization, you get it decentralized and adopted by different groups and the culture change just goes faster. >> So you, you bring up decentralization and there's a, there's an emerging trend referred to as a data mesh. It was, it was coined, the term coined by Zhamak Dehghani, a very thought-provoking individual. And the concept is basically the, you know, data is decentralized, and yet we have this tendency to sort of shove it all into, you know, one box or one container, or you could say one cloud, well, the cloud is expanding, it's the cloud is, is decentralizing in many ways. So how do you see data mesh fitting in to those patterns? >> We have customers today that are taking the data mesh architectures and implementing them with AWS services. And Dave, I want to go back to the start of Amazon, when Amazon first began, we grew because the Amazon technologies were built in microservices. Fundamentally, a data mesh is about separation or abstraction of what individual components do. And so if I look at data mesh, really, you're talking about two things, you're talking about separating the data storage and the characteristics of data from the data services that interact and operate on that storage. And with data mesh, it's all about making sure that the businesses, the decentralized business model can work with that data. Now our AWS customers are putting their storage in a centralized place because it's easier to track, it's easier to view compliance and it's easier to predict growth and control costs. But, we started with building blocks and we deliberately built our storage services separate from our data services. So we have data services like Lake Formation and Glue. We have a number of these data services that our customers are using to build that customized data mesh on top of that centralized storage. So really, it's about at the end of the day, speed, it's about innovation. It's about making sure that you can decentralize and separate your data services from your storage so businesses can go faster. >> But that centralized storage is logically centralized. It might not be physically centralized, I mean, we put storage all over the world, >> Mai-Lan: That's correct. >> right? But, but we, to the developer, it looks like it's in one place. >> Mai-Lan: That's right. >> Right? And so, so that's not antithetical to the concept of a data mesh. In fact, it fits in perfectly to the point you were making. I wonder if we could talk a little bit about AWS's storage strategy and it started of course, with, with S3, and that was the focus for years and now of course EBS as well. But now we're seeing, we heard from Wayne this morning, the portfolio is expanding. The innovation is, is accelerating that flywheel that we always talk about. How would you characterize and how do you think about AWS's storage strategy per se? >> We are a dynamically and constantly evolving our AWS storage services based on what the application and the customer want. That is fundamentally what we do every day. We talked a little bit about those deployments that are happening right now, Dave. That is something, that idea of constant dynamic evolution just can't be replicated by on-premises where you buy a box and it sits in your data center for three or more years. And what's unique about us among the cloud services, is again that perspective of the 15 years where we are building applications in ways that are unique because we have more customers and we have more customers doing more things. So, you know, I've said this before. It's all about speed of innovation Dave, time and change wait for no one. And if you're a business and you're trying to transform your business and base it on a set of technologies that change rapidly, you have to use AWS services. Let's, I mean, if you look at some of the launches that we talk about today, and you think about S3's multi-region access points, that's a fundamental change for customers that want to store copies of their data in any number of different regions and get a 60% performance improvement by leveraging the technology that we've built up over, over time, leveraging the, the ability for us to route, to intelligently route a request across our network. That, and FSx for NetApp ONTAP, nobody else has these capabilities today. And it's because we are at the forefront of talking to different customers and that dynamic evolution of storage, that's the core of our strategy. >> So Andy Jassy used to say, oftentimes, AWS is misunderstood and you, you comfortable with that. So help me square this circle 'cause you talked about things you couldn't do on on-prem, and yet you mentioned the relationship with NetApp. You think, look at things like Outposts and Local Zones. So you're actually moving the cloud out to the edge, including on-prem data centers. So, so how do you think about hybrid in that context? >> For us, Dave, it always comes back to what the customer's asking for. And we were talking to customers and they were talking about their edge and what they wanted to do with it. We said, how are we going to help? And so if I just take S3 for Outposts, as an example, or EBS and Outposts, you know, we have customers like Morningstar and Morningstar wants Outposts because they are using it as a step in their journey to being on the cloud. If you take a customer like First Abu Dhabi Bank, they're using Outposts because they need data residency for their compliance requirements. And then we have other customers that are using Outposts to help, like Dish, Dish Networks, as an example, to place the storage as close as account to the applications for low latency. All of those are customer driven requirements for their architecture. For us, Dave, we think in the fullness of time, every customer and all applications are going to be on the cloud, because it makes sense and those businesses need that speed of innovation. But when we build things like our announcement today of FSx for NetApp ONTAP, we build them because customers asked us to help them with their journey to the cloud, just like we built S3 and EBS for Outposts for the same reason. >> Well, when you say over time, you're, you believe that all workloads will be on the cloud, but the cloud is, it's like the universe. I mean, it's expanding. So what's not cloud in the future? When you say on the cloud, you mean wherever you meet customers with that cloud, that includes Outposts, just the programming, it's the programmability of that model, is that correct? That's it, >> That's right. that's what you're talking about? >> In fact, our S3 and EBS Outposts customers, the way that they look at how they use Outposts, it's either as part of developing applications where they'll eventually go the cloud or taking applications that are in the cloud today in AWS regions and running them locally. And so, as you say, this definition of the cloud, you know, it, it's going to evolve over time. But the one thing that we know for sure, is that AWS storage and AWS in general is going to be there one or two steps ahead of where customers are, and deliver on what they need. >> I want to talk about block storage for a moment, if I can, you know, you guys are making some moves in that space. We heard some announcements earlier today. Some of the hardest stuff to move, whether it's cultural or maybe it's just hardened tops, maybe it's, you know, governance edicts, or those really hardcore mission critical apps and workloads, whether it's SAP stuff, Oracle, Microsoft, et cetera. You're clearly seeing that as an opportunity for your customers and in storage in some respects was a blocker previously because of whatever, latency, et cetera, then there's still some, some considerations there. How do you see those workloads eventually moving to the cloud? >> Well, they can move now. With io2 Block Express, we have the performance that those high-end applications need and it's available today. We have customers using them and they're very excited about that technology. And, you know, again, it goes back to what I just said, Dave, we had customers saying, I would like to move my highest performing applications to the cloud and this is what I need from the, from the, the storage underneath them. And that's why we built io2 Block Express and that's how we'll continue to evolve io2 Block Express. It is the first SAN technology in the cloud, but it's built on those core principles that we talked about a few minutes ago, which is dynamically evolving and capabilities that we can add on the fly and customers just get the benefit of it without the cost of migration. >> I want to ask you about, about just the storage, how you think about storage in general, because typically it's been a bucket, you know, it's a container, but it seems, I always say the next 10 years aren't going to be like the last, it seems like, you're really in the data business and you're bringing in machine intelligence, you're bringing in other database technology, this rich set of other services to apply to the data. That's now, there's a lot of data in the cloud and so we can now, whether it's build data products, build data services. So how do you think about the business in that sense? It's no longer just a place to store stuff. It's actually a place to accelerate innovation and build and monetize for your customers. How do you think about that? >> Our customers use the word foundational. Every time they talk about storage, they say for us, it's foundational, and Dave, that's because every business is a data business. Every business is making decisions now on this changing landscape in a world where the new normal means you cannot predict what's going to happen in six months, in a year. And the way that they're making those smart decisions is through data. And so they're taking the data that they have in our storage services and they're using SageMaker to build models. They're, they're using all kinds of different applications like Lake Formation and Glue to build some of the services that you're talking about around authorization and data discovery, to sit on top of the data. And they're able to leverage the data in a way that they have never been able to do before, because they have to. That's what the business world demands today, and that's what we need in the new normal. We need the flexibility and the dynamic foundational storage that we provide in AWS. >> And you think about the great data companies, those were the, you know, trillions in the market cap, their data companies, they put data at their core, but that doesn't mean they shove all the data into a centralized location. It means they have the identity access capabilities, the governance capabilities to, to enable data to be used wherever it needs to be used and, and build that future. That, exciting times we're entering here, Mai-Lan. >> We're just set the start, Dave, we're just at the start. >> Really, what ending do you think we have? So, how do you think about Amazon? It was, it's not a baby anymore. It's not even an adolescent, right? You guys are obviously major player, early adulthood, day one, day zero? (chuckles) >> Dave, we don't age ourself. I think if I look at where we're going for AWS, we are just at the start. So many companies are moving to the cloud, but we're really just at the start. And what's really exciting for us who work on AWS storage, is that when we build these storage services and these data services, we are seeing customers do things that they never thought they could do before. And it's just the beginning. >> I think the potential is unlimited. You mentioned Dish before, I mean, I see what they're doing in the cloud for Telco. I mean, Telco Transformation, that's an industry, every industry, there's a transformation scenario, a disruption scenario. Healthcare has been so reluctant for years and that's happening so quickly, I mean, COVID's certainly accelerating that. Obviously financial services have been super tech savvy, but they're looking at the Fintech saying, okay, how do we play? I mean, there isn't manufacturing with EV. >> Mai-Lan: Government. >> Government, totally. >> It's everywhere, oil and gas. >> There isn't a single industry that's not a digital industry. >> That's right. >> And there's implications for everyone. And it's not just bits and atoms anymore, the old Negroponte, although Nicholas, I think was prescient because he's, he saw this coming, it really is fundamental. Data is fundamental to every business. >> And I think you want, for all of those in different industries, you want to pick the provider where innovation and invention is in our DNA. And that is true, not just for storage, but AWS, and that is driving a lot of the changes you have today, but really what's coming in the future. >> You're right. It's the common editorial factors. It's not just the, the storage of the data. It's the ability to apply other technologies that map into your business process, that map into your organizational skill sets that drive innovation in whatever industry you're in. It's great Mai-Lan, awesome to see you. Thanks so much for coming on theCUBE. >> Great seeing you Dave, take care. >> All right, you too. And keep it right there for more action. We're going to now toss it back to Jenna, Canal and Darko in the studio. Guys, over to you. (pensive music)

Published Date : Sep 2 2021

SUMMARY :

it's great to see you guys And now it's, you know, it's hard to tell. in the last 12 to 18 months. the loading dock and, you know, than the one that you talked about, and people would, you know, and they're saying to themselves, coined the term tech athlete, you know, and the cultural change of cloud And the concept is and it's easier to predict But that centralized storage it looks like it's in one place. to the point you were making. is again that perspective of the 15 years the cloud out to the edge, in the fullness of time, it's the programmability of that's what you're talking about? definition of the cloud, you know, Some of the hardest stuff to move, and customers just get the benefit of it lot of data in the cloud and the dynamic foundational and build that future. We're just set the start, Dave, So, how do you think about Amazon? And it's just the beginning. doing in the cloud for Telco. It's everywhere, that's not a digital industry. Data is fundamental to every business. the changes you have today, It's the ability to Great seeing you Dave, Jenna, Canal and Darko in the studio.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

JennaPERSON

0.99+

Dave VellantePERSON

0.99+

AWSORGANIZATION

0.99+

TelcoORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

threeQUANTITY

0.99+

FINRAORGANIZATION

0.99+

Andy JassyPERSON

0.99+

oneQUANTITY

0.99+

John FurrierPERSON

0.99+

BarcelonaLOCATION

0.99+

NicholasPERSON

0.99+

60%QUANTITY

0.99+

Mai-LanPERSON

0.99+

Zhamak DehghaniPERSON

0.99+

15 yearsQUANTITY

0.99+

MicrosoftORGANIZATION

0.99+

NFLORGANIZATION

0.99+

MorningstarORGANIZATION

0.99+

McDonald'sORGANIZATION

0.99+

WaynePERSON

0.99+

OracleORGANIZATION

0.99+

third exampleQUANTITY

0.99+

First Abu Dhabi BankORGANIZATION

0.99+

three patternsQUANTITY

0.99+

two thingsQUANTITY

0.99+

Lake FormationORGANIZATION

0.99+

third patternQUANTITY

0.99+

two stepsQUANTITY

0.99+

10 years agoDATE

0.99+

six monthsQUANTITY

0.98+

GlueORGANIZATION

0.98+

one boxQUANTITY

0.98+

Mai-Lan Tomsen BukovecPERSON

0.98+

one containerQUANTITY

0.98+

first oneQUANTITY

0.98+

DarkoPERSON

0.97+

todayDATE

0.97+

firstQUANTITY

0.97+

EBSORGANIZATION

0.97+

Second thingQUANTITY

0.96+

NetAppTITLE

0.96+

S3TITLE

0.95+

Telco TransformationORGANIZATION

0.95+

BlockORGANIZATION

0.94+

FintechORGANIZATION

0.94+

years agoDATE

0.93+

a yearQUANTITY

0.92+

Ed Naim & Anthony Lye | AWS Storage Day 2021


 

(upbeat music) >> Welcome back to AWS storage day. This is the Cubes continuous coverage. My name is Dave Vellante, and we're going to talk about file storage. 80% of the world's data is in unstructured storage. And most of that is in file format. Devs want infrastructure as code. They want to be able to provision and manage storage through an API, and they want that cloud agility. They want to be able to scale up, scale down, pay by the drink. And the big news of storage day was really the partnership, deep partnership between AWS and NetApp. And with me to talk about that as Ed Naim, who's the general manager of Amazon FSX and Anthony Lye, executive vice president and GM of public cloud at NetApp. Two Cube alums. Great to see you guys again. Thanks for coming on. >> Thanks for having us. >> So Ed, let me start with you. You launched FSX 2018 at re-invent. How has it being used today? >> Well, we've talked about MSX on the Cube before Dave, but let me start by recapping that FSX makes it easy to, to launch and run fully managed feature rich high performance file storage in the cloud. And we built MSX from the ground up really to have the reliability, the scalability you were talking about. The simplicity to support, a really wide range of workloads and applications. And with FSX customers choose the file system that powers their file storage with full access to the file systems feature sets, the performance profiles and the data management capabilities. And so since reinvent 2018, when we launched this service, we've offered two file system choices for customers. So the first was a Windows file server, and that's really storage built on top of Windows server designed as a really simple solution for Windows applications that require shared storage. And then Lustre, which is an open source file system that's the world's most popular high-performance file system. And the Amazon FSX model has really resonated strongly with customers for a few reasons. So first, for customers who currently managed network attached storage or NAS on premises, it's such an easy path to move their applications and their application data to the cloud. FSX works and feels like the NAZA appliances that they're used to, but added to all of that are the benefits of a fully managed cloud service. And second, for builders developing modern new apps, it helps them deliver fast, consistent experiences for Windows and Linux in a simple and an agile way. And then third, for research scientists, its storage performance and its capabilities for dealing with data at scale really make it a no-brainer storage solution. And so as a result, the service is being used for a pretty wide spectrum of applications and workloads across industries. So I'll give you a couple of examples. So there's this class of what we call common enterprise IT use cases. So think of things like end user file shares the corporate IT applications, content management systems, highly available database deployments. And then there's a variety of common line of business and vertical workloads that are running on FSX as well. So financial services, there's a lot of modeling and analytics, workloads, life sciences, a lot of genomics analysis, media and entertainment rendering and transcoding and visual effects, automotive. We have a lot of electronic control units, simulations, and object detection, semiconductor, a lot of EDA, electronic design automation. And then oil and gas, seismic data processing, pretty common workload in FSX. And then there's a class of, of really ultra high performance workloads that are running on FSX as well. Think of things like big data analytics. So SAS grid is a, is a common application. A lot of machine learning model training, and then a lot of what people would consider traditional or classic high performance computing or HPC. >> Great. Thank you for that. Just quick follow-up if I may, and I want to bring Anthony into the conversation. So why NetApp? This is not a Barney deal, this was not elbow grease going into a Barney deal. You know, I love you. You love me. We do a press release. But, but why NetApp? Why ONTAP? Why now? (momentary silence) Ed, that was to you. >> Was that a question for Anthony? >> No, for you Ed. And then I want to bring Anthony in. >> Oh, Sure. Sorry. Okay. Sure. Yeah, I mean it, uh, Dave, it really stemmed from both companies realizing a combined offering would be highly valuable to and impactful for customers. In reality, we started collaborating in Amazon and NetApp on the service probably about two years ago. And we really had a joint vision that we wanted to provide AWS customers with the full power of ONTAP. The complete ONTAP with every capability and with ONTAP's full performance, but fully managed an offer as a full-blown AWS native service. So what that would mean is that customers get all of ONTAP's benefits along with the simplicity and the agility, the scalability, the security, and the reliability of an AWS service. >> Great. Thank you. So Anthony, I have watched NetApp reinvent itself started in workstations, saw you go into the enterprise, I saw you lean into virtualization, you told me at least two years, it might've been three years ago, Dave, we are going all in on the cloud. We're going to lead this next, next chapter. And so, I want you to bring in your perspective. You're re-inventing NetApp yet again, you know, what are your thoughts? >> Well, you know, NetApp and AWS have had a very long relationship. I think it probably dates now about nine years. And what we really wanted to do in NetApp was give the most important constituent of all an experience that helped them progress their business. So ONTAP, you know, the industry's leading shared storage platform, we wanted to make sure that in AWS, it was as good as it was on premise. We love the idea of giving customers this wonderful concept of symmetry. You know, ONTAP runs the biggest applications in the largest enterprises on the planet. And we wanted to give not just those customers an opportunity to embrace the Amazon cloud, but we wanted to also extend the capabilities of ONTAP through FSX to a new customer audience. Maybe those smaller companies that didn't really purchase on premise infrastructure, people that were born in the cloud. And of course, this gives us a great opportunity to present a fully managed ONTAP within the FSX platform, to a lot of non NetApp customers, to our competitors customers, Dave, that frankly, haven't done the same as we've done. And I think we are the benefactors of it, and we're in turn passing that innovation, that, that transformation onto the, to the customers and the partners. >> You know, one is the, the key aspect here is that it's a managed service. I don't think that could be, you know, overstated. And the other is that the cloud nativeness of this Anthony, you mentioned here, our marketplace is great, but this is some serious engineering going on here. So Ed maybe, maybe start with the perspective of a managed service. I mean, what does that mean? The whole ball of wax? >> Yeah. I mean, what it means to a customer is they go into the AWS console or they go to the AWS SDK or the, the AWS CLI and they are easily able to provision a resource provision, a file system, and it automatically will get built for them. And if there's nothing that they need to do at that point, they get an endpoint that they have access to the file system from and that's it. We handle patching, we handle all of the provisioning, we handle any hardware replacements that might need to happen along the way. Everything is fully managed. So the customer really can focus not on managing their file system, but on doing all of the other things that they, that they want to do and that they need to do. >> So. So Anthony, in a way you're disrupting yourself, which is kind of what you told me a couple of years ago. You're not afraid to do that because if we don't do it, somebody else is going to do it because you're, you're used to the old days, you're selling a box and you say, we'll see you next time, you know, three or four years. So from, from your customer's standpoint, what's their reaction to this notion of a managed service and what does it mean to NetApp? >> Well, so I think the most important thing it does is it gives them investment protection. The wonderful thing about what we've built with Amazon in the FSX profile is it's a complete ONTAP. And so one ONTAP cluster on premise can immediately see and connect to an ONTAP environment under FSX. We can then establish various different connectivities. We can use snap mirror technologies for disaster recovery. We can use efficient data transfer for things like dev test and backup. Of course, the wonderful thing that we've done, that we've gone beyond, above and beyond, what anybody else has done is we want to make sure that the actual primary application itself, one that was sort of built using NAS built in an on-premise environment an SAP and Oracle, et cetera, as Ed said, that we can move those over and have the confidence to run the application with no changes on an Amazon environment. So, so what we've really done, I think for customers, the NetApp customers, the non NetApp customers, is we've given them an enterprise grade shared storage platform that's as good in an Amazon cloud as it was in an on-premise data center. And that's something that's very unique to us. >> Can we talk a little bit more about those, those use cases? You know, both, both of you. What are you seeing as some of the more interesting ones that you can share? Ed, maybe you can start. >> Yeah, happy to. The customer discussions that we've, we've been in have really highlighted four cases, four use cases the customers are telling us they'll use a service for. So maybe I'll cover two and maybe Anthony can cover the other two. So, the first is application migrations. And customers are increasingly looking to move their applications to AWS. And a lot of those are applications work with file storage today. And so we're talking about applications like SAP. We're talking about relational databases like SQL server and Oracle. We're talking about vertical applications like Epic and the healthcare space. As another example, lots of media entertainment, rendering, and transcoding, and visual effects workload. workflows require Windows, Linux, and Mac iOS access to the same set of data. And what application administrators really want is they want the easy button. They want fully featured file storage that has the same capabilities, the same performance that their applications are used to. Has extremely high availability and durability, and it can easily enable them to meet compliance and security needs with a robust set of data protection and security capabilities. And I'll give you an example, Accenture, for example, has told us that a key obstacle their clients face when migrating to the cloud is potentially re-architecting their applications to adopt new technologies. And they expect that Amazon FSX for NetApp ONTAP will significantly accelerate their customers migrations to the cloud. Then a second one is storage migrations. So storage admins are increasingly looking to extend their on-premise storage to the cloud. And why they want to do that is they want to be more agile and they want to be responsive to growing data sets and growing workload needs. They want to last to capacity. They want the ability to spin up and spin down. They want easy disaster recovery across geographically isolated regions. They want the ability to change performance levels at any time. So all of this goodness that they get from the cloud is what they want. And more and more of them also are looking to make their company's data accessible to cloud services for analytics and processing. So services like ECS and EKS and workspaces and App Stream and VMware cloud and SageMaker and orchestration services like parallel cluster and AWS batch. But at the same time, they want all these cloud benefits, but at the same time, they have established data management workflows, and they build processes and they've built automation, leveraging APIs and capabilities of on-prem NAS appliances. It's really tough for them to just start from scratch with that stuff. So this offering provides them the best of both worlds. They get the benefits of the cloud with the NAS data management capabilities that they're used to. >> Right. >> Ed: So Anthony, maybe, do you want to talk about the other two? >> Well, so, you know, first and foremost, you heard from Ed earlier on the, the, the FSX sort of construct and how successful it's been. And one of the real reasons it's been so successful is, it takes advantage of all of the latest storage technologies, compute technologies, networking technologies. What's great is all of that's hidden from the user. What FSX does is it delivers a service. And what that means for an ONTAP customer is you're going to have ONTAP with an SLA and an SLM. You're going to have hundreds of thousands of IOPS available to you and sub-millisecond latencies. What's also really important is the design for FSX and app ONTAP was really to provide consistency on the NetApp API and to provide full access to ONTAP from the Amazon console, the Amazon SDK, or the Amazon CLI. So in this case, you've got this wonderful benefit of all of the, sort of the 29 years of innovation of NetApp combined with all the innovation AWS, all presented consistently to a customer. What Ed said, which I'm particularly excited about, is customers will see this just as they see any other AWS service. So if they want to use ONTAP in combination with some incremental compute resources, maybe with their own encryption keys, maybe with directory services, they may want to use it with other services like SageMaker. All of those things are immediately exposed to Amazon FSX for the app ONTAP. We do some really intelligent things just in the storage layer. So, for example, we do intelligent tiering. So the customer is constantly getting the, sort of the best TCO. So what that means is we're using Amazon's S3 storage as a tiered service, so that we can back off code data off of the primary file system to give the customer the optimal capacity, the optimal throughput, while maintaining the integrity of the file system. It's the same with backup. It's the same with disaster recovery, whether we're operating in a hybrid AWS cloud, or we're operating in an AWS region or across regions. >> Well, thank you. I think this, this announcement is a big deal for a number of reasons. First of all, it's the largest market. Like you said, you're the gold standard. I'll give you that, Anthony, because you guys earned it. And so it's a large market, but you always had to make previously, you have to make trade-offs. Either I could do file in the cloud, but I didn't get the rich functionality that, you know, NetApp's mature stack brings, or, you know, you could have wrapped your stack in Kubernete's container and thrown it into the cloud and hosted it there. But now that it's a managed service and presumably you're underneath, you're taking advantage. As I say, my inference is there's some serious engineering going on here. You're taking advantage of some of the cloud native capabilities. Yeah, maybe it's the different, you know, ECE two types, but also being able to bring in, we're, we're entering a new data era with machine intelligence and other capabilities that we really didn't have access to last decade. So I want to, I want to close with, you know, give you guys the last word. Maybe each of you could give me your thoughts on how you see this partnership of, for the, in the future. Particularly from a customer standpoint. Ed, maybe you could start. And then Anthony, you can bring us home. >> Yeah, well, Anthony and I and our teams have gotten to know each other really well in, in ideating around what this experience will be and then building the product. And, and we have this, this common vision that it is something that's going to really move the needle for customers. Providing the full ONTAP experience with the power of a, of a native AWS service. So we're really excited. We're, we're in this for the long haul together. We have, we've partnered on everything from engineering, to product management, to support. Like the, the full thing. This is a co-owned effort, a joint effort backed by both companies. And we have, I think a pretty remarkable product on day one, one that I think is going to delight customers. And we have a really rich roadmap that we're going to be building together over, over the years. So I'm excited about getting this in customer's hands. >> Great, thank you. Anthony, bring us home. >> Well, you know, it's one of those sorts of rare chances where you get to do something with Amazon that no one's ever done. You know, we're sort of sitting on the inside, we are a peer of theirs, and we're able to develop at very high speeds in combination with them to release continuously to the customer base. So what you're going to see here is rapid innovation. You're going to see a whole host of new services. Services that NetApp develops, services that Amazon develops. And then the whole ecosystem is going to have access to this, whether they're historically built on the NetApp APIs or increasingly built on the AWS APIs. I think you're going to see orchestrations. I think you're going to see the capabilities expand the overall opportunity for AWS to bring enterprise applications over. For me personally, Dave, you know, I've demonstrated yet again to the NetApp customer base, how much we care about them and their future. Selfishly, you know, I'm looking forward to telling the story to my competitors, customer base, because they haven't done it. So, you know, I think we've been bold. I think we've been committed as you said, three and a half years ago, I promised you that we were going to do everything we possibly could. You know, people always say, you know, what's, what's the real benefit of this. And at the end of the day, customers and partners will be the real winners. This, this innovation, this sort of, as a service I think is going to expand our market, allow our customers to do more with Amazon than they could before. It's one of those rare cases, Dave, where I think one plus one equals about seven, really. >> I love the vision and excited to see the execution Ed and Anthony, thanks so much for coming back in the Cube. Congratulations on getting to this point and good luck. >> Anthony and Ed: Thank you. >> All right. And thank you for watching everybody. This is Dave Vellante for the Cube's continuous coverage of AWS storage day. Keep it right there. (upbeat music)

Published Date : Sep 2 2021

SUMMARY :

And the big news of storage So Ed, let me start with you. And the Amazon FSX model has into the conversation. I want to bring Anthony in. and NetApp on the service And so, I want you to in the largest enterprises on the planet. And the other is that the cloud all of the provisioning, You're not afraid to do that that the actual primary of the more interesting ones and maybe Anthony can cover the other two. of IOPS available to you and First of all, it's the largest market. really move the needle for Great, thank you. the story to my competitors, for coming back in the Cube. This is Dave Vellante for the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

AnthonyPERSON

0.99+

Dave VellantePERSON

0.99+

Anthony LyePERSON

0.99+

AWSORGANIZATION

0.99+

EdPERSON

0.99+

AmazonORGANIZATION

0.99+

Ed NaimPERSON

0.99+

twoQUANTITY

0.99+

NetAppORGANIZATION

0.99+

29 yearsQUANTITY

0.99+

FSXTITLE

0.99+

BarneyORGANIZATION

0.99+

ONTAPTITLE

0.99+

oneQUANTITY

0.99+

bothQUANTITY

0.99+

threeQUANTITY

0.99+

80%QUANTITY

0.99+

both companiesQUANTITY

0.99+

NetAppTITLE

0.99+

four yearsQUANTITY

0.99+

LinuxTITLE

0.99+

WindowsTITLE

0.99+

MSXORGANIZATION

0.99+

OracleORGANIZATION

0.99+

firstQUANTITY

0.99+

Joe Fitzgerald, AWS | AWS Storage Day 2021


 

(upbeat music) >> According to storage guru, Fred Moore, 60 to 80% of all stored data is archival data leading to the need for what he calls the infinite archive, In this world digital customers require inexpensive access to archive data that's protected it's got to be available, durable, it's got to be able to scale and also has to support the governance and compliance edicts of the organizations. Welcome to this next session of the AWS storage day with The Cube. I'm your host, Dave Vellante. We're going to dig into the topic of archiving and digitally preserving data and we're joined by Joe Fitzgerald, who is the general manager of Amazon S3 Glacier, Joe. Welcome to the program. >> Hey Dave, it's great to be here. Thanks for having me. >> Yeah, I remember early last decade, AWS announced Glacier, it got a lot of buzz, and since then you've evolved your archival storage services, strategy and offerings. First question. Why should customers archive their data in AWS? >> That's a great question. I think Amazon S3 Glacier is a great place for customers to archive data and I think the preface that you gave, I think covers a lot of the reasons why customers are looking to archive data on the cloud. We're finding a lot of customers have a lot of data. And if you think about it, most of the world's data is cold by nature. It's not data that you're accessing all the time. So if you don't have an archival story as part of your data strategy, I think you're missing out on a cost savings opportunity. So one of the reasons we're finding customers looking to move data S3 glacier is because of cost with Glacier Deep archive we have an industry leading price point of a dollar per terabyte per month. I think another reason that we're finding customers wanting to move data to the cloud into glacier is because of the security, durability, and availability that we offer. Instead of having to worry about some of the most valuable data that your company has and worrying about that being in a tape library that doesn't get accessed very often on premises or offsite in a, in a data locker that you don't really have access to. And we offer the best story in terms of the durability and security and availability of that data. And I think the other reason that we're finding customers wanting to move data to S3 Glacier is just the flexibility and agility that having your data in the cloud offers. A lot of the data, you can put it in deep archive and have it sit there and not access it, but then if you have, you know, some sort of event that you want to access that data, you can get that back very quickly, as well as put to power the rest of the AWS offerings, whether that's our compute offerings, or our machine learning and analytics offerings. So you just have like unmatched, you know, flexibility, cost, and durability of your data. So we're finding a lot of customers looking to optimize their business by moving their archive data to the cloud. >> Let's stick on the business case for a minute. I mean, you kind of nailed the cost side of the equation. Clearly you mentioned several of the benefits, but, but for those customers that may not be leaning in to, to, to archive data, how do they think about the cost benefit analysis when you talk to customers, what are you hearing from them? The ones that have used your services to archive data, what are the benefits that they're getting? >> It's a great question. I think we find customers fall into a few different, you know, camps in use cases. And one thing that we recommend as a starting point is if you have a lot of data and you're not really familiar with your access patterns, like which what, what part of the data is warm, what part is cold, we offer a storage class called S3 intelligent tiering. And what that storage class does is it optimizes the placement of that data and the cost of that data based on the access patterns. So if, if it's data that is accessed very regularly, it'll sit in one of the warmer storage tiers. If it's, accessed infrequently, it'll move down into the infrequent access tier or into the archive or deep archive access tiers. So it's a great way for customers who are struggling to think about archive, because it's not something that every customer thinks about every day to get on automatic cost savings. And then for customers who have, you know, either larger amounts of data or, or better understand the access patterns, like, you know, some of the industries that we're seeing like in, you know, autonomous vehicles, you know, they, they might have, they might generate like tons of training data from, from, you know, from running the autonomous vehicles. And they kind of know, okay, this data it's, it's, we're not actively using it, but it's also very valuable. They don't want to throw it away, they'll choose to move that data into an archive tier. So a lot of it kind of comes down to the degree to which you're able to easily understand the access pattern of the data to figure out which storage class and which archive storage class match best to your use case. >> I get it, so if you add that deep archive tier, you auto-magically get the benefit thanks to the intelligent tiering. What about industry patterns? I mean, obviously highly regulated industries have compliance issues. You know,, data intensive industries are going to potentially have this because they want to lower lower costs, but do you see any patterns emerging? I mean, every industry kind of needs this, but, but are there any industries that are getting more bang from the buck that, that you see? >> I would say every industry definitely has archived data. So we have, we have customers in every vertical segment. I think some of the ones that we're definitely seeing more activity from would be, you know, media and entertainment customers are a great fit for archive. If you think about, you know, even like digital native studios who are, you know, generate, you know, very high definition footage and, you know, they take all that footage, they produce the movie, but they have a lot of original data that they, you know, they, they might reuse. You know, remaster director's cut or, you know, to use later. They're finding archive is a great fit for that. So they're able to use S3 standard for their active production, but when they're done finishing a movie or production, they can save all that valuable original footage and move it into deep archive and just know that it's going to be there whenever they might need to use it. Another use case for staying in media entertainment, you know, kind of similar to that. And this is a good use case for S3 Glacier is if, if you have like sports footage from like the '60s, and then, you know, there's like some sort of breaking news event about some athlete that you want to be able to cut a shot for the six o'clock news, with S3 Glacier and expedited retrievals, you're able to kind of get like that, you know, that data back in a couple of minutes and that way you have the benefit of like very low cost archive storage, but being able to get the immediacy of having some of that data back when you need it. So, that's just some of the examples that we're seeing in terms of how customers are using archives. >> I love that example because, you know, the, the prevailing wisdom is the older, you know, data is the less valuable it is, but if you can pull a clip up of, you know, Babe Ruth at the right time, even though it's a little grainy, wow. That's huge value for the-- >> Yeah, I mean, we're, we're finding like lots of customers that, you know, they've retained this data, they haven't known why they're going to need it. They just sort of intrinsically know this data is really valuable and, you know, we might need it. And then as they, you know, they look for new opportunities and they're like, hey, you know, we're, we're going to remaster this and they they've gone through a lot of digital transformation. So we're seeing companies have, you know, decades of original material moving to the cloud where we're also seeing, you know, fairly new startups who are also just generating lots of archive data. So it's just, you know, one of the many use cases we see from our customers who love Glacier. >> Data hoarder's heaven, I love it. Okay, Joe, let's wrap up, give us your closing thoughts, how you see the future of this business, where you want to take, take your, your business for your customers. >> I think mostly we, we just really want to help customers optimize their storage and realize the potential of their data. So for a lot of customers that really just comes down to knowing that S3 Glacier is a great and trusted place for their data, and that they're able to kind of meet their compliance and regulatory needs, but for, you know, a lot of other customers, they're, they're looking to kind of transform their business and reinvent themselves as they move to the cloud, and I think we're just excited by a lot of emerging use cases, and, you know, being able to find that flexibility of having like very low cost storage, as well as being able to get access to that data and, hook it up into the other AWS services and really realize the potential of their data. >> 100%, I mean, we've seen it over the decades, cost drops and use cases explode. Thank you, Joe. Thanks so much for coming on The Cube. >> Thanks a lot, Dave, it's been great being here. >> All right keep it right there for more storage and data insights. You're watching AWS Storage Day on The Cube. (upbeat music)

Published Date : Sep 1 2021

SUMMARY :

and also has to support the Hey Dave, it's great to be here. it got a lot of buzz, the preface that you gave, I mean, you kind of nailed And then for customers who have, you know, the buck that, that you see? data that they, you know, you know, data is the less valuable it is, So we're seeing companies have, you know, how you see the future of this business, and that they're able to kind seen it over the decades, Thanks a lot, Dave, All right keep it right there for more

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

Fred MoorePERSON

0.99+

Joe FitzgeraldPERSON

0.99+

AWSORGANIZATION

0.99+

60QUANTITY

0.99+

DavePERSON

0.99+

JoePERSON

0.99+

AmazonORGANIZATION

0.99+

First questionQUANTITY

0.99+

80%QUANTITY

0.98+

early last decadeDATE

0.98+

one thingQUANTITY

0.96+

oneQUANTITY

0.96+

S3COMMERCIAL_ITEM

0.96+

six o'clockDATE

0.95+

S3TITLE

0.94+

S3 GlacierTITLE

0.91+

100%QUANTITY

0.87+

decadesQUANTITY

0.81+

Babe RuthPERSON

0.79+

Storage DayEVENT

0.77+

S3 GlacierLOCATION

0.77+

GlacierORGANIZATION

0.73+

S3 GlacierCOMMERCIAL_ITEM

0.73+

a dollar per terabyte per monthQUANTITY

0.69+

CubeTITLE

0.66+

minutesQUANTITY

0.65+

'60sDATE

0.65+

S3 glacierLOCATION

0.63+

AWSEVENT

0.58+

a minuteQUANTITY

0.57+

S3OTHER

0.56+

The CubeTITLE

0.54+

GlacierTITLE

0.49+

2021DATE

0.47+

The CubeORGANIZATION

0.46+

GlacierLOCATION

0.35+

Siddhartha Roy & Mark Cree | AWS Storage Day 2021


 

>>Welcome back to the cubes coverage of AWS storage. They were here in downtown Seattle, crisp downtown Seattle. Winter is coming we're to talk about the snow unintended and also the ever expanding cloud, the cloud it's in a way it's like the universe, it's moving out to the edge and to the data center, which is literally another edge node. If you think about it, Mark Curry is here as the general manager of AWS gateway and Sid Roy is the GM of AWS snow family. Folks. Welcome. Good to see you. Thank you. So mark, talk about how you think about on-prem and hybrid. >>That's an excellent question, Dave. So I represent a group of services called storage gateway, and that's exactly what storage gateway does. Is it bridges your on-prem applications with the cloud? And the way we do that is we deliver it with really four services that we all call it. We call it gateway is the first one being volume gateway. And what volume gateway does is give you a way to connect your block storage on prem with the cloud for file shares for backup is what popular application there. And, uh, for applications that can tolerate some latency and that's a traditional service, then we, uh, came out with something called virtual tape gateway, which I'm personally really excited about because we all know about, you know, the big clunky tapes that have been around for 50 years, that you have to have trucks pick up and go store in a mountain and all that, um, with the virtual tape gateway, what we can do is we all have our gateways install either as a software package on-prem or as an appliance hardware appliance, but we put the tape gateway on prem and the customer is able to back up their tapes to us. >>And we look like a tape drive, virtual tape dry. So what we're doing is we're allowing the customer to basically digitize in the cloud, all of their legacy tapes. And this I think is a huge industry and we've got some great customers there. One would be formula one. Um, they've used virtual tape library. Our gateway to, um, basically could reduce the recovery time from five days down to one. So big impact there. Uh, the next gateway is our file gateway. And what I felt gateway does, again, sits on-prem either the software package or as a hardware appliance and fell gateway exposed as both an SMB share for Microsoft traffic and then an NFS share for your NFS traffic. And basically what we do as we front end S3 with this gateway. And so the gateway caches, so your active workflow gets really great performance, but you can move your inactive data to the cloud in S3, uh, where you've got durable storage. >>It's, you know, it's over multiple regions. You can run all of our analytics on that, on that data as well. Um, a good example, there would be modernize. A company worked on the COVID vaccine. They used storage gateway the file version to move their instrumentation and scientific data into the cloud, where once it's up in S3 week, you know, we've got a really robust set of tools, allow them to do analytics on it. And then finally, but not least our last announcement was something called FSX gateway and FSX is a chemical product. So we offer FSX as a windows file system or file share in the cloud. Um, the, the gateway basically acts as a cache to that. So a customer can put our FSX gateway on prem in lieu of like a server of some sort, and we'll cash all the traffic for that active workflow again, and then push their inactive data back to the FSX file system in the cloud. >>Cool. Lots of ways to get data to the cloud compatibility >>Issues. So, excellent. Thank you for that. Mark said, >>We know about snowball snowcones snowmobile all the snows, where does that fit >>In? Yeah. So let me, let me talk about the AWS edge. First, the broader edge spectrum of AWS spans many things from snow to outpost, to IOT, where there's a lot of data being created at the edge within this edge spectrum. There's the rugged mobile edge, which is where snow plays, right? So snow's purpose is really to capture transform and optionally move the data from rugged edge to AWS, right? And in our portfolio we have different devices. Uh, so let me start with the stone device. We announced last year in 2020, uh, snow cone is a small, uh, tissue box sized device. You know, it, it, it is portable. It's highly mobile, portable, rugged. It can, it can capture data from rugged sensory and end points and industrial equipment. And once we capture the data, you can process the data locally right there. And then if you have to send the data back to AWS, you can ship the device back or use data sync to transfer the data back at VWs. >>Now, if you have higher compute needs, where you have what we call the core edge, where it's not portable, but you need to kind of process there. We have the snowball edge device now that can be single known or multi-node snowball edge devices in groups of clusters for storage at storage edge compute. There, you can process like large scale data capture and transform it right there with, you know, machine learning or other data management and analytics right there for real time and AI based edge local decisions. So I'll give you a couple of examples in each category. So for snowcones, for example, we are partnering with Facebook to deliver, uh, private, LTE based, uh, networks for, uh, remote and rural areas where the connectivity is not here there. Right? So, so, uh, we are serving those communities in partnership with Facebook to deliver private LTE networks. The second example I'll give you is with the snowball edge device, that multiple nodes, we, us air force recently demonstrated, uh, that ABM a system, which is the advanced battle management system, where they can do like a lot of data capture and local simulation with AI and ML on containers, right on the snow cone, uh, on the snowball edge devices. So those are two examples of how we're doing, uh, edge local processing and capture. >>Well, I think, I think you guys got it right. You got a lot of ways to get data on the on-ramps into the cloud, but I I'm particularly struck by your edge. You know, we didn't get into the it strategy, but the idea of processing locally, bringing machine learning, uh, you know, cause the future we think anyway is AI and for instance, where the data lives, right. And yet, like you said, if you want to bring it back and bring it back and we have ways to get it back. Right, exactly. I'll give you guys the last word. >>Well, I, I would just say, you know, our FSX gateways of relatively new announcement, it's got some really cool applications for, um, high-performance Microsoft applications, but also for remote offices that want to share files. >>Great. Well guys, mark said, thanks so much for coming on the cube. Thank you for sharing the insights and the data and really appreciate it. Okay. Thank you for watching. This is the cubes coverage of AWS storage day. Keep it right there.

Published Date : Sep 1 2021

SUMMARY :

So mark, talk about how you think about on-prem And the way we do that is we deliver it with really four services And so the gateway caches, so your active where once it's up in S3 week, you know, we've got a really robust set of tools, Thank you for that. And then if you have to send the data back to AWS, So I'll give you a couple of examples in each category. but the idea of processing locally, bringing machine learning, uh, you know, Well, I, I would just say, you know, our FSX gateways of relatively new announcement, it's got some really cool applications Thank you for sharing the insights and the

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Mark CurryPERSON

0.99+

Sid RoyPERSON

0.99+

MarkPERSON

0.99+

AWSORGANIZATION

0.99+

FacebookORGANIZATION

0.99+

Mark CreePERSON

0.99+

Siddhartha RoyPERSON

0.99+

DavePERSON

0.99+

two examplesQUANTITY

0.99+

last yearDATE

0.99+

five daysQUANTITY

0.99+

FirstQUANTITY

0.99+

FSXTITLE

0.99+

MicrosoftORGANIZATION

0.99+

second exampleQUANTITY

0.99+

each categoryQUANTITY

0.98+

markPERSON

0.98+

COVIDOTHER

0.97+

S3TITLE

0.97+

2020DATE

0.96+

first oneQUANTITY

0.96+

four servicesQUANTITY

0.95+

singleQUANTITY

0.95+

AWS gatewayORGANIZATION

0.94+

50 yearsQUANTITY

0.93+

downtown SeattleLOCATION

0.92+

windowsTITLE

0.91+

oneQUANTITY

0.89+

bothQUANTITY

0.88+

DayEVENT

0.87+

SeattleLOCATION

0.86+

VWsORGANIZATION

0.86+

premORGANIZATION

0.74+

coupleQUANTITY

0.68+

OneQUANTITY

0.57+

2021DATE

0.53+

Michael Sotnick, Pure Storage & Rob Czarnecki, AWS Outposts | AWS re:Invent 2020 Partner Network Day


 

>>from >>around the globe. It's the Cube with digital coverage of AWS reinvent 2020. Special coverage sponsored by AWS Global Partner Network. >>Hi. Welcome to the Cube. Virtual and our coverage of AWS reinvent 2020 with special coverage of a PM partner experience. I'm John for your host. We are the Cube. Virtual. We can't be there in person with a remote. And our two next guests are We have pure storage. Michael Slotnick, VP of Worldwide Alliances, Pure storage. And Robert Czarnecki, principal product manager for a U. S. Outposts. Welcome to the Cube. >>Wonderful to be here. Great to see you. And thanks for having us, >>Michael. Great to see you pure. You guys had some great Momenta, um, earnings and some announcements. You guys have some new news? We're here. Reinvent all part of a W s and outpost. I want to get into it right away. Uh, talk about the relationship with AWS. I know you guys have some hot news. Just came out in late November. We're here in the event. All the talk is about new higher level services. Hybrid edge. What do you guys doing? What's the story? >>Yeah, Look, I gotta tell you the partnership with AWS is a very high profile and strategic partnership for pure storage. We've worked hard with our cloud block store for AWS, which is an extensive bility solution for pure flash array and into a W s. I think the big news and one of things that we're most proud of is the recent establishment of pure being service ready and outpost ready. And the first and Onley on Prem storage solution and were shoulder to shoulder with AWS is a W s takes outpost into the data center. Now they're going after key workloads that were well known for. And we're very excited Thio, partner with AWS in that regard, >>you know, congratulations to pure. We've been following you guys from the beginning since inception since it was founded startup. And now I'll see growing public company on the next level kind of growth plan. You guys were early on all this stuff with with with flash with software and cloud. So it's paying off. Rob, I wanna get toe Outpost because this was probably most controversial announcements I've ever covered at reinvent for the past eight years. It really was the first sign that Andy was saying, You know what? We're working backwards from the customers and they all are talking Hybrid. We're gonna have Outpost. Give us the update. What kind of workloads and verticals are seeing Success without post? Now that that's part of the portfolio, How does it all working out? Give us the update on the workloads in the verticals. >>Absolutely. Although I have to say I'd call it more exciting than controversial. We're so excited about the opportunities that outpost opened for our customers. And, you know, customers have been asking us for years. How can we bring AWS services to our data centers? And we thought about it for a long time. And until until we define the outpost service, we really I thought we could do better. And what outpost does it lets us take those services that customers are familiar with? It lets us bring it to their data center and and one of the really bright spots over the past year has just been how many different industries and market segments have shown interest. Outpost right. You could have customers, for example, with data residency needs, those that have to do local data processing. Uh, maybe have Leighton see needs on a specific workload that needs to run near their end users. We're just folks trying to modernize their data center, and that's a journey. That transformation takes time, right? So So Outpost works for all of those customers. And one of the things that's really become clear to us is that to enable the success that we think L Post can have, we need to meet customers where they are. And and one of the fantastic things about the outpost ready program is many of those customers air using pure and they have pure hardware and way. Send an outpost over to the pure lab recently, and I have to tell you a picture of those two racks next to each other looks really good. >>You know, 20 used to kind of welcome back my controversial comments. You know, I meant in the sense of that's when Cloud really got big into the enterprise and you have to deal with hybrid. So I do think it's exciting because the edges a big theme here. Can you just share real quick before I get in some of the pure questions on this edge piece with the hybrid because what what's the customer need? And when you talk to customers, I know you guys, you know, really kind of work backwards from the customer. What are their needs? What causes them to look at Outpost as part of their hybrid? What's the Keith consideration? >>Yeah, so? So there are a couple of different needs. John, right? One, for example, is way have regions and local zones across the globe. But we're not everywhere and and their their data residency regulations that they're becoming increasingly common and popular. So customers I come to us and say, Look, I really need to run, for example, of financial services workload. It needs to be in Thailand, and we don't have a reason or local zone in Thailand. But we could get him an outpost to to places where they need to be right. So the that that requirement to keep data, whether it's by regulation or by a contractual agreement, that's a that's a big driver. The other pieces there's There's a tremendous amount of interest in the that top down executive sponsorship across enterprise customers to transform their operations right to modernize their their digital approach but there, when they actually look a look at their estate, they do see an awful lot of hardware, and that's a hard challenge. Thio Plan the migration when you could bring an outpost right into that data center. It really makes it much easier because AWS is right there. There could be a monolithic architecture that it doesn't lend well toe having part of the workload running in the region, part of the workload running in their data center. But with an outpost, they can extend AWS to their data center, and that just makes it so much easier for them to get started on their digital transformation. >>Michael, this is This is the key trend. You guys saw early Cloud operations on premise. It becomes cloud ified at that point when you have Dev ops on on Premises and then cloud pure cloud for bursting all that stuff. And now you've got the edge exploding as well of growth and opportunity. What causes the customer to get the pure option on outputs? What's the What's the angle for you guys? Obviously storage, you get data and I can see this whole Yeah, there's no region and certainly outpost stores data, and that's a requirement for a lot of, you know, certainly global customers and needs. What's the pure angle on this? >>Yeah, I appreciate that. And appreciate Rob's comments around what AWS sees in the wild in terms of yours footprint in the market share that we've established his company over 11 years in business and, you know, over eight years of shipping product. You know, what I would tell you is one of the things that that a lot of people misses the simplicity and the consistency that air characteristically, you know very much in the AWS experience and equally within the pure experience and that that's really powerful. So as we were successful in putting pure into workloads that, you know, for for all the reasons that Rob talked about right data gravity, you know, the the regulatory issues, you know, just application architecture and its inability to move to the public cloud. Um, you know, our predictability are simplicity. Are consistency really match with the costumers getting with other work clothes that they had in AWS? And so with a W S outposts that's really bringing to the customer that single pane of glass to manage their entire environment. And so we saw that we made the three year investment in Outpost. Is Rob said Just having our solution? Inp Yours Data center. It's set up and running today with a solution built on flash Blade, which is our unstructured data solution and, you know, delivering fantastic performance results in a I and ML workloads. We see the same opportunity within backup and disaster recovery workloads and into analytics and then equally the opportunity toe build. You know, Flash Ray and our other storage solutions, and to build architectures with outposts in our data center and bring them to market >>real quick just to follow up on that. What use cases are you seeing that are most successful without post and in general in general, how do you guys get your customers to integrate with the rest of, uh, their environment? Because you you no one's got. Now this operating environments not just cloud public, is cloud on premise and everything else. >>Yeah, you know what's cool is, and then Rob hit right on. It is the the wide range of industries and the wide range of use cases and workloads that air finding themselves attracted to the outpost offering on DSO. You know, without a doubt there's gonna be, You know, I think what people would immediately believe ai and ml workloads and the importance of having high performance storage and to have a high performance outpost environment, you know, as close to the center as possible of those solutions. But it doesn't stop there. Traditional virtualized database workloads that for reasons of application architecture, aren't candidates to move. AWS is public cloud offering our great fit for outpost and those air workloads that we've always traditionally been successful within the market and see a great opportunity. Thio, you know, build on that success as an outpost partner. >>Rob, I gotta ask, you last reinvent when we're in person. When we had real life back then e was at the replay party and hanging out, and this guy comes out to me. I don't even know who he was. Obviously big time engineer over there opens his hand up and shows me this little processor and I'm like, closes and he's like and I go take a picture and it was like freaking out. Don't take a picture. It was it was the big processor was the big, uh, kind of person. Uh, I think it was the big monster. And it was just so small. See the innovation and hard where you guys have done a lot, there s that's cool. I like get your thoughts on where the future is going there because you've got great hardware innovation, but you got the higher level services with containers. I know you guys took your time. Containers are super important because that's going to deal with that. So how do you look at that? You got the innovation in the hardware check containers. How does that all fit in? Because you guys have been making a lot of investments in some of these cloud native projects. What's your position on that? >>You know, it's all part of one common story, John right customers that they want an easy path to delivering impact for their business. Right. And, you know, you've heard us speak a lot over the past few years about how we're really seeing these two different types of customers. We have those customers that really loved to get those foundational core building blocks and stitch them together in a creative way. But then you have more and more customers that they wanna. They wanna operate at a different level, and and that's okay. We want to support both of them. We want to give both of them all the tools that they need. Thio spend their time and put their resource is towards what differentiates their business and just be able to give them support at whatever level they need on the infrastructure side. And it's fantastic that are combination of investments in hardware and services. And now, with Outpost, we can bring those investments even closer to the customer. If you really think about it that way, the possibilities become limitless. >>Yeah, it's not like the simplicity asked, but it was pretty beautiful to the way it looks. It looks nice. Michael. Gotta ask you on your side. A couple of big announcements over that we've been following from pure looking back. You already had the periods of service announcement you bought the port Works was acquisition. Yeah, that's container management. Across the data center, including outposts you got pure is a service is pure. Is the service working with outpost and how and if so, how and what's the consumption model for customers there. >>Yeah, thanks so much, John. And appreciate you following us the way that you do it. Zits meaningful and appreciate it. Listen, you know, I think the customers have made it clear and in AWS is, you know, kind of led the way in terms of the consumption and experience expectations that customers have. It's got to be consumable. They've got to pay for what they use. It's got to be outcome oriented and and we're doing that with pure is a service. And so I think we saw that early and have invested in pure is a service for our customers. And, you know, we look at the way we acquired outposts as ah customer and a partner of AWS aan dat is exactly the same way customers can consume pure. You know, all of our solutions in a, you know, use what you need, pay for what you use, um, environment. And, you know, one of the exciting things about AWS partnership is its wide ranging and one of the things that AWS has done, I think world class is marketplace. And so we're excited to share with this audience, you know, really? On the back of just recent announcement that, pure is the service is available within the AWS marketplace. And so you think about the, you know, simplicity and the consistency that pure and AWS delivered to the market. AWS customers demand that they get that in the marketplace, and and we're proud to have our offerings there. And Port Works has been in the marketplace and and will continue to be showcased from a container management standpoint. So as those workloads increasingly become, you know, the cloud native you know, Dev Ops, Containerized workloads. We've got a solution and to end to support >>that great job. Great insight. Congratulations to pure good moves as making some good moves. Rob, I want to just get to the final word here on Outpost again. Great. Everyone loves this product again. It's a lot of attention. It's really that that puts the operating models cloud firmly on the in the on premise world for Amazon opens up a lot of good conversation and business opportunities and technical integrations or are all around you. So what's your message to the ecosystem out there for outposts? How do I What's the what's the word? I wanna do I work with you guys? How do I get involved? What are some of the opportunities? What's your position? How do you talk to the ecosystem? >>Yeah, You know, John, I think the best way to frame it is we're just getting started. We've got our first year in the books. We've seen so many promising signals from customers, had so many interesting conversations that just weren't possible without outposts. And, uh, you know, working with partners like pure and expanding our outpost. Ready program is just the beginning. Right? We launched back in September. We've We've seen another meaningful set of partners come out. Uh, here it reinvent, and we're gonna continue toe double down on both the outpost business, but specifically on on working with our partners. I think that the key to unlocking the magic of outpost is meeting customers where they are. And those customers are using our partners. And there's no reason that it shouldn't just work when they move there. Their partner based workload from their existing infrastructure right over to the outpost. >>All right, I'll leave it there. Michael saw the VP of worldwide alliances that pier storage congratulations. Great innovation strategy It's easy to do alliances when you've got a great product and technology congratulated. Rob Kearney Key principle product manager. Outpost will be speaking more to you throughout the next couple of weeks. Here at Reinvent Virtual. Thanks for coming. I appreciate it. >>Thank you. Thank you. >>Okay. So cute. Virtual. We are the Cube. Virtual. We wish we could be there in person this year, but it's a virtual event. Over three weeks will be lots of coverage. I'm John for your host. Thanks for watching.

Published Date : Dec 3 2020

SUMMARY :

It's the Cube with digital coverage We are the Cube. Great to see you. Great to see you pure. And the first and Onley on Prem storage And now I'll see growing public company on the next level kind of growth plan. Send an outpost over to the pure lab recently, and I have to tell you a picture of those two racks next to I meant in the sense of that's when Cloud really got big into the enterprise and you So the that that requirement to keep data, What's the What's the angle for you guys? the the regulatory issues, you know, just application architecture and its inability in general in general, how do you guys get your customers to integrate with the rest of, the importance of having high performance storage and to have a high performance outpost See the innovation and hard where you guys have done And, you know, you've heard us speak a lot You already had the periods of service announcement you bought the port Works was acquisition. to share with this audience, you know, really? It's really that that puts the And, uh, you know, working with partners like pure and expanding our outpost. Outpost will be speaking more to you throughout the next couple of weeks. Thank you. We are the Cube.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
JohnPERSON

0.99+

AWSORGANIZATION

0.99+

Michael SotnickPERSON

0.99+

Robert CzarneckiPERSON

0.99+

Rob CzarneckiPERSON

0.99+

ThailandLOCATION

0.99+

MichaelPERSON

0.99+

Michael SlotnickPERSON

0.99+

AndyPERSON

0.99+

RobPERSON

0.99+

SeptemberDATE

0.99+

AmazonORGANIZATION

0.99+

three yearQUANTITY

0.99+

late NovemberDATE

0.99+

Rob KearneyPERSON

0.99+

Reinvent VirtualORGANIZATION

0.99+

two racksQUANTITY

0.99+

AWS Global Partner NetworkORGANIZATION

0.99+

bothQUANTITY

0.99+

first yearQUANTITY

0.99+

this yearDATE

0.99+

over 11 yearsQUANTITY

0.99+

L PostORGANIZATION

0.99+

oneQUANTITY

0.98+

over eight yearsQUANTITY

0.98+

LeightonORGANIZATION

0.98+

OutpostORGANIZATION

0.98+

firstQUANTITY

0.97+

KeithPERSON

0.97+

OneQUANTITY

0.97+

two next guestsQUANTITY

0.97+

outpostORGANIZATION

0.96+

todayDATE

0.96+

first signQUANTITY

0.96+

ThioPERSON

0.95+

Over three weeksQUANTITY

0.94+

2020TITLE

0.89+

Worldwide AlliancesORGANIZATION

0.88+

CubeCOMMERCIAL_ITEM

0.88+

single paneQUANTITY

0.88+

Flash RayORGANIZATION

0.87+

U. S.LOCATION

0.87+

two different typesQUANTITY

0.87+

re:Invent 2020 Partner Network DayEVENT

0.84+

past yearDATE

0.83+

outpostsORGANIZATION

0.83+

past eight yearsDATE

0.8+

AWS Opening Thoughts | AWS Storage Day 2019


 

(upbeat music) >> Hi everybody this is Dave Vellante with "theCUBE", and I'm very excited to be here at Amazon in Boston at Storage Day. And when you go back to 2006 and you think about the launch of Amazon Web Services, S3 was the first storage service. You know very simple, object store, and it became very, very popular, and I remember when Amazon announced EBS. I said okay, was at re:Invent and it was an exciting time. Well today, we are going to cover the innovations that Amazon has in its expanding storage portfolio. We've got experts from Amazon that are going to help us drill in to actually what's being announced, how these announcements work for the customer and what the business impact is going to be. Now, if you think about the history of cloud generally, but AWS specifically, which got cloud started, you know, it really started this, okay, I can now put data into the cloud. I can spin up, compute and storage, and not have to do all heavy lifting. And that was really when infrastructure as a service was born. And you had, CFOs loved it because they could shift CAPEX to OPEX, developers loved it 'cause they could treat infrastructure as code, and it really has become the new model. But what happened initially was people either did a lot of development in the cloud, or they would take applications and workloads running on Prim, put them in the cloud, and get benefits: lower cost, better agility, much simpler management. And they would be able to sort of retrain people, or shift people to more strategic workloads and activities. This became critical starting at around the 2015-16 time frame with all the talk around digital transformation. But in and of itself, what customers tell us is what they really want to do with the cloud is change their operating model. So you think about new programming models, you think about agile programming, new methodologies, DevOps really comes into the fore. The whole big data meme involved into data and digital transformation, you're seeing people take advantage of data lakes, and then of course, you've got all this data, what's the next level of innovation? It's to take machine learning and put it on top of all that data. So the innovation engine is no longer Moore's Law, it's now a cocktail of data, plus machine intelligence, or AI, and then the cloud gives you scale. Global scale, which is very important. We're going to drill down today and talk about Amazon's philosophy on regions and availability zones, and really try to poke at how that's maybe different from some of the other cloud providers. But really the most important thing here I want you to think about is business impact. If you can change the operating model, you can get more out of your IT infrastructure and your infrastructure as a service than just lower cost or even better agility, you can actually transform your business, create new types of business models. Now, underneath all this is storage. You got to have sets of storage services that can support these new emerging workloads. So we started out with S3 which is object, EBS was file and supported database, and you've seen Amazon's database business explode, it's a multi-billion dollar business. And now, we're really digging into file, as an opportunity for customers, and of course, for AWS. So "theCube" is thrilled to be covering this. Stay with us, we got a full day of programming. Keep it right there. (upbeat music)

Published Date : Nov 20 2019

SUMMARY :

and you think about the launch of Amazon Web Services,

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

AmazonORGANIZATION

0.99+

AWSORGANIZATION

0.99+

BostonLOCATION

0.99+

2006DATE

0.99+

Amazon Web ServicesORGANIZATION

0.99+

S3TITLE

0.98+

todayDATE

0.98+

first storage serviceQUANTITY

0.98+

EBSORGANIZATION

0.97+

AWSEVENT

0.94+

Storage DayEVENT

0.9+

multi-billion dollarQUANTITY

0.86+

theCubeORGANIZATION

0.77+

Storage Day 2019EVENT

0.72+

theCUBEORGANIZATION

0.7+

DevOpsTITLE

0.63+

OPEXORGANIZATION

0.61+

re:InventEVENT

0.6+

MooreORGANIZATION

0.57+

2015-16DATE

0.54+

Wayne Duso, Amazon Web Services | AWS Storage Day 2019


 

>>This is >>Dave Volante, and welcome to Storage Day. We're here at Amazon and Boston and you're watching the Cube. Wayne do so is here. He's the general manager of a lot of stuff. File hybrid edge transfer and data protection Service is at Amazon. Web service is good to see you, Wayne. Thanks. >>Good to see you. >>So let's talk about that. That's a pretty vast portfolio that you have explained that to our audience. >>Sure thinks so. The portfolio that I'm responsible for covers a vast swath of our stories portfolio on AWS. So in that we cover all of our files. Service's s Oh, that's E f s and FSX. Our data transport service is which includes data sync, transfer for sftp and our snowball or snow service's. And then also hybrid edge, which includes our snowball, compute and our stories, Gateway Service's and then data protection, which includes a W s back up. >>Wow. Okay, great. Congratulations on that portfolio. And, you know, I said I said earlier on it started with s3, and it's just exploded. Now all the service is this is part of what we sometimes call tongue and cheek cloud to 0.0, there's more work loads, more capabilities, more granularity. But talk about some of the big picture macro trends that you guys see in the marketplace. Specific Thio Sort of your area. >>Yes. So, uh, actually, it's so many, uh, think you said things are expanding. Things are accelerating in our space. One of things I like thio talk about with respect to our portfolio is we have storage service is dated. Transport service is to match the needs of your workloads and your applications. So all of these service is a purpose built for the type of storage that you need, the programming model that you need for your applications and workloads. So whether it's object storage with s3 and glacier or block storage with BBS or most recently, file service with F s and F S X file service is so you have the tools at your disposal. It'll that you need based on your on your application. Workloads. >>Talk more about the programming model. What? How do you envision that? What do you What do you mean? What's your mental model of the different >>process? You're so forever. People have been programming based on, you know, whether it's performance or or some scale of some sort. Um, you know, uh, databases traditionally used block storage because they don't need a lot of logic between them and the storage medium itself. File storage is been used for 50 years and has a very specific program model that exist in every operating system in every programming language. You know, whether it's an open, ah, read right, see close. It's a common paradigm that is used all over the place and that capability in the performance that you need to satisfy those applications and workloads very specific. And so for for aws, we provide those final systems for for Lennox, if you would with F House Windows, which is ever sex for Windows and for very high performance computing on luster. We've had an amazing storage platform, which is s3 and S three forms the basis for a lot of our customers data lakes on and basically storage data repositories, for which there are many integrations. With that, there are other >>sword service's. I often joke that, you know, if your expertise is is unpacking boxes plugging in setting up storage arrays, managing London's you, you might want to think about updating. You know your skill sets right, But so that's another big mega trend that we certainly see is people just don't see a lot of value in planning and managing and migrating over six month periods. Storage a raise. It's It's something that really doesn't have a lot of value to the business. So you guys have announced all these service is over the years and you've got some new announcements as well, that kind of play into some of the trends that we've been talking about. Talk about the news. >>Yes, that the news is pretty rich. Uh, for this season, let's let's start off with FSX eso FSX is our service for bringing fully manage third party or open source file systems, um, to our customers. And so Fsx Windows, as example, was launched last year, reinvent and has been rolling out the whole Siri's of features throughout the year, and we have a nice set of features coming out this year. So, as example today, effort, Sex Windows is a single ese service. We are rolling out multi easy capability, >>okay? And you you Sometimes you guys make the point that the beauty is there's no change required in APS, and we talked earlier about the program. We'll talk a little bit more about that. Why is that important to customers, >>you know and all index on FXX windows for another minute. A lot of abs been written to use the semantics of a particular file system in case of Windows will say NT f s and their written for that specific file system. We've provided customers with the capability of bringing those applications to AWS without any wary of compatibility. It's a pure lift and shift model. S O makes it really easy for them to bring their workloads. They should bring their workload so they don't have to deal with some of things you brought up early around provisioning buying systems, having to worry about saying that, planning for all of that. We take all of that work away from them and they get full compatibility based on what they need today and with some of the additional capabilities we're bringing to bear with the integrations in the ecosystem and heat up US ecosystem, they'll be able to appreciate those as well. >>Let's talk a little bit about more about that because you're basically, I'm inferring you're saying, Hey, this compelling reasons why you should move into the cloud. For instance, File Service's into the cloud. What's the difference between my on Prem? Isn't just on Prem Nass stuffing it into the cloud? Or is it more than you touched on integration? So convince me, why should I move? >>It's so much more than that. So if we if we look at the basic infrastructure once you literally click three or four buttons, Thio started files and creative file system, you no longer have to worry about it ever again. So the things that you have done on Prem, you no longer have to worry about having a sword administrator or having to provision in by storage and maintain it. We take care of all that would take care of all the security elements. I'm so important to your data to make sure that's in a in a secure environment. Security. It's job number one for us. So all of these capabilities and the ability to stand it up to never have to manage it never adorable security. We take care of all the capabilities like you should really be bringing those workloads onto a platform like this so that you can spend your time on added value. Um, service is our applications for your >>business, while in the integration is also a key piece of it. I mean, you know, for years, customers and customers still sometimes want to roll their own. You know, they like to have the you know, the knobs and turn them. But but many customers that we talked or saying Listen, it's too expensive. I don't want to be a systems integrator anymore in the cloud. How can they take advantage of those? Like sometimes they call it the flywheel effect. But the other innovations that you're bringing, whether it's machine learning or other service, is that you guys are bringing in. Is that how tight is that? Integration. >>So those integrations are ongoing, and they're there forever. It goes back to what I said a minute around over a three year period. All of these capabilities gonna be delivered to them, if you would at this at the same cost as the basic service. So let's talk about what happened this year. Um ah, lot of our customers are using sage maker for their M. L A I capabilities and sage maker is deeply integrated with both fsx luster and uh, E F s so that customers again don't have to worry about stories. They're not the way about sharing that are scaling. It's all there for them. >>You mentioned. Also you responsible for the snow product convention an edge. I was what it was to me. It was your first move, so the hybrid, I'll call it. But I always joke that, but it's true. The fastest way to get data from Point A to point B is a Chevy truck, and so, but you're referring to a sort of an edge play. You talk a little bit more about that, help us understand it. >>Sure, so Snowball, a service launched about five years ago. We initially launched a service as a bulk data migration service, and it's it's been that service for roughly four years. About a year ago, a little over a year ago, we started introducing thehe bility to have compute as part of that device, and the reason for that was customers were telling us as we're moving the data, we would like to be able to do some pre processing before it makes it onto AWS before it goes into history, is example. So we started providing that capability that ended up expanding into a full blown if you would cloud platform on a device that could be run in disconnected environments or stare environments. So with Snowball today of the ability to have easy two instances CBS storage s3 storage all in one device. And so that's a really powerful construct because you can build your applications on AWS using the same service is prove out if you wouldn't Dev UPS model that there what you need to be and then literally lift them onto, ah, snowball device and have those executing in the field as if they were running directly in the cloud. >>Change the subject a little bit when I look at the logo slide of all your customers, a lot of big names on their their global companies, a lot of things. So I run a cloud and they got a data center. You know he's Boston or something. No offense if you have a data center, East Boston, but regions are critical, um, especially for global scale. Cloud brings global scale, but it's also important to have data approximate to the users. So you're reducing late and see there's availability and redundancy aspects. Talk about your philosophy around regions and how it fits into your portfolio. How do you take advantage of all that capability? >>So a lot of our customers have global presence and the ability for them to have their application to have their business function in the regions that they're doing business and have those little agencies and also the availability model of being in multiple places. Case of disasters super important. Um, are regions are built, have at minimum three availability zones and an availability zone. You could think of boat as, ah, data center. So, for example, with the F. S. When you stand up a file system with the F S, your file system is automatically distributed, replicated across all three availability zones within that region. But as the user, you don't worry about any of that. We take care of it all for you. In the unfortunate event that our availability zone is made unavailable, your data is still fine. You still have access to that data all time? >>Yeah, and your customers, I think increasingly understanding this the beginning toe architect around regions and availability zones. It's a different way of thinking, but it's in some respects sort of the modern way of thinking. >>If you if you if you go back a few years and you think about all of the disaster recovery or business continue in software and capabilities that had been created, we're providing all of those capabilities today in our regional construct. >>Yeah, well, you know this. I mean, you both better have been around for a while, and we've seen the unnatural acts that you had to do to sort of create that level of redundancy and business continuance. And it was extremely expensive, complex and really risky to test. So I'll, uh, I'll leave you with the last word. Any other thoughts that you want to share with our audience? We're >>We're We're just first off. Thank you for giving you the time. Today. We're really excited about what we're doing with each of these. Service is we're very excited about the portfolio overall on the value that it's going to bring, and he's bringing to our customers today. We're excited about all the announcements. >>Yeah, we'll say we're seeing a lot of innovation. Expansion of the Amazon portfolio. Optionality, granularity performance, horses for courses, the right tool for the right job way. Thanks so much for coming to >>my pleasure. Thank you. >>You're welcome. All right. Keep it right to everybody. You watching the cube storage day from Amazon in Boston? Right back.

Published Date : Nov 20 2019

SUMMARY :

He's the general manager of a lot of stuff. That's a pretty vast portfolio that you have explained that to our audience. So in that we cover all of our files. And, you know, I said I said earlier on it started with s3, and it's just exploded. the programming model that you need for your applications and workloads. What do you What do you mean? that you need to satisfy those applications and workloads very specific. I often joke that, you know, if your expertise is is unpacking boxes Yes, that the news is pretty rich. And you you Sometimes you guys make the point that the you know and all index on FXX windows for another minute. Hey, this compelling reasons why you should move into the cloud. So the things that you have done on Prem, you no You know, they like to have the you know, the knobs and turn them. All of these capabilities gonna be delivered to them, if you would Also you responsible for the snow product convention an edge. you can build your applications on AWS using the same service is prove How do you take advantage of all that capability? So a lot of our customers have global presence and the ability for them to but it's in some respects sort of the modern way of thinking. If you if you if you go back a few years and you think about all of the disaster recovery or business continue in acts that you had to do to sort of create that level of redundancy and business continuance. Thank you for giving you the time. Expansion of the Amazon portfolio. Thank you. Keep it right to everybody.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
WaynePERSON

0.99+

Dave VolantePERSON

0.99+

50 yearsQUANTITY

0.99+

Amazon Web ServicesORGANIZATION

0.99+

Wayne DusoPERSON

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

BostonLOCATION

0.99+

SiriTITLE

0.99+

last yearDATE

0.99+

TodayDATE

0.99+

threeQUANTITY

0.99+

FSXTITLE

0.99+

this yearDATE

0.99+

LennoxORGANIZATION

0.98+

USLOCATION

0.98+

four buttonsQUANTITY

0.98+

todayDATE

0.98+

first moveQUANTITY

0.98+

AWSEVENT

0.97+

one deviceQUANTITY

0.97+

this yearDATE

0.97+

singleQUANTITY

0.97+

WindowsTITLE

0.97+

bothQUANTITY

0.96+

firstQUANTITY

0.96+

F S XTITLE

0.96+

UPSORGANIZATION

0.96+

four yearsQUANTITY

0.96+

eachQUANTITY

0.96+

About a year agoDATE

0.95+

S threeTITLE

0.95+

East BostonLOCATION

0.94+

over six monthQUANTITY

0.93+

SnowballTITLE

0.93+

OneQUANTITY

0.92+

PremORGANIZATION

0.9+

ThioPERSON

0.9+

two instancesQUANTITY

0.89+

Fsx WindowsTITLE

0.85+

Point AOTHER

0.85+

F sTITLE

0.85+

LondonLOCATION

0.84+

three yearQUANTITY

0.84+

ChevyORGANIZATION

0.84+

F HouseORGANIZATION

0.83+

about five years agoDATE

0.83+

over a year agoDATE

0.82+

Storage DayEVENT

0.82+

s3TITLE

0.81+

S OTITLE

0.78+

SnowballORGANIZATION

0.77+

a minuteQUANTITY

0.74+

Storage Day 2019EVENT

0.73+

F STITLE

0.65+

point BOTHER

0.64+

BBSORGANIZATION

0.63+

CubeTITLE

0.61+

Prem NassORGANIZATION

0.59+

FXXTITLE

0.58+

eseORGANIZATION

0.49+

GatewayTITLE

0.45+

oneQUANTITY

0.42+

Duncan Lennox, Amazon Web Services | AWS Storage Day 2019


 

[Music] hi everybody this is David on tape with the Cuban welcome to Boston we're covering storage here at Amazon storage day and we're looking at all the innovations and the expansion of Amazon's pretty vast storage portfolio Duncan Lennox is here is the director of product management for Amazon DFS Duncan good to see it's great to be here so what is so EF s stands for elastic file system what is Amazon EFS that's right EFS is our NFS based filesystem service designed to make it super easy for customers to get up and running with the file system in the cloud so should we think of this as kind of on-prem file services just stuck into the cloud or is it more than that it's more than that but it's definitely designed to enable that we wanted to make it really easy for customers to take the on pram applications that they have today that depend on a file system and move those into the cloud when you look at the macro trends particularly as it relates to file services what are you seeing what a customer's telling you well the first thing that we see is that it's still very early in the move to the cloud the vast majority of workloads are still running on Prem and customers need easy ways to move those thousands of applications they might have into the cloud without having to necessarily rewrite them to take advantage of cloud native services and that's a key thing that we built EFS for to make it easy to just pick up the application and drop it into the cloud without the application even needing to know that it's now running in the cloud okay so that's transparent to the to the to the application and the workload and it absolutely is we built it deliberately using NFS so that the application wouldn't even need to know that it's now running in the cloud and we also built it to be elastic and simple for the same reason so customers don't have to worry about provisioning the storage they need it just works NFS is hard making making NFS simple and elastic is not a trivial engineering task is it it hadn't been done until we did it a lot of people said it couldn't be done how could you make something that truly was elastic in the cloud but still support that NFS but we've been able to do that for tens of thousands of customers successfully and and what's the real challenge there is it to maintain that performance and the recoverability from a technical standpoint an engineering standpoint what's yes sir it's all of the above people expect a certain level of performance whether that's latency throughput and I ops that their application is dependent on but they also want to be able to take advantage of that pay-as-you-go cloud model that AWS created back with s3 13 years ago so that elasticity that we offer to customers means they don't have to worry about capex they don't have to plan for exactly how much storage they need to provision the file system grows and shrinks as they add and remove data they pay only for what they're using and we handle all the heavy lifting for them to make that happen this this opens up a huge new set of workloads for your customers doesn't it it absolutely does and a big part of what we see is customers wanting to go on that journey through the cloud so initially there starting with lifting and shifting those applications as we talked about it but as they mature they want to be able to take advantage of newer technologies like containerization and ultimately even service all right let's talk about EFS ia infrequently access files is really what it's designed for tell us more about it right so one of the things that we heard a lot from our customers of course is can you make it cheaper we love it but we'd like to use more of it and what we discovered is that we could develop this infrequent access storage class and how it works is you turn on a capability we call lifecycle management and it's completely automated after that so we know from industry analysts and from talking to customers that the majority of data perhaps as much as 80% goes pretty cold after about a month and it's rarely touched again so we developed the infrequent access storage class to take advantage of that so once you enable it which is a single click in the console or one API call you pick a policy 14 days 30 days and we monitor the readwrite IO to every file individually and once a file hasn't been read from or written to in that policy period say 30 days we automatically and transparently move it to the infrequent access storage class which is 92% cheaper than our standard storage class it's only two and a half cents in our u.s. East one region as opposed to 30 cents for our standard storage class two and a half cents per per gigabyte per gigabyte month we've done about four customers that were particularly excited about is that it remains active file system data so we move your files to the infrequent access storage class but it does not appear to move in the file system so for your applications and your users it's the same file in the same directory so they don't even need to be aware of the fact that it's now on the infrequent access storage class you just get a bill that's 92 percent cheaper for storage for that file like that ok and it's and it's simple to set up you said it's one click and then I set my policy and I can go back and change my that's exactly right we have multiple policies available you can change it later you can turn off lifecycle management if you decide you no longer need it later so how do you see customers taking advantage of this what do you expect the adoption to be like and what are you hearing from them well what we heard from customers was that they like to keep larger workloads in their file systems but because the data tends to go cold and isn't frequently accessed it didn't make economic sense to say to keep large amounts of data in our standard storage class but there's advantages to them in their businesses for example we've got customers who are doing genomic sequencing and for them to have a larger set of data always available to their applications but not costing them as much as it was allows them to get more results faster as one example you obviously see that yeah what we're what we're trying to do all the time is help our customers be able to focus less on the infrastructure and the heavy lifting and more on being able to innovate faster for their customer so Duncan Duncan some of the sort of fundamental capabilities of EFS include high availability and durability tell us more about that yeah when we were developing EFS we heard a lot from customers that they really wanted higher levels of durability and availability than they typically been able to have on Prem it's super expensive and complex to build high availability and high durability solutions so we've baked that in as a standard part of EFS so when a file is written to an EFS file system and that acknowledgement is received back by the client at that point the data is already spread across three availability zones for both availability and durability what that means is not only are you extremely unlikely to ever lose any data if one of those AZ's goes down or becomes unavailable for some reason to your application you continue to have full read/write access to your file system from the other two available zones so traditionally this would be a very expensive proposition it was sort of on Prem and multiple data centers maybe talk about how it's different in the clouds yeah it's complex to build there's a lot of moving parts involved in it because in our case with three availability zones you were talking about three physically distinct data centers high-speed networking between those and actually moving the data so that it's written not just to one but to all three and we handled that all transparently under the hood in EFS it's all included in our standard storage to your cost as well so it's not something that customers have to worry about more either a complexity or a cost point of view it's so so very very I guess low RPO and an RTO and my essentially zero if you will between the three availability zones because once your client gets that acknowledgement back it's already durably written to the three availability zones all right we'll give you last word just in the world of file services what should we be paying attention to what kinds of things are you really trying to achieve I think it's helping people do more for less faster so there's always more we can do and helping them take advantage of all the services AWS has to offer spoken like a true Amazonian Duncan thanks so much for coming on the queue for thank you good all right and thank you for watching everybody be back from storage day in Boston you watching the cute

Published Date : Nov 20 2019

SUMMARY :

adoption to be like and what are you

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DuncanPERSON

0.99+

92%QUANTITY

0.99+

BostonLOCATION

0.99+

Amazon Web ServicesORGANIZATION

0.99+

92 percentQUANTITY

0.99+

30 centsQUANTITY

0.99+

AWSORGANIZATION

0.99+

thousandsQUANTITY

0.99+

DavidPERSON

0.99+

14 daysQUANTITY

0.99+

AmazonORGANIZATION

0.99+

Duncan LennoxPERSON

0.99+

30 daysQUANTITY

0.99+

80%QUANTITY

0.99+

one clickQUANTITY

0.99+

two available zonesQUANTITY

0.99+

tens of thousands of customersQUANTITY

0.97+

first thingQUANTITY

0.97+

three availability zonesQUANTITY

0.97+

EFSTITLE

0.96+

13 years agoDATE

0.96+

two and a half centsQUANTITY

0.96+

applicationsQUANTITY

0.95+

todayDATE

0.93+

one exampleQUANTITY

0.93+

about a monthQUANTITY

0.93+

three physically distinct data centersQUANTITY

0.92+

every fileQUANTITY

0.9+

one APIQUANTITY

0.9+

PremORGANIZATION

0.89+

three availability zonesQUANTITY

0.87+

CubanOTHER

0.87+

Duncan DuncanPERSON

0.85+

one regionQUANTITY

0.84+

a lot of peopleQUANTITY

0.84+

storage dayEVENT

0.84+

three availability zonesQUANTITY

0.82+

four customersQUANTITY

0.82+

a half centsQUANTITY

0.82+

gigabyteQUANTITY

0.79+

both availabilityQUANTITY

0.79+

single clickQUANTITY

0.78+

two andQUANTITY

0.78+

one of the thingsQUANTITY

0.78+

oneQUANTITY

0.77+

threeQUANTITY

0.76+

u.s. EastLOCATION

0.69+

capexORGANIZATION

0.69+

s3TITLE

0.66+

Storage Day 2019EVENT

0.65+

zeroQUANTITY

0.63+

AZORGANIZATION

0.62+

of dataQUANTITY

0.6+

DFSTITLE

0.52+

lot ofQUANTITY

0.51+

Edward Naim, Amazon Web Services | AWS Storage Day 2019


 

>>We're back in storage day at Amazon in Boston, on detente with the Cube. Ed name is here. He's the general manager of X ed. Welcome to the Cube. Good to say thanks for having me, Dave. Okay, So explain to me why you guys launched FSX for Windows File server. You know why now? >>Well, we did it because customers asked us to do it. What customers told us was that they were tired of all of the effort and overhead in managing windows, file systems and Windows file servers on their own, everything from routine maintenance, like patching to provisioning. They just didn't wanna have to do all of that heavy lifting. So they asked us for a simple solution. Fully manage solution on the cloud. There's a lot of windows data out there. A lot of data that's access from windows computers. Eight of us is the cloud that has the most windows workloads running on it. So it was a very natural ask for customers to ask us as they're moving their windows workloads onto eight of us to have a file system that's fully managed for them that could be accessed by those workloads. So it was It was actually very natural and, uh, unexpected. Ask from customers, >>you know, love. You may not know that, but it does kind of make sense. Is so much windows out there You're the cloud leader. So peanut butter and jelly. Um, how do you see customers using FSX for Windows? >>Yeah, What's really exciting is they're using it for a really broad spectrum of workloads eso everything from traditional user shares and home directories, Thio development environments to analytics, workloads, tau video, trance coating. So it's a very wide spectrum of workloads that they're on the service and we're continuing to see new new types of workloads every day, which is really exciting. >>So we're hearing stories. What exactly is new around FSX for Windows file service Specifically. >>Yeah, Well, we've launched a number of capabilities this year throughout the year s 01 of the significant ones that we launched was the ability for customers to use their self managed active directories on dhe join their FSX file systems to those. So we now have two options. Customers can use a fully managed AWS fully managed active directory or their own with FSX. We launched a number of capabilities around access from on premises. For example, customers can now access. Or when we launched it, we announced that they could now access their file systems over direct connect connections over VPN so they can access the Windows file systems from computers and from end users that are running on premises. So quite a few announcements this year Those are just two examples, and we're really excited about really a slew of announcements and feature features that we're launching now. And I can get into those if you like, give me some examples of your work. So one of the, uh, the most common questions we've had from customers is. Can we offer a native multi daisy capability, multi availabilities own capability? So a lot of customers are running enterprise grade workloads on FXX, and they want to move more and more of those workloads onto AWS, and they don't wanna have to, ah, manage the overhead of using something like a distributed file system or D F s replication between fsx file systems and different disease. So we're launching a fully managed, super simple, multi easy capability, and that's a deployment options that custom the customers will have in addition to what we already had, which was the single easy deployment options. >>Let me see some recurring themes when you talkto folks at Amazon announced service is it's the it's the same sort of mantra. Be able to reduce that heavy lifting, shift your focus to things that will add more value to your business. Take advantage of these other service is through these integrations that that we're doing. So I mean, it kind of feels like a no brainer, but I give you the last the last word. I mean, is it Why is it why should customers, you know, sell me on why I should move by dated to the cloud? >>Yeah. I mean, we we like to think of it as a no brainer because we are fully managing everything for the customer. Um, the the service is built on top of Windows server, so provides a fully compatible Windows file system, and we've managed that fully four customers, So you get complete compatibility with us and be complete compatibility with Auntie if s file system semantics and features. So it's a very simple move for customers to move their existing workloads onto the service and have it before we managed a couple of the other features that we're launching that I do want to mention our We're launching data D duplication we're launching. Ah, whole bunch of administrative capabilities, like user quotas were extending. Our administrative CIA lied to do things like a lot of customers to create shares programmatically so really a very exciting set of capabilities that we really think make this a ah no brainer for customers. >>Well, that's another recurring themes. You guys, you know, you dropped prices and look at the moors losses. Prices continue to drop. The differences them is on. You make it transparent on DDE. If I use a service is lower cost, my bill goes down and then, of course, I end up using more because this is an elastic world. So that's a good thing. But, Ed, thanks so much for coming on. The key. Thank you. Share is that any other thing is you guys only window specialists. It's just kind of ironic, you know, leader in windows. And, uh, >>well, it really comes from What are our customers are asking us for? So they see moving their windows. Workloads is the first step to the full modernization and being all in on the cloud. >>Great. We'll get exit. Thank you. All right. And thank you for watching everybody right back after this. Short break, Dave. A lot with the Cube.

Published Date : Nov 20 2019

SUMMARY :

Okay, So explain to me why you guys launched FSX for Windows File server. So it was a very natural ask for customers to ask us as they're moving their windows workloads onto eight of us So peanut butter and jelly. So it's a very wide spectrum of workloads that they're on the service and we're continuing So we're hearing stories. So a lot of customers are running enterprise So I mean, it kind of feels like a no brainer, a couple of the other features that we're launching that I do want to mention our We're It's just kind of ironic, you know, leader in windows. Workloads is the first step to the full modernization and being all in on the cloud. And thank you for watching everybody right back after this.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavePERSON

0.99+

Edward NaimPERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

CIAORGANIZATION

0.99+

AWSORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

EdPERSON

0.99+

BostonLOCATION

0.99+

eightQUANTITY

0.99+

WindowsTITLE

0.99+

two examplesQUANTITY

0.99+

FSXTITLE

0.99+

first stepQUANTITY

0.99+

oneQUANTITY

0.98+

this yearDATE

0.98+

windowsTITLE

0.97+

X edORGANIZATION

0.97+

two optionsQUANTITY

0.96+

AWSEVENT

0.92+

Eight ofQUANTITY

0.85+

CubeCOMMERCIAL_ITEM

0.82+

single easy deploymentQUANTITY

0.8+

four customersQUANTITY

0.73+

Storage Day 2019EVENT

0.62+

FXXTITLE

0.58+

01DATE

0.5+

Asa Kalavade, Amazon Web Services | AWS Storage Day 2019


 

(upbeat music) >> Hi, everybody, we're back. This is Dave Vellante with theCUBE. We're here talking storage at Amazon in Boston. Asa Kalavade's here, she's the general manager for Hybrid and Data Transfer services. >> Let me give you a perspective of how these services come together. We have DataSync, Storage Gateway, and Transfer. As a set of Hybrid and Data Transfer services. The problem that we're trying to address for customers is how to connect their on premises infrastructure to the cloud. And we have customers at different stages of their journey to the cloud. Some are just starting out to use the cloud, some are migrating, and others have migrated, but they still need access to the cloud from on-prem. So the broad charter for these services is to enable customers to use AWS Storage from on-premises. So for example, DataStorage Gateway today is used by customers to get unlimited access to cloud storage from on-premises. And they can do that with low latency, so they can run their on-prem workloads, but still leverage storage in the cloud. In addition to that, we have DataSync, which we launched at re:Invent last year, in 2018. And DataSync essentially is designed to help customers move a lot of their on-premises storage to the cloud, and back and forth for workloads that involve replication, migration, or ongoing data transfers. So together, Gateway and DataSync help solve the access and transfer problem for customers. >> Let's double down on the benefits. You started the segment just sort of describing the problem that you're solving, connecting on-prem to cloud, sort of helping create these hybrid environments. So that's really the other benefit for customers, really simplifying that sort of hybrid approach, giving them high performance confidence that it actually worked. >> Maybe talk a little bit more about that. >> So with DataSync, we see two broad use cases. There is a class of customers that have adopted DataSync for migration. So we have customers like Autodesk who've migrated hundreds of terabytes from their on-premises storage to AWS. And that has allowed them to shut down their data center, or retire their existing storage, because they're on their journey to the cloud. The other class of use cases is customers that have ongoing data that they need to move to the cloud for a workload. So it could be data from video cameras, or gene sequencers that they need to move to a data pipeline in the cloud, and they can do further processing there. And in some cases, bring the results back. So that's the second continuous data transfer use case, that DataSync allows customers to address. >> You're also talking today, about Storage Gateway high availability version of Storage Gateway. What's behind that? >> Storage Gateway today is used by customers to get access to data in the cloud, from on-premises. So if we continue this migration story that I mentioned with DataSync, now you have a customer that has moved a large amount of data to the cloud. They can now access that same data from on-premises for latency reasons, or if they need to distribute data across organizations and so on. So that's where the Gateway comes into play. Today we have 10's of thousands of customers that are using Gateway to do their back-ups, do archiving, or in some cases, use it as a target to replace their on-premises storage, with cloud backed storage. So a lot of these customers are running business critical applications today. But then some of our customers have told us they want to do additional workloads that are uninterruptible. So they can not tolerate downtime. So with that requirement in mind, we are launching this new capability around high availability. And we're quite excited, because that's solving, yet allowing us to do even more workloads on the Gateway. This announcement will allow customers to have a highly available Gateway, in a VMware environment. With that, their workloads can continue running, even if one of the Gateways goes down, if they have a hardware failure, a networking event, or software error such as the file shares becoming unavailable. The Gateway automatically restarts, so the workloads remain uninterrupted. >> So talk a little bit more about how it works, just in terms of anything customers have to do, any prerequisites they have. How does it all fit? >> Customers can essentially use this in their VMware H.A. environment today. So they would deploy their Gateway much like they do today. They can download the Gateway from the AWS console. If they have an existing Gateway, the software gets updated so they can take advantage of the high availability feature as well. The Gateway integrates into the VMware H.A. environment. It builds up a number of health checks, so we keep monitoring for the application up-time, network up-time, and so on. And if there is an event, the health check gets communicated back to VMware, and the Gateway gets restarted within, in most typical cases, under 60 seconds. >> So customers that are VMware customers, can take advantage of this, and to them, it's very non disruptive it sounds like. That's one of the benefits. But maybe talk about some of the other benefits. >> We saw a large number of our on-premises customers, especially in the enterprise environments, use VMware today. And they're using VMware HA for a number of their other applications. So we wanted to plug into that environment so the Gateway is as well highly available. So all their applications just work in that same framework. And then along with high availability, we're also introducing two additional capabilities. One is real time reports and visibility into the Gateway's resource consumption. So customers can now see embedded cloud watch graphs on how is their storage being consumed, what's their cache utilization, what's the network utilization. And then the administrators can use that to, in fairly real time, adapt the resources that they've allocated to the Gateway. So with that, as their workloads change, they can continue to adapt their Gateway resources, so they're getting the maximum performance out of the Gateway. >> So if they see a performance problem, and it's a high priority, they can put more resources on it-- >> They can attach more storage to it, or move it to a higher resourced VM, and they can continue to get the performance they need. Previously they could still do that, but they had to have manual checks. Now this is all automated, we can get this in a single pane of control. And they can use the AWS console today, like they do for their in cloud workloads. They can use that to look at performance of their on-premises Gateway's as well. So it's one pane of control. They can get CloudWatch health reports on their infrastructure on-prem. >> And if course it's cloud, so I can assume this is a service, I pay for it when I used it, I don't have to install any infrastructure, right? >> So the Gateways, again, consumption based, much like all AWS services. You download the Gateway, it doesn't cost you anything. And we charge one cent per gigabyte of data transfer through the Gateway, and it's capped at $125 a month. And you just pay for whatever storage is consumed by the Gateway. >> When you talk to senior exec's like Andy Jassy, always says "We focus on the customers." And sometimes people roll their eyes, but it's true. This is a hybrid world. Years ago, you didn't really hear much talk about hybrid. You talked to your customers and say, "Hey, we want to connect our on-prem to the public cloud." You're bringing services to do that. Asa, thanks so much for coming to theCUBE. Appreciate it. >> Thank you, thanks for your time. >> You're welcome. And thank you for watching everybody. This is Dave Vellante with theCUBE. We'll be back right after this short break. (upbeat music)

Published Date : Nov 20 2019

SUMMARY :

Asa Kalavade's here, she's the general manager for but they still need access to the cloud from on-prem. So that's really the other benefit for customers, or gene sequencers that they need to move to You're also talking today, about Storage Gateway for latency reasons, or if they need to distribute just in terms of anything customers have to do, So they would deploy their Gateway So customers that are VMware customers, they can continue to adapt their Gateway resources, and they can continue to get the performance they need. So the Gateways, again, consumption based, You talked to your customers and say, This is Dave Vellante with theCUBE.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Asa KalavadePERSON

0.99+

Andy JassyPERSON

0.99+

Dave VellantePERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

AWSORGANIZATION

0.99+

BostonLOCATION

0.99+

AmazonORGANIZATION

0.99+

AutodeskORGANIZATION

0.99+

TodayDATE

0.99+

2018DATE

0.99+

OneQUANTITY

0.99+

hundreds of terabytesQUANTITY

0.99+

AsaPERSON

0.99+

todayDATE

0.99+

under 60 secondsQUANTITY

0.99+

last yearDATE

0.96+

one paneQUANTITY

0.96+

10'sQUANTITY

0.94+

two broad use casesQUANTITY

0.94+

re:InventEVENT

0.93+

two additional capabilitiesQUANTITY

0.93+

oneQUANTITY

0.93+

one cent per gigabyteQUANTITY

0.91+

VMwareORGANIZATION

0.91+

$125 a monthQUANTITY

0.91+

DataSyncTITLE

0.9+

VMwareTITLE

0.88+

single paneQUANTITY

0.85+

VMware HATITLE

0.81+

theCUBEORGANIZATION

0.78+

DataSyncORGANIZATION

0.77+

second continuousQUANTITY

0.77+

thousands of customersQUANTITY

0.75+

Storage Day 2019EVENT

0.73+

Years agoDATE

0.73+

GatewayTITLE

0.66+

StorageORGANIZATION

0.55+

useQUANTITY

0.55+

AWSEVENT

0.53+

CloudWatchTITLE

0.53+

DataStorage GatewayTITLE

0.53+

StorageTITLE

0.51+

GatewayCOMMERCIAL_ITEM

0.47+

Storage GatewayTITLE

0.44+

GatewayORGANIZATION

0.37+

Kevin Miller, Amazon Web Services | AWS Storage Day 2019


 

>>day, Volonte. And welcome to the Cuban special presentation here from Amazon in Boston. We're talking storage, really? A group of intelligent people here in the storage world and really excited to have Kevin Miller. You got hard news today around this. Think of replication. Time >>control. Yeah. >>What? What's that all about? What should we know about S3 replication? What problems isn't solving for customers? Why'd you do this? >>Yeah, absolutely. So we're very >>pleased to announce the launch today of s three replication Time control. >>This is a >>future that a number of customers across really across the board, large enterprise as well as public sector customers have asked us >>for to really give >>them insight and confidence that critical data they need to have replicated will be done in the time frames that they require. So we're actually today offering industry first s L. A of 99.9% of data will be replicated with within 15 minutes when using replication, time control and really, most data is replicated within a matter of seconds. But then having that escalate to >>back up that promise. So >>we have a number of customers who >>use as three replication today. Both is in our cross region replication as well as same region replication. And so the >>use cases really >>span the gamut from customers were looking to just back up their data so they might make a copy into a lower cost storage class to have a backup of that data. A CZ well as customers that I want to have on always on disaster recovery site, where they can replicate the data and then have a live hot, ready to go replication in another region for disaster >>recovery. Okay, so let's double click on that a little bit. Cross region replication. C r R >>r r >>Tell us more about that. What should we know there? >>Well, see, you're ours. The >>capability we've had for a long time, and it's it's a really critical capability. Ah, building block that our customers used to ensure that they can maintain a second copy of the data in another region. And so, and with Sierra are they can not only replicate the data, but they can actually replicate it into a completely different accounts so they can actually have two accounts that with potentially different access control and different administrators who can access those accounts. So they really have confidence that even if there was a knish you with their application in one region, that they can immediately begin operating in that second region. And so so we have customers who use replication for backup recovery, but also for a ZAY said sort of live replication to have ah always on D our site. >>Okay. And you also just recently announced the same reason. Region replication tell us more about that. >>Well, same region replication. It provides many of the benefits of cross region replication, but does so within one region. So we do have some customers >>who would like to, for example, make a backup copy of their data into a different account. But they >>need to >>maintain that data within the same geography, but perhaps for data, sovereignty reasons, or that they just want Thio, keep everything in one region but still have that second copy. So with same region replication, it's really just one parameter in the replication configuration, and they have all the benefits that we have historically had with across region application. >>So, Kevin, what should we be watching for? Just in terms of s three replication. Replication generally is very important for customers, really. But what's next for Amazon as three replication that we should be paying attention to? >>Well, you know, we, uh we think the >>replication today has ah range of different differentiated capabilities in terms of the ability to replicate on a tag level or replicate subset of the data. And so, you know, >>really, our goal with replication is just to make it as easy as >>possible for customers. Thio configure the replication they need Andi provide that flexibility while also providing the sort of the fully managed experience that we have with us three where you don't have to build your own software to do it. So, >>you know, we're gonna be continuing >>to work with customers. Uh, Thio simplify the things that they need to d'oh to configure replication for their different use cases. >>Let's talk about that customer angle you're just thinking about as three replication time control. What you expect customers to be saying about this, how they're going to be using it. What kind of problems are they going to be solved? >>Yeah, well, we have customers, you know, particularly those >>in regulated industries or in government public sector where they are under very stringent requirements. Thio be able to prove that they always have a second copy of the data and and this is the way that they can do that. So we air, you know, working with customers in with some of the tightest regulations you can imagine who were saying, Yeah, this is what I need with this capability Now I can I can watch it. I can monitor it. Ah, and I and more importantly, I know that the data is there for them. They can't start processing the data until they know that that second copy is is made. So they're using the replication time control metrics to really look at it real time and say, Okay, I'm ready to begin processing this data because I know I have both copies made. >>Well, it's great to see you guys really expanding the storage portfolio again. It started very simple, but you get that flywheel effect going. It's it's a critical part of the value chain. So congratulations. I'll give you last word, and >>I I just think that >>obviously that s three stands for simple storage service. And despite all of the flexibility and capability we're trying to build in. At the same time, simplicity is job number one for us. And so >>we're just >>really excited about with replication. Time control. Uh, we think that we've built something that both hits, that that mark of being simple but also provides just a lot of capability that that otherwise, you know, would take quite a bit of effort. >>Always a balance. Right? The simple you make it that war customers wants. Kevin, thanks so much for coming on The Cube. Really appreciate it. >>Absolutely. Thanks for having me. >>You're welcome. And thank you for watching everybody, right? Right back, Right after this short break.

Published Date : Nov 20 2019

SUMMARY :

people here in the storage world and really excited to have Kevin Miller. control. So we're very them insight and confidence that critical data they need to have replicated will be done So And so the to have on always on disaster recovery site, where they can replicate the data Okay, so let's double click on that a little bit. What should we know there? Well, see, you're ours. And so so we have customers who use replication tell us more about that. So we do have some customers But they and they have all the benefits that we have historically had with across region application. Just in terms of s three replication. in terms of the ability to replicate on a tag level or replicate subset that we have with us three where you don't have to build your own software to do it. things that they need to d'oh to configure replication for their different use cases. What kind of problems are they going to be solved? of the tightest regulations you can imagine who were saying, Yeah, this is what I need with this capability Well, it's great to see you guys really expanding the storage portfolio again. And despite all of the flexibility and capability that otherwise, you know, would take quite a bit of effort. The simple you make it that war customers wants. Thanks for having me. And thank you for watching everybody, right?

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Kevin MillerPERSON

0.99+

KevinPERSON

0.99+

Amazon Web ServicesORGANIZATION

0.99+

AmazonORGANIZATION

0.99+

BostonLOCATION

0.99+

second copyQUANTITY

0.99+

two accountsQUANTITY

0.99+

both copiesQUANTITY

0.99+

VolontePERSON

0.99+

todayDATE

0.99+

second regionQUANTITY

0.99+

threeQUANTITY

0.99+

BothQUANTITY

0.99+

one parameterQUANTITY

0.98+

one regionQUANTITY

0.98+

15 minutesQUANTITY

0.98+

three standsQUANTITY

0.97+

ThioPERSON

0.96+

AWSEVENT

0.96+

both hitsQUANTITY

0.93+

firstQUANTITY

0.89+

doubleQUANTITY

0.85+

CubanOTHER

0.81+

secondsQUANTITY

0.79+

SierraTITLE

0.78+

three replicationQUANTITY

0.76+

99.9% of dataQUANTITY

0.72+

AndiORGANIZATION

0.71+

Storage Day 2019EVENT

0.67+

ThioORGANIZATION

0.58+

CubeCOMMERCIAL_ITEM

0.38+

oneQUANTITY

0.35+