Mat Mathews & Randy Boutin, AWS | AWS Storage Day 2022


 

(upbeat music) >> Welcome to theCube's coverage of AWS Storage Day. We're here with a couple of AWS product experts. Covering AWS's migration and transfer services, Randy Boutin is the general manager of AWS DataSync, and Mat Matthews, GM of AWS Transfer Family. Guys, good to see you again. Thanks for coming on. >> Dave, thanks. >> So look, we saw during the pandemic, the acceleration to cloud migration. We've tracked that, we've quantified that. What's driving that today? >> Yeah, so Dave, great to be back here. Saw you last year at Storage Day. >> Nice to be in studio too, isn't it? Thanks, guys, for coming in. >> We've conquered COVID. >> So yeah, I mean, this is a great question. I think digital transformation is really what's driving a lot of the focus right now from companies, and it's really not about just driving down costs. It's also about what are the opportunities available once you get into the cloud in terms of, what does that unlock in terms of innovation? So companies are focused on the usual things, optimizing costs, but ensuring they have the right security and agility. You know, a lot has happened over the last year, and companies need to be able to react, right? They need to be able to react quickly, so cloud gives them a lot of these capabilities, but the real benefit that we see is that once your data's in the cloud, it opens up the power of the cloud for analytics, for new application development, and things of that sort, so what we're seeing is that companies are really just focused on understanding cloud migration strategy, and how they can get their data there, and then use that to unlock that data for the value. >> I mean, if I've said it once, I've said it 100 times, if you weren't a digital business during the pandemic, you were out of business. You know, migration historically is a bad word in IT. Your CIOs see it and go, "Ugh." So what's the playbook for taking years of data on-prem, and moving it into the cloud? 
What are you seeing as best practice there? >> Yeah, so as you said, the migration historically has been painful, right? And it's a daunting task for any business or any IT executive, but fortunately, AWS has a broad suite of capabilities to help enable these migrations. And by that, I mean, we have tools to help you understand your existing on-prem workloads, understand what services in the AWS offering align to those needs, but also help you estimate the cost, right? Cost is a big part of this move. We can help you estimate that cost, and predict that cost, and then use tools like DataSync to help you move that data when that time comes. >> So you're saying you help predict the cost of the migration, or the cost of running in the cloud? >> Running in the cloud, right. Yeah, we can help estimate the run time. Based on the performance that we assess on-prem, we can then project that into a cloud service, and estimate that cost. >> So can you guys explain DataSync? Sometimes I get confused, DataSync, what's the difference between DataSync and Storage Gateway? And I want to get into when we should use each, but let's start there if we could. >> Yeah, sure, I'll take that. So Storage Gateway is primarily a means for a customer to access their data in the cloud from on-prem. All right, so if you have an application that you want to keep on-prem, you're not ready yet to migrate that application to the cloud, Gateway is a strong solution, because you can move a lot of that data, a lot of your cold or long tail data into something like S3 or EFS, but still access it from your on-prem location. DataSync's all about data movement, so if you need to move your data from A to B, DataSync is your optimized solution to do that. >> Are you finding that people, that's ideally a one time move, or is it actually, sometimes you're seeing customers do it more? Again, moving data, if I don't- Move as much data as you need to, but no more, to paraphrase Einstein. 
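Randy's description of DataSync as the optimized way to move data from A to B comes down to incremental transfer: copy only what is new or changed. A toy sketch of that idea in Python (this illustrates the concept only, not DataSync's actual engine, which also handles verification, parallelism, and network optimization):

```python
import os
import shutil

def sync(src: str, dst: str) -> list[str]:
    """Copy files from src to dst, skipping files already present with the
    same size and modification time -- a toy version of the incremental
    transfer idea behind tools like DataSync."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            st = os.stat(s)
            if os.path.exists(d):
                dt = os.stat(d)
                if dt.st_size == st.st_size and int(dt.st_mtime) == int(st.st_mtime):
                    continue  # unchanged since last sync -> skip
            shutil.copy2(s, d)  # copy2 preserves the modification time
            copied.append(os.path.join(rel, name))
    return copied
```

Run it twice against the same source and the second pass copies nothing, which is exactly the property that makes repeated migrations cheap.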
>> What we're seeing in DataSync is that customers do use DataSync for their initial migration. They'll also, as Matt was mentioning earlier, once you get your data into the cloud, that flywheel of potential starts to take hold, and customers want to ultimately move that data within the cloud to optimize its value. So you might move from service to service. You might move from EFS to S3, et cetera, to enable the cloud flywheel to benefit you. DataSync does that as well, so customers use us to initially migrate, they use us to move within the cloud, and also we just recently announced service for other clouds, so you can actually bring data in now from Google and Azure as well. >> Oh, how convenient. So okay, so that's cool. So you helped us understand the use cases, but can we dig one more layer, like what protocols are supported? I'm trying to understand really the right fit for the right job. >> Yeah, so that's really important. So for transfer specifically, one of the things that we see with customers is you've got obviously a lot of internal data within your company, but today it's a very highly interconnected world, so companies deal with lots of business partners, and historically they've used, there's a big prevalence of using file transfer to exchange data with business partners, and as you can imagine, there's a lot of value in that data, right? Sometimes it's purchase orders, inventory data from suppliers, or things like that. So historically customers have had protocols like SFTP or FTP to help them interface with or exchange data or files with external partners. So for transfer, that's what we focus on is helping customers exchange data over those existing protocols that they've used for many years. 
And the real focus is it's one thing to migrate your own data into the cloud, but you can't force thousands or tens of thousands sometimes of partners to also work in a different way to get you their data, so we want to make that very seamless for customers using the same exact protocols like SFTP that they've used for years. We just announced AS2 protocol, which is very heavily used in supply chains to exchange inventory and information across multi-tiers of partners, and things of that nature. So we're really focused on letting customers not have to impact their partners, and how they work and how they exchange, but also take advantage of the data, so get that data into the cloud so they can immediately unlock the value with analytics. >> So AS2 is specifically in the context of supply chain, and I'm presuming it's secure, and kind of governed, and safe. Can you explain that a little bit? >> Yeah, so AS2 has a lot of really interesting features for transactional type of exchanges, so it has signing and encryption built in, and also has notification so you can basically say, "Hey, I sent you this purchase order," and to prove that you received it, it has a capability called non-repudiation, which means it's actually a legal transaction. So those things are very important in transactional type of exchanges, and allows customers in supply chains, whether it's vendors dealing with their suppliers, or transportation partners, or things like that to leverage file transfer for those types of exchanges. >> So encryption, provenance of transactions, am I correct, without having to use the blockchain, and all the overhead associated with that? >> It's got some built in capabilities. >> I mean, I love blockchain, but there's drawbacks. >> Exactly, and that's why it's been popular. >> That's really interesting, 'cause Andy Jassy one day, I was on a phone call with him and John Furrier, and we were talking up crypto and blockchain. He said, "Well, why do, explain to me."
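The AS2 features Mat describes, signing, encryption, and signed receipts (MDNs) for non-repudiation, rest on certificate-based S/MIME in the real protocol. A stripped-down sketch of the receipt idea, with stdlib HMAC standing in for real certificates and every key name hypothetical:

```python
import hashlib
import hmac

def send(payload: bytes, sender_key: bytes) -> dict:
    """Sender signs the payload (AS2 uses S/MIME certificates; HMAC stands in here)."""
    return {"payload": payload,
            "signature": hmac.new(sender_key, payload, hashlib.sha256).hexdigest()}

def receive(message: dict, sender_key: bytes, receiver_key: bytes) -> dict:
    """Receiver verifies the signature, then returns a signed receipt over the
    payload digest -- the MDN-style acknowledgment that gives AS2 its
    non-repudiation property: the sender can prove exactly what was received."""
    expected = hmac.new(sender_key, message["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["signature"]):
        raise ValueError("signature mismatch")
    digest = hashlib.sha256(message["payload"]).hexdigest()
    return {"received_digest": digest,
            "receipt_signature": hmac.new(receiver_key, digest.encode(),
                                          hashlib.sha256).hexdigest()}
```

Because the receipt itself is signed by the receiver over a digest of the exact bytes, neither side can later deny what was exchanged, which is the "legal transaction" property mentioned above.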
You know Jassy, right? He always wants to go deeper. "Explain why I can't do this with some other approach." And so I think he was recognizing some of the drawbacks. So that's kind of a cool thing, and it leads me- We're running this obviously today, August 10th. Yesterday we had our Supercloud event in Palo Alto on August 9th, and it's all about the ecosystem. One of the observations we made about the 2020s is the cloud is totally different now. People are building value on top of the infrastructure that you guys have built out over the last 15 years. And so once an organization's data gets into the cloud, how does it affect, and it relates to AS2 somewhat, how does it affect the workflows in terms of interacting with external partners, and other ecosystem players that are also in the cloud? >> Yeah, great, yeah, again, we want to try and not have to affect those workflows, take them as they are as much as possible, get the data exchange working. One of the things that we focus on a lot is, how do you process this data once it comes in? Every company has governance requirements, security requirements, and things like that, so they usually have a set of things that they need to automate and orchestrate for the data as it's coming in, and a lot of these companies use something called Managed File Transfer Solutions that allow them to automate and orchestrate those things. We also see that many times this is very customer specific, so a bank might have a certain set of processes they have to follow, and it needs to be customized. 
As you know, AWS is a great solution for building custom solutions, and actually today, we're just announcing a new set of partners in a program called the Service Delivery Program with AWS Transfer Family that allows customers to work with partners that are very well versed in Transfer Family and related services to help build a very specific solution that allows them to build that automation orchestration, and keep their partners kind of unaware that they're interfacing in a different way. >> And once this data is in the cloud, or actually, maybe stays on-prem in some cases, but it basically plugs in to the AWS services portfolio, the whole security model, the governance model, shared responsibility comes in, is that right? It's all, sort of all in there? >> Yeah, that's right, that's exactly right, and we're working with, it's all about the customer's needs, and making sure that their investment in AWS doesn't disrupt their existing workflows and their relationships with their customers and their partners, and that's exactly what Matt's been describing is we're taking a close look at how we can extend the value of AWS, integrate into our customer's workflows, and bring that value to them with minimal investment or disruption. >> So follow up on that. So I love that, because less disruption means it's easier, less friction, and I think of like, trying to think of examples. Think about data de-duplication like purpose-built backup appliances, right? Data Domain won that battle, because they could just plug right in. Avamar, they were trying to get you to redo everything, okay, and so we saw that movie play out. At the same time, I've talked to CIOs that say, "I love that, but the cloud opens up all these cool new opportunities for me to change my operating model." So are you seeing that as well? Where okay, we make it easy to get in. We're not disrupting workflows, and then once they get in, they say, "Well if we did it this way, we'd take out a bunch of costs.
We'd accelerate our business." What's that dynamic like? >> Exactly that, right. So that moved to the Cloud Continuum. We don't think it's going to be binary. There's always going to be something on-prem. We accept that, but there's a continuum there, so day one, they'll migrate a portion of that workload into the cloud, start to extract and see value there, but then they'll continue, as you said, they'll continue to see opportunities. With all of the various capabilities that AWS has to offer, all the value that represents, they'll start to see that opportunity, and then start to engage and consume more of those features over time. >> Great, all right, give us the bumper sticker. What's next in transfer services from your perspectives? >> Yeah, so we're obviously always going to listen to our customers, that's our focus. >> You guys say that a lot. (all laughing) We say it a lot. But yeah, so we're focused on helping customers again increase that level of automation orchestration, again that suite of capability, generally, in our industry, known as managed file transfer, when a file comes in, it needs to get maybe encrypted, or decrypted, or compressed, or decompressed, scanned for viruses, those kind of capabilities, make that easier for customers. If you remember last year at Storage Day, we announced a low code workflow framework that allows customers to kind of build those steps. We're continuing to add built-in capabilities to that so customers can easily just say, "Okay, I want these set of activities to happen when files come in and out." So that's really what's next for us. >> All right, Randy, we'll give you the last word. Bring us home. >> I'm going to surprise you with the customer theme. >> Oh, great, love it. 
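The on-arrival steps Mat lists for managed file transfer (decrypt, decompress, scan, and so on) amount to a pipeline of transformations applied in order as a file lands. A minimal sketch of that shape, with a stub scanner; none of this is the actual Transfer Family workflow API, just the underlying idea:

```python
import gzip
from typing import Callable

# A workflow step takes the file's bytes and returns (possibly transformed) bytes.
Step = Callable[[bytes], bytes]

def decompress(data: bytes) -> bytes:
    """Undo gzip compression, one common on-arrival step."""
    return gzip.decompress(data)

def virus_scan(data: bytes) -> bytes:
    """Stub engine: a real step would call out to one or more scanners."""
    if b"malicious" in data:
        raise ValueError("scan failed")
    return data

def run_workflow(data: bytes, steps: list[Step]) -> bytes:
    """Apply each on-arrival step in order, the way a managed file
    transfer workflow processes a file when it lands on the endpoint."""
    for step in steps:
        data = step(data)
    return data
```

Because steps are plain functions, adding an encryption or classification stage is just another entry in the list, which is roughly the appeal of the low-code workflow framework described above.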
>> Yeah, so we're listening to customers, and what they're asking for is support for more sources, so we'll be adding support for more cloud sources, more on-prem sources, and giving the customers more options, also performance and usability, right? So we want to make it easier, as the enterprise continues to consume the cloud, we want to make DataSync and the movement of their data as easy as possible. >> I've always said it starts with the data. S3, that was the first service, and the other thing I've said a lot is the cloud is expanding. We're seeing connections to on-prem. We're seeing connections out to the edge. It's just becoming this massive global system, as Werner Vogels talks about all the time. Thanks, guys, really appreciate it. >> Dave, thank you very much. >> Thanks, Dave. >> All right, keep it right there for more coverage of AWS Storage Day 2022. You're watching theCube. (upbeat music)

Published Date : Aug 12 2022


Ed Casmer, Cloud Storage Security | CUBE Conversation


 

(upbeat music) >> Hello, and welcome to "theCUBE" conversation here in Palo Alto, California. I'm John Furrier, host of "theCUBE," got a great security conversation, Ed Casmer, who's the founder and CEO of Cloud Storage Security, the great Cloud background, Cloud security, Cloud storage. Welcome to the "theCUBE Conversation," Ed. Thanks for coming on. >> Thank you very much for having me. >> I've got FOMO on that background. You got the nice look there. Let's get into the storage blind spot conversation around Cloud Security. Obviously, re:Inforce came up a ton, you heard a lot about encryption, automated reasoning but still ransomware was still hot. All these things are continuing to be issues on security but they're all brought on data and storage, right? So this is a big part of it. Tell us a little bit about how you guys came about the origination story. What is the company all about? >> Sure, so, we're a pandemic story. We started in February right before the pandemic really hit and we've survived and thrived because it is such a critical thing. If you look at the growth that's happening in storage right now, we saw this at re:Inforce. We saw it even at a recent AWS Storage Day. Their S3, in particular, houses over 200 trillion objects. If you look just 10 years ago, in 2012, Amazon touted how they were housing one trillion objects, so in a 10 year period, it's grown to 200 trillion and really most of that has happened in the last three or four years, so the pandemic and the shift in the ability and the technologies to process data better has really driven the need and driven the Cloud growth. >> I want to get into some of the issues around storage. Obviously, the trend on S3, look at what they've done. I mean, I saw Mai-Lan at Storage Day today. We've interviewed her. She's amazing. Just the EC2 and S3, the core pistons of AWS, obviously, the silicon's getting better, the IaaS layers just getting so much more innovation.
You got more performance, abstraction layers, the PaaS is emerging, Cloud operations on premise now with hybrid is becoming a steady state and if you look at all the action, it's all this hyper-converged kind of conversations but it's not hyper-converged in a box, it's Cloud Storage, so there's a lot of activity around storage in the Cloud. Why is that? >> Well, because it's that companies are defined by their data and, if a company's data is growing, the company itself is growing. If it's not growing, they are stagnant and in trouble, and so, what's been happening now and you see it with the move to Cloud especially over the on-prem storage sources is people are starting to put more data to work and they're figuring out how to get the value out of it. Recent analysts made a statement that if the Fortune 1000 could just share and expose 10% more of their data, they'd have net revenue increases of 65 million. So it's just the ability to put that data to work and it's so much more capable in the Cloud than it has been on-prem to this point.
My east west traffic after I've blocked them from coming in but no one's thinking about the data itself and ultimately, you want to make that data very safe for the consumers of the data. They have an expectation and almost a demand that the data that they consume is safe and so, companies are starting to have to think about that. They haven't thought about it. It has been a blind spot, you mentioned that before. In regards to, I am protecting my management plane, we use posture management tools. We use automated services. If you're not automating, then you're struggling in the Cloud. But when it comes to the data, everyone thinks, "Oh, I've blocked access. I've used firewalls. I've used policies on the data," but they don't think about the data itself. It is that packet that you talked about that moves around to all the different consumers and the workflows and if you're not ensuring that that data is safe, then, you're in big trouble and we've seen it over and over again. >> I mean, it's definitely a hot category and it's changing a lot, so I love this conversation because it's a primary one, primary and secondary cover data cotton storage. It's kind of good joke there, but all kidding aside, it's a hard, you got data lineage tracing is a big issue right now. We're seeing companies come out there and kind of superability tangent there. The focus on this is huge. I'm curious, what was the origination story? What got you into the business? Was it like, were you having a problem with this? Did you see an opportunity? What was the focus when the company was founded? >> It's definitely to solve the problems that customers are facing. What's been very interesting is that they're out there needing this. They're needing to ensure their data is safe. As the whole story goes, they're putting it to work more, we're seeing this. 
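The blind spot Ed describes, data that is access-controlled at the perimeter but never inspected itself, is usually closed with a scan gate in the storage path. A toy sketch with an in-memory store and a made-up signature list standing in for real AV engines:

```python
# Toy signatures standing in for real AV engine definitions (hypothetical).
KNOWN_BAD_SIGNATURES = [b"malicious-test-marker"]

class ScanBlocked(Exception):
    """Raised when an object fails the safety scan."""

def scan(data: bytes) -> bool:
    """Return True if the object is clean."""
    return not any(sig in data for sig in KNOWN_BAD_SIGNATURES)

def put_object(store: dict, key: str, data: bytes) -> None:
    """Scan-on-write: refuse to store an object that fails the scan,
    so unsafe data never lands in the bucket at all."""
    if not scan(data):
        raise ScanBlocked(f"upload of {key!r} blocked")
    store[key] = data

def get_object(store: dict, key: str) -> bytes:
    """Scan-on-read: re-check on retrieval, in case signature
    definitions were updated after the object was written."""
    data = store[key]
    if not scan(data):
        raise ScanBlocked(f"download of {key!r} blocked")
    return data
```

The point of gating both directions is that "the data is behind a firewall" says nothing about whether the data itself is safe for the next consumer.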
I thought it was a really interesting series, one of your last series about data as code and you saw all the different technologies that are processing and managing that data and companies are leveraging today but still, once that data is ready and it's consumed by someone, it's causing real havoc if it's not either protected from being exposed or safe to use and consume and so that's been the biggest thing. So we saw a niche. We started with this notion of Cloud Storage being object storage, and there was nothing there protecting that. Amazon has the notion of access and that is how they protect the data today but not the packets themselves, not the underlying data and so, we created the solution to say, "Okay, we're going to ensure that that data is clean. We're also going to ensure that you have awareness of what that data is, the types of files you have out in the Cloud, wherever they may be, especially as they drift outside of the normal platforms that you're used to seeing that data in. >> It's interesting that people were storing data in data lakes. Oh yeah, just store it all, we might need it, and then it became a data swamp. That's kind of like going back six, seven years ago. That was the conversation. Now, the conversation is I need data. It's got to be clean. It's got to feed the machine learning. This is going to be a critical aspect of the business model for the developers who are building the apps, hence the data as code reference which we've focused on but then you say, "Okay, great. Does this increase our surface area for potential hackers?" So there's all kinds of things that kind of open up, we start doing cool, innovative, things like that so, what are some of the areas that you see that your tech solves around some of the blind spots or with object store, the things that people are overlooking? What are some of the core things that you guys are seeing that you're solving?
>> So, it's a couple of things, right now, still the biggest thing you see in the news is configuration issues where people are losing their data or accidentally opening up writes. That's the worst case scenario. Reads are a bad thing too, but if you open up writes, and we saw this with a major API vendor in the last couple of years, they accidentally opened writes to their buckets. Hackers found it immediately and put malicious code into their APIs that were then downloaded and consumed by many, many of their customers so, it is happening out there. So the notion of ensuring configuration is good and proper, ensuring that data has not been augmented inappropriately and that it is safe for consumption is where we started and, we created a lightweight, highly scalable solution. At this point, we've scanned billions of files for customers and petabytes of data and we're seeing that it's such a critical piece to that to make sure that that data's safe. The big thing and you brought this up as well is the big thing is they're getting data from so many different sources now. It's not just data that they generate. You see one centralized company taking in from numerous sources, consolidating it, creating new value on top of it, and then releasing that and the question is, do you trust those sources or not? And even if you do, they may not be safe. >> We had an event around Superclouds, a topic we brought up to bring attention to the complexity of hybrid which is on premise, which is essentially Cloud operations. And the successful people that are doing things in the software side are essentially abstracting up the benefits of the infrastructure as a service from AWS, right, which is great. Then they innovate on top, so they have to abstract that, and storage is a key component of where we see the innovations going. How do you see your tech that kind of connecting with that trend that's coming which is everyone wants infrastructure as code.
I mean, that's not new. I mean, that's the goal and it's getting better every day but DevOps, the developers are driving the operations and security teams to keep pace, so we're seeing a lot of policy, seeing some cool things going on that's abstracting up from, say, storage and compute but then those are being put to use as well, so you've got this new wave coming around the corner. What's your reaction to that? What's your vision on that? How do you see that evolving? >> I think it's great, actually. I think that the biggest problem that you have to do as someone who is helping them with that process is make sure you don't slow it down. So, just like Cloud at scale, you must automate, you must provide different mechanisms to fit into workflows that allow them to do it just how they want to do it and don't slow them down. Don't hold them back and so, we've come up with different measures to provide and pretty much a fit for any workflow that any customer has come so far with. We do data this way. I want you to plug in right here. Can you do that? And so it's really about being able to plug in where you need to be, and don't slow 'em down. That's what we found so far. >> Oh yeah, I mean that exactly, you don't want to solve complexity with more complexity. That's the killer problem right now so take me through the use case. Can you just walk me through how you guys engage with customers? How they consume your service? How they deploy it? You got some deployment scenarios. Can you talk about how you guys fit in and what's different about what you guys do? >> Sure, so, what we're seeing is, and I'll go back to this data coming from numerous sources. We see different agencies, different enterprises taking data in and maybe their solution is intelligence on top of data, so they're taking these data sets in whether it's topographical information or whether it's investing type information.
Then they process that and they scan it and they distribute it out to others. So, we see that happening as a big common piece through data ingestion pipelines, that's where these folks are getting most of their data. The other is where is the data itself, the document or the document set, the actual critical piece that gets moved around and we see that in pharmaceutical studies, we see it in the mortgage industry and FinTech and healthcare and so, anywhere that, let's just take a very simple example, I have to apply for insurance. I'm going to upload my Social Security information. I'm going to upload a driver's license, whatever it happens to be. I want to, one, know which of my information is personally identifiable, so I want to be able to classify that data but because you're trusting or because you're taking data from untrusted sources, then you have to consider whether or not it's safe for you to use as your own folks and then also for the downstream users as well. >> It's interesting, in the security world, we hear zero trust and then we hear supply chain, software supply chains. We get to trust everybody, so you got kind of two things going on. You got the hardware kind of like all the infrastructure guys saying, "Don't trust anything 'cause we have a zero trust model," but as you start getting into the software side, it's like trust is critical like containers and Cloud native services, trust is critical. You guys are kind of on that balance where you're saying, "Hey, I want data to come in. We're going to look at it. We're going to make sure it's clean." That's the value here. Is that what I'm hearing you, you're taking it and you're saying, "Okay, we'll ingest it and during the ingestion process, we'll classify it. We'll do some things to it with our tech and put it in a position to be used properly." Is that right? >> That's exactly right.
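Classifying which parts of an uploaded document are personally identifiable, as in Ed's insurance example, is at its simplest pattern matching. A deliberately minimal sketch (real classifiers add context, checksums, and ML models; these US-centric regexes are illustrative only):

```python
import re

# Illustrative patterns only -- production classifiers use many more
# signals than bare regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of PII categories detected in a blob of text."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}
```

Tagging objects this way is also what makes drift detectable: if an object labeled `ssn` turns up outside the workflow it belongs to, that is the "why is PII over here?" alarm.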
That's a great summary, but ultimately, if you're taking data in, you want to ensure it's safe for everyone else to use and there are a few ways to do it. Safety doesn't just mean whether it's clean or not. Is there malicious content or not? It means that you have complete coverage and control and awareness over all of your data and so, I know where it came from. I know whether it's clean and I know what kind of data is inside of it and we see that the cleanliness factor is so critical in the workflow, but we see the classification expand outside of that because if your data drifts outside of what your standard workflow was, that's when you have concerns, why is PII information over here? And that's what you have to stay on top of, just like AWS's control plane. You have to manage it all. You have to make sure you know what services have all of a sudden been exposed publicly or not, or maybe something's been taken over or not and you control that. You have to do that with your data as well. >> So how do you guys fit into the security posture? Say it's a large company that might want to implement this right away. Sounds like it's right in line with what developers want and what people want. It's easy to implement from what I see. It's about 10, 15, 20 minutes to get up and running. It's not hard. It's not a heavy lift to get in. How do you guys fit in once you get operationalized when you're successful? >> It's a lightweight, highly scalable serverless solution, it's built on Fargate containers and it goes in very easily and then, we offer either native integrations through S3 directly, or we offer APIs and the APIs are what a lot of our customers who want inline realtime scanning leverage and we also are looking at offering the actual proxy aspects. So those folks who use the S3 APIs that are native to AWS, puts and gets.
We can actually leverage our put and get as an endpoint and when they retrieve the file or place the file in, we'll scan it on access as well, so, it's not just a one-time scan of data at rest. It can be data in motion as you're retrieving the information as well. >> We were talking with our friends the other day and we're talking about companies like Datadog. This is the model people want, they want to come in and developers are driving a lot of the usage and operational practice so I have to ask you, this fits kind of right in there but also, you also have the corporate governance policy police that want to make sure that things are covered so, how do you balance that? Because that's an important part of this as well. >> Yeah, we're really flexible for the different ways they want to consume and interact with it. But then also, that is such a critical piece. So many of our customers, we probably have a 50/50 breakdown of those inside the US versus those outside the US and so, you have those in California with their information protection act. You have GDPR in Europe and you have Asia having their own policies as well and the way we solve for that is we scan close to the data and we scan in the customer's account, so we don't require them to lose chain of custody and send data outside of the account. That is so critical to that aspect. And then we don't ask them to transfer it outside of the region, so, that's another critical piece is data residency has to be involved as part of that compliance conversation. >> How much does Cloud enable you to do this that you couldn't really do before? I mean, this really shows the advantage of natively being in the Cloud to kind of take advantage of the IaaS to SaaS components to solve these problems. Share your thoughts on how this is possible. What if there was no problem, what would you do? >> It really makes it a piece of cake.
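Looping back to the misconfiguration Ed mentioned earlier, a bucket accidentally opened for public writes, that class of problem is mechanically checkable. A sketch that flags an IAM-style bucket policy granting `s3:PutObject` to everyone (the policy fields follow the real IAM policy document format, but the checker itself is simplified):

```python
def allows_public_write(policy: dict) -> bool:
    """Return True if any statement in an IAM-style bucket policy
    grants a write action (s3:PutObject or s3:*) to all principals ('*')."""
    write_actions = {"s3:PutObject", "s3:*"}
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*")
        if not public:
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]  # Action may be a string or a list
        if write_actions & set(actions):
            return True
    return False
```

A real posture tool would also evaluate Condition blocks, NotPrincipal, ACLs, and account-level public access settings, which is why this stays a sketch.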
As silly as that sounds, when we deploy our solution, we provide a management console for them that runs inside their own accounts. So again, no metadata or anything has to come out of it, and it's all push-button click, and because the Cloud makes it scalable and offers infrastructure as code, we can take advantage of that, and then, when they say go protect data in the Ireland region, they push a button, we stand up a stack right there in the Ireland region and scan and protect their data right there. If they say we need to be in GovCloud and operate in GovCloud East, there you go, push the button and you can behave in GovCloud East as well. >> And with serverless and the region support and all the goodness, it really makes a good opportunity to manage these Cloud native services with the data interaction, so really good prospects. Final question for you. I mean, we love the story. I think it is going to be a really changing market in this area in a big way. I think the data storage relationship relative to higher level services will be huge as Cloud native continues to drive everything. What's the future? I mean, do you guys see yourself as an all-encompassing, all singing and dancing storage platform, or a set of services where you're going to enable developers and drive that value? Where do you see this going? >> I think that it's a mix of both. Ultimately, you saw even on Storage Day the announcement of File Cache, and File Cache creates a new common namespace across different storage platforms, and so, the notion of being able to use one area to access your data and have it come from different spots is fantastic. That's been in the on-prem world for a couple of years and it's finally making it to the Cloud. I see us following that trend and helping support it. We're super laser-focused on Cloud Storage itself, so, EBS volumes, we keep having customers come to us and say, "I don't want to run agents in my EC2 instances.
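The push-button, per-region deployment described here rests on infrastructure as code. A hedged sketch of assembling a CloudFormation CreateStack request for a chosen region follows; the stack name, template URL, and tag keys are illustrative assumptions, and a real deployment would hand this request to boto3's CloudFormation client for that region:

```python
def build_stack_request(region, template_url):
    """Build a CreateStack request that deploys the scanner into one region,
    so data is scanned where it lives (data residency)."""
    # GovCloud regions live in a separate partition with its own endpoints.
    partition = "aws-us-gov" if region.startswith("us-gov-") else "aws"
    return {
        "StackName": f"storage-scanner-{region}",
        "TemplateURL": template_url,
        "Capabilities": ["CAPABILITY_NAMED_IAM"],
        "Tags": [
            {"Key": "partition", "Value": partition},
            {"Key": "scan-region", "Value": region},
        ],
    }
```

The same template works for Ireland or GovCloud East; only the region (and therefore the partition) changes.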
I want you to snap and scan, and I don't want to, I've got all this EFS and FSx out there that we want to scan," and so, we see that all of the Cloud Storage platforms, Amazon WorkDocs, EFS, FSx, EBS, S3, will all come together and we'll provide a solution that's super simple, highly scalable, that can meet all the storage needs, so that's our goal right now and what we're working towards. >> Well, Cloud Storage Security, you couldn't get a more descriptive name of what you guys are working on, and again, I've had many contacts with Andy Jassy when he was running AWS and he always loves to quote "The Innovator's Dilemma," one of his teachers at Harvard Business School, and we were riffing on that the other day and I want to get your thoughts. It's not so much "The Innovator's Dilemma" anymore relative to Cloud 'cause that's kind of a done deal. It's "The Integrator's Dilemma," and so, the integrations are so huge now. If you don't integrate the right way, that's the new dilemma. What's your reaction to that? >> 100% agreed. It's been super interesting. Our customers have come to us for a security solution, and they don't expect us to be, 'cause we don't want to be either, our own engine vendor. We're not the ones creating the engines. We are integrating other engines in, and so we can provide a multi-engine scan that gives you higher efficacy. So this notion of offering simple integrations without slowing down the process, that's the key factor here, it's what we've been after, so we are about simplifying the Cloud experience of protecting your storage, and it's been so funny because I thought customers might complain that we're not a name-brand engine vendor, but they love the fact that we have multiple engines in place and we're bringing them this higher-efficacy, multi-engine scan.
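The agentless snap-and-scan pattern alluded to here, snapshot the EBS volume and scan the snapshot out-of-band instead of running an agent inside the instance, can be outlined like this. The three callables are injected stand-ins so the sketch stays runnable; in practice they would wrap EC2's CreateSnapshot and the EBS direct APIs (ListSnapshotBlocks and GetSnapshotBlock):

```python
def snap_and_scan(volume_id, create_snapshot, read_blocks, scan):
    """Agentless EBS scanning: snapshot the volume, then scan the snapshot's
    blocks out-of-band, so nothing runs inside the EC2 instance itself."""
    snapshot_id = create_snapshot(volume_id)
    verdicts = {}
    # read_blocks yields (block_index, data) pairs from the snapshot.
    for block_index, data in read_blocks(snapshot_id):
        result = scan(data)
        if result != "clean":
            verdicts[block_index] = result
    return snapshot_id, verdicts
```

Because the scan runs against the snapshot, the workload on the volume is never paused and no agent needs installing or patching.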
You make it faster, smarter, higher velocity and more protected, that's a winning formula in the Cloud so Ed, congratulations and thanks for spending the time to riff on and talk about Cloud Storage Security and congratulations on the company's success. Thanks for coming on "theCUBE." >> My pleasure, thanks a lot, John. >> Okay. This conversation here in Palo Alto, California I'm John Furrier, host of "theCUBE." Thanks for watching.

Published Date : Aug 11 2022



Danny Allan, Veeam & James Kirschner, Amazon | AWS re:Invent 2021


 

(innovative music) >> Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021. My name is Dave Vellante, and we are running one of the industry's most important and largest hybrid tech events of the year. Hybrid as in physical, not a lot of that going on this year. But we're here with the AWS ecosystem, AWS, and special thanks to AMD for supporting this year's editorial coverage of the event. We've got two live sets, two remote studios, more than a hundred guests on the program. We're going really deep, as we enter the next decade of Cloud innovation. We're super excited to be joined by Danny Allan, who's the Chief Technology Officer at Veeam, and James Kirschner who's the Engineering Director for Amazon S3. Guys, great to see you. >> Great to see you as well, Dave. >> Thanks for having me. >> So let's kick things off. Veeam and AWS, you guys have been partnering for a long time. Danny, where's the focus at this point in time? What are customers telling you they want you to solve for? And then maybe James, you can weigh in on the problems that customers are facing, and the opportunities that they see ahead. But Danny, why don't you start us off? >> Sure. So we hear from our customers a lot that they certainly want the solutions that Veeam is bringing to market, in terms of data protection. But one of the things that we're hearing is they want to move to Cloud. And so there's a number of capabilities that they're asking us for help with. Things like S3, things like EC2, and RDS. And so over the last, I'll say four or five years, we've been doing more and more together with AWS in, I'll say, two big categories. One is, how do we help them send their data to the Cloud? And we've done that in a very significant way. We support obviously tiering data into S3, but not just S3. We support S3, and S3 Glacier, and S3 Glacier Deep Archive. And more importantly than ever, we do it with immutability because customers are asking for security. 
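The immutability Danny mentions maps to S3 Object Lock. A rough sketch of the parameters a backup writer could attach to each PutObject so a restore point cannot be altered or deleted until its retention date; the retention policy itself is an assumption here, not Veeam's actual logic:

```python
from datetime import datetime, timedelta, timezone

def immutable_put_params(bucket, key, retention_days):
    """Build PutObject parameters that make a backup object immutable
    (WORM) until the retention date, via S3 Object Lock compliance mode."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "ObjectLockMode": "COMPLIANCE",  # cannot be shortened or removed
        "ObjectLockRetainUntilDate": retain_until,
    }
```

With COMPLIANCE mode, even an attacker with account credentials cannot delete the backup before the retention date passes, which is what makes the copy ransomware-resistant.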
So a big category of what we're working on is making sure that we can store data and we can do it securely. Second big category that we get asked about is "Help us to protect the Cloud-Native Workloads." So they have workloads running in EC2 and RDS, and EFS, and EKS, and all these different services needing Cloud-Native Data Protection. So we're very focused on solving those problems for our customers. >> You know, James, it's interesting. I was out at the 15th anniversary of S3 in Seattle, in September. I was talking to Mai-Lan. Remember we used to talk about gigabytes and terabytes, but things have changed quite dramatically, haven't they? What's your take on this topic? >> Well, they sure have. We've seen the exponential growth of data worldwide, and that's made managing backups more difficult than ever before. We're seeing traditional methods like tape libraries and secondary sites fall behind, and many organizations are moving more and more of their workloads to the Cloud. They're extending backup targets to the Cloud as well. AWS offers the most storage services, data transfer methods and networking options with unmatched durability, security and affordability. And customers who are moving their Veeam Backups to AWS, they get all those benefits with a cost-effective offsite storage platform. Providing physical separation from on-premises primary data with pay-as-you-go economics, no upfront fees or capital investments, and near zero overhead to manage. AWS and APN partners like Veeam are helping to build secure, efficient, cost-effective backup and restore solutions using the products you know and trust with the scale and reliability of the AWS Cloud.
And we started to sort of chit-chat about how that's going to change and what their vision was. Well, back in 2020, you purchased Kasten, you formed the Veeam KBU, the Kubernetes Business Unit. What was the rationale behind that acquisition? And then James, I'm going to get you to talk a little bit about modern apps. But Danny, start with the rationale behind the Kasten acquisition. >> Well, one of the things that we certainly believe is that the next generation of infrastructure is going to be based on containers, and there's a whole number of reasons for that. Things like scalability and portability. And there's a number of significant value-adds. So back in October of last year in 2020, as you mentioned, we acquired Kasten. And since that time we've been working through Kasten and from Veeam to add more capabilities and services around AWS. For example, we supported the Bottlerocket launch they just did, and actually EKS Anywhere. And so we're very focused on making sure that our customers can protect their data no matter whether it's a Kubernetes cluster, or whether it's on-premises in a data center, or if it's running up in the Cloud in EC2. We give this consistent data management experience, including, of course, the next generation of infrastructure that we believe will be based on containers. >> Yeah. You know, James, I've always noted to our audience that, "Hey AWS, they provide a rich set of primitives and APIs that ISVs like Veeam can take advantage of." But I wonder if you could talk about your perspective, maybe what you're seeing in the ecosystem, maybe comment on what Veeam's doing. Specifically containers, app modernization in the Cloud, the evolution of S3 to support all these trends. >> Yeah. Well, it's been great to see Veeam expand for more and more AWS services to help joint customers protect their data. Especially since Veeam stores their data in Amazon S3 storage classes.
And over the last 15 years, S3 has helped companies around the world optimize their work, so I'd be happy to share some insights into that with you today. When you think about S3, well, you can find virtually every use case across all industries running on S3. That ranges from backup, to (indistinct) data, to machine learning models, the list goes on and on. And one of the reasons is because S3 provides industry-leading scalability, availability, durability, security, and performance. Those are characteristics customers want. To give you some examples, S3 stores exabytes of data across millions of hard drives, trillions of objects around the world, and regularly peaks at millions of requests per second. S3 can process in a single region over 60 terabytes a second. So in summary, it's a very powerful storage offering. >> Yeah, indeed. So you guys are always talking about, you know, working backwards, the customer centricity. I think frankly that AWS sort of changed the culture of the entire industry. So, let's talk about customers. Danny, do you have an example of a joint customer? Maybe how you're partnering with AWS to try to address some of the challenges in data protection. What are customers seeing today? >> Well, we're certainly seeing that migration towards the Cloud as James alluded to today. And actually, if we're talking about Kubernetes, actually there's a customer that I know of right now, Leidos. They're a Fortune 500 information technology company. They deal in the engineering and technology services space, and focus on highly regulated industries: things like defense and intelligence, the civil space, and healthcare. Anyway, they decided to make a big investment in continuous integration, continuous development.
There's a segment of the industry called portable DevSecOps, and they wanted to build infrastructure as code so that they could deploy services, not in days or weeks or months, but they literally wanted to deploy their services in hours. And so they came to us, and with Kasten K10 actually around Kubernetes, they created a service that could enable them to do that. So they could be fully compliant, and they could deliver the services in, like I say, hours, not days or months. And they did that all while delivering the same security that they need in a cost-effective way. So it's been a great partnership, and that's just one example. We see these all the time, customers who want to combine the power of Kubernetes with the scale of the Cloud from AWS, with the data protection that comes from Veeam. >> Yes, so James, you know at AWS you don't get dinner if you don't have a customer example. So maybe you could share one with us. >> Yeah. We do love working backwards from customers, and Danny, I loved hearing that story. One customer leveraging Veeam and AWS is Maritz. Maritz provides business performance solutions that connect people to results, ensuring brands deliver on their customer promises and drive growth. Recently Maritz moved over a thousand VMs and petabytes of data into AWS, using Veeam. Veeam Backup for AWS enables Maritz to protect their Amazon EC2 instances, with backup of the data in Amazon S3 for highly available, cost-effective, long-term storage. >> You know, one of the hallmarks of Cloud is a strong ecosystem. I see a lot of companies doing sort of their own version of Cloud. I always ask "What's the partner ecosystem look like?" Because that is a fundamental requirement, in my view anyway, and attribute. And so, a big part of that, Danny, is channel partners. And you have a 100 percent channel model. And I wonder if we could talk about your strategy in that regard. Why is it important to be all channel?
How do consulting partners fit into the strategy? And then James, I'm going to ask you what's the fit with the AWS ecosystem. But Danny, let's start with you. >> Sure, so one of the things that we've learned, we're 15 years old as well, actually. I think we're about two months older, or younger I should say, than AWS. I think their birthday was in August, ours was in October. But over that 15 years, we've learned that our customers enjoy the services, and support, and expertise that comes from the channel. And so we've always been a 100 percent channel company. And so one of the things that we've done with AWS is to make sure that our customers can purchase both how and when they want through the AWS Marketplace. They have a program called Consulting Partner Private Offers, or CPPO, I think is what it's known as. And that allows our customers to consume through the channel, but with the terms and bill that they associate with AWS. And so it's a new route-to-market for us, but we continue to partner with AWS in the channel programs as well. >> Yeah. The marketplace is really impressive. James, I wonder if you could maybe add in a little bit. >> Yeah. I think Danny said it well, AWS Marketplace is a sales channel for ISVs and consulting partners. It lets them sell their solutions to AWS customers. And we focus on making it really easy for customers to find, buy, deploy, and manage software solutions, including software as a service, in just a matter of minutes. >> Danny, you mentioned you're 15 years old. The first time I heard the name Veeam, the brilliance of tying it to virtualization and VMware. I was at a VMUG when I first met you guys and saw your ascendancy tied to virtualization. And now you're obviously leaning heavily into the Cloud. You and I have talked a lot about the difference between just wrapping your stack in a container and hosting it in the Cloud versus actually taking advantage of Cloud-Native Services to drive further innovation.
So my question to you is, where does Veeam fit on that spectrum, and specifically what Cloud-Native Services are you leveraging on AWS? And maybe what have been some outcomes of those efforts, if in fact that's what you're doing? And then James, I have a follow-up for you. >> Sure. So the outcomes clearly are just more success, more scale, more security. All the things that James is alluding to, that's true for Veeam, and it's true for our customers. And so if you look at the Cloud-Native capabilities that we protect today, certainly it began with EC2. So we run things in the Cloud in EC2, and we wanted to protect that. But we've gone well beyond that today, we protect RDS, we protect EFS, Elastic File System. We talked about EKS, Elastic Kubernetes Service, and ECS. So there's a number of these different services that we protect, and we're going to continue to expand on that. But the interesting thing is in all of these, Dave, when we do data protection, we're sending it to S3, and we're doing all of that management, and tiering, and security that our customers know and love and expect from Veeam. And so you'll continue to see these types of capabilities coming from Veeam as we go forward. >> Thank you for that. So James, as we know, S3 was the very first service offered in 2006 on the AWS Cloud. As I said, theCUBE was out in Seattle, September. It was a great, you know, a little semi-hybrid event. But so over the decade and a half, you really expanded the offerings quite dramatically. Including a number of on-premises services, things like Outposts. You got other services with "Wintery" names. How have you seen partners take advantage of those services? Is there anything you can highlight maybe that Veeam is doing that's notable? What can you share? >> Yeah, I think you're right to call out that growth. We have a very broad and rich set of features and services, and we keep growing that.
Almost every day there's a new release coming out, so it can be hard to keep up with. And Veeam has really been listening and innovating to support our joint customers. Danny called out a number of the ways in which they've expanded their support. Within Amazon S3, I want to call out their support for our Infrequent Access, One Zone-Infrequent Access, Glacier, and Glacier Deep Archive storage classes. And they also support other AWS storage services like AWS Outposts, AWS Storage Gateway, AWS Snowball Edge, and the cold-themed storage offerings. So absolutely a broad set of support there. >> Yeah. There's those, winter is coming. Okay, great guys, we're going to leave it there. Danny, James, thanks so much for coming to theCUBE. Really good to see you guys.

Published Date : Nov 30 2021



Ashish Palekar & Cami Tavares | AWS Storage Day 2021


 

(upbeat music) >> Welcome back to theCUBE's continuous coverage of AWS Storage Day. My name is Dave Vellante and we're here from Seattle. And we're going to look at the really hard workloads, those business and mission critical workloads, the most sensitive data. They're harder to move to the cloud. They're hardened. They have a lot of technical debt. And the blocker in some cases has been storage. Ashish Palekar is here. He's the general manager of EBS snapshots, and he's joined by Cami Tavares, who's a senior manager of product management for Amazon EBS. Folks, good to see you. >> Ashish: Good to see you again Dave. >> Dave: Okay, nice to see you again, Ashish. So first of all, let's start with EBS. People might not be familiar. Everybody knows S3 is famous, but how are customers using EBS? What do we need to know? >> Yeah, it's super important to get the basics, right? Right, yeah. We have a pretty broad storage portfolio. You talked about S3 and S3 Glacier, which are object and archival storage. We have EFS and FSx that cover the file side, and then you have a whole host of data transfer services. Now, when we think about block, we think of really four things. We think about EBS, which is the system storage for EC2 instances. We think about snapshots, which are backups for EBS volumes. Then we think about instance storage, which is really storage that's directly attached to an instance, and its life cycle is similar to that of an instance. Last but not least, data services. So things like our elastic volumes capability or fast snapshot restore. So the answer to your question really is EBS is persistent storage for EC2 instances. So if you've used EC2 instances, you'll likely use EBS volumes. They service boot volumes and they service data volumes, and really cover a wide gamut of workloads from relational databases, NoSQL databases, file streaming, media encoding. It really covers the gamut of workloads.
>> Dave: So when I heard SAN in the cloud, I laughed out loud. I said, oh, because I could think about a box, a bunch of switches and this complicated network, and then you're turning it into an API. I was like, okay. So you've made some announcements that support SAN in the cloud. What can you tell us about? >> Ashish: Yeah. So SANs, for customers in storage, those are storage area networks, really external arrays that customers buy and connect to their performance-critical and mission-critical workloads. With block storage and with EBS, we got a bunch of customers that came to us and said, I'm thinking about moving those kinds of workloads to the cloud. What do you have? And really what they were looking for is performance, availability, and durability characteristics that they would get from their traditional SANs on premises. And so that's what the team embarked on, and what we launched at re:Invent and then at GA in July is io2 Block Express. And what io2 Block Express does is it's a complete ground-up reinvention of our storage product offering, and it gives customers the same availability, durability, and performance characteristics that, and we'll go into it a little later, they're used to on premises. The other thing that we realized is that it's not just enough to have a volume. You need an instance that can drive that kind of throughput and IOPS. And so coupled with our friends in EC2, we launched our R5b that now triples the amount of IOPS and throughput that you can get from a single instance to EBS storage. So when you couple the sub-millisecond latency, the capacity and the performance that you get from io2 Block Express with R5b, what we hear from customers is that gives them enough of the performance, availability, and durability characteristics to move their workloads from on premises into the cloud, for their mission critical and business critical apps.
>> Dave: Thank you for that. So Cami, if I think about the prevailing way in which storage works, I drop off a box at the loading dock and then I really don't know what happens. There may be a service organization that's maybe more intimate with the customer, but I don't really see the innovations and the use cases that are applied. Cloud's different. You know, you live it every day. So you guys always talk about customer-inspired innovation. So what are you seeing in terms of how people are using this capability and what innovations they're driving? >> Cami: Yeah, so I think when we look at the EBS portfolio and its evolution over the years, you can really see that it was driven by customer need, and we have different volume types, and they have very specific performance characteristics, and they're built to meet these unique needs of customer workloads. So I'll tell you a little bit about some of our specific volume types to kind of illustrate this evolution over the years. So starting with our general purpose volumes, we have many customers that are using these volumes today. They really are looking for high performance at a low cost, and you have all kinds of transactional workloads and low-latency interactive applications and boot volumes, as Ashish mentioned. And the customers using these general purpose volumes tell us that they really like this balanced cost and performance. And customers also told us, listen, I have these more demanding applications that need higher performance. I need more IOPS, more throughput. And so looking at that customer need, we were really talking about these I/O-intensive applications like SAP HANA and Oracle and databases that require just higher durability. And so we looked at that customer feedback and we launched our Provisioned IOPS io2 volume. And with that volume, you get five nines of durability and four times the IOPS that you would get with general purpose volumes.
So it's a really compelling offering. Again, customers came to us and said, this is great. I need more performance, I need more IOPS, more throughput, more storage than I can get with a single io2 volume. And so here we're talking about, you mentioned, mission critical applications, SAP HANA, Oracle, and what we saw customers doing often is they were striping together multiple io2 volumes to get the maximum performance, but very quickly with the most demanding applications, it got to a point where you have more io2 volumes than you want to manage. And so we took that feedback to heart and we completely reinvented the underlying EBS hardware and the software and networking stacks. And we launched Block Express. With Block Express, you can get four times the IOPS, throughput, and storage that you would get with a single io2 volume. So it's a really compelling offering for customers. >> Dave: If I had to go back and ask you, what was the catalyst, what was the sort of business climate that really drove the decision here? Was it that people were just sort of fed up with, you know, I'll use the phrase, the undifferentiated heavy lifting around SAN? What was it, was it COVID driven? What was the climate? >> You know, it's important to recognize, when we are talking about business climate today, every business is a data business, and block storage is really a foundational part of that. And so with SAN in the cloud specifically, we have seen enterprises for several years buying these traditional hardware arrays for on premises SANs. And it's a very expensive investment. Just this year alone, they're spending over $22 billion on SANs. And with this old model of on premises SANs, you would probably spend a lot of time doing this upfront capacity planning, trying to figure out how much storage you might need. And in the end, you'd probably end up overbuying for peak demand, because you really don't want to get stuck not having what you need to scale your business.
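The striping arithmetic behind this is simple capacity planning. A small sketch, taking the published per-volume ceilings at the time (64,000 IOPS and 16 TiB for io2, four times that for io2 Block Express) as assumptions:

```python
import math

# Per-volume ceilings, assumed here for illustration.
IO2_MAX_IOPS, IO2_MAX_TIB = 64_000, 16
BLOCK_EXPRESS_MAX_IOPS, BLOCK_EXPRESS_MAX_TIB = 256_000, 64  # 4x io2

def volumes_needed(required_iops, required_tib, max_iops, max_tib):
    """How many volumes must be striped together to meet a workload's needs:
    the binding constraint is whichever of IOPS or capacity runs out first."""
    return max(math.ceil(required_iops / max_iops),
               math.ceil(required_tib / max_tib),
               1)
```

For example, a 200,000 IOPS, 40 TiB database would need four striped io2 volumes, but fits in a single io2 Block Express volume, which is exactly the management burden the reinvention removed.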
And so now with Block Express, you don't have to do that anymore. You pay for what you need today, and then you can increase your storage as your business needs change. So that's cost, and cost is a very important factor. But really when we're talking to customers and enterprises that are looking for SAN in the cloud, the number one reason that they want to move to the cloud with their SANs and these mission-critical workloads is agility and speed. And it's really transformational for businesses to be able to change the customer experience for their customers and innovate at a much faster pace. And so with the Block Express product, you get to do that much faster. You can go from an idea to an implementation orders of magnitude faster. Whereas before, if you had these workloads on premises, it would take you several weeks just to get the hardware. And then you have to build all this surrounding infrastructure to get it up and running. Now, you don't have to do that anymore. You get your storage in minutes, and if you change your mind, if your business needs change, if your workloads change, you can modify your EBS volume types without interrupting your workload. >> Dave: Thank you for that. So Cami kind of addressed some of this, but I know storage admins say, don't touch my SAN, I'm not moving it. This is a big decision for a lot of people. So kind of a two-part question, you know, why now, what do people need to know? And give us the north star, close it out with where you see the future. >> Ashish: Yeah, so let's, I'll kick things off, and then Cami, do jump in. So first off, the volume is one part of the story, right? And with io2 Block Express, I think we've given customers an extremely compelling offering to go build their mission critical and business critical applications on. We talked about the instance type R5b in terms of giving that instance-level performance, but all this is on the foundation of AWS in terms of availability zones and regions.
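Changing a volume's type in place, as described, is a single EC2 ModifyVolume call that elastic volumes applies online. A hedged sketch of assembling that request; the volume ID is made up, and a real call would hand this dict to boto3's EC2 client:

```python
VALID_TYPES = {"gp2", "gp3", "io1", "io2", "st1", "sc1", "standard"}

def modify_volume_request(volume_id, volume_type=None, size_gib=None, iops=None):
    """Build an EC2 ModifyVolume request; elastic volumes applies the change
    online, without detaching the volume or stopping the workload."""
    if volume_type is not None and volume_type not in VALID_TYPES:
        raise ValueError(f"unknown volume type: {volume_type}")
    req = {"VolumeId": volume_id}
    if volume_type is not None:
        req["VolumeType"] = volume_type
    if size_gib is not None:
        req["Size"] = size_gib  # volumes can grow in place, not shrink
    if iops is not None:
        req["Iops"] = iops
    return req
```

This is the "change your mind later" property: the upfront capacity-planning guesswork becomes a reversible, online configuration change.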
So you think about the constructs, and we talk about them in terms of building blocks, but our building blocks are really availability zones and regions. And that gives you that core availability infrastructure that you need to build your mission-critical and business-critical applications. You then layer on top of that our regional footprint, right, and now you can spin up those workloads globally if you need to. And then last but not least, once you're in AWS, you have access to other services, be it AI, be it ML, be it our relational database services, so you can start to offload that undifferentiated heavy lifting. So you really get the smorgasbord, from the availability footprint to the global footprint, all the way up to the service stack that you get access to. >> Dave: So that's really thinking out of the box. We're out of time. Cami, we'll give you the last word. >> Cami: I just want to say, if you want to learn more about EBS, there's a deep dive session with our principal engineer, Marc Olson, later today. So definitely join that. >> Dave: Folks, thanks so much for coming to theCUBE. >> (in chorus) Thank you. >> Thank you for watching. Keep it right there for more great content from AWS Storage Day from Seattle.
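To make the "four times" comparison above concrete, here is a small sketch contrasting striping several io2 volumes against a single io2 Block Express volume. The baseline per-volume figures are illustrative assumptions for this sketch, not official quotas; only the 4x multiplier comes from the conversation.

```python
# Toy comparison of per-volume limits. The 4x multiplier is from the talk;
# the io2 baseline figures below are assumptions for illustration only.
IO2_BASELINE = {"iops": 64_000, "throughput_mb_s": 1_000, "size_gib": 16_384}

def block_express_limits(baseline: dict, multiplier: int = 4) -> dict:
    """Scale each per-volume limit by the stated multiplier."""
    return {metric: value * multiplier for metric, value in baseline.items()}

def volumes_needed(target_iops: int, per_volume_iops: int) -> int:
    """How many striped volumes a RAID-0 style layout needs to hit target_iops."""
    return -(-target_iops // per_volume_iops)  # ceiling division

limits = block_express_limits(IO2_BASELINE)
print(limits)
# Striping plain io2 volumes: it takes 4 of them to match one Block Express volume.
print(volumes_needed(256_000, 64_000))  # 4
```

The point of the sketch is the management burden Ashish describes: to reach the same IOPS with plain io2 you must stripe (and then monitor, snapshot, and grow) several volumes, whereas Block Express moves the whole target into a single volume.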

Published Date : Sep 2 2021


Duncan Lennox | AWS Storage Day 2021


 

>>Welcome back to theCUBE's continuous coverage of AWS Storage Day. We're in beautiful downtown Seattle in the great Northwest. My name is Dave Vellante, and we're going to talk about file systems. File systems are really tricky, and making those file systems elastic is even harder. They've got a long history of serving a variety of use cases. With me is Duncan Lennox, who's the general manager of Amazon Elastic File System. Duncan, good to see you again. >>Good to see you, Dave. >>So tell me more around, specifically, Amazon's Elastic File System, EFS. You know, there's a broad file portfolio, but let's narrow in on that. What do we need to know? >>Yeah, well, Amazon Elastic File System, or EFS as we call it, is our simple, serverless, set-and-forget, elastic file system service. So what we mean by that is we deliver something that's extremely simple for customers to use. There are not a lot of knobs and levers they need to turn or pull to make it work or manage it on an ongoing basis. The serverless part of it is there's absolutely no infrastructure for customers to manage; we handle that entirely for them. The elastic part, then, is that the file system automatically grows and shrinks as they add and delete data. So they never have to provision storage or risk running out of storage, and they pay only for the storage they're actually using. >>What are the sort of use cases and workloads that you see EFS supporting? >>Yeah, it has to support a broad set of customer workloads. So it's everything from, you know, serial, highly latency-sensitive applications that customers might be running on-prem today and want to move to the AWS cloud, up to massively parallel scale-out workloads that they have as well. >>Okay. Are there any industry patterns that you see around that? Are there industries that lean in more, or is it more across the board?
>>We see it across the board, although I'd have to say that we see a lot of adoption within compliance and regulated industries. And a lot of that is because of not only our simplicity, but the high levels of availability and durability that we bring to the file system as well. The data is designed for 11 nines of durability, so essentially you don't need to be worrying about anything happening to your data. And it's a regional service, meaning that your file system is available from all availability zones in a particular region, for high availability. >>So as part of Storage Day, we saw some new tiering announcements. What can you tell us about those? >>Super excited to be announcing EFS Intelligent-Tiering. This is a capability that we're bringing to EFS that allows customers to automatically get the best of both worlds and get cost optimization for their workloads. How it works is the customer can select, using our lifecycle management capability, a policy for how long they want their data to remain active in one of our active storage classes, seven days, for example, or 30 days. And what we do is we automatically monitor every access to every file they have. If we see no access to a file for their policy period, like seven days or 30 days, we automatically and transparently move that file to one of our cost-optimized storage classes, so they can save up to 92% on their storage costs. One of the really cool things about intelligent tiering, then, is if that data ever becomes active again, and their workload or their application or their users need to access it, it's automatically moved back to a performance-optimized storage class, and this is all completely transparent to their applications and users. >>So how does that work? Are you using some kind of machine intelligence to sort of monitor things and just learn over time? And what if my policy, what if I don't get it quite right?
Or maybe I have some quarter-end, or maybe twice a year, you know, I need access to that. Can the system help me figure that out? >>Yeah, the beauty of it is you don't need to know how your application or workload is accessing the file system, or worry about those access patterns changing. We'll take care of monitoring every access to every file, and move the file either to the cost-optimized storage class or back to the performance-optimized class, as needed by your application. >>And the optimized storage class is, again, selected by the system? I don't have to? >>That's right, it's completely transparent, so we will take care of that for you. You'll set the policy by which you want active data to be moved to the infrequent-access, cost-optimized storage class, like 30 or seven days. And then you can set a policy that says, if that data is ever touched again, to move it back to the performance-optimized storage class. So that's then all handled automatically by the service on our side. You don't need to do anything. >>It's serverless, which means what, I don't have to provision any compute infrastructure? >>That's right. What you get is an endpoint, the ability to mount your file system using NFS. You can also manage your file system from any of our compute services in AWS, so not only directly on an instance, but also from our serverless compute models like AWS Lambda and Fargate, and from our container services like ECS and EKS. And all of the infrastructure is completely managed by us. You don't see it, you don't need to worry about it, we scale it automatically for you. >>What was the catalyst for all this? I mean, you know, you've got to tell me it's customers, but maybe you could give me some insight and add some color. Like, what did you decode from what the customers were saying? Did you get inputs from a lot of different places that you had to put together and shape?
Take us inside how you came to where you are today. >>Well, you know, I guess at the end of the day, when you think about storage, and particularly file system storage, customers always want more performance and they want lower costs. So we're constantly optimizing on both of those dimensions: how can we find a way to deliver more value and lower cost to customers, but also meet the performance needs that their workloads have? And what we found in talking to customers, particularly the customers that EFS targets, is that they are application administrators, DevOps practitioners, data scientists. They have a job they want to do; they're not typically storage specialists. They don't want to have to know or learn a lot about the bowels of storage architecture and how to optimize for what their applications need. They want to focus on solving the business problems they're focused on, whatever those are. >>Meaning, for instance? So tiering is obvious, you're tiering to lower-cost storage. Serverless, I'm not provisioning servers myself, I'm just paying for what I use. The elasticity is a factor, so I'm not having to overprovision. And I think I'm hearing, I don't have to spend my time turning knobs. You've talked about that before, because I don't know how much time is spent, you know, tuning systems, but it's got to be at least 15 to 20% of a storage admin's time. You're eliminating that as well. Is that what you mean by sort of cost optimization? >>Absolutely. We're providing the scale, capacity, and performance that customer applications need, as they need it, without the customer needing to know exactly how to configure the service to get what they need. We're dealing with changing workloads and changing access patterns, and we're optimizing their storage costs,
all at the same time. >>When you guys step back, you get the whiteboard out and say, okay, what's the north star that we're working toward? Because, you know, you set the north star, you don't want to keep revisiting it, right? We're moving in this direction; how we get there might change. So what's your north star? Where do you see the future? >>Yeah, it's really all about delivering simple file system storage that just works. And that sounds really easy, but there's a lot of nuance and complexity behind it. Customers don't want to have to worry about how it works; they just need it to work. Our goal is to deliver that for a super broad cross-section of applications, so that customers don't need to worry about how they performance-tune or how they cost-optimize. We deliver that value for them. >>Yeah. So I'm going to actually follow up on that, because I feel like, you know, when you listen to Werner Vogels talk, he takes you inside the plumbing sometimes. So what is that? Because you're right that it sounds simple, but it's not. And as I said up front, with file systems, getting that right is really, really challenging. So technically, what are the challenges? Is it doing this at scale, and having a consistent experience for customers? >>There's always a challenge to doing what we do at scale. I mean, the elasticity is something that we provide to our customers, but ultimately we have to take their data as bits and put them into atoms at some point. So we're managing infrastructure on the backend to support that, and we also have to do that in a way that delivers something that's cost-effective for customers. So there's a balance and a natural tension there between things like elasticity and simplicity, performance, cost, availability, and durability, and getting that balance right, and being able to cover the maximum cross-section of all those things.
For the widest set of workloads, we see that as our job, and that's how we're delivering value for our customers. >>And of course, that's a big part of it. When we talk about taking away the need for tuning, you've got to get it right. I mean, you can't optimize for every single use case, but you can give enough granularity to allow those use cases to be supported. And that seems to be sort of the balancing act that you guys play. >>Well, absolutely. It's focused on being a general-purpose file system that's going to work for a broad cross-section of applications and workloads. >>Right. And that's what customers want. You know, generally speaking, you go after that broad middle. Duncan, I'll give you the last word. >>I just encourage people to come and try out EFS. It's as simple as a single click in our console to create a file system and get started. So come give it a try. >>Thanks so much for coming back to theCUBE, Duncan. It's great to see you again. >>Thanks, Dave. >>All right, and keep it right there for more great content from AWS Storage Day from Seattle.
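The lifecycle behavior Duncan describes, a file idle past the policy window moves to the cost-optimized class, and any later access transparently moves it back, can be sketched as a toy simulation. The class names and the single-policy model here are simplifications for illustration, not the EFS API.

```python
from dataclasses import dataclass

# Toy model of EFS intelligent tiering as described in the interview:
# every access is monitored; a file idle longer than the policy window moves
# to the infrequent-access (IA) class, and any access moves it back.

@dataclass
class File:
    last_access_day: int
    storage_class: str = "standard"

class TieringSimulator:
    def __init__(self, idle_days_before_ia: int = 30):
        self.policy = idle_days_before_ia
        self.files: dict[str, File] = {}

    def write(self, name: str, day: int) -> None:
        self.files[name] = File(last_access_day=day)

    def read(self, name: str, day: int) -> None:
        f = self.files[name]
        f.last_access_day = day
        f.storage_class = "standard"   # access transparently moves it back

    def run_monitor(self, day: int) -> None:
        """The service-side sweep: demote files idle past the policy window."""
        for f in self.files.values():
            if f.storage_class == "standard" and day - f.last_access_day >= self.policy:
                f.storage_class = "infrequent_access"

sim = TieringSimulator(idle_days_before_ia=30)
sim.write("report.csv", day=0)
sim.run_monitor(day=31)
print(sim.files["report.csv"].storage_class)  # infrequent_access
sim.read("report.csv", day=32)
print(sim.files["report.csv"].storage_class)  # standard
```

The design point the interview stresses is that both transitions happen on the service side; the application only ever reads and writes files, which is why the simulator's `read` never consults the policy.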

Published Date : Sep 2 2021


Updatable Encryption


 

>>Hi, everyone. My name is Dan Boneh, and I want to thank the organizers for inviting me to speak. Since I only have 15 minutes, I decided to talk about something relatively simple that will hopefully be useful. This is joint work with my students Saba Eskandarian and Sam Kim, and with Maurice Shih. This work will appear at the upcoming Asiacrypt and is available on ePrint if anyone wants to learn more about what I'm going to talk about. So I want to tell you the story of storing encrypted data in the cloud. All of us have lots of data, and typically we'd rather not store the data on our local machines. Rather, we'd like to move the data to the cloud, so that the cloud can handle backup, and the cloud can handle access control on this data and allow us to share it with others. However, for some types of data, we'd rather not have the data available in the cloud in the clear. And so what we do is we encrypt the data before we send it to the cloud, and the customer is the one that's holding the key. So the cloud has ciphertext, and the customer is the only one that has the key that could decrypt that data. Now, whenever dealing with encrypted data, there is a very common requirement called key rotation. Key rotation refers to the act of taking a ciphertext and basically re-encrypting it under a different key without changing the underlying data. And the reason we do that is so that an old key basically stops working, right? We re-encrypt the data under a new key, and as a result, the old key can no longer decrypt the data. So it's a way for us to expire keys, so that only the new key can decrypt the current data stored in the cloud. Of course, when we do this, we have to assume that the cloud actually doesn't store the old ciphertext.
So we're just going to assume that the cloud deletes the old ciphertext, and the only thing the cloud has is the latest version of the ciphertext, which can only be decrypted using the latest version of the key. So why do we do key rotation? Well, it turns out it's actually quite a good idea, for one reason: like we said, it limits the lifetime of a key. If I give you a key today, you can decrypt the data today, but after I do key rotation on my data, the key that I gave you no longer works. So it's a way to limit the lifetime of a key. And it's a good idea, for example, in an organization that might have temporary employees. Basically, you might give those temporary employees a key, but once they leave, the keys will effectively stop working after the key rotation has been done. Not only is it a good idea, it's actually a requirement in many standards. So, for example, standards require key rotation, and the payment industry requires periodic key rotation. So it's a fairly common requirement out there. The problem is, how do we do key rotation when the data is stored in the cloud? So there are two options that immediately come to mind, but both are problematic. The first option is we can download the entire data set onto our client machines. This could be terabytes or petabytes of data, so it's a huge amount of data that we might need to download onto the client machine, decrypt it under the old key, re-encrypt it under the new key, and then upload all that data back to the cloud. So that works, and it's fine. The only problem is it's very expensive: you have to move the data back and forth, in and out of the cloud. The other option, of course, is to send the actual old key and the new key to the cloud, and then have the cloud decrypt using the old key and re-encrypt using the new key. And of course, that also works, but it's insecure, because now the cloud will get to see your data in the clear. So
So >>the question is what to do. And it turns out there is a better option, which is called up datable encryption, so obtainable encryption works as follows. What we do is we take our old key and our new key, and we combine them together using some sort of ah kee Reekie generation algorithm. What this algorithm will do is it will generate a short key. That's a combination of the old and new key. We can then send the re encryption key over to the cloud. The cloud can then use this key to encrypt re encrypt the entire data in the cloud. So in doing so, basically, the cloud is able to do the rotation for us. But the hope is that the cloud learns >>nothing about the data in doing that. Okay, so the re encryption key that we send to the cloud should reveal nothing to the cloud about the actual data that's being held in the cloud. So obtainable encryption is relatively old concept. I guess it was first studied in one of our papers back from 2013. There were stronger definitions given in the work of Everest power it all in 2017. And there's been a number of papers studying this this concept since. So >>before we talk about the constructions for available encryption, let me just quickly make >>sure the syntax is clear. Just so we see how this works. So basically there's a key generation algorithm that generates a key from a security parameter. Then, when we encrypt a message using a particular key, we're gonna break the cipher text into a short header and the actual cipher text the hitter and the cipher text gets into the >>cloud. And like I said, this header is going to be short and independent of the message length. Then when we want to do rotation, what we'll do is basically will use the old key in the new key along with the cipher text header to produce what we call >>a re encryption key will denote that by Delta. 
Okay, so the way this works is we will download the short header from the cloud, compute the re-encryption key, and send the re-encryption key to the cloud. The cloud will then run the re-encrypt algorithm, which uses the re-encryption key and the old ciphertext to produce the new ciphertext, and this new ciphertext will be stored in the cloud. And again, I repeat, the assumption is that the cloud is going to erase the old ciphertext, and it is going to erase the re-encryption key that we send to it. And finally, at the end of the day, when we want to decrypt the actual ciphertext in the cloud, we download the ciphertext from the cloud, decrypt it using the key K, and recover the actual message m. Okay. So in this new work with my students, we set out to look at more efficient constructions for updatable encryption. The first thing we did is, we realized there were some issues with the current security definitions, and so we strengthened the security definitions. In particular, we strengthened them in a couple of ways, but notably, we'd like to make sure that the actual ciphertext stored in the cloud doesn't reveal the number of key rotations. So a rotated ciphertext should look indistinguishable from a fresh ciphertext. But not only that, the scheme should also guarantee that the number of key rotations is not leaked just by looking at the ciphertext. So generally, we'd like to hide the number of key rotations, so that it doesn't reveal private information about what's encrypted inside the ciphertext. But our main goal was to look at more efficient constructions. So we looked at two constructions, one based on a lattice-based key-homomorphic PRF. Actually, the main point of this work was to study the performance of a lattice-based key-homomorphic PRF relative to existing updatable encryption systems. And then the other construction we give is what's called a nested
construction, which just uses plain old symmetric encryption. And interestingly, what we show is that, in fact, the nested construction is actually the best construction we have, as long as the number of key rotations is not too high. So if we do under 50 re-encryptions, just go ahead and use the nested construction, basically built from symmetric encryption. However, if we do more than 50 key rotations, all of a sudden the lattice-based construction becomes the best one that we have. I want to emphasize here that our goal for using lattices was not to get quantum resistance. We wanted to use lattices just because lattices are fast. And so we wanted to gain from the performance of lattices, not from the security that they provide. So I guess before I talk about the constructions, I have to quickly remind you of what the security model is, what it is we're trying to achieve. And I have to say, the security model for updatable encryption is not that easy to explain here: the adversary gets to see lots of keys, lots of re-encryption keys, and lots of ciphertexts. So instead of giving you the full definition, I'm just going to give you the intuition for what this definition is trying to achieve, and I'm going to point you to the paper for the details. So really, what the definition is trying to say is the following. Imagine we have a ciphertext that's encrypted under a certain key K. At some point later on in the future, the ciphertext gets re-encrypted using a re-encryption key Delta, so now the new ciphertext is encrypted under the key K prime. And what we're basically trying to achieve in the definition is to say that, well, if the adversary gets to see the old ciphertext, the new ciphertext, and the re-encryption key, then they learn nothing about the message and they can't harm the integrity of the ciphertext. Similarly, if they just see the old key and the new
>>Similarly, if they just see the old key and the new >>cipher text. They learn nothing about the message, and they can't harm the integrity of the cipher text. And similarly, if you see an old cipher text in a new key, same thing. Yeah, this is again overly simplified because in reality, the adversary gets to see lots of cipher, text and lots of keys and lots of encryption keys. And there are all these correctness conditions for when he's supposed Thio learn something and whatnot. And so I'm going to defer this to the paper. But this gives you at least the intuition for what the definition is trying to >>achieve. So now let's turn to constructions, so the first construction we'll look >>at it is kind of the classic way to construct available encryption using what's called the key home or fake. Prof. Sochi Home or for Pierre Efs were used by the or Pincus and Rain go back in 99 there were defined in our paper. BLM are back in 2013 the point of the BLM. Our paper was mainly to construct key home or fake pl refs without random oracles. So first, let me explain what Akiyama Murphy pf >>is. So it's basically a Pierre F where we have home amorphous, um, >>relative to the key. So you can see here if I give you the prof under two different keys at the point X, I can add those values and get the PF under the some of the keys at the same point x. Okay, so that's what the key home or fake property lets >>us dio. And so keyhole Norfolk PRS were used to construct a datable encryption schemes. The first thing we show is that, in fact, using keyhole graphic PRS, weaken build an update Abel encryption scheme that satisfies are stronger security definitions. So again, I'm not going to go through this construction. But just to give you intuition for why key Horrific Pff's are useful for update Abel encryption. Let me just say that the re encryption key is gonna be the some of the old key and the new key. And to see why that's useful. 
let's imagine we're encrypting a message using counter mode. So you can see here, a message is being encrypted using a PRF applied to a counter i. Well, if I give the cloud k1 plus k2, the cloud can evaluate F(k1 + k2, i), and if we subtract that from the ciphertext, then by the key-homomorphic property you'll see that F(k1, i) cancels out, and basically we're left with an encryption of m under the key minus k2. So we were able to transform the ciphertext from an encryption under k1 to an encryption under minus k2. And that's kind of the reason why key-homomorphic PRFs are useful. But of course, in reality, the construction has many, many more bells and whistles to it, to satisfy the security definition. Okay, so what do we know about key-homomorphic PRFs? Well, the first key-homomorphic PRF is based on the DDH assumption, and that's just the standard PRF from DDH; it's not difficult to see that this construction actually is key-homomorphic. In this work, we're particularly interested in the key-homomorphic PRF that comes from lattices. So our question was, can we optimize this key-homomorphic PRF to get a very fast updatable encryption scheme? And the answer is yes, we can. To do that, we use the Ring Learning With Errors problem. So our goal was really to evaluate updatable encryption as it applies to lattices. So that's the first construction. The second construction, like I said, is purely based on symmetric encryption, and it's kind of an enhancement of what we call the trivial updatable encryption scheme. So what's the trivial updatable encryption scheme? Well, basically, we would look at a standard encryption where we encrypt the message using some message key, and then we encrypt the message key using the actual client key. These are all symmetric encryptions. The client key would be
Now, when we want to rotate the keys, all we will do is basically we would generate a new message. >>Encryption key will call a K body prime. We'll send that over to the cloud that the >>cloud will encrypt the entire old cipher text under the new key and then encrypt a new key along with the old key under a new clients key, which we call Cape Prime. So what gets sent to the cloud is this K body prime and header prime and the cloud is able to do its operation and re encrypt the old cipher text. The new client key becomes K prime. And of course, we can continue this over and over in kind of an onion like encryption where we keep encrypting the old cipher text under a new message. He The benefit of the scheme, of course, is that it only uses >>symmetric encryption, so it's actually quite fast, so that's pretty good. >>Unfortunately, this is not quite secure. And the reason this is not secure is because the cipher >>text effectively grows with a number of key rotations. So the cipher text actually leaks the number of key rotations, and so it doesn't actually satisfy our definitions. Nevertheless, we're able to give a nest of construction that does satisfy our definitions. So it does hide the number of key rotations. And again, there are lots of details in this constructions. I'm going to point you to the paper for how the nested encryption works. So >>now we get to the main point that I wanted to make, which is >>comparing the different constructions. So let's compare the lattice based construction with a D. D H but its construction and the symmetric nested construction for the DTH based construction. We're going to use the GPRS system just for a comparison point, >>so you can see that for four kilobyte message >>blocks, the lattice based system is about 300 times faster than the D. D H P A system. And the reason we're able to get such a high throughput is, of course, lattices air more efficient but also were able to use the A V X instructions for speed up. 
And we've also optimized the ring that we're using quite a bit, specifically for this purpose. Nevertheless, when we compare to the symmetric system, we see that the symmetric system is still an order of magnitude faster than even the lattice system. And so for encryption and re-encryption purposes, the symmetric-based system is the fastest that we have. When we go to larger message blocks, 32-kilobyte message blocks, you see that the benefit of the lattice system over the DDH system is even greater, but the symmetric system performs even better. Now, if you think back to how the symmetric system works, it creates many layers of encryption, and as a result, during decryption, we have to decrypt all these layers. So decryption in the symmetric system takes linear time in the number of re-encryptions. You can see this in this graph, where the time to decrypt increases linearly with the number of re-encryptions, whereas the key-homomorphic methods take a constant amount of time to decrypt, no matter how many re-encryptions there are. The crossover point is about 50 re-encryptions, which is why we said that if, in the lifetime of the ciphertext, we expect fewer than 50 re-encryptions, you might as well use the symmetric nested system. But if you're doing frequent re-encryptions, let's say weekly re-encryptions, you might end up with many more than 50 re-encryptions, in which case the lattice-based key-homomorphic scheme is the best updatable system we have today. So I'm going to stop here, but let me leave you with one open problem, if you're interested in questions in this area. In our lattice-based construction, because of the noise that's involved in lattice constructions, it turns out we had to slightly weaken our definitions of security to get the security proof to go through.
I think it's an interesting problem to see if we can build a lattice-based system that's as efficient as the one that we have, but one that satisfies our full security definition. Okay, so I'll stop here, and I'm happy to take any questions. Thank you very much.
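The crossover guidance from the talk can be captured in a small decision helper. The 50-re-encryption threshold is the figure quoted above; the function itself is just packaging.

```python
def choose_scheme(expected_re_encryptions: int, crossover: int = 50) -> str:
    """Rule of thumb from the talk: below roughly 50 lifetime re-encryptions,
    the nested symmetric scheme wins (cheap symmetric operations, but
    decryption is linear in the number of re-encryptions); beyond that, the
    lattice-based key-homomorphic scheme's constant-time decryption pays off."""
    if expected_re_encryptions < crossover:
        return "nested symmetric"
    return "lattice key-homomorphic"
```

Weekly rotation over even a single year gives 52 re-encryptions, which is the talk's argument for the lattice-based scheme in that regime.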

Published Date : Sep 21 2020


Duncan Lennox, Amazon Web Services | AWS Storage Day 2019


 

[Music] >> Hi everybody, this is Dave Vellante with theCUBE. Welcome to Boston. We're covering storage here at Amazon Storage Day, and we're looking at all the innovations and the expansion of Amazon's pretty vast storage portfolio. Duncan Lennox is here. He is the director of product management for Amazon EFS. Duncan, good to see you. >> It's great to be here. >> So EFS stands for Elastic File System. What is Amazon EFS? >> That's right. EFS is our NFS-based file system service, designed to make it super easy for customers to get up and running with a file system in the cloud. >> So should we think of this as kind of on-prem file services just stuck into the cloud, or is it more than that? >> It's more than that, but it's definitely designed to enable that. We wanted to make it really easy for customers to take the on-prem applications that they have today that depend on a file system and move those into the cloud. >> When you look at the macro trends, particularly as it relates to file services, what are you seeing? What are customers telling you? >> Well, the first thing that we see is that it's still very early in the move to the cloud. The vast majority of workloads are still running on-prem, and customers need easy ways to move those thousands of applications they might have into the cloud without having to necessarily rewrite them to take advantage of cloud-native services. And that's a key thing that we built EFS for: to make it easy to just pick up the application and drop it into the cloud without the application even needing to know that it's now running in the cloud. >> Okay, so that's transparent to the application and the workload. >> It absolutely is. We built it deliberately using NFS so that the application wouldn't even need to know that it's now running in the cloud, and we also built it to be elastic and simple for the same reason, so customers don't have to worry about provisioning the storage they need. It just works. >> NFS is hard. Making NFS simple and elastic is not a trivial engineering task, is it? >> It hadn't been done until we did it. A lot of people said it couldn't be done. How could you make something that truly was elastic in the cloud but still support NFS? But we've been able to do that for tens of thousands of customers successfully. >> And what's the real challenge there? Is it to maintain that performance and the recoverability? From a technical standpoint, an engineering standpoint, what is it? >> Yeah, it's all of the above. People expect a certain level of performance, whether that's latency, throughput, and IOPS, that their application is dependent on, but they also want to be able to take advantage of that pay-as-you-go cloud model that AWS created back with S3 13 years ago. So that elasticity that we offer to customers means they don't have to worry about capex. They don't have to plan for exactly how much storage they need to provision. The file system grows and shrinks as they add and remove data. They pay only for what they're using, and we handle all the heavy lifting for them to make that happen. >> This opens up a huge new set of workloads for your customers, doesn't it? >> It absolutely does, and a big part of what we see is customers wanting to go on that journey through the cloud. So initially they're starting with lifting and shifting those applications, as we talked about, but as they mature, they want to be able to take advantage of newer technologies like containerization and ultimately even serverless. >> All right, let's talk about EFS IA. Infrequently accessed files are really what it's designed for. Tell us more about it. >> Right. So one of the things that we heard a lot from our customers, of course, is: can you make it cheaper? We love it, but we'd like to use more of it. And what we discovered is that we could develop this infrequent access storage class. How it works is you turn on a capability we call lifecycle management, and it's completely automated after that. We know from industry analysts and from talking to customers that the majority of data, perhaps as much as 80%, goes pretty cold after about a month and is rarely touched again. So we developed the infrequent access storage class to take advantage of that. Once you enable it, which is a single click in the console or one API call, you pick a policy, 14 days or 30 days, and we monitor the read/write I/O to every file individually. Once a file hasn't been read from or written to in that policy period, say 30 days, we automatically and transparently move it to the infrequent access storage class, which is 92% cheaper than our standard storage class. It's only two and a half cents in our US East 1 region, as opposed to 30 cents for our standard storage class. >> Two and a half cents per gigabyte? >> Per gigabyte per month. What our customers are particularly excited about is that it remains active file system data. We move your files to the infrequent access storage class, but it does not appear to move in the file system. So for your applications and your users, it's the same file in the same directory, so they don't even need to be aware of the fact that it's now on the infrequent access storage class. You just get a bill that's 92 percent cheaper for storage for that file. >> I like that. Okay, and it's simple to set up. You said it's one click, and then I set my policy, and I can go back and change my policy? >> That's exactly right. We have multiple policies available. You can change it later. You can turn off lifecycle management if you decide you no longer need it. >> So how do you see customers taking advantage of this? What do you expect the adoption to be like, and what are you hearing from them? >> Well, what we heard from customers was that they'd like to keep larger workloads in their file systems, but because the data tends to go cold and isn't frequently accessed, it didn't make economic sense to keep large amounts of data in our standard storage class. But there are advantages to them in their businesses. For example, we've got customers who are doing genomic sequencing, and for them, having a larger set of data always available to their applications, but not costing them as much as it was, allows them to get more results faster, as one example. >> You obviously see that. >> Yeah. What we're trying to do all the time is help our customers be able to focus less on the infrastructure and the heavy lifting and more on being able to innovate faster for their customers. >> So, Duncan, some of the fundamental capabilities of EFS include high availability and durability. Tell us more about that. >> Yeah. When we were developing EFS, we heard a lot from customers that they really wanted higher levels of durability and availability than they'd typically been able to have on-prem. It's super expensive and complex to build high-availability and high-durability solutions, so we've baked that in as a standard part of EFS. When a file is written to an EFS file system and that acknowledgement is received back by the client, at that point the data is already spread across three availability zones, for both availability and durability. What that means is that not only are you extremely unlikely to ever lose any data, but if one of those AZs goes down or becomes unavailable for some reason to your application, you continue to have full read/write access to your file system from the other two availability zones. >> Traditionally this would be a very expensive proposition. It was sort of on-prem and multiple data centers. Maybe talk about how it's different in the cloud. >> Yeah, it's complex to build. There are a lot of moving parts involved, because in our case, with three availability zones, you're talking about three physically distinct data centers, high-speed networking between those, and actually moving the data so that it's written not just to one but to all three. And we handle that all transparently under the hood in EFS. It's all included in our standard storage cost as well, so it's not something that customers have to worry about from either a complexity or a cost point of view. >> So very, very, I guess, low RPO and RTO, essentially zero, if you will, between the three availability zones? >> Yes, because once your client gets that acknowledgement back, it's already durably written to the three availability zones. >> All right, we'll give you the last word. Just in the world of file services, what should we be paying attention to? What kinds of things are you really trying to achieve? >> I think it's helping people do more for less, faster. So there's always more we can do in helping them take advantage of all the services AWS has to offer. >> Spoken like a true Amazonian. Duncan, thanks so much for coming on theCUBE. >> Thank you. >> All right, and thank you for watching, everybody. We'll be back from Storage Day in Boston. You're watching theCUBE.
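The Infrequent Access numbers quoted in the interview are easy to sanity-check. The prices below are the ones Lennox states (US East 1, per gigabyte-month), and the 80% cold fraction is his industry-analyst figure; everything else is just arithmetic.

```python
# Prices per GB-month as quoted in the interview (US East 1).
STANDARD = 0.30            # EFS Standard ("30 cents")
INFREQUENT_ACCESS = 0.025  # EFS IA ("two and a half cents")

def savings_ratio() -> float:
    """Fraction saved when a single file moves from Standard to IA."""
    return 1 - INFREQUENT_ACCESS / STANDARD

def blended_monthly_cost(total_gb: float, cold_fraction: float = 0.8) -> float:
    """Blended bill once lifecycle management has tiered the cold share
    (the interview cites ~80% of data going cold after about a month)."""
    cold_gb = total_gb * cold_fraction
    hot_gb = total_gb - cold_gb
    return hot_gb * STANDARD + cold_gb * INFREQUENT_ACCESS
```

The per-file saving is 1 - 0.025/0.30, about 92%, matching the quote; blended across a terabyte that is 80% cold, the monthly bill drops from $300 to $80.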
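The write path Lennox describes, where a write is acknowledged only after the data sits in all three Availability Zones and reads keep working when one zone fails, can be modeled with a toy in-memory store. This illustrates the behavior only, not the real EFS implementation, and the zone names are invented.

```python
class ToyMultiAZFileSystem:
    """Toy model of multi-AZ durability: replicate before acknowledging,
    so a single-AZ outage never blocks reads or loses data."""

    def __init__(self):
        self.zones = {"az-1": {}, "az-2": {}, "az-3": {}}
        self.unavailable = set()

    def write(self, path: str, data: bytes) -> str:
        # Replicate to every zone first; only then does the client see success.
        for files in self.zones.values():
            files[path] = data
        return "ack"

    def read(self, path: str) -> bytes:
        # Serve from any zone that is still up and holds the file.
        for name, files in self.zones.items():
            if name not in self.unavailable and path in files:
                return files[path]
        raise FileNotFoundError(path)

    def fail_zone(self, name: str) -> None:
        self.unavailable.add(name)
```

Because the acknowledgement is withheld until all replicas exist, an acknowledged write survives any single zone failure, which is the "essentially zero RPO and RTO" point made at the end of the interview.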

Published Date : Nov 20 2019


Erik Kaulberg, Infinidat | CUBEConversation, November 2019


 

(jazzy music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE conversation. >> Hello, and welcome to theCUBE studios in Palo Alto, California for another CUBE conversation, where we go in depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. It's going to be a multi-cloud world. It's going to be a multi-cloud world because enterprises are so diverse, have so many data requirements and application needs that it's going to be serviced by a panoply of players, from public cloud to private cloud and SaaS companies. That begs the question, if data is the centerpiece of a digital strategy, how do we assure that we remain in control of our data even as we exploit this marvelous array of services from a lot of different public and private cloud providers and technology companies? So the question, then, is data sovereignty. How do we stay in control of our data? To have that conversation, we're joined by Erik Kaulberg, who's a vice president at Infinidat. Erik, welcome back to theCUBE. >> Thanks, nice to be here. >> So before we get into this, what's a quick update on Infinidat? >> Well, we just crossed the 5.4 exabyte milestone deployed around the world, and for perspective, a lot of people don't appreciate the scale at which Infinidat operates. That's about five and a half Dropboxes worth of content on our systems and on our cloud services deployed around the world today. So it's an exciting time. It's great being able to deliver these kinds of transformations at large enterprises all over the place. Business has been ramping wonderfully, and the other elements of our product portfolio that we announced earlier in the year are really coming to bear for us. 
>> Well, let's talk about some of those product, or some of those announcements in the product portfolio, because you have traditionally been more of an interestingly and importantly architected box company, but now you're looking at becoming more of a full player, a primary citizen in the cloud world. How has that been going? >> It's been great. So we announced our Elastic Data Fabric program, which is really our vision for how enterprises should deal with data in a multi-cloud world, in May, and that unified several different product silos within our company. You had InfiniBox on the primary storage appliance platform standpoint. You have Neutrix Cloud on the primary storage for public clouds. You have InfiniGuard for the secondary storage environments, and now we've been able to articulate this vision of enterprises should be able to access the data services that they want at scale and consume them in however way they prefer, whether that be on a private cloud environment with an appliance or whether that be in an environment where they're accessing the same data from multiple public clouds. >> So they should be able to get the cloud experience without compromising on the quality and the characteristics of the data service. >> Exactly. And fundamentally, since we deliver our value in the form of software, the customer shouldn't have to really care on what infrastructure it's running. So Elastic Data Fabric really broadens that message so that customers can understand, yes, they can get all the value of Infinidat wherever they'd prefer it. >> Okay, so let's dig into this. So the basic problem that companies face, to kind of lay this up front, the basic problems that companies face is they want to be able to tap into this incredible array of services that you can get out of the cloud, but they don't necessarily want to force their data into a particular cloud vendor or particular cloud silo. 
So they want the services, but they want to retain control over their data and their data destiny. How do you, in your conversations with customers, how do you see your customers articulating that tension? >> I think when I deal with the typical CIO, and I was in a couple of these conversations literally yesterday, it all comes back to the fundamental idea of do you want to pledge allegiance to a single public cloud provider forever? If the answer to that is no or if there's any hesitation in that answer, then you need to be considering services that go beyond the walled gardens of individual public clouds. And so that's where services like our Neutrix Cloud service can allow customers to keep control, keep sovereignty over their data in order to make the right decisions about where the compute should reside across whichever public cloud might offer the best combination of capabilities for a given workload. >> So it has been historically a quid pro quo where, give me your data, says the public cloud provider, and then I'll make available this range of services to you. And enterprises are saying, well, I want to get access to the services without giving you my data. How are companies generally going to solve this? Because it's not going to be by not working with public cloud or cloud companies, and it's not going to be by wanting to think too hard about which cloud companies to work with for which types of workloads. So what is the solution that folks have to start considering? Not just product level, but just generally speaking. >> Speaking broadly, I would say that there's no single answer for every company, but most large enterprises are going to want some sort of solution that allows their data to transcend the boundaries of public clouds. And there's a couple of different approaches to doing that. 
Some approaches just take software and then knit together multiple data silos across clouds, but you still have the data physically reside in different cloud environments, and then there are some approaches where they abstract away the data, where the data's physically stored, so that it can be accessed by multiple public clouds. And I think some mix of those approaches, depending on the scale of the company, is probably going to be one element of the solution. Now, data and how you treat the locations of data isn't the whole solution to the problem. There's many things to consider about your application state, about the security, about all that stuff, but-- >> Intellectual property, compliance, you name it. >> Absolutely. But if you don't get the data problem figured out, then everything else becomes a whole lot more complicated and a whole lot more expensive. >> So if we think about that notion of getting the data problem right, that should, we should start thinking in terms of what services does this data with these characteristics, by workload, location, intellectual property controls, whatever else they might be, what service does that data require? Today, the range of services that are available on more traditional approaches to thinking about storage are a little bit more mature. They're a little bit more, the options are a little bit greater, and the performance is often a lot better than you get out of the public cloud. Would you agree with that and can you give us some examples? >> Of course, yeah. And I think that in general, the public cloud providers have a different design point from traditional enterprise environments. You prioritize scale over resilience, for example. And specific features that we see come up a lot in our conversations with large enterprises are snapshots, replication with on-prem environments, and the ability to compress or reduce data as necessary depending on the workload requirements. 
There's a bunch of other things that get rolled into all of that. >> But those are three big ones. >> But those are big ones, absolutely. >> So how are enterprises thinking about being able to access all that's available in the cloud while also getting access to the data services they need for their data? >> Well, in the early days of public cloud deployments, we saw a lot of people either compromising on the data services and rearchitecting their applications accordingly or choosing to bring in more expensive layers to put on top of the standard hyperscale public cloud storage services and try and amalgamate them into a better solution. And of course we think that those are kind of suboptimal approaches, but if you have the engineering resources to invest or if you're really viewing that as something you can differentiate your business on, you want to make yourself a good storage provider, then by all means have at it. We think most enterprises don't want to go down that path. >> So what's your approach? How does Infinidat and your company provide that capability for customers? >> Well, step one is recognizing that we have a robust data services platform already out there. It's software, and we happen to package it in an appliance format for large enterprises today. That's that 5.4 exabytes, that's mostly the InfiniBox product, which is that software in an appliance. And so we've proven our core capabilities on the InfiniBox platform, and then about two and a half years ago now, we launched a service called Neutrix Cloud. And Neutrix Cloud takes that robust set of capabilities, that set of expectations that enterprises have around how they're going to handle multi-petabyte datasets, and delivers all those software-driven values as a public cloud service. So you can subscribe to the value of Infinidat without having any boxes involved or anything like that. And then you can use it for two things, basically. One is general purpose public cloud storage. 
So a better alternative or a more enterprise-grade alternative to things like AWS EBS or EFS. And another use case that is surprisingly popular for us is customers coming from on-prem environments and using the Neutrix Cloud service as just a replication target to get started. Kind of a bridge-to-the-cloud approach. So we can support any combination of those types of scenarios, and then it gets most interesting when you combine them and add the multi-cloud piece, because then you're really seeing the benefits of eliminating the data silos in each individual public cloud when you can have, say, a file system that can be simultaneously mounted and used by applications in AWS, Azure, and GCP.
The data's right here in Neutrix or in something like Neutrix. And what will you offer me to run this workload for 35 minutes in Amazon? Same thing to Azure, same thing to GCP. I think that kind of competitive marketplace for public cloud compute is the natural endpoint for a disaggregated storage approach like ours, and that's what frankly gets some of our investors very excited about Infinidat, as well, because we're really the only ones who are making a strong investment in a multi-cloud piece first and foremost. >> So the ability to have greater control over your data means you can apply it in a market competitive way to whatever compute resource you want to utilize. >> Exactly. Spot instance pricing, for example, is only the beginning, because, I assume you're familiar with this, you can basically get Amazon to give you a discounted rate on a block of compute resources, similar to the other public clouds. But if your data happens to be in Amazon but Azure's giving you a lower spot instance rate, you're kind of SOL or you're going to pay egress fees and stuff like that. And I think that just disaggregating the data makes it a more competitive marketplace and better for customers. I think there's even more improvements to be had as the granularity of spot instance pricing becomes higher and higher so that customers can really pick with maximum economic efficiency where they want a workload to go for how long and ultimately drive that value back into the return that IT delivers to the business. >> So, Erik, you mentioned there's this enormous amount of data that's now running on Infinidat's platforms. Can you give us any insight into the patterns, particular industries, size of companies, workloads, that are being featured, or is it just general purpose? >> It's always a tough question for us because it is truly a horizontal platform. The one unifying characteristic of pretty much every Infinidat user is scale. If you're in the petabyte arena, then we're talking. 
If you're not in the petabyte arena, then you're probably talking to one of the upstart vendors in our space. It's business-critical workloads. It's enterprise-grade, whether you talk about enterprise-grade in the sense of replacing VMAX-type solutions or whether you talk about enterprise-grade in terms of modernizing cloud environments like what I've just described. It's all about scale, enterprise-grade capabilities. >> Erik Kaulberg, Infinidat, thanks again for being on theCUBE. >> Thanks. >> And once again, I want to thank you for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (jazzy music)
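Kaulberg's marketplace idea above is easy to model: if storage is cloud-neutral, placing a workload is a pure price comparison, while data captive in one cloud adds an egress penalty to every other bidder. All numbers below, including the per-gigabyte egress figure, are invented placeholders, not real list prices.

```python
def best_cloud(bids, data_home=None, egress_per_gb=0.09, gb_to_move=0.0):
    """Pick the cheapest cloud for a workload given per-cloud price bids.

    With cloud-neutral storage (data_home=None) this is a pure comparison;
    with data captive in one cloud, running anywhere else pays egress."""
    def effective(cloud):
        price = bids[cloud]
        if data_home is not None and cloud != data_home:
            price += egress_per_gb * gb_to_move  # data-gravity penalty
        return price
    return min(bids, key=effective)
```

With the data disaggregated, the cheapest bidder wins outright; with the data parked in one provider, even a modest egress charge can make the home cloud the rational choice, which is the lock-in dynamic the interview describes.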

Published Date : Nov 15 2019


David Chang, HelloSign, a Dropbox Company | Coupa Insp!re19


 

>> From the Cosmopolitan Hotel in Las Vegas, Nevada, it's the Cube, covering Coupa Inspire 2019. Brought to you by Coupa. >> Welcome to the Cube. Lisa Martin on the ground at Coupa Inspire '19 at the Cosmopolitan, the chic Cosmopolitan in Las Vegas. Very pleased to be joined by my friend David Chang, the VP of business from HelloSign, a Dropbox company. David, welcome to the Cube. >> Thank you for having me on. >> Great to have you here. It is a lot of fun. You could really geek out talking technology all day. >> Too much, so. >> Yeah, there's that, plus the fact that you gotta gamble. It'll keep it real. >> You know, I have no skills in that whatsoever, but maybe I'll try it. I'll take your advice. Give our audience an overview of HelloSign, a Dropbox company. What you guys are, what you do, all that good stuff. >> Great, great. So HelloSign is today one of the fastest growing, if not the fastest growing, electronic signature companies in the marketplace today, and today we host, I think, over 100,000 paying businesses that use one of our products, in over 150 different countries. Today we, well, we were acquired by Dropbox. I'm sure everybody's familiar with Dropbox, one of the biggest brands in the Internet industry today, the leader in consumer and business file sync and share. So Dropbox actually purchased us, you know, for a number of reasons. First of all, it's an amazing product and cultural fit with them. But also, electronic signature is an enormous market. It is one piece of the overall digital transformation, but e-signature alone, analysts view, is probably a $25,000,000,000 industry, of which we've only barely scratched the surface. So it's a huge opportunity. >> Absolutely, and it's that big. >> That's exactly it, you know. That's actually what's shocking about how big it is, because if you think about it, in almost every business there are not just one, but probably dozens of different use cases where you need to sign documents.
So electronic signature honestly is relevant for everything from all your sales agreements to all of your HR and offer letter and onboarding agreements. It's relevant specifically for all of your procurement and buying agreements, all your vendor contracts that need to be signed, your supply agreements that need to be signed, and NDAs and purchase orders. All these documents need to be signed. And today, you know, only a few of these use cases have been brought into the digital arena. So there's a whole huge area to grow. And with Dropbox being a leader in content management, where you normally store your documents, >> Right. >> it's a natural workflow extension to have them signed by HelloSign. >> Excellent. Well, one of the things that we've been talking a lot about, we talk about this in every show, is the effect of consumerization. And we talked about this yesterday with Rob Bernshteyn, Coupa's CEO, and a number of guests yesterday and today, that we're consumers every day, even when we're at work. Oh, I forgot, I gotta buy this, we go on Amazon, we know we can get it in a day, but now we have the same expectations whether we're buying business, you know, software or whatnot. And we also want to be able to do things from our mobile phone, including sign, hey, I got this new job offer or whatever it happens to be, without having, oh my God, there's a PDF, I have to go home, get to my desktop. Talk to me about PDFs, because I can imagine when people either fill them out manually, then they scan them back in and somebody's gotta print it out or fax it, that data is stuck in the PDF. How does HelloSign work to free that data in a PDF? >> Sure, our design philosophy really is about, you know, making a superior user experience, both for the person who needs to get a document signed, but also somebody who's actually gonna be signing it.
So when we designed our products, we made it as easy as possible for users to sign, and recognizing some of the difficulties with PDFs and signing on your mobile phone, we've made our products specifically mobile responsive, so they don't have to pinch the screen, zoom and scan and all that kind of stuff when typing data. We make it very easy, walking through the data entry process to streamline the whole thing. We just want to put user and customer satisfaction first and foremost. >> Removing the friction, probably getting documents signed much faster. >> Absolutely. I mean the basic, you know, benefits associated with e-signature overall are, honestly, getting your documents signed significantly faster and more efficiently. We have customers that used to take up to two weeks to get a contract signed. And, you know, as a salesperson, that gets you real nervous, right? So we've seen those contracts now get signed in less than a day. Also, electronic signature provides a ton of transparency. So throughout the process, we can actually provide notifications that let the salespeople know that somebody's opened up the email, looked at the document, reviewed it, signed it, completed it. And even if the document hasn't been signed, they can send reminders to make sure to sign it. And the third thing is, you know, you can't emphasize this enough, the value associated with productivity increases. Come on, everyone's gone out, printed out the document, walked it over to the scanning machine, you know, then uploaded it back into your computer. You know, that whole step should be completely digital and automated as much as possible. So we see productivity increases at some of our customers between two x, three x, four x, right, in reducing the number of man hours people have to spend to get documents signed.
>> Not only is that a cost savings, but you can think of all the other benefits, like we're talking about even for the procurement officers here at Coupa Inspire. It's not just saving money. It's all of the other ripple effects: that cost savings, resource reallocation, speed, all enable this digital transformation, which then enables the business to capture new customers, increase customer lifetime value, shareholder value. There's a lot of upside to this. >> Especially for a company like Coupa. First of all, it's an incredible fit for what we do. Procurement documents, that whole host of them, they need to be signed, and by, you know, utilizing HelloSign, we really facilitate that whole experience, and we're very excited to expand our partnership today. We're a Coupa Advantage partner. >> Tell me about the Coupa Advantage program benefits. Who wins? >> Coupa Advantage is this very unique marketplace that Coupa's brought together. They're pulling together both their customers, some of their lead customers, and they're matching them with selected suppliers that provide their customers a whole host of services they need. So it could be everything from goods and office supplies, all the way to services like travel services and staffing services, all the way to software, key software that their customers would utilize in conjunction with their procurement and business spend management, so companies like HelloSign. So by matchmaking, for the suppliers' customers, they get some pre-negotiated discounts that offer them immediate savings off of buying direct from retail, and then from a supplier side, we get huge benefits because we get to meet some of the most targeted companies that we want. So Coupa effectively is one of our favorite matchmakers. >> Nice. So, yeah, there's a tremendous amount of suppliers in that program. I forget the number and I don't want to misquote it.
But I can imagine a Coupa customer that's using them for procurement and expenses and invoices and payments, and they talked a lot about Coupa Pay and new things today, would then have the opportunity through the Coupa Advantage program to do procurement contract workflows with HelloSign as the e-signature. >> Exactly, really, exactly. And that, like I said, is a great match for what their customers need, and by virtue of being a Coupa Advantage supplier, we've been pre-vetted by Coupa. We've also worked out some special pre-negotiated discounts with Coupa to make sure we pass that value on to their customers. >> So some of the things that came out today and yesterday as well, with the Amazon extension, and you and I talked about the consumerization effect a few minutes ago. What opportunities does that open up for HelloSign, with Coupa Pay being able to enable IT folks to have this visibility for the entire software lifecycle, from search to management? With this consumerized approach, does that open up doors for HelloSign? >> Well, I think, you know, if you look at the total life cycle of any purchase, right, from beginning to end, from everything from identifying the products that you want, to being able to, you know, negotiate and secure a price that is good for you, that whole process, there's traditionally been a lot of friction there. So the same way that there's friction on the e-commerce side with checkout and purchase, right, and lining up your payment information, Coupa is streamlining that whole thing for the customer. So analogously, if there's documents associated with that workflow, then by using companies like HelloSign and our products, we're able to continue that process of digitizing the end-to-end purchase cycle.
>> And I imagine, from an information security perspective, everything... >> Come on, in the old days you'd sign a contract and it sat on, oh, my boss's desk, and anybody could come by and pick that up. So nowadays, you know, nowadays we keep it stored securely in the cloud. We have some of the highest security requirements of any e-signature company out there, and that really matches Coupa's philosophy as well. They go overboard on security, which we really appreciate. That mission is completely aligned with each other. >> Awesome. So last few seconds here. I know that you guys are early in the acquisition with Dropbox. What's exciting you for the rest of calendar '19, since all these fiscal years are different? And what's next with you guys and Coupa? >> Yeah, so first of all, with Dropbox, we're just excited to be part of an enormous community of over 500,000,000 users globally. The reach is insane. >> I know. My mom, yeah, I think everybody has a Dropbox account. >> So getting introduced to their segments, whether it's the consumer segment, SMB, and increasingly the business segment, offers huge brand recognition and the potential for new customers with Dropbox. So there's a great synergy from a go-to-market perspective. And with Coupa, we're very excited about the next stage of our partnership, entering the CoupaLink program. So, you know, now Coupa customers will be able to sign and send for signature from within the Coupa CLM module. So any of their contracts and vendor agreements that are stored within Coupa, without ever having to leave Coupa, you can send for signature and get the signed document back. And for a company like Coupa, this is of great strategic value: A, because of the benefit it brings its customers, but also, with all the great features that Coupa's coming out with, leading edge, they want to keep as much of that procurement experience within Coupa.
They want Coupa to be that system of record, per se, and system of transaction for all your business spend management. So now you don't have to leave Coupa to get your contract signed. You can do it all within one place, within Coupa, and we enable that. >> That's awesome. That's what we want: keep them in the experience so that they actually adopt it, they get it done, they're more efficient. Well, David, it's been such a pleasure to have you on the Cube. Thank you for joining me today. >> Thanks, Lisa. >> All right, we'll see you next time for David Chang. I'm Lisa Martin. You're watching the Cube from Coupa Inspire '19. Thanks for watching.

Published Date : Jun 26 2019



Dr. Hákon Guðbjartsson, WuxiNextcode & Jonsi Stefansson, NetApp | AWS re:Invent 2018


 

Live from Las Vegas, it's the Cube, covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> And welcome to Las Vegas! We're at AWS re:Invent, day one of three days of coverage here on the Cube. Along with Justin Warren, I'm John Walls. Glad to have you with us here for our live coverage. We're joined now by Jonsi Stefansson, who's the vice-president of Cloud Services at NetApp, and Hákon Guðbjartsson, who's the CIO of WuxiNextcode. Gentlemen, thanks for joining us, good to have you here. >> Thank you. >> Yeah, thank you for having, >> Having us. >> And I think, not only your first time on the Cube, but I believe the first time we have had natives of Iceland, I believe. (laughs) >> So, a first for us as well. But glad to have you. First off, Hákon, if you will, tell us a little bit about WuxiNextcode, what you do and why you're here. >> Yeah, so we are a company that specializes in analysis of genomic data, all the way from gathering cohorts for our pharma customers to providing sequencing services, data analytics, and AI. So we basically cover the full end-to-end solution space for genomic analysis. >> Okay, and now let's talk about the partnership, or at least the work that's going on between you. If you would, Jonsi, a little bit about, when you have a client like this, genomics, what exactly are you trying to peel back for them? What's the challenge that you're trying to address for them?
So we've been working very closely with them for the past I would say four to five months, and now we've moved their entire production sets into AWS. So that's been something that these research companies have been struggling with. And the Cloud Volumes addresses that, with the data management capabilities and the performance tiers that we offer. >> Could you give us a bit more detail on what it is about Cloud Volumes that's special and different compared to what you would generically get from AWS. Because people have been able to put storage into the cloud >> for some time, >> Of course. >> so what is it about Cloud Volumes that's unique? >> So I think we're very complementary to the storage offerings that AWS has currently. Like WuxiNextcode is running for traditional database, they are using 53 instances, EC2 instances, that all have EPS volumes. But for the analytic data, it actually gets pushed to NFS. So we are basically just have a more performance solution for shared everything solution. If you compare that to EFS for example, EFS is a great offering that AWS already has, but it doesn't reach into that scale, for example, when it comes to the performance tiers that we are offering. We also offer a differentiator for the customers to be able to clone and snapshot data, and only the tester, not to a full copy. So for example, it's really important for data scientists like WuxiNextcode to always be working on production datasets, for like data scientists. So for them to be able to replicate the data across all different environments, testing, staging, development, and production, they basically only have a small tester difference in all those volumes. Which is really important, instead of always having to copy 40 terabyte chunks, they're basically just taking the different between all of them and using the on tap cloning technology. So that's a very unique value proposition. 
Another unique value proposition of Cloud Volumes is that you can automatically or dynamically change the performance tier of the volume. So you can go from standard, to premium, to extreme dynamically, based on when you actually need that extra level of performance. So you don't need to be continuously running at extreme, but only when you actually need to. >> So Hákon, what was it about Cloud Volumes that got your attention initially, that said, "actually, this is something that we should probably look at"? >> I mean, so a little bit of background: we kind of grew out of an environment where we were sort of evolving our architecture around an HPC cluster architecture with highly scalable storage, and actually we were using NetApp storage in our early days when we were developing. Then as we moved into the cloud, we were somewhat struggling with the NFS scalability that was available in the cloud. So I sort of like to say that we are kind of reborn now in the cloud, because we have lots of interactive analytics that are user-driven, so high-speed IO is fundamental in our analysis. And we were in a way struggling to self-manage NFS storage in the cloud. And now, Cloud Volumes was, in a way, sort of like a dream come true. It's a lot of simplification for us in terms of deployment and management, to have a scalable service providing the NFS service to our applications. So it was a perfect marriage in that regard. It fitted very well with our architecture, and even though some of our storage relies on object storage, all the interactive analytics are performing way better using NFS storage. >> Yeah, Hákon, were there reservations making this move? I mean, were there capabilities that you thought, maybe it sounds good, but I don't know if you can deliver on that, and things on which you've been pleasantly surprised? >> To a certain extent, because we had actually tried several experiments with other solutions, trying to solve sort of the NFS bottleneck for us, and so when we tried this, it actually went extremely smoothly. We onboarded 50 terabytes of data over less than a weekend. And when we ran our first sort of test cases to see whether this was working as expected, we actually found it worked over three times better than with our conventional storage. And not only that, there were certain use cases that we had never completed really to the full end, and we were finishing them in times that we were very pleased with. >> I mean, our goal for the workshop that we did, and we've been doing this with a lot of customers, one of the sort of challenges Hákon came up with was a query, a genome query that he created that he was never able to complete. And he wanted to see if by switching this out, he could actually complete that query. And it used to time out after like three or four hours in his environment.
And having a data fabric infrastructure that allows you to bridge the two I think is something makes a lot of sense with where people want to go in the future. >> Yeah, what are you hoping to hear from Amazon and the show around that idea of being able to live outside of the cloud? Traditionally, Amazon's been very keen on saying, "no, no, everything must be here and in the cloud." They're not so keen on this idea of a data fabric that could move things around in different locations. What are you expecting to hear from them this week? >> I mean, I wouldn't say so much that I'm expecting to hear something, but it's clear for me that customers are more willing now to go into the cloud, but regardless of that, there's still certain reasons to keep certain infrastructures still where it is, moving legacy infrastructure into the cloud may not be necessarily the best way forward, rather to be able to integrate it more seamlessly with the cloud and evolve the new functionality, new features in the cloud. And also there are some, I wouldn't call it privacy, but there are lots of data sets that people are reluctant to move into the cloud still because of the way they are managed, et cetera. And being able to bridge those two things is something that I think is valuable for our customers. >> I actually don't think that the decision to move into the cloud, it's never been a cost decision, in my opinion. It is for companies to actually be able to compete with other companies within their sector and to take advantage of the rapid innovation that is happening in the cloud. I mean, if you take autonomous vehicles for example, the companies that are actually in the cloud and taking advantage of like Changemaker and like this deep learning and machine learning algorithms, it's really hard to compete with AWS, it's really hard to compete with Google or Azure. These are really big companies that are pouring a lot of money into innovation. 
So I think it's always, it's driven by necessity to stay competitive, to go into the cloud, and being able to tap into that innovation. This actually brings into the sort of, what does it mean to be cloud native? If you're cloud native, it means that your solution, even though it's being serviced through a marketplace, it needs to be able to tap into that innovation. You need to connect to that ecosystem that AWS has. To me, that's a much stronger driving force to drive those legacy applications into the cloud. But with the data fabric, we want to really bridge the gap. So it should be relatively easy for your application or your workload to find the best hope at any given time. Whether that's on premise of in the public cloud, you should have like a, an intelligent way of deciding where each one of your workloads should go. And that's the whole point of the data fabric. Make that really, really easy. >> Well you said the partnership's been about four months, so you're still in the honeymoon, but here's to continued success and thanks for being with us here on the Cube. We appreciate it. >> Thank you so much for having us. >> We are happy to be here. >> Have a great show. Back with more, we are live here on the Cube at AWS re:Invent and we'll be back with more in just a moment. (energetic music)

Published Date : Nov 27 2018



INFINIDAT Portfolio Launch 2018


 

>> Announcer: From the SiliconANGLE Media office in Boston, Massachusetts, it's The Cube! Now, here's your host, Dave Vellante. >> Hi everybody! My name is Dave Vellante. Welcome to this special presentation on The Cube. Infinidat is a company that we've been following since its early days. A hot storage company, growing like crazy, doing things differently than most storage companies. They've basically been doubling revenues every year for quite some time now. And Brian Carmody is here to help me kick off this announcement and the presentation today. Brian, thanks for coming back on. >> Hey Dave, thanks for having me. >> So, you may have noticed we have a crowd chat going on live. It's crowdchat.net/Infinichat. You can ask any question you want, it's an ask-me-anything chat about this announcement. This is a bi-coastal program that we're running today between here and our offices in Palo Alto. So, Brian, let's get into it. Give us the update on Infinidat. >> Things are going very well at Infinidat. We're just coming out of our 17th consecutive quarter of revenue growth, so we have a healthy, sustainable, profitable business. We have happy, loyal customers. 71% of our revenue in 2017 came from existing customers that were increasing their investment in our technologies. We're delighted by that. And we have surpassed three exabytes of customer deployments. So, things are wonderful. >> And you've done this essentially as a one-product company. Is that correct? >> Yes, so going back to our first sale in the summer of 2013, that growth has been on the back of a single product, InfiniBox, targeted at primary storage.
>> Yeah, exactly, so InfiniBox is a software product. It has dumb hardware, dumb commodity hardware, and it has it has very smart intelligent software. This allows us to kind of break from this forklift upgrade model, and move to a model where the product gets better over time. So if you look at the history of InfiniBox going back to the beginning, with each successive release of our software, latency goes down, new features are added, and capacity increases become available. And this is the difference between the software versus a hardware based innovation model. >> One of the interesting things I'll note about Infinidat is you're doing software defined, you don't really use that terminology, it's the buzzword in the industry. The other buzzword is artificial intelligence, machine learning. You're actually using machine intelligence, You and I have talked about this before, to optimize the placement of data that allows you to use much less expensive media than some of the other guys, and deliver more value to customers. Can you talk about that a little bit? >> Yeah, absolutely, and by the way the reason why that is is because we're an engineering company, not a marketing company, so we prefer just doing things rather than talking about them. So InfiniBox is the first expression of a set of fundamental technologies of our technology platform, and the first piece of that is what you're talking about. It's called NeuroCache. And it's our ML and AI infrastructure for learning customer workloads and using that insight in real time to optimize data placement. And the end result of this is driving cost out of storage infrastructure and driving up performance. That's the first piece. That's NeuroCache. The second piece of our technology foundations is INFINISNAP. So this is our snapshot mechanism that allows infinite, lock-free, copy data management with absolutely no performance impact. So that's the second. And then the third is INFINIRAID and our Raz platform. 
So this is our distributed RAID architecture that allows us to have multi-petabyte scale and extremely high durability, but also extremely high availability of the services, and that's what enables our seven nines reliability guarantee. Those things together are the basis of our products. >> Okay, so we're here today, and now what's exciting is that you're expanding beyond just the one-product company into a portfolio of products, so take us through what you're announcing today. >> Yeah, so this is a really exciting day, and it's a milestone for Infinidat, because InfiniBox now has some brothers and sisters in the family. The first thing that we are announcing is a new F Series InfiniBox model, which we call F6212. So this is the same feature set, it's the same software, it's the same everything as its smaller InfiniBox models, but it is extremely high capacity. It's our largest InfiniBox. It's 8.3 petabytes of capacity in that same F6000 form factor. So that's number one. Number two, we're announcing a product called InfiniGuard. InfiniGuard is petabyte-scale data protection with lightning-fast restores. The third thing that we're announcing is a new product called InfiniSync. InfiniSync is a revolutionary business continuity appliance that allows synchronous, RPO-zero replication over infinite distances. It's the first ever in this category. And then the fourth and final thing that we're announcing is a product called Neutrix Cloud. Neutrix Cloud is sovereign storage that enables real-time competition between public cloud providers. The ultimate in agility, which is the ability to go polycloud. And that's the content of the portfolio announcement. >> Excellent, okay, great! Thanks, Brian, for helping us set that up. The program today, as you say, there's a crowd chat going on. Crowdchat.net/infinichat. Ask any question that you want. We're going to cover all these announcements today. InfiniSync is the next segment that's up. Dr. Ricco is here.
We're going to do a quick switch and I'll be interviewing doc, and then we're going to kick it over to our studio in Palo Alto to talk about InfiniGuard, which is essentially, what was happening, Infinidat customers were using InfiniBox as a back-up target, and then asked Infinidat, "Hey, can you actually make this a product and start "partnering with software companies, "back-up software companies, and making it a robust, "back-up and recovery solution?" And then MultiCloud is one of the hottest topics going, really interested to hear more about that. And then we're going to bring on Eric Burgener from IDC to get the analyst perspective, that's also going to be on the West coast, and then Brian and I are going to come back and wrap up, and then we're going to dive into the crowd chat. So, keep it right there everybody, we'll be back with Dr. Ricco, right after this short break. >> Narrator: InfiniBox was created to help solve one of the biggest data challenges in existence, the mapping of the human genome. Today InfiniBox is enabling the competitive business processes of some of the most dynamic companies in the world. It is the apex product of generations of technology, and lifetimes of engineering innovation. It's a system with seven nines of reliability, making it the most available storage solution in the market. InfiniBox is both powerful and simple to use. InfiniBox will transform how you experience your data. It is so intuitive, it will inform you about potential problems, and take corrective action before they happen. This is InfiniBox. This is confidence. >> We're back with Dr. Ricco, who's the CMO of Infinidat. Doc, welcome! >> Thank you, Dave. >> I've got to ask you, we've known each other for a long time. >> We have. >> Chief Marketing Officer, you're an engineer. >> I am. >> Explain that please.
>> Yeah, I have a PhD in engineering and I have 14 patents in the storage industry from my prior job. Infinidat is an unconventional company, and we're using technology to solve problems in an unconventional way. >> Well, congratulations. >> Dr. Ricco: Thank you. >> It's great to have you back on The Cube. Okay, InfiniSync, I'm very excited about this solution, want to understand it better. What is InfiniSync? >> Well, Dave, before we talk about InfiniSync directly, let's expand on what Brian talked about, the foundation technologies of Infinidat and the InfiniBox. In the InfiniBox we provide InfiniSnap, which has near zero performance impact to the application with near zero overhead, just of course the incremental data that you write to it. We also provide async and we provide synchronous replication. Our async replication provides all that zero overhead that we talked about with InfiniSnap, with a four-second interval. We can replicate data four seconds apart, nearly a four-second RPO, recovery point objective. And our sync technology is built on all of that as well. We provide the lowest overhead, the lowest latency in the industry at only 400 microseconds, which provides an RPO of zero, with near zero performance impact to the application as well, which is exciting. Now, on synchronous replication, while there's value in it for those applications, and by the way, all of the technology I just talked about is, just as Brian said, zero additional cost to the customer with Infinidat, and there are some exciting business cases why you'd use any of those technologies. But if you're in a disaster-recovery mode and you do need an RPO of zero, you need to recognize that disasters happen not just locally, not just within your facility; they happen on a larger scale, regionally. So you need to locate your disaster recovery centers somewhere else, and when you do that, you're adding more and more performance overhead just replicating the data over distance.
You're providing additional cost and you're providing additional complexity. So what we're providing is InfiniSync, and InfiniSync extends the customer's ability to provide business continuity over long distances at an RPO of zero. >> Okay, so talk more about this. So, you're essentially putting in a hardened box on site and you're copying data synchronously to that, and then you're asynchronously replicating over distance. Is that correct? >> Yes, and in a traditional sense, what a normal solution would do is implement a multi-site or multi-hop type of topology. You'd build out a bunker site, you'd put another box there, another storage unit there, you'd replicate synchronously to that, and you would either replicate asynchronously from there to a disaster recovery site, or you'd replicate from your initial primary source storage device to your disaster recovery site, which would be a long distance away. The problem with that, of course, is complexity and management, the additional cost and overhead, the additional communications requirements. And you're not necessarily guaranteeing an RPO of zero, depending upon the type of outage. So, what we're doing is providing, in essence, that bunker, by providing the InfiniSync black box, which you can put right next to your InfiniBox. The synchronous replication happens behind the scenes, right there, and the asynchronous replication happens automatically to your remote disaster recovery site. The performance that we provide is exceptional. In fact, the overhead of a write to an InfiniSync black box is less than the write latency of your average all-flash array. And then, we have that protected from any man-made or natural disaster: fire, explosion, earthquake, power outages, which of course you can protect against with generators, but you can't protect from a communications outage, and we'll protect from a communications outage as well.
So the asynchronous communication would use your wide area communications; it can use any other type of wifi communications, or if you lose all of that, it will communicate cellularly. >> So the problem you're solving is eliminating the trade-off, if I understand it. Previously, I would have to either put in a bunker site, which is super expensive, with a huge telecommunications cost and just a complicated infrastructure, or I would have to expose myself to an RPO nowhere close to zero, expose myself to data loss. Is that right? >> Correct. We're solving a performance problem, because your performance overhead is extremely low. We're solving a complexity problem, because you don't have to worry about managing that third location. You don't have to worry about the complexity of keeping three copies of your data in sync. We're solving the risk by protecting against any natural or man-made disaster, and we're significantly improving the cost. >> Let's talk about the business case for a moment, if we can. So, I've got to buy this system from you, so there's a cost in, but I don't have to buy a bunker site, I don't have to rent, lease, buy, staff, et cetera, I don't have to pay for the telecommunications lines, yet I get the same or actually even better RPO? >> You'll get an RPO of zero, which is better than the worst-case scenario in a bunker, and even if we lose your telecommunications you can still maintain an RPO of zero, again because of the cellular back-up, or in the absolute worst case, you can take the InfiniSync black box to your remote location, plug it in, and it will synchronize automatically. >> And I can buy this today? >> You can buy it today, and you can buy it today at a cost that will be less than the telecommunications equipment and subscriptions that you need at a bunker site. >> Excellent, well, great. I'm really excited to see how this product does in the marketplace. Congratulations on getting it out, and good luck with it. >> Thank you, Dave.
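The write path described in this segment, a synchronous commit to the local black box followed by asynchronous shipping to the far site, can be sketched in a few lines. This is a toy model for illustration only; the class and method names are invented and this is in no way Infinidat's actual implementation:

```python
# Toy model of the pattern described above: writes are acknowledged only
# after they are persisted in a local "bunker" appliance, and a background
# process ships them to the far disaster-recovery site.
from collections import deque

class SyncBunker:
    def __init__(self):
        self.journal = deque()  # durably journaled writes, not yet shipped
        self.remote = []        # the out-of-region disaster-recovery copy

    def write(self, data: bytes) -> bool:
        # Synchronous step: persist locally before acknowledging the host.
        # The long-haul link is NOT on this path, so write latency stays low.
        self.journal.append(data)
        return True

    def drain(self) -> None:
        # Asynchronous step: ship journaled writes to the remote site, in order.
        while self.journal:
            self.remote.append(self.journal.popleft())

bunker = SyncBunker()
bunker.write(b"txn-1")
bunker.write(b"txn-2")
# Acknowledged writes are already safe before the async drain runs,
# which is what gives the host RPO-zero semantics.
assert len(bunker.journal) == 2
bunker.drain()
assert bunker.remote == [b"txn-1", b"txn-2"]
```

The point of the sketch is the trade the appliance makes: the acknowledgment implies durable local persistence (hence RPO zero), while the expensive long-distance hop is moved off the write path entirely.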
>> You're welcome. Alright, now we're going to cut over to Peter Burris in Palo Alto, with The Cube Studios there, and we're going to hear about InfiniGuard, which is an interesting solution. Infinidat customers were actually using InfiniBox as a back-up target, so they went to Infinidat and said, "Hey, can you make this a back-up and recovery "solution and partner with back-up software companies?" We're going to talk about MultiCloud, it's one of the hottest topics in the business, want to learn more about that, and then Eric Burgener from IDC is coming in to give us the analyst perspective, and then back here to wrap up with Brian Carmody. Over to you, Peter. >> Thanks, Dave. I'm Peter Burris and I'm here in our Palo Alto, The Cube studios, and I'm being joined here by Bob Cancilla, who's the Executive Vice President of Business Development and Relationships, and Neville Yates, who's a Business Continuity Consultant. Gentlemen, thank you very much for being here on The Cube with us. >> Thanks, Peter, thanks for having us. >> So, there is a lot of conversation about digital business and the role that data plays in it. From our perspective, we have a relatively simple way of thinking about these things, and we think that the difference between a business and a digital business is the role that data plays in the digital business. A business gets more digital as it uses its data differently. Specifically its data assets, which means that the thinking inside business has to change from data protection or asset or server protection, or network protection, to truly digital business protection. What do you guys say? >> Sure, we're seeing the same thing as you're saying there, Peter. In fact, our customers have asked us to spread our influence in their data protection.
We have been evaluating ways to expand our business, to expand our influence in the industry, and they came back and told us, if we wanted to help them, the best way that we could help them is to go on and take on the high-end back-up and recovery solutions, where there really is one major player in the market today. Effectively, a monopoly. Our customers' words, not our own. At the same time, our product management team was looking into ways of expanding our influence as well, and they strongly believed and convinced me, convinced us, our leadership team inside of Infinidat, to enter into the secondary storage market. And it was very clear that we could build upon the foundation, the pillars of what we've done on the primary storage side and the innovations that we brought to the market there. Things around our multiple-petabyte scale, with incredible density, faster-than-flash performance, the extreme ease of use, and lowering the total cost of operation for the enterprise client. >> So, I want to turn that into some numbers. We've done some research here now at Wikibon that suggests that a typical Fortune 1000 company, because of brittle and complex restore processes specifically, too many cooks involved, a focus not on the data but on devices, sees a lot of failure, especially during restore processes, and that can cost, again, a typical Fortune 1000 company, $1.25-plus billion in revenue over a four-year period. What do you say as you think about business continuity for some of these emerging and evolving companies? >> That translates into time is money. And if you need to recover data in support of revenue-generating operations and applications, you've got to have that data come back to be productively usable.
What we do with InfiniGuard is ensure that those recovery time objectives are met in support of that business application, and it is the leveraging of the pillars that Bob talked about in terms of performance, the way we are unbelievable custodians of data, and then we're able to deliver that data back faster than what people expect. They're used today to mediocrity. It takes too long. I was with a customer two weeks ago. We were backing up a three-terabyte database. This is not a big amount of data. It takes about half an hour. We would say, "Let's do a restore," and the gentleman looked at me and said, "We don't have time." I said, "No, it's a 30-minute process." This person expected it to take five or six hours. Add that up in terms of dollars per hour, what it means to that revenue-generating application, and that's where those numbers come from. >> Yeah, and especially the failures because of, as you said, Bob, the lack of ease of use and the lack of simplicity. So, we're here to talk about something. What is it that we're talking about, and how does it work? >> Let me tell ya, I'll cover the what it is. I'll let Nevil get into a little bit of how it works. So the what it is: we built it off the building block of our InfiniBox technology. We started with our model F4260, a one-petabyte usable configuration, we integrated in, seamlessly, deduplication engines, what we call DDEs, and a high-availability topology that effectively protects up to 20 petabytes of data. We combined that with a vast certification and openness of independent software vendors in the data protection space. We want to encourage openness, and an open ecosystem. We don't want to lock any customer out of their preferred software solution in that space. And you can see that with the recent announcements that we've made about expanding our partnerships in this space specifically, Commvault and B.
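For readers unfamiliar with how a deduplication engine works in general, a minimal, generic content-addressed sketch helps: data is cut into fixed-size chunks, each chunk is identified by its hash, and duplicate chunks are stored only once, while restore becomes a sequential walk over chunk references, which is exactly where random-read behavior starts to matter for restore speed. The class below is purely illustrative and is not Infinidat's DDE code:

```python
import hashlib

class DedupStore:
    """Minimal content-addressed store: identical chunks are kept once."""
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # sha256 digest -> chunk bytes (stored once)
        self.streams = {}  # backup name -> ordered list of digests

    def backup(self, name: str, data: bytes) -> None:
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # skip duplicates
            digests.append(digest)
        self.streams[name] = digests

    def restore(self, name: str) -> bytes:
        # The digest list is sequential, but the chunks it points at are
        # scattered; a real system must gather (prefetch) them to feed the
        # data back sequentially at speed.
        return b"".join(self.chunks[d] for d in self.streams[name])

store = DedupStore()
data = b"A" * 8192 + b"B" * 4096   # two identical 4 KiB "A" chunks, one "B"
store.backup("monday", data)
assert store.restore("monday") == data
assert len(store.chunks) == 2      # the duplicate "A" chunk is stored once
```

The sketch also makes the restore problem Nevil raises concrete: writing dedups nicely, but reading back means reassembling many scattered chunks in order, so gathering them ahead of the ask is what keeps restore times down.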
>> Well, very importantly, the idea of partnership and simplicity is huge here. You want your box, the InfiniGuard, to be as high quality and productive as possible, but you don't want to force a dramatic change on how an organization works, so let's dig into some of that, Nevil. How does this work in practice? >> It's very simple. We have these deduplication engines that front-end the InfiniBox storage. But what is unique, because there's other ways of packaging this sort of thing, what is unique is that when the InfiniGuard gets the data, it builds knowledge of the relationships of that data. Deduplication is a challenge for second-tier storage systems because it creates a random IO profile that has to be gathered in a fashion that feeds the data back sequentially. Our knowledge-building engine, which we call NeuroCache, in the InfiniBox is the means by which we understand how to gather this data in a timely fashion. >> So, NeuroCache helps essentially sustain some degree of organization of the data within the box. >> Absolutely. And there's a by-product of that organization: the ability to go and get it ahead of the ask allows us to respond to meet recovery time objectives. >> And that's where you go from five to six hours for a relatively small restore to... >> To 30 minutes. >> Exactly. >> Yeah, exactly. >> By feeding the data back out to the system in a pre-organized way, the system's taking care of a lot of the randomness, and therefore the time necessary to perform a restore. >> Exactly, and other systems don't have that capability, and so they are six hours. >> So we're talking about a difference between 30 minutes and six hours, and I also wanted very quickly, Bob, to ask you a question in the last couple minutes here. You mentioned partnerships. We also want to make sure that we have a time-to-value equation that works for your average business.
Because the box can work with a lot of different software, which really is where the operations activities are defined, presumably it comes in pretty quickly and it delivers value pretty quickly. Have I got that right? >> Absolutely, so we have done a vast amount of testing, certification, demos, POCs, you name it, with all the major players out there that are in this market on the back-up software side, the data protection side of the business. All of them have commented about the better business continuity solution that we put together in conjunction with their product as well. And the number one feedback that comes back is, "Wow, the restore times that you guys deliver to the market "are unlike anything we've seen before." >> So, to summarize, it goes in faster, it works faster, and it scales better, so the business truly can think of itself as being protected, not just sets of data. >> Absolutely. >> Agreed. >> Alright, hey, Bob Cancilla, EVP of Business Development and Relationships, Neville Yates, Business Continuity Consultant, thanks very much for being on The Cube, and we'll be right back to talk Multicloud after this short break. >> With our previous storage provider, we faced many challenges. We were growing so fast that our storage solution wasn't able to keep up. We were having large amounts of downtime, problems with the infrastructure, problems with getting support. We needed a system that was scalable, that was cost effective, and allowed our business to grow as our customers' demands were growing. We needed a product that enabled us to manage the outward provision of customer workloads quickly and efficiently, and be able to report on the amount of data that the customer was using. The solution better enabled us to replicate our customers' data between different geos. >> We're back. Joining me now are Gregory Touretsky and Erik Kaulberg, both senior directors at Infinidat, overseeing much of the company's portfolio. Gregory, let's talk Multicloud.
It's become a default part of almost all IT strategies, but done wrong, it can generate a lot of data-related costs and risks. What's Infinidat's perspective? >> So yeah, before we go there, I will mention this phenomenon of data gravity. As many of our customers report, as the amount of data grows in the organization, it becomes much harder for them to move applications and services to a different data center, or to a different public cloud. So, the more data they accumulate, the harder it becomes to move it, and they get locked in, so we believe that any organization deserves a way to move freely between different public clouds or data centers, and that's the reason we are thinking about the multicloud solution and how we can provide an easy way for companies to move between data centers. >> So, clearly there's a need to be able to optimize your costs against the benefits associated with data. Erik, as we think about this, what are some of the key considerations most enterprises have to worry about? >> The biggest one overall is the strategic nature of cloud choices. At one point, cloud was a back room, the shadow IT kind of thing. You saw some IT staff member go sign up for Gmail or Dropbox or things like that, but now CIOs are thinking, well, I've got to get all these cloud services under control, and I'm spending a whole lot of money with one of the big two cloud providers. And so that's really the strategic rationale of why we're saying organizations, especially large enterprises, require this kind of sovereign storage that disaggregates the data from the public clouds, to truly enable the possibility of cloud competition as well as to truly deliver on the promise of the agility of public clouds. >> So, great conversation, but we're here to actually talk about something specifically: Neutrix. Gregory, what is it? >> Sure, so Neutrix is a completely new offering that we're coming out with.
We are not selling here any box or appliance for the customers to deploy in their data center. We're talking about a cloud service that is provided by Infinidat. >> We are building our infrastructure in a major colo, partnering with Equinix and others; we are finding data centers that are adjacent to public clouds, such as AWS or Azure, to ensure very low latency and high-bandwidth connectivity. And then we build our infrastructure there with InfiniBox storage and networking gear that allows our customers to really use this for two main reasons. So one use case is disaster recovery. If a customer has our storage on prem in his data center, they may use our efficient replication mechanism to copy data and get a second copy outside of the data center without building a second data center. So, in case of disaster, they can recover. The other use case we see as very interesting for the customers is an ability, while running the application in the public cloud, to consume directly from our storage. So they can do an NFS mount or iSCSI mount to storage available from our cloud, and then run the application. We are also providing the capability to consume the same file system from multiple clouds at the same time. So you may run your application both in Amazon and Microsoft clouds and still access and share the data. >> Sounds like it's also an opportunity to simplify ramping into a cloud as well. Is that one of the use cases? >> Absolutely. So it's basically a combination of those two use cases that I described. The customers may replicate data from their on-prem environment into the Neutrix Cloud, and then consume it from the public cloud. >> Erik, this concept has been around for a while, even if it hasn't actually been realized. What makes this in particular different? >> I think there's a couple of elements to it. So number one is we don't really see that there's a true enterprise-grade public cloud storage offering today for active data.
And so we're basically bringing in that rich heritage of InfiniBox capabilities and those technologies we've developed over a number of years to deliver enterprise-grade storage, except without the box, as a service. So that's a big differentiator for us versus the native public cloud storage offerings. And then when you look at the universe of other companies who are trying to develop, let's say, cloud-adjacent type offerings, we believe we have the right combination of that scalable technology with the correct business model, aligned with the way that people are buying cloud today. So that's kind of the differentiation in a nutshell. >> But it's not just the box, there's also some managed services associated with it, right? >> Well, actually, it's not a box, that's the whole idea. So, the entire thing is a consumable service, you're paying by the drink, it's a simple flat pricing of nine cents per gigabyte per month, and it's essentially as easy to consume as the native public cloud storage offerings. >> So as you look forward and imagine the role that this is going to play in conjunction with some of the other offerings, what should customers be looking to get out of Neutrix, in conjunction with the rest of the portfolio? >> So basically they can get, as Erik mentioned, what they like with InfiniBox, without dealing with the box. They get a fully-managed service, they get freedom of choice, they can move applications easily between different public clouds and to or from their own on-prem environment without thinking about the egress costs, and they can get great capabilities, great features like writable snapshots, without overpaying the public cloud providers. >> So, better economics, greater flexibility, better protection and de-risking of the data overall. >> Absolutely. >> At scale. >> Yes. >> Alright, great. So I want to thank Gregory and Erik very much for being here on The Cube.
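As a back-of-the-envelope check on the flat pricing Erik quotes (nine cents per gigabyte per month, paying by the drink), a monthly bill is simple multiplication. The 500 TB capacity below is an assumed example for illustration, not a figure from the interview:

```python
def neutrix_monthly_cost(gigabytes: float, rate_per_gb: float = 0.09) -> float:
    """Flat per-gigabyte-per-month pricing as quoted; no egress or tier terms."""
    return gigabytes * rate_per_gb

# Assumed example: 500 TB of file data, counted here as 512,000 GB.
cost = neutrix_monthly_cost(512_000)
assert round(cost, 2) == 46_080.00
print(f"${cost:,.2f}/month")  # prints $46,080.00/month
```

The absence of egress and tiering terms is the notable part: with the big public clouds, moving that much data back out would itself be a significant line item, which is the lock-in dynamic the segment is describing.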
We'll be right back to get the analyst perspective from Eric Burgener from IDC. >> And one of the challenges of our industry as a whole is that it operates to four nines as a level of excellence, for example. And what that means is, well, it could be down for 30 seconds a month. I can't think of anything worse than having to turn around to my customers and say, "Oh, I am sorry. "We weren't available for 30 seconds." And yet most people that work in our IT industry seem to think that's acceptable, but it's not when it comes to data centers, clouds, and the sort of stuff that we're doing. So, the fundamental question is: can we run storage that is always available? >> Welcome back. Now we're sitting here with Eric Burgener, who is a research vice president for storage at IDC. Eric, you've listened to Infinidat's portfolio announcement. What do you think? >> Yeah, Peter, thanks for having me on the show. So, I've got a couple of reactions to that. I think that what they've announced is playing into a couple of major trends that we've seen in the enterprise. Number one is, as companies undergo digital transformation, efficiency of the IT operations is really a critical issue. And so, I'm seeing a couple of things in this announcement that will really play into that area. They've got a much larger, much denser platform at this point that will allow a lot more consolidation of workloads, and that's sort of an area that Infinidat has focused on in the past, to consolidate a lot of different workloads under one platform, so I think the efficiency of those kinds of operations will increase going forward with this announcement. Another area that sort of plays into this is every organization needs multiple storage platforms to be able to meet their business requirements.
And what we've seen with this announcement is they're basically providing multiple platforms, but all built around the same architecture, so that has management ease-of-use advantages associated with it. That's a benefit that will potentially allow CIOs to move to a smaller number of vendors and fewer administrative skill sets, yet still meet their requirements. And I think the other area that's sort of a big issue here is what they're announcing in the hybrid cloud arena. So, clearly, enterprises are operating as hybrid clouds today; well over 70% of all organizations actually have hybrid cloud operations in place. What we've seen with this announcement is an ability for people to leverage the full storage management capabilities of an Infinidat platform while they leverage multiple clouds on the back end. And if they need to move between clouds, they have an ability to do that with this new feature, the Neutrix Cloud. And so that really breaks the lock-in that you see from a lot of cloud operations out there today, which in certain cases can really limit the flexibility that a CIO has to meet their business requirements. >> Let me build on that a second. So, really what you're saying is that by not binding the data to the cloud, the business gets greater flexibility in how they're going to use the data, how they're going to apply the data, both from an applications standpoint as well as a resource and cost standpoint. >> Yeah, absolutely. I mean, moving to the cloud is actually sort of a fluid decision, and sometimes you need to move things back. We've actually seen a lot of repatriation going on: people that started in the cloud, and then as things changed they needed to move things back, or maybe they want to move to another cloud operation. They might want to move from Amazon to Google or Microsoft. What we're seeing with Neutrix Cloud is an ability basically to do that. It breaks that lock-in. >> Great.
>> They can still take advantage of those back-end platforms. >> Fantastic. Eric Burgener, IDC Research Vice President, Storage. Back to you, Dave. >> Thanks, Peter. We're back with Brian Carmody. We're going to summarize now. So we're seeing the evolution of Infinidat going from a single-product company to a portfolio company. Brian, I want to ask you to summarize. I want to start with InfiniBox, and I'm also going to ask you: is this the same software, and does it enable new use cases, or is this just bigger, better, faster? >> Yeah, it's the same software that runs on all of our InfiniBox systems, it has the same feature set, it's completely compatible for replication and everything like that. It's just more capacity to use, 8.4 petabytes of effective capacity. And the use cases that are pulling this into the field are deep learning, analytics, and IOT. >> Alright, let's go into the portfolio. I'm going to ask you, do you have a favorite child in the portfolio? Let's start with InfiniSync. >> Sure, so I love them all equally. InfiniSync is a revolutionary appliance for banking and other highly regulated industries that have a requirement to have zero RPO, but also have protection against rolling disasters and regional disasters. Traditionally, the way that that gets solved, you have a data center, say, in lower Manhattan where you do your primary computing, you replicate synchronously to a data bunker, say in northern New Jersey, and then you replicate asynchronously out of region, say out to California. So, under our model with InfiniSync, it's a 450-pound, ballistically protected data bunker appliance. InfiniSync guarantees that with no data loss, and no reduction in performance, all transactions are guaranteed for delivery to the remote out-of-region site. So what this allows customers to do is to erase data centers out of their topology.
Northern New Jersey, the bunker, goes away, and customers, again in highly regulated industries like banking that have these requirements, are going to save tens of millions of dollars a year in cost avoidance by closing down unnecessary data centers. >> Dramatically simplifying their infrastructure and operations. Alright, InfiniGuard. I stumbled into it at another event, you guys hadn't announced it yet, and I was like, "Hmmm, what's this?" But tell us about InfiniGuard. >> Yeah, so InfiniGuard is a multi-petabyte appliance, that's 20 petabytes of data protection in a single rack, in a single system, and it has 10 times the restore performance of Data Domain, at a fraction of the cost. >> Okay, and then the Neutrix Cloud. This is to me maybe the most interesting of all the announcements. What's your take on that? >> So, like I said, I love them all equally, but Neutrix Cloud for sure is the most disruptive of all the technologies that we're announcing this week. The idea of Neutrix Cloud is that it is neutral storage for consumption in the public cloud. So think about it like this. Don't you think it's weird that EBS and EFS are only compatible with Amazon computing, and Google Cloud storage is only compatible with Google? Think about it for a second: if IBM only worked with IBM servers, that's bringing us back to the 1950s and 60s. Or if EMC storage was only compatible with Dell servers. Customers would never accept that, but in the Silicon Valley oligarchic, walled-garden model, they can't help themselves. They just have to get your data. "Just give us your data, it'll be great. "We'll send a Snowball or a truck to go pick it up." Because they know once they have your data, they have you locked in. They cannot help themselves from creating this walled-garden proprietary model. Well, we call it a walled prison yard.
So the idea is, with Neutrix Cloud, rather than your storage being weaponized against you as a customer to lock you in, what if they didn't get your data, and what if instead you stored your data with a trusted, neutral third party that practices data neutrality? Because we guarantee contractually to every customer that we will never take money, and we will never shake down any of the cloud providers in order to get access to our Neutrix Cloud network, and we will never do side deals and partnerships with any of them to favor one cloud over the other. So the end result is, you end up having, for example, a couple of petabytes of file systems, where you can have thousands of guests that have that file system mounted simultaneously, from your VNet in Azure, from your VPCs in AWS, and they all have simultaneous, screaming high-performance access to one common set of your data. So by pulling and ripping your data out of the arms of those public cloud providers, and instead only giving them shared, common, neutral access, we can now get them to start competing against each other for business. So rather than your storage being weaponized against you, it's a tool that you can use to force the cloud providers to compete against each other for your business. >> So, I'm sure you guys may have a lot of questions there. Hop into the CrowdChat, it's crowdchat.net/infinichat. Ask Me Anything, AMA CrowdChat, Brian will be in there in a moment. I've got to ask you a couple more questions before I let you go. >> Sure. >> What was your motivation for this portfolio expansion? >> So the motivation was that at the end of the day, customers are very clear to us that they do not want to focus on their infrastructure. They want to focus on their businesses. And as their infrastructure scales, it becomes exponentially more complex to deal with issues of reliability, economics, and performance. 
And so we realized that if we're going to fulfill our company's mission, we have to expand our mission and help customers solve problems throughout more of the data lifecycle, and focus on some of the pain points that extend beyond primary storage. We have to start bringing solutions to market that help customers get to the cloud faster, and when they get there, to be more agile. And to focus on data protection, which again is a huge pain point. So the motivation at the end of the day is about helping customers do more with less. >> And the mission again, can you just summarize that? Multi-petabyte? >> Yeah, the corporate mission of Infinidat is to store humanity's knowledge and to make new forms of computing possible. >> Big mission. >> Our humble mission. >> Humble, right. The reason I ask that question about your motivation: people might say, "Oh, obviously, to make more money." But there have been a lot of single-product companies, feature companies, that have done quite well, so in order to fulfill that mission, you really need a portfolio. What should we be watching as barometers of success? How are you guys measuring yourselves? How should we be measuring you? >> Oh, I think the most fair way to do that is to measure us on successful execution of that mission, and at the end of the day, it's about helping customers compute harder and deeper on larger data sets, and to do so at lower cost than the competitor down the road, because at the end of the day, that's the only source of competitive advantage that companies get out of their infrastructure. The better we help customers do that, the more we consider ourselves succeeding in our mission. >> Alright, Brian, thank you. No kids, but new products are kind of like giving birth. >> It's really cool. >> So hop into the CrowdChat, it's an Ask Me Anything. Brian will be in there, we've got analysts in there, a bunch of experts as well. Brian, thanks very much. It was awesome having you on. 
>> Thanks, Dave. >> Thanks for watching everybody. We'll see you in the crowd chat. (upbeat digital music)
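The three-site scheme Carmody describes above (synchronous replication to a nearby bunker for zero RPO, asynchronous replication to an out-of-region site) can be sketched as a toy model of the write semantics. This is illustrative only: the class and method names are invented for the sketch and are not Infinidat's implementation or API.

```python
import queue


class ThreeSiteReplicator:
    """Toy model of the topology from the interview: a write is acknowledged
    only after the synchronous leg (the nearby bunker) commits, which is what
    gives zero RPO; the asynchronous leg (the out-of-region site) drains
    later, so it can lag without slowing the application."""

    def __init__(self):
        self.primary = []              # e.g. lower Manhattan
        self.bunker = []               # synchronous leg, e.g. northern New Jersey
        self.remote = []               # asynchronous leg, e.g. California
        self._pending = queue.Queue()  # writes not yet shipped out of region

    def write(self, record):
        self.primary.append(record)
        self.bunker.append(record)     # must commit before we acknowledge
        self._pending.put(record)      # queued for the async leg
        return "ack"                   # ack only after the sync leg commits

    def drain_async(self):
        """Ship queued writes to the out-of-region site."""
        while not self._pending.empty():
            self.remote.append(self._pending.get())


r = ThreeSiteReplicator()
r.write("txn-1")
r.write("txn-2")
assert r.bunker == ["txn-1", "txn-2"]   # no data loss at the bunker
assert r.remote == []                   # the remote site may lag...
r.drain_async()
assert r.remote == ["txn-1", "txn-2"]   # ...until the async leg drains
```

The InfiniSync pitch is that the appliance plays the bunker's role in this model, which is what lets the middle data center be erased from the topology.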

Published Date : Mar 21 2018



Infinidat portfolio Outro


 

>> Narrator: From the SiliconANGLE Media office in Boston, Massachusetts, it's the CUBE. Now, here's your host, Dave Vellante. (electronic pop music) >> Thanks, Peter. We're back with Brian Carmody. We're going to summarize now. So we're seeing the evolution of Infinidat going from a single-product company to a portfolio company. Brian, I'm going to ask you to summarize. I want to start with InfiniBox. I'm also going to ask you, is this the same software, and does it enable new use cases, or is it just bigger, better, faster? >> It's the same software that runs on all of our InfiniBox systems. It has the same feature set, it's completely compatible for replication and everything like that. It's just more capacity. It's 8.4 petabytes of effective capacity. The use cases that are pulling this into the field are deep learning, analytics, and IOT. >> All right, let's go into the portfolio. I'm going to ask you, it's like, "Do you have a favorite child? Do you have a favorite child in the portfolio?" Let's start with InfiniSync. >> Sure. I love them all equally. InfiniSync is a revolutionary appliance for banking and other highly-regulated industries that have a requirement to have 0 RPO but also have protection against rolling disasters and regional disasters. Traditionally, the way that that gets solved is you have a data center, say, in lower Manhattan where you do your primary computing. You do synchronous to a data bunker, say, in northern New Jersey, and then you do asynchronous out of region, say, out to California. Under our model with InfiniSync, it's a 450-pound ballistically-protected data bunker appliance. InfiniSync guarantees that with no data loss and no reduction in performance, all transactions are guaranteed for delivery to the remote, out-of-region site. What this allows customers to do is to erase data centers out of their topology. Northern New Jersey, the bunker goes away. 
Again, highly-regulated industries like banking that have these requirements, they're going to save tens of millions of dollars a year in cost avoidance by closing down unnecessary data centers. >> And dramatically simplify their infrastructure and operations. >> Absolutely. >> InfiniGuard, I stumbled into it at another event. You guys hadn't announced it yet. I was like, "Hmm, what's this?" Tell us about InfiniGuard. >> InfiniGuard is a multi-petabyte appliance that fits 20 petabytes of data protection in a single rack, in a single system, and it has 10 times the restore performance of Data Domain at a fraction of the cost. >> Okay, and then Neutrix Cloud ... This is, to me, maybe the most interesting of all the announcements. What's your take on that? >> Like I said, I love them all equally, but Neutrix Cloud for sure is the most disruptive of all the technologies that we're announcing this week. The idea of Neutrix Cloud is that it is neutral storage for consumption in the public cloud. So think about it like this. Don't you think it's weird that EBS and EFS are only compatible with Amazon compute and Google Cloud storage is only compatible with Google? Think about it for a second. If IBM storage only worked with IBM servers, that's bringing us back to the 1950s and '60s. Or if EMC storage was only compatible with Dell servers, customers would never accept that. But in the Silicon Valley oligarchic, walled-garden model, they can't help themselves. They just have to get your data. "Just give us your data. It'll be great. We'll send a snowball or a truck to go pick it up." Because they know once they have your data, they have you locked in. They cannot help themselves from creating this walled-garden proprietary model, or like we call it, a walled prison yard. So the idea is, with Neutrix Cloud, rather than your storage being weaponized against you as a customer to lock you in, what if they didn't get your data? 
What if instead, you stored your data with a trusted, neutral third party that practices data neutrality? Because we guarantee contractually to every customer that we will never take money, and we will never shake down any of the cloud providers in order to get access to our Neutrix Cloud network, and we will never do side deals and partnerships with any of them to favor one cloud over the other. So the end result is that you end up having, for example, a couple of petabyte-scale file systems where you can have thousands of guests that have that file system mounted simultaneously from your VNet in Azure, from your VPCs in AWS, and they all have simultaneous screaming high-performance access to one common set of your data. So by pulling and ripping your data out of the arms of those public cloud providers and instead only giving them shared, common, neutral access, we can now get them to start competing against each other for business. Rather than your storage being weaponized against you, it's a tool which you can use to force the cloud providers to compete against each other for your business. >> I'm sure you guys may have a lot of questions there. Hop into the CrowdChat. It's crowdchat.net/infinichat. Ask Me Anything, AMA CrowdChat. Brian will be in there in a moment. I've got to ask a couple of questions before I let you go. >> Brian: Sure. >> What was your motivation for this portfolio expansion? >> The motivation was that at the end of the day, customers are very clear to us that they do not want to focus on their infrastructure. They want to focus on their businesses. As their infrastructure scales, it becomes exponentially more complex. They deal with issues of reliability, and economics, and performance. We realized that if we're going to fulfill our company's mission, we have to expand our mission and help customers solve problems throughout more of the data lifecycle, and focus on some of the pain points that extend beyond primary storage. 
We have to start bringing solutions to market that help customers get to the cloud faster, and when they get there, to be more agile, and to focus on data protection, which, again, is a huge pain point. The motivation at the end of the day is about helping customers do more with less. >> And the mission again, can you just summarize that? Multi-petabyte, and ... ? >> The corporate mission of Infinidat is to store humanity's knowledge and to make new forms of computing possible. >> Big mission. (laughs) Okay, fantastic. >> Our humble mission, yes. >> Humble, right. The reason I asked that question of your motivation, people always say, "Oh, obviously to make more money." But there have been a lot of single-product companies or feature companies that have done quite well. In order to fulfill that mission, you really need a portfolio. What should we be watching as barometers of success? How are you guys measuring yourselves? How should we be measuring you? >> I think the most fair way to do that is to measure us on successful execution of that mission. At the end of the day, it's about helping customers compute harder and deeper on larger data sets, and to do so at lower cost than the competitor down the road. Because at the end of the day, that's the only source of competitive advantage that companies get out of their infrastructure. The better we help customers do that, the more we consider ourselves succeeding in our mission. >> All right, Brian, thank you. No kids, but new products are kind of like giving birth. Best I can say. >> I have dogs. They're like dogs. >> So hop into the CrowdChat. It's an Ask Me Anything questions. Brian will be in there, we've got analysts in there, a bunch of experts as well. Brian, thanks very much. It was awesome having you on. >> Thanks, Dave. >> Thanks for watching, everybody. See you in the CrowdChat. (electronic pop music)
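The multi-cloud mount model described in the interview, with thousands of guests in different clouds attaching the same file system, boils down to each guest running an ordinary NFS mount against a cloud-neutral endpoint reached over its cloud's network peering. A minimal sketch follows; the hostname and export path are hypothetical placeholders, not real Neutrix Cloud endpoints, and the mount options are generic NFS options rather than anything vendor-specific.

```python
def nfs_mount_command(export: str, mount_point: str, nfs_version: int = 3) -> list[str]:
    """Build the NFS mount command a guest would run to attach the shared
    export. The identical command works from an AWS EC2 instance (reached
    over VPC peering) or an Azure VM (over VNet peering): one file system,
    mounted simultaneously from many clouds."""
    options = f"vers={nfs_version},hard"  # 'hard' retries rather than erroring on outages
    return ["mount", "-t", "nfs", "-o", options, export, mount_point]


# Guests in either cloud build the identical command against the same export,
# which is the neutrality argument: no per-cloud storage API, just NFS.
aws_guest = nfs_mount_command("fs1.neutrix.example:/shared/dataset", "/mnt/shared")
azure_guest = nfs_mount_command("fs1.neutrix.example:/shared/dataset", "/mnt/shared")
assert aws_guest == azure_guest
```

In practice the command would be executed with root privileges on each guest once the peering connection to the storage network is in place; the point of the sketch is only that the client side is plain, portable NFS.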

Published Date : Mar 16 2018

