Ed Casmer, Cloud Storage Security & James Johnson, iPipeline | AWS Startup Showcase S2 E4
(upbeat music) >> Hello, everyone. Welcome back to theCUBE's presentation of the AWS Startup Showcase. This is season two, episode four of the ongoing series covering the exciting startups from the AWS ecosystem, and today we're talking about cybersecurity. I'm your host, John Furrier. Excited to have two great guests. Ed Casmer, founder and CEO of Cloud Storage Security, a returning CUBE alumni, and also James Johnson, AVP of Research and Development at iPipeline. Here to talk about cloud storage security and antivirus on S3. James, thanks for joining us today. >> Thank you, John. >> Thank you. >> So the topic here is cloud security, storage security. Ed, we had a great CUBE conversation previously, earlier in the month. Companies are modernizing their apps and migrating to the cloud. That's fact. Everyone kind of knows that. >> Yeah. >> Been there, done that. Clouds have the infrastructure, they got the OS, they got protection, but at the end of the day, the companies are responsible and they're on the hook for their own security of their data. And this is becoming more prominent now that you have hybrid cloud, cloud operations, cloud native applications. This is the core focus right now in the next five years. This is what everyone's talking about. Architecture, how to build apps, workflows, team formation. Everything's being refactored around this. Can you talk about how organizations are adjusting and how they view their data security in light of how applications are being built, and specifically around the goodness of, say, S3? >> Yep, absolutely. Thank you for that. So we've seen S3 grow 20,000% over the last 10 years. And that's primarily because companies like James' iPipeline are delivering solutions that are leveraging this object storage over and above the others. When we look at protection, we typically fall into a couple of categories. The first one is, we have folks that are worried about the access of the data. How are they dealing with it?
And so they're looking at configuration aspects. But the big thing that we're seeing is that customers are blind to the fact that the data itself must also be protected and looked at. And so we find these customers who do come to the realization that it needs to happen, finding out, asking themselves, how do I solve for this? And so they need lightweight, cloud native built solutions to deliver that. >> So what's the blind spot? You mentioned there's a blind spot. They're kind of blind to that. What specifically are you seeing? >> Well so, when we get into these conversations, the first thing that we see with customers is I need to predict how I access it. This is everyone's conversation. Who are my users? How do they get into my data? How am I controlling that policy? Am I making sure there's no east-west traffic there, once I've blocked the north-south? But what we really find is that the data is the key packet of this whole process. It's what gets consumed by the downstream users. Whether that's an employee, a customer, a partner. And so it's really, the blind spot is the fact that we find most customers not looking at whether that data is safe to use. >> It's interesting. When you talk about that, I think about all the recent breaches and incidents. "Incidents," they call them. >> Yeah. >> They've really been around user configurations. S3 buckets not configured properly. >> Absolutely. >> And this brings up what you're saying, is that the users and the customers have to be responsible for the configurations, the encryption, the malware aspect of it. Don't just hope that AWS has the magic to do it. Is that kind of what you're getting at here? Is that the similar, am I correlating that properly? >> Absolutely. That's perfect. And we've seen it. We've had our own customers, luckily iPipeline's not one of them, that have actually infected their end users because they weren't looking at the data. >> And that's a huge issue. 
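The misconfiguration problem being described here, an S3 bucket accidentally opened up so that anyone can write into it, can be illustrated with a small sketch. This is not Cloud Storage Security's product code; the function name and the toy policy documents are invented for the example, and a real audit would lean on AWS tooling such as IAM Access Analyzer rather than a hand-rolled check:

```python
import json

def policy_allows_public_write(policy_json: str) -> bool:
    """Return True if any statement grants s3:PutObject (or broader) to everyone."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if not is_public:
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(a in ("s3:PutObject", "s3:*", "*") for a in actions):
            return True
    return False

# A toy policy with the classic mistake: Principal "*" plus s3:PutObject.
public = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::demo-bucket/*",
    }],
})
print(policy_allows_public_write(public))  # True: anyone on the internet can write
```

Note the distinction Ed draws: a policy that only grants public `s3:GetObject` leaks data, but a public write lets an attacker plant malicious content that downstream users will consume.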
So James, let's get in, you're a customer partner. Talk about your relationship with these guys and what's it all about? >> Yeah, well, iPipeline is building a digital ecosystem for the life insurance and wealth management industries to enable the sale of life insurance to under-insured and uninsured Americans, to make sure that they have the coverage that they need, should something happen. And our solutions have been around for many years in a traditional data center type of an implementation. And we're in the process now of migrating that to the cloud, moving it to AWS, in order to give our customers a better experience, better resiliency, better reliability. And with that, we have to change the way that we approach file storage and how we approach scanning for vulnerabilities in those files that might come to us via feeds from third parties, or that are uploaded directly by end users, files that come to us from a source that we don't control. So it was really necessary for us to identify a solution that both solved for these vulnerability scanning needs, as well as enabling us to leverage the capabilities that we get with other aspects of our move to the cloud, being able to automatically scale based on load, based on need, to ensure that we get the performance that our customers are looking for. >> So tell me about your journey to the cloud, migrating to the cloud, and how you're using S3 specifically. What led you to determine the need for a cloud-based AV solution? >> So when we looked to begin moving our applications to the cloud, one of the realizations that we had is that our approach to storing certain types of data was a bit archaic. We were storing binary files in a database, which is not the most efficient way to do things. And we were scanning them with traditional antivirus engines that would've been scaled in traditional ways. So as our need grew, we would need to spin up additional instances of those engines to keep up with load.
And we wanted a solution that was cloud native and would allow us to scan more dynamically, without having to manage the underlying details of how many engines do I need to have running for a particular load at a particular time. And also being able to move that out of the application layer, being able to scan those files behind the scenes. So scanning when the file's been saved in S3 allows us to scan and release the file once it's been deemed safe, rather than blocking the user while they wait for that scan to take place. >> Awesome. Well, thanks for sharing that. I got to ask Ed, and James, same question next. How does all this factor into audits and compliance? Because when you start getting into this level of sophistication, I'm sure it probably impacts reporting workflows. Can you guys share the impact on that piece of it? The reporting? >> Yeah. I'll start with a comment and James will have more applicable things to say. But we're seeing two things. One is, you don't want to be the vendor whose name is in the news for infecting your customer base. So that's number one. So you have to put something like this in place and figure that out. The second part is, we do hear that under SOC 2, under PCI, different aspects of it, there are scanning requirements on your data. Traditionally, we've looked at that as endpoint data and the data that you see in your on-prem world. It doesn't translate as directly to cloud data, but it's certainly applicable. And if you want to achieve SOC 2, or you want to achieve some of these other pieces, you have to be scanning your data as well. >> Furrier: James, what's your take? As a practitioner, you're living it. >> Yeah, that's exactly right.
There are a number of audits that we go through where this is a question that comes up both from a SOC perspective, as well as our individual customers who reach out and they want to know where we stand from a security perspective and a compliance perspective. And very often this is a question of how are you ensuring that data that is uploaded into the application is safe and doesn't contain any vulnerabilities. >> James, if you don't mind me asking, I have to kind of inquire because I can imagine that you have users on your system but also you have third parties, relationships. How does that impact this? What's the connection? >> That's a good question. We receive data from a number of different locations from our customers directly, from their users and from partners that we have as well as partners that our customers have. And as we ingest that data, from an implementation perspective, the way we've approached this, there's a minimal impact there in each one of those integrations. Because everything comes into the S3 bucket and is scanned before it is available for consumption or distribution. But this allows us to ensure that no matter where that data is coming from, that we are able to verify that it is safe before we allow it into our systems or allow it to continue on to another third party whether that's our customer or somebody else. >> Yeah, I don't mean to get in the weeds there, but it's one of those things where, this is what people are experiencing right now. Ed, we talked about this before. It's not just siloed data anymore. It's interactive data. It's third party data from multiple sources. This is a scanning requirement. >> Agreed. I find it interesting too. I think James brings it up. We've had it in previous conversations that not all data's created equal. Data that comes from third parties that you're not in control of, you feel like you have to scan. And other data you may generate internally. 
You don't have to be as compelled to scan that although it's a good idea, but you can, as long as you can sift through and determine which data is which and process it appropriately, then you're in good shape. >> Well, James, you're living the cloud security, storage security situation here. I got to ask you, if you zoom out and not get in the weeds and look at the board room or the management conversation. Tell me about how you guys view the data security problem. I mean, obviously it's important. So can you give us a level of how important it is for iPipeline and with your customers and where does this S3 piece fit in? I mean, when you guys look at this holistically, for data security, what's the view, what's the conversation like? >> Yeah. Well, data security is critical. As Ed mentioned a few minutes ago, you don't want to be the company that's in the news because some data was exposed. That's something that nobody has the appetite for. And so data security is first and foremost in everything that we do. And that's really where this solution came into play, in making sure that we had not only a solution but we had a solution that was the right fit for the technology that we're using. There are a number of options. Some of them have been around for a while. But this was focused on S3, which we were using to store these documents that are coming from many different sources. And we have to take all the precautions we can to ensure that something that is malicious doesn't make its way into our ecosystem or into our customers' ecosystems through us. >> What's the primary use case that you see the value here with these guys? What's the aha moment that you had? >> With the cloud storage security specifically, it goes beyond the security aspects of being able to scan for vulnerable files, which is, there are a number of options and they're one of those. 
But for us, the key was being able to scale dynamically without committing to a particular load whether that's under committing or overcommitting. As we move our applications from a traditional data center type of installation to AWS, we anticipated a lot of growth over time and being able to scale up very dynamically, literally moving a slider within the admin console, was key to us to be able to meet our customer's needs without overspending, by building up something that was dramatically larger than we needed in our initial rollout. >> Not a bad testimonial there, Ed. >> I mean, I agree. >> This really highlights the applications using S3 more in the file workflow for the application in real time. This is where you start to see the rise of ransomware other issues. And scale matters. Can you share your thoughts and reaction to what James just said? >> Yeah. I think it's critical. As the popularity of S3 has increased, so has the fact that it's an attack vector now. And people are going after it whether that's to plant bad malicious files, whether it's to replace code segments that are downloaded and used in other applications, it is a very critical piece. And when you look at scale and you look at the cloud native capability, there are lots of ways to solve it. You can dig a hole with a spoon, but a shovel works a lot better. And in this case, we take a simple example like James. They did a weekend migration, so they've got new data coming in all the time, but we did a massive migration 5,000 files a minute being ingested. And like he said, with a couple of clicks, scale up, process that over sustained period of time and then scale back down. So I've said it before, I said it on the previous one. We don't want to get in the way of someone's workflow. We want to help them secure their data and do it in a timely fashion that they can continue with their proper processing and their normal customer responses. >> Frictionless has to be key. 
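The scan-and-release workflow James described, scan an object as soon as it lands in S3, tag it with a verdict, and expose only objects deemed safe, can be sketched as a toy simulation. Everything here is a stand-in: the `Bucket` class mimics an S3 bucket with object tags, the marker string substitutes for real virus signatures, and `on_object_created` plays the role of an S3 event notification driving a scanning worker:

```python
from dataclasses import dataclass, field

@dataclass
class Bucket:
    """Toy stand-in for an S3 bucket: object bodies plus per-object tags."""
    objects: dict = field(default_factory=dict)  # key -> bytes
    tags: dict = field(default_factory=dict)     # key -> scan verdict

MARKER = b"EICAR"  # stand-in signature; real engines match full definition sets

def scan_object(data: bytes) -> str:
    """Pretend antivirus engine: flag anything containing the marker."""
    return "infected" if MARKER in data else "clean"

def on_object_created(bucket: Bucket, key: str) -> str:
    """Plays the role of an S3 event notification handler: scan, then tag.
    The uploader is never blocked; the verdict lands asynchronously."""
    verdict = scan_object(bucket.objects[key])
    bucket.tags[key] = verdict
    return verdict

def released_keys(bucket: Bucket) -> list:
    """Downstream consumers only ever see objects tagged clean."""
    return [k for k, v in bucket.tags.items() if v == "clean"]

b = Bucket()
b.objects["report.pdf"] = b"quarterly numbers"
b.objects["payload.bin"] = b"xx" + MARKER + b"xx"
for key in list(b.objects):
    on_object_created(b, key)
print(released_keys(b))  # ['report.pdf']
```

Because the scan runs behind the scenes on the storage event, the upload path stays frictionless; scaling then just means running more of these handlers in parallel, which is the slider James refers to.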
I know you're in the marketplace with your antivirus for S3 on AWS. People can just download it. So if people are interested, go check it out. James, I got to ask you, and maybe Ed can chime in over the top, but it seems so obvious. Data. Secure the data. Why is it so hard? What's the problem? Why is it so difficult? Why are there so many different solutions? You know, you got ransomware, you got injection of different malicious payloads. There's a ton of things going on around the data. If it's so obvious, why isn't it solved? >> Well, I think there have been solutions available for a long time. But the challenge, the difficulty that I see, is that it is a moving target. As bad actors learn new vulnerabilities, new approaches, and as new technology becomes available, that opens additional attack vectors. >> Yeah. >> That's the challenge, is keeping up on the changing world, including keeping up on the new ways that people are finding to exploit vulnerabilities. >> And you got sensitive data at iPipeline. You do a lot of insurance, wealth management, all kinds of sensitive data, super valuable. This reminds me of the Sony hack, Ed, years ago. Companies are responsible for their own militia. I mean, in cybersecurity there's no government help for sure. I mean, companies are on the hook. As we mentioned earlier at the top of this interview, this really has highlighted that IT departments have to evolve to large scale cloud, cloud native applications, automation, AI, machine learning, all built in, to keep up at that scale. But also from a defense standpoint. I mean, James, you're out there on the front lines, you got to defend yourself basically, and you got to engineer it. >> A hundred percent. And just to go on top of what James was saying, I think one of the big factors, and we've seen this, is skill shortages out there. There's also just a pure lack of understanding.
When we look at Amazon S3 or object storage in general, it's not an executable file system. So people sort of assume that, oh, I'm safe. It's not executable, so I'm not worried about it traversing my storage network. And they also probably have the assumption that the cloud providers, Amazon, are taking care of this for them. And so it's this aha moment, like you mentioned earlier, where you start to think, oh, it's not about where the data is sitting per se. It's about scanning it as close to the storage spot as possible, so when it gets to the end user, it's safe and secure. And you can't rely on the end user's environment and system to be in place and up to date to handle it. So it's really that lack of understanding that drives some of these folks into this. But for a while, we'd walk into customers and they'd say the same thing you said, John. Why haven't I been doing this for so long? And it's because they didn't understand that it was such a risk. That's where that blind spot comes in. >> James, just a final note on your environment. What are your goals for the next year? How are things going over there on your side? How do you look at the security posture? What's on your agenda for the next year? How are you guys looking at the next level? >> Yeah. Well, our goal as it relates to this is to continue to move our existing applications over to AWS to run natively there, which includes moving more data into S3 and leveraging the Cloud Storage Security solution to scan that and ensure that there are no vulnerabilities that are getting in. >> And the ingestion, are there, like, bottlenecks, log jams? How do you guys see that scaling up? I mean, what's the strategy there? Just add more S3? >> Well, S3 itself scales automatically for us, and the Cloud Storage Security solution gives us levers to pull to do that. As Ed mentioned, we ingested a large amount of data during our initial migration, which created a bottleneck for us.
As we were preparing to move our users over, we were able to make an adjustment in the admin console and spin up additional processes entirely behind the scenes, and that broke the log jam. So I don't see any immediate concerns there, being able to handle the load. >> The terms cloud native, hyperscale native, multi-cloud, hybrid. All these things are native. We have antivirus native coming soon. And I mean, this is what we're basically doing, making it native in the workflows. Security native. And soon there's going to be security clouds out there. We're starting to see the rise of these new solutions. Can you guys share any thoughts or vision around how you see the industry evolving and what's needed? What's working and what's needed? Ed, we'll start with you. What's your vision? >> So I think the notion of being able to look at and view the management plane and control that has been where we're at right now. That's what everyone seems to be doing and going after. I think there are niche plays coming up. Storage is one of them, but we're going to get to a point where storage is just a blanket term for where you put your stuff. I mean, it kind of already is that. But in AWS, it's going to be less about S3, less about WorkDocs, less about EBS. It's going to be just storage, and you're going to need a solution that can span all of that to go along with where we're already at with the management plane. We're going to keep growing the data plane.
We are focused on the financial services and insurance industries. That's our niche. And we look to other partners like Ed to be the experts in these areas. And so that's really what I'm looking for, the experts that we can partner with that are going to help fill those gaps as they come up and as they change in the future. >> Well, James, I really appreciate you coming on and sharing your story, and Ed, I'll give you the final word. Spend a minute to talk about the company. I know Cloud Storage Security is an AWS partner with the security software competency and is one of, I think, 16 partners listed in the competency in the data category. So take a minute to explain what's going on with the company, where people can find more information, and how they buy and consume the products. >> Okay. >> Put the plug in. >> Yeah, thank you for that. So we are a fast growing startup. We've been in business for two and a half years now. We have achieved our security competency, as John indicated. We're one of 16 data protection security-competent ISV vendors globally. And our goal is to expand and grow a platform that spans all the storage types that you're going to be dealing with, and answer basic questions. What do I have and where is it? Is it safe to use? Am I in proper control of it? Am I being alerted appropriately? So we're building this storage security platform, very laser focused on the storage aspect of it. And if people want to find out more information, you're more than welcome to go and try the software out on the Amazon marketplace. That's basically where we do most of our transacting. So find it there. Start a free trial. Reach out to us directly from our website. We are happy to help you in any way that you need, whether that's storage assessments, or figuring out what data is important to you and how to protect it.
And of course James Johnson, AVP of Research and Development, iPipeline customer. Gentlemen, thank you for sharing your story and featuring the company and the value proposition, certainly needed. This is season two, episode four. Thanks for joining us. Appreciate it. >> Casmer: Thanks John. >> Okay. I'm John Furrier. That is a wrap for this segment of the cybersecurity season two, episode four. The ongoing series covering the exciting startups from Amazon's ecosystem. Thanks for watching. (upbeat music)
Ed Casmer, Cloud Storage Security | CUBE Conversation
(upbeat music) >> Hello, and welcome to this CUBE Conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE, got a great security conversation with Ed Casmer, who's the founder and CEO of Cloud Storage Security. A great Cloud background: Cloud security, Cloud storage. Welcome to the CUBE Conversation, Ed. Thanks for coming on. >> Thank you very much for having me. >> I love the logo on that background. You got the nice look there. Let's get into the storage blind spot conversation around Cloud security. Obviously, re:Inforce came up a ton. You heard a lot about encryption, automated reasoning, but ransomware was still hot. All these things are continuing to be issues in security, but they all come back to data and storage, right? So this is a big part of it. Tell us a little bit about how you guys came about, the origination story. What is the company all about? >> Sure, so we're a pandemic story. We started in February right before the pandemic really hit, and we've survived and thrived because it is such a critical thing. If you look at the growth that's happening in storage right now, we saw this at re:Inforce, we saw it even at a recent AWS Storage Day. Their S3, in particular, houses over 200 trillion objects. If you look just 10 years ago, in 2012, Amazon touted how they were housing one trillion objects, so in a 10 year period, it's grown to 200 trillion, and really most of that has happened in the last three or four years. So the pandemic and the shift in the ability and the technologies to process data better has really driven the need and driven the Cloud growth.
You got more performance, abstraction layers, the PaaS is emerging, Cloud operations on premise, now with hybrid becoming a steady state. And if you look at all the action, it's all these hyper-converged kinds of conversations, but it's not hyper-converged in a box, it's Cloud storage. So there's a lot of activity around storage in the Cloud. Why is that? >> Well, because companies are defined by their data and, if a company's data is growing, the company itself is growing. If it's not growing, they are stagnant and in trouble. And so, what's been happening now, and you see it with the move to Cloud especially over the on-prem storage sources, is people are starting to put more data to work, and they're figuring out how to get the value out of it. A recent analyst made the statement that if the Fortune 1000 could just share and expose 10% more of their data, they'd have net revenue increases of 65 million. So it's just the ability to put that data to work, and it's so much more capable in the Cloud than it has been on-prem to this point.
My east-west traffic after I've blocked them from coming in. But no one's thinking about the data itself, and ultimately, you want to make that data very safe for the consumers of the data. They have an expectation, and almost a demand, that the data they consume is safe, and so companies are starting to have to think about that. They haven't thought about it. It has been a blind spot, you mentioned that before. In regards to, I am protecting my management plane, we use posture management tools, we use automated services. If you're not automating, then you're struggling in the Cloud. But when it comes to the data, everyone thinks, "Oh, I've blocked access. I've used firewalls. I've used policies on the data," but they don't think about the data itself. It is that packet that you talked about that moves around to all the different consumers and the workflows, and if you're not ensuring that that data is safe, then you're in big trouble, and we've seen it over and over again. >> I mean, it's definitely a hot category and it's changing a lot, so I love this conversation because it's a primary one, primary and secondary, covering data and storage. Kind of a good joke there, but all kidding aside, it's hard. You got data lineage, tracing is a big issue right now. We're seeing companies come out there on kind of an observability tangent. The focus on this is huge. I'm curious, what was the origination story? What got you into the business? Was it like, were you having a problem with this? Did you see an opportunity? What was the focus when the company was founded?
I thought it was a really interesting series, one of your last series about data as code, and you saw all the different technologies that are processing and managing that data, that companies are leveraging today. But still, once that data is ready and it's consumed by someone, it's causing real havoc if it's not either protected from being exposed or safe to use and consume, and so that's been the biggest thing. So we saw a niche. We started with this notion of Cloud storage being object storage, and there was nothing there protecting that. Amazon has the notion of access, and that is how they protect the data today, but not the packets themselves, not the underlying data. And so we created the solution to say, "Okay, we're going to ensure that that data is clean. We're also going to ensure that you have awareness of what that data is, the types of files you have out in the Cloud, wherever they may be, especially as they drift outside of the normal platforms that you're used to seeing that data in." >> It's interesting, people were storing data in data lakes. Oh yeah, just store whatever we might need, and then it became a data swamp. That's kind of like going back six, seven years ago. That was the conversation. Now the conversation is, I need data. It's got to be clean. It's got to feed the machine learning. This is going to be a critical aspect of the business model for the developers who are building the apps, hence the data-as-code reference which we've focused on. But then you say, "Okay, great. Does this increase our surface area for potential hackers?" So there's all kinds of things that kind of open up when we start doing cool, innovative things like that. So, what are some of the areas that you see your tech solving, around some of the blind spots with object store, the things that people are overlooking? What are some of the core things that you guys are seeing that you're solving?
>> So, it's a couple of things. Right now, still the biggest thing you see in the news is configuration issues, where people are losing their data or accidentally opening up writes. That's the worst case scenario. Reads are a bad thing too, but if you open up writes, and we saw this with a major API vendor in the last couple of years, they accidentally opened writes to their buckets. Hackers found it immediately and put malicious code into their APIs that were then downloaded and consumed by many, many of their customers. So it is happening out there. So the notion of ensuring configuration is good and proper, ensuring that data has not been augmented inappropriately, and that it is safe for consumption, is where we started, and we created a lightweight, highly scalable solution. At this point, we've scanned billions of files for customers, and petabytes of data, and we're seeing that it's such a critical piece, to make sure that that data's safe. The big thing, and you brought this up as well, is they're getting data from so many different sources now. It's not just data that they generate. You see one centralized company taking in from numerous sources, consolidating it, creating new value on top of it, and then releasing that, and the question is, do you trust those sources or not? And even if you do, they may not be safe. >> We had an event around superclouds, a topic we brought up to bring attention to the complexity of hybrid, which is on premise, which is essentially Cloud operations. And the successful people that are doing things on the software side are essentially abstracting up the benefits of the infrastructure as a service from, say, AWS, right, which is great. Then they innovate on top, so they have to abstract that. Storage is a key component of where we see the innovations going. How do you see your tech connecting with that trend that's coming, which is everyone wants infrastructure as code?
I mean, that's not new. I mean, that's the goal and it's getting better every day, but with DevOps, the developers are driving the operations and security teams to keep pace, so we're seeing a lot of policy work, seeing some cool things going on that are abstracting up from, say, storage and compute, but then those are being put to use as well, so you've got this new wave coming around the corner. What's your reaction to that? What's your vision on that? How do you see that evolving? >> I think it's great, actually. I think the biggest thing you have to do as someone who is helping them with that process is make sure you don't slow it down. So, just like Cloud at scale, you must automate, you must provide different mechanisms to fit into workflows that allow them to do it just how they want to do it, and don't slow them down. Don't hold them back. And so, we've come up with different measures to provide pretty much a fit for any workflow that any customer has come to us with so far. We do data this way. I want you to plug in right here. Can you do that? And so it's really about being able to plug in where you need to be, and don't slow 'em down. That's what we found so far. >> Oh yeah, I mean exactly, you don't want to solve complexity with more complexity. That's the killer problem right now, so take me through the use case. Can you just walk me through how you guys engage with customers? How they consume your service? How they deploy it? You got some deployment scenarios. Can you talk about how you guys fit in and what's different about what you guys do? >> Sure, so, what we're seeing is, and I'll go back to this data coming from numerous sources, we see different agencies, different enterprises taking data in, and maybe their solution is intelligence on top of data, so they're taking these data sets in, whether it's topographical information or whether it's investing-type information.
Then they process that and they scan it and they distribute it out to others. So, we see that happening as a big common piece through data ingestion pipelines; that's where these folks are getting most of their data. The other is where the data itself, the document or the document set, is the actual critical piece that gets moved around, and we see that in pharmaceutical studies, we see it in the mortgage industry and FinTech and healthcare. So, anywhere that, let's just take a very simple example, I have to apply for insurance. I'm going to upload my Social Security information. I'm going to upload a driver's license, whatever it happens to be. I want to, one, know which of my information is personally identifiable, so I want to be able to classify that data, but because you're trusting, or because you're taking data from untrusted sources, you have to consider whether or not it's safe for your own folks to use, and then also for the downstream users as well. >> It's interesting, in the security world, we hear zero trust and then we hear supply chain, software supply chains, where we've got to trust everybody, so you've got kind of two things going on. You've got the hardware, kind of like all the infrastructure guys, saying, "Don't trust anything 'cause we have a zero trust model," but as you start getting into the software side, it's like trust is critical. With containers and Cloud native services, trust is critical. You guys are kind of on that balance where you're saying, "Hey, I want data to come in. We're going to look at it. We're going to make sure it's clean." That's the value here. Is that what I'm hearing? You're taking it and you're saying, "Okay, we'll ingest it, and during the ingestion process, we'll classify it. We'll do some things to it with our tech and put it in a position to be used properly." Is that right? >> That's exactly right.
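The classification step in this example, spotting personally identifiable information like a Social Security number in an uploaded document, can be sketched with a toy pattern matcher. The regexes are illustrative only, not how iPipeline or Cloud Storage Security actually classify data; a production system would use far richer detection (for example, a managed service such as Amazon Macie).

```python
import re

# Illustrative PII patterns only; real classifiers handle many more
# identifier types, formats, and false-positive suppression.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify(text: str) -> set:
    """Return the set of PII categories detected in an ingested document."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}

doc = "Applicant SSN 123-45-6789, contact jane@example.com"
print(sorted(classify(doc)))  # ['email', 'ssn']
```

Once a document is tagged this way, the drift problem Ed raises next becomes tractable: you can alert when PII-tagged objects show up outside the workflows where they belong.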
That's a great summary, but ultimately, if you're taking data in, you want to ensure it's safe for everyone else to use, and there are a few ways to do it. Safety doesn't just mean whether it's clean or not, whether there's malicious content or not. It means that you have complete coverage, control, and awareness over all of your data. So, I know where it came from. I know whether it's clean, and I know what kind of data is inside of it. And the interesting aspect is that the cleanliness factor is so critical in the workflow, but we see the classification expand outside of that, because if your data drifts outside of what your standard workflow was, that's when you have concerns: why is PII information over here? And that's what you have to stay on top of, just like AWS's control plane. You have to manage it all. You have to make sure you know what services have all of a sudden been exposed publicly or not, or maybe something's been taken over or not, and you control that. You have to do that with your data as well. >> So how do you guys fit into the security posture? Say a large company might want to implement this right away. Sounds like it's right in line with what developers want and what people want. It's easy to implement from what I see. It's about 10, 15, 20 minutes to get up and running. It's not hard. It's not a heavy lift to get in. How do you guys fit in once you get operationalized, when you're successful? >> It's a lightweight, highly scalable serverless solution. It's built on Fargate containers and it goes in very easily. And then, we offer either native integrations through S3 directly, or we offer APIs, and the APIs are what a lot of our customers who want inline, realtime scanning leverage. We're also looking at offering the actual proxy aspects, for those folks who use the native AWS S3 APIs, puts and gets.
We can actually leverage our put and get as an endpoint, and when they retrieve the file or place the file in, we'll scan it on access as well. So, it's not just a one-time, data-at-rest scan. It can be data in motion, as you're retrieving the information, as well. >> We were talking with our friends the other day, and we were talking about companies like Datadog. This is the model people want: they want to come in, and developers are driving a lot of the usage and operational practice, so I have to ask you, this fits kind of right in there, but also, you have the corporate governance policy police that want to make sure that things are covered, so, how do you balance that? Because that's an important part of this as well. >> Yeah, we're really flexible for the different ways they want to consume and interact with it. But also, that is such a critical piece. Of our customers, we probably have a 50/50 breakdown of those inside the US versus those outside the US, and so, you have those in California with their information protection act, you have GDPR in Europe, and you have Asia having their own policies as well. And the way we solve for that is we scan close to the data, and we scan in the customer's account, so we don't require them to lose chain of custody and send data outside of the account. That is so critical to that aspect. And then we don't ask them to transfer it outside of the region, so that's another critical piece: data residency has to be involved as part of that compliance conversation. >> How much does Cloud enable you to do this, that you couldn't really do before? I mean, this really shows the advantage of being natively in the Cloud, taking advantage of the IaaS to SaaS components to solve these problems. Share your thoughts on how this is possible. What if there was no Cloud, what would you do? >> It really makes it a piece of cake.
As silly as that sounds, when we deploy our solution, we provide a management console for them that runs inside their own accounts. So again, no metadata or anything has to come out of it, and it's all push-button clicks. And because the Cloud makes it scalable, because the Cloud offers infrastructure as code, we can take advantage of that. When they say go protect data in the Ireland region, they push a button, we stand up a stack right there in the Ireland region and scan and protect their data right there. If they say we need to be in GovCloud and operate in GovCloud East, there you go, push the button and you can behave in GovCloud East as well. >> And with serverless and the region support and all the goodness, it really makes a good opportunity to manage these Cloud native services with the data interaction, so, really good prospects. Final question for you. I mean, we love the story. I think it's going to be a really changing market in this area in a big way. I think the data storage relationship relative to higher-level services will be huge as Cloud native continues to drive everything. What's the future? I mean, do you guys see yourselves as an all-encompassing, all-singing-and-dancing storage platform, or a set of services that you're going to enable developers with and drive that value? Where do you see this going? >> I think that it's a mix of both. Ultimately, you saw even on Storage Day the announcement of File Cache, and File Cache creates a new common namespace across different storage platforms. And so, the notion of being able to use one area to access your data and have it come from different spots is fantastic. That's been in the on-prem world for a couple of years, and it's finally making it to the Cloud. I see us following that trend and helping support it. We're super laser-focused on Cloud Storage itself, so, EBS volumes, we keep having customers come to us and say, "I don't want to run agents in my EC2 instances.
I want you to snap and scan. I've got all this EFS and FSx out there that we want to scan," and so, we see that all of the Cloud Storage platforms, Amazon WorkDocs, EFS, FSx, EBS, S3, will all come together, and we'll provide a solution that's super simple and highly scalable that can meet all the storage needs. So, that's our goal right now and what we're working towards. >> Well, Cloud Storage Security, you couldn't get a more descriptive name for what you guys are working on. And again, I've had many contacts with Andy Jassy when he was running AWS, and he always loves to quote "The Innovator's Dilemma," by one of his teachers at Harvard Business School. We were riffing on that the other day, and I want to get your thoughts. It's not so much "The Innovator's Dilemma" anymore relative to Cloud, 'cause that's kind of a done deal. It's "The Integrator's Dilemma." The integrations are so huge now. If you don't integrate the right way, that's the new dilemma. What's your reaction to that? >> 100% agreed. It's been super interesting. Our customers have come to us for a security solution, and they don't expect us to be our own engine vendor, 'cause we don't want to be either. We're not the ones creating the engines; we are integrating other engines in, so we can provide a multi-engine scan that gives you higher efficacy. So this notion of offering simple integrations without slowing down the process, that's the key factor here, that's what we've been after. So, we are about simplifying the Cloud experience to protect your storage, and it's been so funny, because I thought customers might complain that we're not a name-brand engine vendor, but they love the fact that we have multiple engines in place and we're bringing them this higher-efficacy, multi-engine scan. >> I mean the developer trends can change on a dime.
You make it faster, smarter, higher velocity and more protected; that's a winning formula in the Cloud. So Ed, congratulations, and thanks for spending the time to riff on and talk about Cloud Storage Security, and congratulations on the company's success. Thanks for coming on "theCUBE." >> My pleasure, thanks a lot, John. >> Okay. This conversation here in Palo Alto, California, I'm John Furrier, host of "theCUBE." Thanks for watching.
Breaking Analysis: Legacy Storage Spending Wanes as Cloud Momentum Builds
(digital music) >> From theCUBE Studios in Palo Alto and in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The storage business as we know it has changed forever. On-prem storage was once a virtually unlimited and untapped bastion of innovation, VC funding and lucrative exits. Today it's a shadow of its former self, and the glory days of storage will not return. Hello everyone, and welcome to this week's Wikibon CUBE Insights Powered by ETR. In this Breaking Analysis, we'll lay out our premise for what's happening in the storage industry, and share some fresh insights from our ETR partners, and data that supports our thinking. We've had three decades of tectonic shifts in the storage business. A simplified history of this industry shows us there have been five major waves of innovation spanning five decades. The dominant industry model has evolved from what was first a mainframe-centric, vertically integrated business, led of course by IBM, into a disintegrated business that saw something like 70 or 80 Winchester disk drive companies rise and then fall. They served a booming PC industry, an era led by the likes of Seagate. Seagate then supplied the emergence of an intelligent-controller-based external disk array business that drove huge margins for functions that, while lucrative, were far cheaper than captive storage from system vendors; this era of course was led by EMC and NetApp. Then this business was disrupted by a flash and software-defined model, led by Pure Storage and also VMware. Now the future of storage is being defined by cloud, and intelligent data management is being led by AWS and a three-letter company that we'll just call TBD, otherwise known as Jump Ball Incorporated. Now, let's get into it here. The impact of AWS cannot be overstated, and while legacy storage players are sick and tired of talking about the cloud, the reality cannot be ignored.
The cloud has been the most disruptive force in storage over the past 10 years, and we've reported on the spending impact extensively. But cloud is not the only factor pressuring the on-prem storage business; flash has killed what we call performance by spindles, in other words, the practice of adding more disk drives to keep performance from tanking. So much flash has been injected into the data center that that practice is no longer required. But as you drill down into the cloud, AWS has been by far the most significant factor in our view. Lots of people talked about object storage before AWS, but there sure wasn't much spending going on; S3 changed that. AWS is getting much more aggressive about expanding its storage portfolio and its offerings. S3 came out in 2006 and it was the very first AWS service, and then Elastic Block Store, EBS, came out a couple of years later; nobody really paid much attention. Last fall at Storage Day, we saw AWS announce a number of services, many file-related, and this year we saw four new storage announcements from Amazon at re:Invent. We think AWS' storage revenue will surpass 8 billion this year and could be as high as 10 billion. There's not much data out there, but this would mean that AWS' storage biz is larger than that of NetApp, which means AWS is larger than every traditional storage player with the exception of Dell. Here's a little glimpse of what's coming at the legacy storage business. It's a clip of the vice president of AWS storage, her name is Mai-Lan Tomsen Bukovec, watch this. Okay now, you may say, Dave, what the heck does that have to do with anything? Yeah, I don't know, but as an older white guy that's been in this business for a while, I just think it's badass that this woman boxes and runs a business that we think is approaching $10 billion. Now let's take a quick look at the storage announcements AWS made at re:Invent.
The company made four announcements this year. Let me try to be brief. The first is EBS io2 Block Express volumes, got to love the names. AWS claims this is the first storage area network, or SAN, for the cloud, and it offers up to 256,000 IOPS, 4,000 megabytes per second of throughput and 64 terabytes of capacity. Hey, sounds pretty impressive, right? Well, let's dig in a little bit. Okay, first of all, this is not the first SAN in the cloud, at least in my view. There may be others, but Pure Storage announced Cloud Block Store in 2019 at its annual Accelerate customer conference, and it's pretty comparable here, maybe not so much in the speeds and feeds, but in the concept of better block storage in the cloud with higher availability. Now, as you may also be saying, what's the big deal? The performance? Come on, we can smoke that, we're an on-prem vendor, we can bury that compared to what we do. Well, AWS' announcement is really not that impressive, okay, and let me give you a point of comparison. There's a startup out there called VAST Data. A single one of their enclosures with bundled storage and compute can do 400,000 IOPS and 40,000 megabytes per second, and that can be scaled, so yeah, I get it. And AWS also announced that io2 was priced at 20% less than previous-generation volumes, which you might say is also no big deal, and I would agree; 20% is not as aggressive as the average price decline per gigabyte of any storage technology. AWS loves to make a big deal about its price declines; it's essentially following the industry trends. But the point is that this feature will be great for a lot of workloads, and it's fully integrated with AWS services, meaning, for example, it will be very convenient for AWS customers to invoke this capability for Aurora and other AWS databases through the RDS service, just another easy button for developers to push.
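The io2 Block Express figures quoted above can be captured in a small request-builder sketch. The field names mirror the EC2 CreateVolume API (VolumeType, Size, Iops, AvailabilityZone); the actual boto3 call (`ec2.create_volume(**params)`) is omitted so the snippet stays self-contained, and the limits are simply the numbers quoted in this segment, not an authoritative spec.

```python
# Sketch of an io2 Block Express provisioning payload. An actual call
# would pass this dict to boto3's ec2.create_volume; limits below are
# the figures quoted in this segment.

IO2_MAX_IOPS = 256_000        # quoted io2 Block Express ceiling
IO2_MAX_SIZE_GIB = 64 * 1024  # quoted 64 TB capacity ceiling

def io2_volume_request(size_gib: int, iops: int, az: str) -> dict:
    """Build the parameter payload for an io2 Block Express volume."""
    if not 0 < iops <= IO2_MAX_IOPS:
        raise ValueError(f"io2 supports up to {IO2_MAX_IOPS} IOPS")
    if not 0 < size_gib <= IO2_MAX_SIZE_GIB:
        raise ValueError("io2 volumes top out at 64 TiB")
    return {"VolumeType": "io2", "Size": size_gib,
            "Iops": iops, "AvailabilityZone": az}

print(io2_volume_request(1024, 64_000, "us-east-1a"))
```

Validating against the published ceilings client-side, before the API rejects the request, is the kind of "easy button" integration point the RDS example above is getting at.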
This is especially important as we see AWS rapidly expanding its machine learning and AI capabilities with SageMaker, embedding ML into things like Redshift and driving analytics, so integration is very key for its customers. Now, is Amazon retail going to run its business on io2 volumes? I doubt it. I believe they're running on Oracle and they need much better performance, but this is a mainstream service for the EBS masses to tap. Now, the other notable announcement was EBS Gp3 volumes. This is essentially a service that lets you programmatically set SLAs for IOPS and throughput independently, without needing to add additional storage. Again, you may be saying things like, well, I remember when SolidFire let me do this several years ago and gave me more than 3,000 IOPS and 125 megabytes per second of performance. But look, this is great for mainstream customers that want more consistent and predictable performance, that want to set some kind of threshold or floor, and it's integrated again into the AWS stack. Two other announcements were made: one that automatically tiers data to colder storage tiers, and a replication service. On the former, data migrates to tier two after 90 days without access, and to tier three after 180 days. AWS, remember, hired a bunch of folks out of EMC years ago and put them up in the Boston Seaport area, so they've acquired lots of expertise in a lot of different areas. I'm not sure if tiering came out of that group, but look, this stuff is not rocket science, and it saves customers money. So these are tried and true techniques that AWS is applying, but the important thing is it's in the cloud.
Now, for sure we'd like to see more policy options than, say, a fixed 90-day or 180-day policy, and more importantly, we'd like to see intelligent tiering where the machine is smart enough to elevate and promote certain datasets when they're needed, for instance at the end of a quarter for comparison purposes, or at the end of the year. But as NFL Hall of Fame coach Hank Stram would have said, AWS is matriculating the ball down the field. Okay, let's look at some of the data that supports what we're saying here in our premise today. This chart shows spending across the ETR taxonomy. It depicts the net score, or spending velocity, for different sectors. We've highlighted storage. Now, don't put too much weight on the January data, because the survey was just launched, but you can see storage continues to be a back-burner item relative to some other spending priorities. As I've reported, CIOs are really focused on cloud, containers, container orchestration, automation, productivity and other key areas like security. Now let's take a look at some of the financial data from the storage crowd. This chart shows data for eight leading names in storage, and we put storage in quotes because, as we said earlier, the market is shifting, and for sure companies like Cohesity and Rubrik are not positioning as storage players; in fact, that's the last thing they want to do. Rather, they're category creators around data management, or intelligent data management, but given their adjacency to storage, they're partnering with all the primary storage companies and they're in the ETR taxonomy. Okay, so as you can see, we're showing the year-over-year quarterly revenue growth for the leading storage companies. NetApp is a big winner; they're growing at a whopping 2%.
They beat expectations, but expectations were way down, so you can see in the rightmost column, upper right, we've added the ETR net score from October. A net score of 10% says that if you ask customers whether they're spending more or less with a company, there are 10% more customers essentially spending more than are spending less; we'll get into that a little further later. For comparison, a company like Snowflake has a net score approaching 70%. Pure Storage used to be that high several years ago, or high sixties anyway. So 10% is in the red zone, and yet NetApp is the big winner this quarter. Now, Nutanix isn't really a storage company, but they're an adjacency and they sell storage, and like many of these companies, they're transitioning to a subscription pricing model, which puts pressure on the income statement. That's why they went out and did a deal with Bain; Bain put in $750 million to help bridge that transition, so that's kind of an interesting move. Every company in this chart is moving to an annual recurring revenue model, and that as-a-service approach is going to be the norm by the end of the decade. HPE's doing it with GreenLake, Dell has announced Apex; virtually every company is headed in this direction. Now, speaking of HPE, it's the Nimble business that has momentum, but other parts of the storage portfolio are quite a bit softer. Dell continues to see pressure on its storage business, although VxRail is a bright spot. Everybody's got a bright spot; everybody's got new stuff that's growing much faster than the old stuff. The problem is the old stuff is much, much bigger than the new stuff. IBM's mainframe storage cycle, well, that seems to have run its course; they had been growing for the last several quarters, but that looks like it's over. So these are very, very cyclical businesses. Now, as you can see, the data protection and data management companies are showing spending momentum, but they're not public, so we don't have revenue data.
But you've got to wonder, with all the money these guys have raised and the red-hot IPO and tech markets, why haven't these guys gone public? The answer has to be that they're either not ready, or maybe their numbers weren't where they want them to be, maybe they're not predictable enough, maybe they don't have their operational act together, or maybe they need to get that in order; some combination of those factors is likely. They'll give other answers if you ask them, but if they had their stuff together, they'd be going out right now. Now, here's another look at the spending data in terms of net score, which again is spending velocity. ETR here is measuring the percent of respondents that are adopting new, spending more, spending flat, spending less, or retiring the platform. So net score is adoptions, which is the lime green, plus the spending more, which is the forest green; add those two and then subtract spending less, which is the pink, and leaving the platform, which is the bright red. What's left over is net score. So, let's look at the picture here. Cohesity leads all players in the ETR storage taxonomy; again, they don't position that way, but that's the way the customers are answering. They've got a 55% net score, which is really solid, and you can see the data in the upper right-hand corner. It's followed by Nutanix. Now, they're really not a pure-play storage company, but speaking of Pure, its net score has come down from its high of 73% in January 2016. It's not going to climb back up there, but it's going to be interesting to see if Pure's net score can rebound in a post-COVID world. We're also watching what Pure does in terms of unifying file and object, how it's faring in cloud, and what it does with the Portworx acquisition, which is really designed to bring forth a new programming model.
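The net score definition above reduces to simple arithmetic. A minimal sketch (the example split of respondents is hypothetical, chosen only to reproduce a Cohesity-like 55% reading):

```python
# Net score exactly as defined above: adoptions (lime green) plus spending
# more (forest green), minus spending less (pink) and leaving the platform
# (bright red). Flat spenders don't move the needle.

def net_score(adopting: float, more: float, flat: float,
              less: float, replacing: float) -> float:
    """Inputs are percentages of survey respondents; returns points."""
    assert abs(adopting + more + flat + less + replacing - 100) < 1e-9
    return (adopting + more) - (less + replacing)

# A hypothetical respondent split that would yield a 55% reading:
print(net_score(adopting=20, more=40, flat=35, less=3, replacing=2))  # 55
```

This also makes the earlier NetApp point concrete: a 10% net score just means the "more" side outweighs the "less" side by ten points, which can happen even in a shrinking franchise.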
Now, Dell is doing fine with VxRail, but vSAN is well off its net score highs, which were in the 60%-plus range a couple of years ago. vSAN has definitely been a factor for VMware, but again, it's come off its highs. HPE with Nimble still has some room to improve, and I think it actually will; I think the figures we're showing here are somewhat depressed by the COVID factor, and I expect Nimble is going to bounce back in future surveys. Dell and NetApp are the big leaders in terms of presence, or market share, in the data, other than VMware, 'cause VMware has a lot of instances; it's software-defined, and that's why they're so prominent. And with VMware's large share, you'd expect them to have net scores that are tepid, and you can see a similar pattern with IBM. So Dell, NetApp, tepid net scores, as is IBM, because of their large market share; VMware, kind of a newer entry into the play, is doing pretty well there from a net score standpoint. Now, Commvault, like Cohesity and Rubrik, is really around intelligent data management, trying to go beyond backup into business recovery, data protection, DevOps, bringing analytics, bringing that to the cloud. We didn't put Veeam in here, and we probably should have. They had pre-COVID net scores well into the thirties, and they have a steadily increasing share of the market, so we expect good things from Veeam going forward. They were acquired earlier this year by Insight Partners, the private equity firm, so big changes there as well; that was their kind of near-term exit, maybe more to come. But look, it's all relative. This is a large and mature market that is moving to the cloud and moving to other adjacencies. And the core is still primary storage; that's the prime prerequisite, and everything else flows from there: data protection, replication, everything else.
This chart gives you another view of the competitive landscape. It's that classic XY chart: it plots net score on the vertical axis and market share on the horizontal axis; market share, remember, is a measure of presence in the dataset. Now think about this from the CIO's perspective. They have their on-prem estate, all this infrastructure, and they're putting a brick wall around their core systems. And what do they want out of storage for that class of workload? They want it to perform consistently, they want it to be efficient, and they want it to be cost-effective. So what are they going to do? They're going to consolidate: consolidate the number of vendors, consolidate the storage, minimize complexity. Yeah, they're going to worry about the blast radius, but there are ways to architect around that. The last thing they want to worry about is managing a zillion storage vendors. This business is consolidating, and it has been for some time; we've seen the number of independent storage players that are going public shrink as the market has consolidated over the years, and it's going to continue. So on-prem storage arrays are not giving CIOs the innovation and strategic advantage of the days when storage virtualization, space-efficient snapshots, data de-duplication and other storage services were worth maybe taking a flyer on a feature product, like, for example, a 3PAR or even a Data Domain. Now, flash gave the CIOs more headroom and better performance, and as I said earlier, they're not just buying spindles to increase performance anymore. So as more and more work gets pushed to the cloud, you're seeing a bunkering in on these large-scale, mission-critical workloads.
As you saw earlier, the legacy storage market is consolidating and has been for a while. As I just said, it's essentially becoming a managed-decline business, where R&D is going to increasingly get squeezed and go to other areas, both from the vendor community and on the buy side, where they're investing in things like cloud, containers, building new layers in their business and, of course, DX, the Digital Transformation. I mentioned VAST Data before; it is a company that's growing, and another company that's growing is Infinidat. These guys are traditional on-prem storage models, and while they may bristle if I say traditional, they're next-gen if you will, they don't own a cloud, so they're selling to the data center. Now, Infinidat is focused on petabyte scale, and as they say, they're growing revenues and having success consolidating storage, that thing I just talked about. Ironically, these are two Israeli-founder-based companies that are growing, and as you saw earlier, this is a share shift: the market is not growing overall. Part of that's COVID, but if you exclude cloud, the market is under pressure. Now, these two companies that I'm mentioning are kind of the exception to the rule here. They're tiny in the grand scheme of things; they're really not going to shift the market, and their end game is to get acquired. So they can still gain share, but they're not going to reverse these trends. And everyone on this chart, every on-prem player, has to have a cloud strategy where they connect into the cloud, where they take advantage of native cloud services, and where they help extend their respective install bases into the cloud, including having a capability that is physically proximate to the cloud, with a colo like an Equinix or some other approach. Now, for example, at re:Invent we saw AWS' hybrid strategy, we saw that evolving.
AWS is trying to bring AWS to the edge, and they treat the data center as just another edge node, so Outposts, smaller versions of Outposts, and things like Local Zones are all part of bringing AWS to the edge. And we saw a few companies (Pure, Infinidat and Veeam come to mind) that are connecting to Outposts. We saw that Qumulo was in there, Clumio, Commvault, WekaIO is also in there, and I'm sure I'm missing some, so DM me, email me, yell at me, I'm sorry I forgot you, but you get the point. These companies that are selling on-prem are connecting to the cloud; they're forced to connect to the cloud, much in the same way they were forced to join the VMware ecosystem, and try to add value, try to keep moving fast. So that's what's going on here. What's the prognosis for storage in the coming year? Well, where have all the good times gone? Look, we would never bet against data, but the days of selling storage controllers that mask the deficiencies of spinning disk, or add embedded hardware functions, or easily picking off a legacy install base with flash, well, those days are gone. Repatriation? It ain't happening, except in maybe tiny little pockets. CIOs are rationalizing their on-premises portfolios so they can invest in the cloud, AI, machine learning, machine intelligence and automation, and they're re-skilling their teams. Low-latency, high-bandwidth workloads with minimal jitter, that's the sweet spot for on-prem; it's becoming the mainframe of storage. CIOs are also developing a cloud-first strategy. Yes, the world is hybrid, but what does that mean to CIOs? It means you're going to have some work in the cloud and some work on-prem; it's hybrid, we've got both. Everything that can go to the cloud will go to the cloud, in our opinion, and everything that can't or shouldn't, won't. Yes, people will make mistakes and they'll "repatriate," but generally that's the trend.
And the CIOs, they're building an abstraction layer to connect workloads from an observability and manageability standpoint so they can maintain control and manage lock-in risk; they have options. Everything that doesn't go to the cloud will likely have some type of hybridicity to it; the reverse won't likely be the case. For vendors, cloud strategies involve supporting your install base's migration to the cloud. That's where they're going, that's where they want to go, they want your help, and there's business to be made there. So enabling low-latency hybrid and accommodating subscription models, well, that's a whole other topic, but that's the trend that we see. And you rethink the business that you're in, for instance data management, and develop an edge strategy that recognizes that edge workloads are going to require new architecture, one that's more efficient than what we've seen built around general-purpose systems, and wow, that's a topic for another day. You're seeing this whole as-a-service model really reshape the entire culture and the way in which the on-prem vendors are operating. No longer is it selling a box that has dramatically marked-up controllers and disk drives; it's really thinking about services that can be invoked in the cloud. Now remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast and please subscribe, I'd appreciate that. Check out etr.plus for all the survey action. We also publish a full report every week on wikibon.com and siliconangle.com. A lot of ways to get in touch: you can email me at david.vellante@siliconangle.com, you can DM me @dvellante on Twitter, or comment on our LinkedIn posts, I always appreciate that. This is Dave Vellante for theCUBE Insights Powered by ETR. Thanks for watching, everyone, stay safe and we'll see you next time. (upbeat music)
AI and Hybrid Cloud Storage | Wikibon Action Item | May 2019
Hi, I'm Peter Burris, and this is Wikibon's Action Item. We're joined here in the studio by David Floyer. Hi David. >> Hi there. >> And remote, we've got Jim Kobielus. Hi, Jim. >> Hi everybody. >> Now, Jim, you probably can't see this, but for those who are watching, when we do see the broad set, notice that David Floyer's got his Game of Thrones coffee cup with us. Now that has nothing to do with the topic. David, and Jim, we're going to be talking about this challenge that businesses have, that enterprises have, as they think about making practical use of AI. The presumption for many years was that we were going to move all the data up into the Cloud in a central location, and all workloads were going to be run there. As we've gained experience, it's very clear that we're actually going to see a greater distribution function, partly in response to a greater distribution of data. But what does that tell about the relationship between AI, AI workloads, storage, and hybrid Cloud? David, why don't you give us a little clue as to where we're going to go from here. >> Well I think the first thing we have to do is separate out the two types of workload. There's the development of the AI solution, the inference code, et cetera, the dealing with all of the data required for that. And then there is the execution of that code, which is the inference code itself. And the two are very different in characteristics. For the development, you've got a lot of data. It's very likely to be data-bound. And storage is a very important component of that, as well as computer and the GPUs. For the inference, that's much more compute-bound. Again, compute neural networks, GPUs, are very, very relevant to that portion. Storage is much more ephemeral in the sense that the data will come in and you will need to execute on it. 
But that data will be part of the, the compute will be part of that sensor, and you will want the storage to be actually in the DIMM itself, or non-volatile DIMM, right up as part of the processing. And you'll want to share that data only locally in real time, through some sort of mesh computing. So, very different compute requirements, storage requirements, and architectural requirements. >> Yeah, let's go back to that notion of the different storage types in a second, but Jim, David described how the workloads are going to play out. Give a sense of what the pipelines are going to look like, because that's what people are building right now, is the pipelines for actually executing these workloads. How will they differ? How do they differ in the different locations? >> Yeah, so the entire DataOps pipeline for data science, data analytics, AI in other words. And so what you're looking at here is all the processes from discovering and adjusting the data to transforming and preparing and correcting it, cleansing it, to modeling and training the AI models, to serving them out for inferencing along the lines of what David's describing. So, there's different types of AI models and one builds from different data to do different types of inferencing. And each of these different pipelines might be highly, often is, highly specific to a particular use case. You know, AI for robotics, that's a very different use case from AI for natural language processing, embedded for example in an e-commerce portal environment. So, what you're looking at here is different pipelines that all share a common sort of flow of activities and phases. And you need a data scientist to build and test, train and evaluate and serve out the various models to the consuming end devices or application. >> So, David we've got 50 or so years of computing. Where the primary role of storage was to assist a transaction and the data associated with that transaction that has occurred. 
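The DataOps flow Jim describes, from discovering and adjusting the data, to cleansing and transforming it, to training and serving models, can be sketched as a sequence of composable stages. The stage rules and record format below are invented for illustration; they're not any particular product's API.

```python
# Minimal sketch of a DataOps-style pipeline: each stage is a plain
# function over a batch of records, and the pipeline is their composition.
def discover(records):
    # Keep only records that carry the field downstream stages expect.
    return [r for r in records if "value" in r]

def cleanse(records):
    # Drop obviously bad readings (illustrative rule: negative values).
    return [r for r in records if r["value"] >= 0]

def transform(records):
    # Normalize values into [0, 1] to prepare them for model training.
    peak = max(r["value"] for r in records)
    return [{**r, "value": r["value"] / peak} for r in records]

def run_pipeline(records, stages):
    for stage in stages:
        records = stage(records)
    return records

raw = [{"value": 5}, {"value": -1}, {"bad": True}, {"value": 10}]
out = run_pipeline(raw, [discover, cleanse, transform])
print(out)  # [{'value': 0.5}, {'value': 1.0}]
```

Each use case (robotics, NLP, and so on) would swap in its own stages, but the shared shape of the flow is the point Jim is making.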
And that's you know, disk and then you have all the way out to tape if we're talking about archive. Flash changes that equation. >> Absolutely changes it. >> AI absolutely demands a different way of thinking. Here we're not talking about persisting our data we're talking about delivering data, really fast. As you said, sometimes very ephemeral. And so, it requires a different set of technologies. What are some of the limitations that historically storage has been putting on some of these workloads? And how are we breaching those limitations, to make them possible? >> Well if we take only 10 years ago, the start of the big data was Hadoop. And that was spreading the data over very cheap disks and hard disks. With the compute there, and you spread that data and you did it all in parallel on very cheap nodes. So, that was the initial but that is a very expensive way of doing it now because you're tying the data to that set of nodes. They're all connected together so, a more modern way of doing it is to use Flash, to use multiple copies of that data but logical copies or snapshots of that Flash. And to be able to apply as many processes, nodes as is appropriate for that particular workload. And that is a far more efficient and faster way of processing that or getting through that sort of workload. And it really does make a difference of tenfold in terms of elapsed time and ability to get through that. And the overall cost is very similar. >> So that's true in the inferencing or, I'm sorry, in the modeling. What about in the inferencing side of things? >> Well, the inferencing side is again, very different. Because you are dealing with the data coming in from the sensors or coming in from other sensors or smart sensors. So, what you want to do there is process that data with the inference code as quickly as you can, in real time. Most of the time in real time. So, when you're doing that, you're holding the current data actually in memory. 
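The pattern David contrasts with Hadoop, many compute nodes working against cheap logical copies of the same flash-resident dataset rather than data tied to fixed nodes, can be shown in miniature. The "snapshot" here is simulated as a zero-copy slice boundary and a thread pool stands in for the compute nodes; this is a sketch of the idea, not of any storage product.

```python
from concurrent.futures import ThreadPoolExecutor

# One shared dataset on "flash"; snapshots are cheap logical views, not copies.
dataset = list(range(1_000))

def snapshot(lo, hi):
    # A logical copy in this sketch is just a pair of slice boundaries.
    return (lo, hi)

def worker(view):
    # Each "node" processes its view of the shared data independently.
    lo, hi = view
    return sum(dataset[lo:hi])

# Apply "as many nodes as appropriate" to the same logical dataset.
views = [snapshot(i, i + 250) for i in range(0, 1000, 250)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(worker, views))

print(sum(partials) == sum(dataset))  # True: parallel result matches serial
```

Because the views share one copy of the data, adding workers doesn't multiply the storage footprint, which is the economic argument David is making.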
Or maybe in what's called non-volatile DIMM, or NVDIMM. Which gives you a larger amount. But you almost certainly don't have the time to go and store that data, and you certainly don't want to store it if you can avoid it, because it is a large amount of data and if I open my... >> Has limited derivative use. >> Exactly. >> Yeah. >> So you want to get all, or quickly get all, the value out of that data. Compact it right down using whatever techniques you can, and then take just the results of that inference up to other ones. Now at the beginning of the cycle, you may need more, but at the end of the cycle, you'll need very little. >> So Jim, the AI world has built algorithms over many, many, many years. Many of which still persist today, but they were building these algorithms with the idea that they were going to use kind of slower technologies. How is the AI world rethinking algorithms, architectures, pipelines, use cases as a consequence of these new storage capabilities that David's describing? >> Well yeah, well, AI has become widely distributed in terms of its architecture, increasingly and often. Increasingly it's running over containerized, Kubernetes-orchestrated fabrics. And a lot of this is going on in the area of training of models and distributing pieces of those models out to various nodes within an edge architecture. It may not be edge in the internet of things sense, but widely distributed, highly parallel environments. As a way of speeding up the training and speeding up the modeling and really speeding up the evaluation of many models running in parallel, in an approach called ensemble modeling, to be able to converge on a predictive solution more rapidly. So, that's very much what David's describing: it's leveraging the fact that memory is far faster than any storage technology we have out there. And so, being able to distribute pieces of the overall modeling and training and even data prep of workloads.
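Ensemble modeling, as Jim mentions it here, means running several models in parallel and combining their answers. A toy majority-vote version looks like this; the three "models" are hand-written threshold rules, purely for illustration.

```python
# Toy ensemble: several simple "models" vote, and the majority wins.
def model_a(x): return x > 0.5
def model_b(x): return x > 0.4
def model_c(x): return x > 0.7

def ensemble_predict(models, x):
    votes = sum(1 for m in models if m(x))
    return votes > len(models) / 2  # simple majority vote

models = [model_a, model_b, model_c]
print(ensemble_predict(models, 0.6))  # True  (models a and b vote yes)
print(ensemble_predict(models, 0.3))  # False (no model votes yes)
```

In a real distributed setup each model would train and score on its own node, which is where the memory-speed, highly parallel fabric Jim describes pays off.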
It's able to speed up the deployment of highly optimized and highly sophisticated AI models for the cutting edge, you know, challenges we face like the Event Horizon telescope for example. That we're all aware of when they were able to essentially make a visualization of a black hole. That relied on a form of highly distributed AI called Grid Computing. For example, I mean the challenges like that demand a highly distributed memory-centric orchestrated approach to tackling. >> So, you're essentially moving the code to the data as opposed to moving all of the data all the way out to the one central point. >> Well so if we think about that notion of moving code to the data. And I started off by suggesting that. In many respects, the Cloud is an architectural approach to how you distribute your workloads as opposed to an approach to centralizing everything in some public Cloud. I think increasingly, application architects and IT organizations and service providers are all seeing things in that way. This is a way of more broadly distributing workloads. Now as we think about, we talked briefly about the relationship between storage and AI workloads but we don't want to leave anyone with the impression that we're at a device level. We're really talking about a network of data that has to be associated with a network of storage. >> Yes. >> Now that suggests a different way of thinking about how - about data and data administration storage. We're not thinking about devices, we're really trying to move that conversation up into data services. What kind of data services are especially crucial to supporting some of these distributed AI workloads? >> Yes. So there are the standard ones that you need for all data which is the backup and safety and encryption security, control. >> Primary storage allocation. >> All of that, you need that in place. But on top of that, you need other things as well. 
Because you need to understand the mesh, the distributed hybrid Cloud that you have, and you need to know what the capabilities are of each of those nodes, you need to know the latencies between each of those nodes - >> Let me stop you here for a second. When you say "you need to know," do you mean "I as an individual need to know" or "the system needs to know"? >> It needs to be known, and it's too complex, far too complex for an individual ever to solve problems like this so it needs, in fact, its own little AI environment to be able to optimize and check the SLAs so that particular inference coding can be achieved in the way that it's set up. >> So it sounds like - >> It's a mesh type of computer. >> Yeah, so it sounds like one of the first use cases for AI, practical, commercial use cases, will be AI within the data plane itself because the AI workloads are going to drive such a complex model and utilization of data that if you don't have that the whole thing will probably just fold in on itself. Jim, how would you characterize this relationship between AI inside the system, and how should people think about that and is that really going to be a practical, near-term commercial application that folks should be paying attention to? >> Well looking at the Cloud native world, what we need and what we're increasingly seeing out there are solutions, tools, really data planes, that are able to associate a distributed storage infrastructure of a very hybridized nature in terms of disk and flash and so forth with a highly distributed containerized application environment. 
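A toy version of that "little AI environment" checking SLAs and choosing where a workload's data should live might look like the following. The node attributes, SLA thresholds and tier names are all invented for the sketch; a real system would learn these from telemetry rather than hard-code them.

```python
# Hypothetical nodes in a storage mesh, each with a measured latency (ms)
# and a relative cost per GB. The values are invented for illustration.
nodes = {
    "on-prem-flash": {"latency_ms": 1, "cost": 10},
    "colo-hybrid":   {"latency_ms": 5, "cost": 4},
    "cloud-archive": {"latency_ms": 50, "cost": 1},
}

def place(workload_sla_ms, nodes):
    # Of the nodes that meet the latency SLA, pick the cheapest.
    eligible = {n: a for n, a in nodes.items()
                if a["latency_ms"] <= workload_sla_ms}
    if not eligible:
        return None  # the SLA cannot be met anywhere in this mesh
    return min(eligible, key=lambda n: eligible[n]["cost"])

print(place(2, nodes))    # on-prem-flash: the only node under a 2 ms SLA
print(place(10, nodes))   # colo-hybrid: cheaper and still within 10 ms
print(place(100, nodes))  # cloud-archive: everything qualifies, cheapest wins
```

The point of the sketch is the shape of the decision, constraint first, then optimization, which is exactly what becomes intractable for a human once the mesh has many nodes and many SLAs.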
So, for example, just last week at Jeredhad I met with the folks from Robin Systems, and they're one of the solution providers providing those capabilities: associating, like I said, the storage cloud with the containerized applications, or Cloud applications, that are out there. What we need there, like you've indicated, is the ability to use AI to continually look for patterns of performance issues, bottlenecks, and so forth, and to drive the ongoing placement of data, storage nodes and servers within clusters and so forth, as a way of making sure that storage resources are always used efficiently, that SLAs, as David indicated, are always observed in an automated fashion as the placement and workload placement decisions are being made, and so, ultimately, that the AI itself, whatever it's doing, like recognizing faces or recognizing human language, is able to do it as efficiently and really as cheaply as possible.
And know where we don't have common services so that we can factor those constraints? >> So it's useful to think about the hybrid Cloud from the point of view of the development, which will be fairly normal types of computing in really large centers, and the edges themselves, which will be what we call autonomous Clouds. Those are the ones at the edge which need to be self-sufficient. So if you have an autonomous car, you can't guarantee that you will have communication to it. And a lot of IoT is in distant places, again, on ships or in other remote locations, where you can't guarantee communication. So they have to be able to run much more by themselves. So that's one important characteristic: that autonomous one needs to be self-sufficient itself and have within it all the capabilities of running that particular code. And then passing up data when it can. >> Now you gave examples where it's physically required to do that, but there are also OT examples. >> Exactly. >> Operational technologies where you need to have that air gap to ensure that bad guys can't get into your data. >> Yes, absolutely, I mean if you think about a boat, a ship, it has multiple very clear air gaps, and a nuclear power station has a total air gap around it. You must have those sorts of air gaps. So it's a different architecture for different uses, for different areas. But of course data is going to come up from those autonomous Clouds, upwards, though it will be a very small amount of the data that's actually being processed. And there'll be requests down to those autonomous Clouds for additional processing of one sort or another. So there still will be a discussion, communication, between them, to ensure that the final outcome, the business outcome, is met. >> All right, so I'm going to ask each of you guys to give me a quick prediction. David, I'm going to ask you about storage and then Jim I'm going to ask you about AI in light of David's prediction about storage.
So David, as we think about where these AI workloads seem to be going, how is storage technology going to evolve to make AI applications easier to deal with, easier to run, cheaper to run, more secure? >> Well, the fundamental move is towards larger amounts of Flash. And the new thing is that larger amounts of non-volatile DIMM, the memory in the computer itself, those are going to get much, much bigger, those are going to help with the execution of these real-time applications and there's going to be high-speed communication between short distances between the different nodes and this mesh architecture. So that's on the inference side, there's a big change happening in that space. On the development side the storage will move towards sharing data. So having a copy of the data which is available to everybody, and that data will be distributed. So sharing that data, having that data distributed, will then enable the sorts of ways of using that data which will retain context, which is incredibly important, and avoid the cost and the loss of value because of the time taken of moving that data from A to B. >> All right, so to summarize, we've got a new level in the storage hierarchy that puts between Flash and memory to really accelerate things, and then secondly we've got this notion that increasingly we have to provide a way of handling time and context so that we sustain fidelity especially in more real-time applications. Jim, given that this is where storage is going to go, what does that say about AI? 
>> What it says about AI is that first of all, we're talking about like David said, meshes of meshes, every edge node is increasingly becoming a mesh in its own right with disparate CPUs and GPUs and whatever, doing different inferencing on each device, but every one of these, like a smart car, will have plenty of embedded storage to process a lot of data locally that may need to be kept locally for lots of very good reasons, like a black box in case of an accident, but also in terms of e-discovery of the data and the models that might have led up to an accident that might have caused fatalities and whatnot. So when we look at where AI is going, AI is going into the mesh of mesh, meshes of meshes, where there's AI running it in each of the nodes within the meshes, and the meshes themselves will operate as autonomous decisioning nodes within a broader environment. Now in terms of the context, the context increasingly that surrounds all of the AI within these distributed architectures will be in the form of graphs and graphs are something distinct from the statistical algorithms that we built AI out of. We're talking about knowledge graphs, we're talking about social graphs, we're talking about behavioral graphs, so graph technology is just getting going. For example, Microsoft recently built, they made a big continued push into threading graph - contextual graph technology - into everything they do. So that's where I see AI going is up from statistical models to graph models as the broader metadata framework for binding everything together. >> Excellent. All right guys, so Jim, I think another topic another time might be the mesh mess. (laughs) But we won't do that now. All right, let's summarize really quickly. We've talked about how the relationship between AI, storage and hybrid Clouds are going to evolve. 
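The graph context Jim describes can be made concrete with a tiny triple store. The entities and relations below (a car observed by a camera in a zone) are invented for the sketch; real knowledge graphs would sit in a graph database, but the query shape is the same.

```python
# Tiny knowledge-graph sketch: facts stored as (subject, relation, object).
triples = [
    ("car-17", "observed_by", "camera-3"),
    ("car-17", "located_in", "zone-A"),
    ("zone-A", "part_of", "route-9"),
]

def related(entity, relation, triples):
    # Direct lookup: which objects does this entity reach via a relation?
    return [o for s, r, o in triples if s == entity and r == relation]

def context(entity, triples, depth=2):
    # Walk outward from an entity to gather its surrounding context.
    seen, frontier = set(), {entity}
    for _ in range(depth):
        nxt = {o for s, r, o in triples if s in frontier}
        seen |= nxt
        frontier = nxt
    return seen

print(related("car-17", "located_in", triples))  # ['zone-A']
print(sorted(context("car-17", triples)))        # ['camera-3', 'route-9', 'zone-A']
```

The `context` walk is the "binding everything together" idea: a statistical model classifies the car, and the graph supplies what surrounds it.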
Number one, AI workloads are at least differentiated by where we handle modeling; large amounts of data still need a lot of compute, but we're really focused on large amounts of data and moving that data around very, very quickly, kept proximate to where the workload resides. Great, great application for Clouds, large, public as well as private. On the other side, where the inferencing work is done, that's going to be very compute-bound, smaller data volumes, but very, very fast data. Lots of flash everywhere. The second thing we observed is that these new AI applications are going to be used and applied in a lot of different domains, both within human interaction as well as real-time domains within IoT, et cetera, but that as we evolve, we're going to see a greater relationship between the nature of the workload and the class of the storage, and that is going to be a crucial focus for storage administrators and storage vendors over the next few years: ensuring that that specialization is reflected in what's known and what's needed. Now the last point that we'll make very quickly is that as we look forward, the whole concept of hybrid Cloud, where we can have greater predictability into the nature of data-oriented services that are available for different workloads, is going to be really, really important. We're not going to have all data services common in all places. But we do want to make sure, whether it's a container-based application or some other structure, that we can ensure that the data that is required will be there in the context, form and metadata structures that are required. Ultimately, as we look forward, we see new classes of storage evolving that bring data even closer to the compute side, and we see new data models emerging, such as graph models, that are a better overall reflection of how this distributed data is going to evolve within hybrid Cloud environments.
David Floyer, Jim Kobielus, Wikibon analysts, I'm Peter Burris, once again, this has been Action Item.
Patrick Osborne, HPE | HPE Secondary Storage for Hybrid cloud
>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi everybody, welcome to the special CUBE conversation on secondary storage and data protection, which is one of the hottest topics in the business right now. Cloud, multi-cloud, bringing the Cloud experience to wherever your data lives and protecting that data driven by digital transformation. We're gonna talk about that with Patrick Osborne, the Vice President and General Manager for big data and secondary storage at HPE, good friend and CUBE alum. Great to see you again. Thanks for coming on. >> Great, thanks for having us. >> So let's start with some of those trends that I mentioned. I think, let's start with digital transformation. It's a big buzzword in the industry but it's real. I travel around, I talk to customers all the time, everybody's trying to get digital transformation right. And digital means data, data needs to be protected in new ways now, and so when we trickle down into your world, data protection, what are you seeing in terms of the impact of digital and digital transformation on data protection? >> Absolutely, great question. So the winds of change in secondary storage are blowing pretty hard right now. I think there's a couple different things that are driving that conversation. A, the specialization of people with specific backup teams, right, that's moving away, right. You're moving away from general storage administration and specialized teams to people focusing a lot of those resources now on Cloud Ops team, DevOps team, application development. So they want that activity of data protection to be automated and invisible. Like you said before, in terms of being able to re-use that data, the old days of essentially having a primary dataset and then pushing it off to some type of secondary storage which just sits there over time, is not something that customers want anymore. >> Right. 
They wanna be able to use that data, they wanna be able to generate copies of that, do test and dev, gain insight from that, being able to move that to the Cloud, for example, to be able to burst out there or do it for DR activities. So I think there's a lot of things that are happening when it comes to data that are certainly changing the requirements and expectations around secondary storage. >> So the piece I want to bring to the conversation is Cloud, and I saw a stat recently that the average company, the average enterprise, has, like, eight clouds, and I was thinking, sheesh, a small company like ours has eight clouds, so I mean, the average enterprise must have 80 clouds when you start throwing in all the SaaS. >> Yeah. >> So Cloud and specifically multi-cloud. You guys, HPE's always been known for open platform, whatever the customer wants to do, we'll do it. So multi-cloud becomes really important. And let's expand the definition of Cloud to include private cloud on-prem, what we call True Private Cloud in the Wikibon world, but whether it's Azure, AWS, Google, dot, dot, dot, what are you guys seeing in terms of the pressure from customers to support multi... They don't want a silo, a data protection silo for each cloud, right? >> Absolutely. So they don't want silos in general, right? So I think a couple of key things that you brought up, private cloud is very interesting for customers. Whether they're gonna go on-prem or off-prem, they absolutely want to have the experience on-prem.
So what we're providing customers, through APIs and seamless integration into their existing application frameworks, is the ability to move data from point A to point B to point C, which could be primary all-flash, secondary systems, cloud targets, but have that be automated with a full API set and provide a lot of those capabilities, those user stories around data protection and re-use, directly to the developers, right, and the database admins and whoever's doing this new DevOps area. The second piece is that, like you said, everyone's gonna have multiple clouds, and what we want to do is we want to be able to give customers an intelligent experience around that. We don't necessarily need to own all the infrastructure, right, but we need to be able to facilitate and provide the visibility of where that data's gonna land, and over time, with our capabilities that we have around InfoSight, we wanna be able to do that predictably, make recommendations, have that whole population of customers learn from each other and provide some expert analysis for our customers as to where to place workloads. 
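The InfoSight-style placement recommendations Patrick describes are proprietary, but the general idea of scoring tiers by cost and access pattern can be sketched. Everything below, the tier names, costs, latencies, and scoring formula, is invented for illustration and is not HPE's actual logic:

```python
# Hypothetical sketch of tiered data placement in the spirit of the
# primary -> secondary -> cloud movement discussed above. All numbers
# and names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str            # e.g. primary all-flash, StoreOnce, cloud archive
    cost_per_gb: float   # relative monthly cost per GB
    latency_ms: float    # typical access latency

def recommend(tiers, accesses_per_day):
    """Pick the tier with the lowest blended score for a dataset:
    frequently accessed data favors low latency, cold data favors low cost."""
    def score(t):
        # Each access pays the tier's latency, so frequent access
        # amplifies the latency term; at zero accesses, cost decides.
        return t.cost_per_gb + t.latency_ms * accesses_per_day
    return min(tiers, key=score)

tiers = [
    Tier("primary-all-flash", cost_per_gb=0.50, latency_ms=0.5),
    Tier("secondary-storeonce", cost_per_gb=0.10, latency_ms=5.0),
    Tier("cloud-archive", cost_per_gb=0.01, latency_ms=500.0),
]

hot = recommend(tiers, accesses_per_day=1000)   # hammered constantly
cold = recommend(tiers, accesses_per_day=0)     # retained, never read
```

Under this toy scoring, hot data lands on the all-flash tier and never-read retention data lands in the cloud archive, which is the "right data on the right tier" outcome the conversation is driving at.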
>> Yeah, so from what we see for our customers, they absolutely wanna reuse data, right? So we have a ton of solutions for our customers around very low latency, high-performance optimized flash storage in 3PAR and Nimble, different capabilities there, and then being able to take that data and move it off to a hybrid flash array, for example, and then do workloads on that, is something that we're doing today with our customers, natively as well as partnering with some of our ISV ecosystem. And then sort of a couple new use cases that are coming is that I want to be able to have data provenance. So I wanna share some of my data, keep that in a colo but be able to apply compute resources, whether those are VMs, whether they are functions, lambda functions, on that data. So we wanna bring the compute to the data, and that's another use case that we're enabling for our customers, and then ultimately using the Cloud as a very, very low-cost, scalable and elastic storage tier for archive and retention. >> One of the things we've been talking about in theCUBE community is you hear the bromide that data is the new oil, and somebody in the community was saying, you know what? It's actually more valuable than oil. When I have oil, I can put it in my house or I can put it in my car. But data, the unique attribute of data is I can use it over and over and over again. And again, that puts more pressure on data protection. All right, let's get into some of the hard news here. You've got kind of a four-pack of news that we wanna talk about. Let's start with StoreOnce. It's a platform that you guys announced several years ago. You've been evolving it regularly. What's the StoreOnce news? >> Yes, so in the secondary storage world, we've seen the movement from PBBA, so Purpose-Built Backup Appliances, either morphing into very intelligent software that runs on commodity hardware, or an integrated appliance approach, right? 
So you've got an integrated DR appliance that seamlessly integrates into your environment. So what we've been doing with StoreOnce, this is our 4th generation system and it's got a lot of great attributes. It comes as a system, right. It's available in a rack form factor at different capacities. It's also available as a software-defined version so you can run that on-prem, you can run it off-prem. It scales up to multiple petabytes in a software-only version. So we've got a couple different use cases for it, but what I think is one of the key things is that we're providing a very integrated experience for customers who are 3PAR and Nimble customers. So it allows you to essentially federate your primary all-flash storage with secondary. And then we actually provide a number of use cases to go out to the Cloud as well. Very easy to use, geared towards the application admin, very integrated. >> So it's bigger, better, faster, and you've got this integration, a confederation as you called it, across different platforms. What's the key technical enabler there? >> Yeah, so we have a really extensible platform for software that we call Recovery Manager Central. Essentially, it provides a number of different use cases and user stories around copy data management. So it's gonna allow you to take application-integrated snapshots. It's gonna allow you to do that either in the application framework, so if you're a DBA and you do RMAN, you could do it in there, or if you have your own custom applications, you can write to the API. So it allows you to do snapshots, full clones, it'll allow you to do DR, so one box to another similar system, it'll allow you to go from primary to secondary, it'll allow you to archive out to the Cloud, and then all of that in reverse, right? So you can pull all of that data back and it'll give you visibility across all those assets. So, the past where you, as a customer, did all this on your own, right, buying along horizontal lines? 
We're giving a customer, based on a set of outcomes and applications, a complete vertically-oriented solution. >> Okay, so that's the, really, second piece of hard news. >> Yeah. >> Recovery Manager Central, RMC, 6.0, right-- >> Yeah. >> Is the release that we're on? And that's copy data management essentially-- >> Absolutely. >> Is what you're talking about. It's your catalog, right, so your tech underneath that, and you're applying that now across the portfolio, right? >> Absolutely. So, we're extending that from... We've had, for the past year, that ability to do the copy data management directly from 3PAR. We're extending that to provide that for Nimble. Right, so for Nimble customers that want to use all-flash, they want to use hybrid flash arrays from Nimble, you can go to secondary storage in StoreOnce and then out to the Cloud. >> Okay, and that's what 6.0 enables-- >> Yeah, exactly. >> That Nimble piece and then out to the Cloud. Okay, third piece of news is an ecosystem announcement with Commvault. Take us through that. >> Yeah, so we understand at HPE, given the fact that we're very, very focused on hybrid Cloud and we have a lot of customers that have been our customers for a long time, none of these opportunities are greenfield, right, at the end of the day. So your customers are, they have to integrate with existing solutions, and in a lot of cases, they have some partners for data protection. So one of the things that we've done with this ecosystem is made very public our APIs and how to integrate our systems. So we're storage people, we are data management folks, we do big data, we also do infrastructure. So we know how to manage the infrastructure, move data very seamlessly between primary, secondary, and the Cloud. And what we do is, we open up those APIs in those use cases to all of our partners and our customers. 
So, in that, we're announcing a number of integrations with Commvault, so they're gonna be integrating with our de-duplication and compression framework, as well as being able to program to what we call Cloud Bank, right? So, we'll be able to, in effect, integrate with Commvault with our primary storage, be able to do rapid recovery from StoreOnce in a number of backup use cases, and then being able to go out to the cloud, all managed through customers' Commvault interface. >> All right, so if I hear you correctly, you've just gotta double click on the Commvault integration. It's not just a go-to-market setup. It's deeper engineering and integration that you guys are doing. >> Absolutely. >> Okay, great. And then, of course the fourth piece is around, so your bases are loaded here, the fourth piece is around the Cloud economics, Cloud pricing model. Your GreenLake model, the utility pricing has gotten a lot of traction. When we're at HPE Discover, customers talking about it, you guys have been leaders there. Talk about GreenLake and how that model fits into this. >> Yeah, so, in the technology talk track we talk about, essentially, how to make this simple and how to make it scalable. At the end of the day, on the buying pattern side, customers expect elasticity, right? So, what we're providing for our customers is when they want to do either a specific integration or implementation of one of those components from a technology perspective, we can provide that. If they're doing a complete re-architecture and want to understand how I can essentially use secondary storage better and I wanna take advantage of all that data that I have sitting in there, I can provide that whole experience to customers as a service, right? 
So, the primary storage, your secondary storage, the Cloud capacity, even some of the ISV partner software that we provide, I can take that as an entire, vetted solution, with reference architectures and the expertise to implement, and I can give that to a customer in an OpEx, as-a-service, elastic purchasing model. And that is very unique for HPE and that's what we've gone to market with GreenLake, and we're gonna be providing more solutions like that, but in this case, we're announcing the fact that you can buy that whole experience, backup as a service, data protection as a service, through GreenLake from HPE. >> So how does that work, Patrick, practically speaking? A customer will, what, commit to some level of capacity, let's say, as an example, and then HPE will put in some extra headroom if, in fact, that's needed, you maybe sit down with the customer and do some kind of capacity planning, or how does that actually work, practically speaking? >> Yeah, absolutely. So we work with customers on the architecture, right, up front. So we have a set of vetted architectures. We try to avoid snowflakes, right, at the end of the day. We want to talk to customers around outcomes. So if a customer is trying to reach outcome XYZ, we come with a recommendation on how to do that. And what we can do is, we don't have very high up-front commitments and it's very elastic in the way that we approach the purchasing experience. So we're able to fit those modules in. And then we've made a number of acquisitions over the last couple years, right? So, on the advisory side, we have Cloud Technology Partners. We come in and talk about how do you do a hybrid cloud backup as a service, right? So we can advise customers on how to do that and build that into the experience. We acquired CloudCruiser, right? 
So we have the billing and the monitoring and everything that gets very, very granular on how you use that service, and that goes into how we bill customers on a per-metric usage format. And so we're able to package all of that up and we have, this is a kind of a little-known fact, very, very high NPS score for HPE Financial Services. Right, so the combination of our Pointnext services, advisory, financial services, really puts a lot of meat behind GreenLake as a really good customer experience around elasticity. >> Okay, now all this stuff is gonna be available calendar Q4 of 2018, correct? >> Correct. >> Okay, so if you've seen videos like this before, we like to talk about what it is, how it works, and then we like to bring it home with the business impact. So thinking about these four announcements, and you can drill deeper on any one that you like, but I'd like to start, at least, holistically, what's the business impact of all of this? Obviously, you've got Cloud, we talked about some of the trends up front, but what are you guys telling customers is the real ROI? >> So, I think the big ROI is it moves secondary storage from a TCO conversation to an ROI conversation. Right, so instead of selling customers a solution where you're gonna have data that sits there waiting for something to happen, I'm giving customers a solution that's consumed as a service to be able to mine and utilize that secondary data, right? Whether it's for simple tasks like patch verification, application rollouts, things like that, and actually lowering the cost of your primary storage in doing that, which is usually pretty expensive from a storage perspective. I'm also helping customers save time, right? By providing these integrated experiences from primary to secondary to Cloud and making that automatic, I do help customers save quite a bit in OpEx from an operator perspective. 
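The low-commitment, per-metric usage billing described in this exchange is commonly shaped as a committed baseline billed at a flat rate plus a higher rate only on usage that bursts above it. As a toy sketch, with rates, metric, and baseline model invented for illustration rather than taken from HPE's actual GreenLake pricing:

```python
# Hypothetical baseline-plus-burst metered billing. Rates and the
# model shape are invented; this is not HPE's actual pricing.

def monthly_bill(used_gb, reserved_gb, rate_reserved=0.03, rate_burst=0.05):
    """Return the month's charge in dollars: a flat rate on the
    committed capacity, plus a burst rate on anything above it."""
    burst_gb = max(0, used_gb - reserved_gb)
    return reserved_gb * rate_reserved + burst_gb * rate_burst

# A quiet month stays inside the commitment; a busy month bursts past it.
quiet = monthly_bill(used_gb=800, reserved_gb=1000)    # roughly $30
busy = monthly_bill(used_gb=1500, reserved_gb=1000)    # roughly $55
```

The point of the shape is the one Patrick makes: the customer's up-front commitment stays low and the bill tracks actual consumption, which is what turns the purchase into an OpEx line rather than a CapEx one.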
And they can take those resources and move them on to higher-impact projects like DevOps, CloudOps, things of that nature. That's a big impact from a customer perspective. >> So there's a CapEx to OpEx move for those customers that want to take advantage of GreenLake. >> Yep. >> So certain CFOs will like that story. But I think the other piece that, to me anyway, is most important is, especially in this world of digital transformation, I know it's a buzzword, but it's real. When you go to talk to people, they don't wanna do the heavy lifting of infrastructure management, the day-to-day infrastructure management. A lot of mid-size customers, they just don't have the resources to do it anymore. >> Correct. >> And they're under such pressure to digitize, every company wants to become a software company. Benioff talks about that, Satya Nadella talks about that, Antonio talks about digital transformation. And so it's on CEOs' minds. They don't want to be paying people for these mundane tasks. They really wanna shift them to these digital transformation initiatives and drive more business value. 
You've got some tools in the toolkit to help do that and it's just gonna escalate from here. It feels like we're on the early part of the S-curve and it's just gonna really spike. >> Absolutely. >> All right, Patrick. Hey, thanks for coming in and taking us through this news, and congratulations on getting this stuff done and we'll be watching the marketplace. Thank you. >> Great. Kudos to the team, great announcement, and we look forward to working with you guys again. >> All right, thanks for watching, everybody. We'll see you next time. This is Dave Vellante on theCUBE. (gentle music)
HPE Secondary Storage for Hybrid Cloud
>> From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Dave Vellante. >> Hi everybody, welcome to the special CUBE conversation on secondary storage and data protection, which is one of the hottest topics in the business right now. Cloud, multi-cloud, bringing the Cloud experience to wherever your data lives and protecting that data driven by digital transformation. We're gonna talk about that with Patrick Osborne, the Vice President and General Manager for big data and secondary storage at HPE, good friend and CUBE alum. Great to see you again. Thanks for coming on. >> Great, thanks for having us. >> So let's start with some of those trends that I mentioned. I think, let's start with digital transformation. It's a big buzzword in the industry but it's real. I travel around, I talk to customers all the time, everybody's trying to get digital transformation right. And digital means data, data needs to be protected in new ways now, and so when we trickle down into your world, data protection, what are you seeing in terms of the impact of digital and digital transformation on data protection? >> Absolutely, great question. So the winds of change in secondary storage are blowing pretty hard right now. I think there's a couple different things that are driving that conversation. A, the specialization of people with specific backup teams, right, that's moving away, right. You're moving away from general storage administration and specialized teams to people focusing a lot of those resources now on Cloud Ops team, DevOps team, application development. So they want that activity of data protection to be automated and invisible. Like you said before, in terms of being able to re-use that data, the old days of essentially having a primary dataset and then pushing it off to some type of secondary storage which just sits there over time, is not something that customers want anymore. >> Right. 
>> They wanna be able to use that data, they wanna be able to generate copies of that, do test and dev, gain insight from that, being able to move that to the Cloud, for example, to be able to burst out there or do it for DR activities. So I think there's a lot of things that are happening when it comes to data that are certainly changing the requirements and expectations around secondary storage. >> So the piece I want to bring to the conversation is Cloud and I saw a stat recently that the average company, the average enterprise has, like, eight clouds, and I was thinking, sheesh, small company like ours has eight clouds, so I mean, the average enterprise must have 80 clouds when you start throwing in all the sass. >> Yeah. >> So Cloud and specifically, multi-cloud, you guys, HPEs, always been known for open platform, whatever the customer wants to do, we'll do it. So multi-cloud becomes really important. And let's expand the definition of Cloud to include private cloud on PRM, what we call True Private Cloud in the Wikibon world, but whether it's Azure, AWS, Google, dot, dot, dot, what are you guys seeing in terms of the pressure from customers to support multi... They don't want a silo, a data protection silo for each cloud, right? >> Absolutely. So they don't want silos in general, right? So I think a couple of key things that you brought up, private cloud is very interesting for customers. Whether they're gonna go on PRM or off PRM, they absolutely want to have the experience on PRM. 
So what we're providing customers is the ability, through APIs and seamless integration into their existing application frameworks, the ability to move data from point A to point B to point C, which could be primary all-flash, secondary systems, cloud targets, but have that be able to be automated full API set and provide a lot of those capabilities, those user stories around data protection and re-use, directly to the developers, right, and the database admins and whoever's doing this news or DevOps area. The second piece is that, like you said, everyone's gonna have multiple clouds, and what we want to do is we want to be able to give customers an intelligent experience around that. We don't necessarily need to own all the infrastructure, right, but we need to be able to facilitate and provide the visibility of where that data's gonna land, and over time, with our capabilities that we have around InfoSight, we wanna be able to do that predictably, make recommendations, have that whole population of customers learn from each other and provide some expert analysis for our customers as to where to place workloads. >> These trends, Patrick, they're all interrelated, so they're not distinct and before we get into the hard news, I wanna kinda double down on another piece of this. So you got data, you got digital, which is data, you've got new pressures on data protection, you've got the cloud-scale, a lot of diversity. We haven't even talked about the edge. That's another, sort of, piece of it. But people wanna get more out of their data protection investment. They're kinda sick of just spending on insurance. They'd like to get more value out of it. You've mentioned DevOps before. >> Yep. >> Better access to that data, certainly compliance. Things like GDPR have heightened awareness of things that you can do with the data, not just for backup, and not even just for compliance, but actually getting value out of the data. Your thoughts on that trend? 
>> Yeah, so from what we see for our customers, they absolutely wanna reuse data, right? So we have a ton of solutions for our customers around very low latency, high performance optimized flash storage in 3PAR and Nimble, different capabilities there, and then being able to take that data and move it off to a hybrid flash array, for example, and then do workloads on that, is something that we're doing today with our customers, natively as well as partnering with some of our ISV ecosystem. And then sort of a couple new use cases that are coming is that I want to be able to have data providence. So I wanna share some of my data, keep that in a colo but be able to apply compute resources, whether those are VMs, whether they are functions, lambda functions, on that data. So we wanna bring the compute to the data, and that's another use case that we're enabling for our customers, and then ultimately using the Cloud as a very, very low-cost, scalable and elastic tier storage for archive and retention. >> One of the things we've been talking about in theCUBE community is you hear that Bromite data is the new oil, and somebody in the community was saying, you know what? It's actually more valuable than oil. When I have oil, I can put it in my house or I can put it my car. But data, the unique attribute of data is I can use it over and over and over again. And again, that puts more pressure on data protection. All right, let's get into some of the hard news here. You've got kind of a four-pack of news that we wanna talk about. Let's start with StoreOnce. It's a platform that you guys announced several years ago. You've been evolving it regularly. What's the StoreOnce news? >> Yes, so in the secondary storage world, we've seen the movement from PBBA, so Purpose-Built Backup Appliances, either morphing into very intelligent software that runs on commodity hardware, or an integrated appliance approach, right? 
So you've got a integrated DR appliance that seamlessly integrates into your environment. So what we've been doing with StoreOnce, this is our 4th generation system and it's got a lot of great attributes. It has a system, right. It's available in a rote form factor at different capacities. It's also available as a software-defined version so you can run that on PRM, you can run it off PRM. It scales up to multiple petabytes in a software-only version. So we've got a couple different use cases for it, but what I think is one of the key things is that we're providing a very integrated experience for customers who are 3PAR Nimble customers. So it allows you to essentially federate your primary all-flash storage with secondary. And then we actually provide a number of use cases to go out to the Cloud as well. Very easy to use, geared towards the application admin, very integrative. >> So it's bigger, better, faster, and you've got this integration, a confederation as you called it, across different platforms. What's the key technical enabler there? >> Yeah, so we have a really extensible platform for software that we call Recovery Manager Central. Essentially, it provides a number of different use cases and user stories around copy data management. So it's gonna allow you to take application integrated snapshots. It's gonna allow you to do that either in the application framework, so if you're a DVA and you do Arman, you could do it in there, or if you have your own custom applications, you can write to the API. So it allows you to do snapshots, full clones, it'll allow you to do DR, so one box to another similar system, it'll allow you to go from primary to secondary, it'll allow you to archive out to the Cloud, and then all of that in reverse, right? So you can pull all of that data back and it'll give you visibility across all those assets. So, the past where you, as a customer, did all this on your own, right, bought on horizontal lines? 
We're giving a customer, based on a set of outcomes and applications, a complete vertically-oriented solution. >> Okay, so that's the, really, second piece of hard news. >> Yeah. >> Recovery Manager Central, RMC, 6.0, right-- >> Yeah. >> Is the release that we're on? And that's copy data management essentially-- >> Absolutely. >> Is what you're talking about. It's your catalog, right, so your tech underneath that, and you're applying that now across the portfolio, right? >> Absolutely. So, we're extending that from... We've had, for the past year, that ability to do the copy data management directly from 3PAR. We're extending that to provide that for Nimble. Right, so for Nimble customers that want to use all-flash, they want to use hybrid flash arrays from Nimble, you can go to secondary storage in StoreOnce and then out to the Cloud. >> Okay, and that's what 6.0 enables-- >> Yeah, exactly. >> That Nimble piece and then out to the Cloud. Okay, third piece of news is an ecosystem announcement with Commvault. Take us through that. >> Yeah, so we understand at HPE, given the fact that we're very, very focused on hybrid Cloud and we have a lot of customers that have been our customers for a long time, none of these opportunities are greenfield, right, at the end of the day. So your customers are, they have to integrate with existing solutions, and in a lot of cases, they have some partners for data protection. So one of the things that we've done with this ecosystem is made very public our APIs and how to integrate our systems. So we're storage people, we are data management folks, we do big data, we also do infrastructure. So we know how to manage the infrastructure, move data very seamlessly between primary, secondary, and the Cloud. And what we do is, we open up those APIs in those use cases to all of our partners and our customers. 
So, in that, we're announcing a number of integrations with Commvault, so they're gonna be integrating with our de-duplication and compression framework, as well as being able to program to what we call Cloud Bank, right? So, we'll be able to, in effect, integrate with Commvault with our primary storage, be able to do rapid recovery from StoreOnce in a number of backup use cases, and then being able to go out to the cloud, all managed through customers' Commvault interface. >> All right, so if I hear you correctly, you've just gotta double click on the Commvault integration. It's not just a go-to-market setup. It's deeper engineering and integration that you guys are doing. >> Absolutely. >> Okay, great. And then, of course the fourth piece is around, so your bases are loaded here, the fourth piece is around the Cloud economics, Cloud pricing model. Your GreenLake model, the utility pricing has gotten a lot of traction. When we're at HPE Discover, customers talking about it, you guys have been leaders there. Talk about GreenLake and how that model fits into this. >> Yeah, so, in the technology talk track we talk about, essentially, how to make this simple and how to make it scalable. At the end of the day, on the buying pattern side, customers expect elasticity, right? So, what we're providing for our customers is when they want to do either a specific integration or implementation of one of those components from a technology perspective, we can provide that. If they're doing a complete re-architecture and want to understand how I can essentially use secondary storage better and I wanna take advantage of all that data that I have sitting in there, I can provide that whole experience to customers as a service, right? 
So, the primary storage, your secondary storage, the Cloud capacity, even some of the ISV partner software that we provide, I can take that as an entire, vetted solution, with reference architectures and the expertise to implement, and I can give that to a customer in an OpEx as a service elastic purchasing model. And that is very unique for HPE and that's what we've gone to market with GreenLake, and we're gonna be providing more solutions like that, but in this case, we're announcing the fact that you can buy that whole experience, backup as a service, data protection as a service, through GreenLake from HPE. >> So how does that work, Patrick, practically speaking? A customer will, what, commit to some level of capacity, let's say, as an example, and then HPE will put in some extra headroom if, in fact, that's needed, you maybe sit down with the customer and do some kind of capacity planning, or how does that actually work, practically speaking? >> Yeah, absolutely. So we work with customers on the architecture, right, up front. So we have a set of vetted architectures. We try to avoid snowflakes, right, at the end of the day. We want to talk to customers around outcomes. So if a customer is trying to reach outcome XYZ, we come with a recommendation on how to do that. And what we can do is, we don't have very high up-front commitments and it's very elastic in the way that we approach the purchasing experience. So we're able to fit those modules in. And then we've made some number of acquisitions over the last couple years, right? So, on the advisory side, we have Cloud Technology Partners. We come in and talk about how do you do a hybrid cloud backup as a service, right? So we can advise customers on how to do that and build that into the experience. We acquired CloudCruiser, right? 
So we have the billing and the monitoring and everything that gets very, very granular on how you use that service, and that goes into how we bill customers on a per-metric usage format. And so we're able to package all of that up and we have, and this is kind of a little-known fact, a very, very high NPS score for HPE Financial Services. Right, so the combination of our Pointnext services, advisory, financial services, really puts a lot of meat behind GreenLake as a really good customer experience around elasticity. >> Okay, now all this stuff is gonna be available in calendar Q4 of 2018, correct? >> Correct. >> Okay, so if you've seen videos like this before, we like to talk about what it is, how it works, and then we like to bring it home with the business impact. So thinking about these four announcements, and you can drill deeper on any one that you like, but I'd like to start, at least, holistically, what's the business impact of all of this? Obviously, you've got Cloud, we talked about some of the trends up front, but what are you guys telling customers is the real ROI?
And they can take those resources and move them on to higher impact projects like DevOps, CloudOps, things of that nature. That's a big impact from a customer perspective. >> So there's a CapEx to OpEx move for those customers that want to take advantage of GreenLake. [Patrick] Yep. >> So certain CFOs will like that story. But I think the other piece that, to me anyway, is most important is, especially in this world of digital transformation, I know it's a buzzword, but it's real. When you go to talk to people, they don't wanna do the heavy lifting of infrastructure management, the day-to-day infrastructure management. A lot of mid-size customers, they just don't have the resources to do it anymore. >> Correct. >> And they're under such pressure to digitize, every company wants to become a software company. Benioff talks about that, Satya Nadella talks about that, Antonio talks about digital transformation. And so it's on CEOs' minds. They don't want to be paying people for these mundane tasks. They really wanna shift them to these digital transformation initiatives and drive more business value. >> Absolutely. So you said it best, right, we wanna drive the customer experience to focus on high-value things that'll enable their digital transformation. So, as a vision, what we're gonna keep on providing, and you've seen that with InfoSight on Nimble, InfoSight for 3PAR, and our vision around AI for the data center, these tasks around data protection, they're repeatable tasks, how to protect data, how to move data, how to mine that data. So if we can provide recommendations and some predictive analytics and experiences to the customers around this, and essentially abstract that and just have the customers focus on defining their SLA, and we're worried about delivering that SLA, then that's a huge win for us and our customers. And that's our vision, that's what we're gonna be providing them. >> Yeah, automation is the key.
You've got some tools in the toolkit to help do that and it's just gonna escalate from here. It feels like we're on the early part of the S-curve and it's just gonna really spike. >> Absolutely. >> All right, Patrick. Hey, thanks for coming in and taking us through this news, and congratulations on getting this stuff done and we'll be watching the marketplace. Thank you. >> Great. Kudos to the team, great announcement, and we look forward to working with you guys again. >> All right, thanks for watching, everybody. We'll see you next time. This is Dave Vellante on theCUBE. (gentle music)
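The GreenLake consumption model Patrick describes, a modest committed baseline with elastic usage metered above it, can be sketched in a few lines. This is purely illustrative: the metric, rates, and baseline figures are invented for the example and are not HPE's actual GreenLake pricing.

```python
def monthly_bill(used_tb, baseline_tb, baseline_fee, rate_per_tb):
    """Utility-style pricing: a small reserved baseline is always paid for,
    and capacity consumed above it is metered per terabyte."""
    overage = max(0.0, used_tb - baseline_tb)
    return baseline_fee + overage * rate_per_tb

# Invented numbers: 100 TB baseline at a flat fee, overage metered per TB.
# Usage swings month to month; the bill follows it elastically.
for used in (80, 100, 160):
    print(used, "TB ->", monthly_bill(used, baseline_tb=100,
                                      baseline_fee=2000.0, rate_per_tb=25.0))
# → 80 TB -> 2000.0
# → 100 TB -> 2000.0
# → 160 TB -> 3500.0
```

The point of the model is the asymmetry: under-use costs only the baseline fee, so the customer never pays up front for headroom they might not consume.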
Wasabi Founder Heats Up Cloud Storage Market
>> Hi everyone, I'm Sam Kahane and you're watching theCUBE, on the ground, extremely excited for our segment here. Wasabi just launched last week on Wednesday. We have their co-founder and CEO with us here today on theCUBE. David, thank you for coming on today. >> Hey, nice to be here Sam. Thank you. >> So, unbelievably exciting. Can you tell the world about Wasabi? >> So if you know what Amazon S3 cloud storage is, you pretty much know what Wasabi is, except we're one-fifth the price and six-times as fast. (laughing) >> Incredible. So, you know, co-founder and CEO of Carbonite decided to start Wasabi. Tell us, why Wasabi? >> Why the name Wasabi? >> Well, the name as well. >> Cause it's hot. (laughing) My co-founder Jeff Flowers, who's one of the great technical geniuses I've ever met in my life, came to me about three years ago, with this paper design for a new storage architecture, and said, "I think we could do something that's going to be far faster and far more efficient in storage than what the cloud providers Google, Amazon and Microsoft are doing," and I said okay, "Well you should go check it out." So he left Carbonite, and we spent about a year doing design work, and eventually we ended up with this design that was so compelling to me that I decided it was time to jump on board, and join Jeff again, and this is this is the sixth company that we founded together since 1980. So we kind of know how to complete each other's sentences. It's been a winning combination, there's been quite a lot of successes there. >> So, I'd love to hear about the vision of Wasabi. >> My vision of Wasabi and cloud storage in general is that cloud storage ought to be like electricity or bandwidth, it should just be a commodity. Right now you have all these silly tiers, you have Coldline and Nearline and Standard and Glacier, and these artificial tiers that Amazon, Google and Microsoft have made to try to protect their high price spread. 
Wasabi is faster than the fastest of them and it's cheaper than the cheapest of them, so why do you need all these silly things in the middle? It's just like electricity, you go to plug your computer or your blender into the wall, you don't have three different plugs, one for great electricity, one for so-so electricity and one for crummy but cheap electricity, you know, you just have one. So one size fits almost all needs, and I think that's the way cloud storage is going to be as well. When we get to that, it'll be best man wins, right? The guy with the best performance and the lowest cost is going to win, and we feel we can compete in that environment. >> So a buzzword I've been hearing is 'immutable buckets', can you tell me about that? >> Yeah, so that's the one functional difference between Amazon S3 and Wasabi, otherwise Wasabi is completely 100% plug compatible with Amazon. You can unplug Amazon, plug in Wasabi and all your applications should work, and the other way around too. That's part of being a commodity, right? Your suppliers should be interchangeable. But, immutable buckets is something which really came from our Carbonite heritage. We know from Carbonite that most data loss is not due to failing disk drives and things like that today, it's stupid mistakes, you know, people accidentally overwrite or delete a file. It's bugs in application software that cause data to get overwritten or deleted. Then you get things like WannaCry, which comes in, grabs all the data on your computer and encrypts it. So immutability means if you store data in an immutable bucket, it cannot be altered, and it cannot be deleted. It can't be deleted by you, it can't be deleted by us, and it certainly can't be deleted by a hacker or somebody breaking in from the outside. So, about 10 or 20 years ago, people invented something called the WORM tape, write-once-read-many, that was really one of the first forms of immutable digital storage.
Once you put your data on there, that was it; when the tape is full, you take it off, put it in the drawer, and it's safe. That's not a very good system by today's standards, but we've built immutability into Wasabi, so that when you create a bucket in Wasabi, and for those people who don't know about object storage technology, a bucket is like a folder, and an object is like a file, when you create a bucket in Wasabi, you can flip a switch and you can say, "I want to make this bucket immutable for 10 years," let's say, and any time you go in and try to erase or alter any of the data that's been written, you just get an error message, which is what the WannaCry virus would have gotten had it tried to encrypt that data. So the only downside of immutability is once you put something in there, you can't go in and clean it up. You're going to be stuck paying to store that data for a long time, but at our price of 0.39 cents per gigabyte per month, I don't think anybody would bother ever trying to clean it up anyway. You know, it's like, when's a good time to go empty that U-Haul storage locker? Eh, I'll write another cheque for $40 and think about it next time. (laughing) >> So your tag-line is hot storage? >> Hot storage, yeah. >> So you launched one week ago, on Wednesday. Tell us about that first week, how crazy was it? >> Well, the only thing we did was some PR, so there were a number of articles that appeared about us, and we were expecting maybe 15, 20 companies would come sign up in the first week, do a free trial. But by 48 hours in, we were over 150, and by one more day we were at over 200. And we kind of had to shut down new sign-ups because it was just more than we could handle. We were just worried that we would get overwhelmed. Now we're trying to catch up, we just put more storage online in the last 24 hours, and now we're working through the stack of people.
I don't know how many more have come in since then, but it's been a lot, so we're working through that now to give people their passcodes so that they can get on the system, hopefully by this time next week we'll be caught up. >> Well congratulations. >> Thanks, thanks! >> Any last words that you want to leave the people with about Wasabi? >> Well anytime you drop the price of anything by 80%, unexpected things are going to happen. When bandwidth suddenly got cheap, you got Netflix and movies over the internet and that kind of stuff, which people hadn't even dreamed about. I'll be really interested to see what people do with really cheap, fast storage. When you think about all these storage intensive apps like Pinterest, Instagram and things that involve videos and so forth, storage has got to be your biggest cost. And most of these apps are free, so the only revenue you're going to get is going to be advertising. I'll bet there are a lot of business models that just won't work at Amazon's prices, but drop those prices by 80%, and now suddenly you say, "Wow, this could be profitable." I'm not going to invent those apps, but I'm sure that some of the people who are signing up for Wasabi today are thinking about things that didn't work in the old regime, but with commodity cloud storage at these low prices, it starts to make sense. So we'll see, I think it's going to change the world. >> I hope so, and it's going to be exciting to watch. >> Yeah, it'll be fun. >> We'll need to catch up again soon and check back in on the growth. But David, thank you for coming on theCUBE tonight! >> You're welcome Sam, thank you. >> And CUBENation, thank you for watching. (Outro music)
Ed Casmer & James Johnson Event Sesh (NEEDS SLIDES EDL)
(upbeat intro music) >> Hello, everyone. Welcome back to theCUBE's presentation of the AWS Startup Showcase. This is season two, episode four, of the ongoing series covering the exciting startups from the AWS ecosystem. Talking about cybersecurity. I'm your host, John Furrier. Here, excited to have two great guests. Ed Casmer, Founder & CEO of Cloud Storage Security. Back, Cube alumni. And also James Johnson, AVP of Research & Development, iPipeline. Here to talk about Cloud Storage Security, antivirus on S3. Gents, thanks for joining us today. >> Thank you, John. >> Thank you. >> So, the topic here is cloud security, storage security. Ed, we had a great Cube conversation previously, earlier in the month. You know, companies are modernizing their apps and migrating to the cloud. That's fact. Everyone kind of knows that. Been there, done that. You know, clouds have the infrastructure, they got the OS, they got protection. But, at the end of the day, the companies are responsible and they're on the hook for their own security of their data. And this is becoming more prominent now that you have hybrid cloud, cloud operations, cloud-native applications. This is the core focus right now and in the next five years. This is what everyone's talking about. Architecture, how to build apps, workflows, team formation. Everything's being refactored around this. Can you talk about how organizations are adjusting, and how they view their data security in light of how applications are being built and specifically, around the goodness of say, S3? >> Yep, absolutely. Thank you for that. So, we've seen S3 grow 20,000% over the last 10 years. And that's primarily because companies like James with iPipeline are delivering solutions that are leveraging this object storage more and above the others. When we look at protection, we typically fall into a couple of categories. The first one is, we have folks that are worried about the access of the data. How are they dealing with it?
So, they're looking at configuration aspects. But, the big thing that we're seeing is that customers are blind to the fact that the data itself must also be protected and looked at. And, so, we find these customers who do come to the realization that it needs to happen, and then asking themselves, "How do I solve for this?" And, so, they need lightweight, cloud-native solutions to deliver that. >> So, what's the blind spot? You mentioned there's a blind spot. What specifically are you seeing? >> Well, so when we get into these conversations, the first thing that we see with customers is, "I need to predict how I access it." This is everyone's conversation. "Who are my users? How do they get into my data? How am I controlling that policy? Am I making sure there's no east-west traffic there, once I've blocked the north-south?" But, what we really find is that the data is the key packet of this whole process. It's what gets consumed by the downstream users. Whether that's an employee, a customer, a partner. And, so, really, the blind spot is the fact that we find most customers not looking at whether that data is safe to use. >> It's interesting. You know, when you talk about that, I think about like all the recent breaches and incidents. "Incidents" they call them. >> Yeah. >> They've really been around user configurations. S3 buckets not configured properly. And this brings up what you're saying, is that the users and the customers have to be responsible for the configurations, the encryption, the malware aspect of it. Don't just hope that AWS has the magic to do it. Is that kind of what you're getting at here? Is that similar? Am I correlating that properly? >> Absolutely. That's perfect. And we've seen it. We've had our own customers, luckily, iPipeline's not one of them, that have actually infected their end users, because they weren't looking at the data. >> Yeah. And that's a huge issue.
So, James, let's get in, you're a customer-partner. Talk about your relationship with these guys and what's it all about? >> Yeah. Well, iPipeline is building a digital ecosystem for life insurance and wealth management industries to enable the sale of life insurance to underinsured and uninsured Americans, to make sure that they have the coverage that they need should something happen. And, our solutions have been around for many years in a traditional data center type of an implementation. And, we're in process now of migrating that to the cloud, moving it to AWS. In order to give our customers a better experience, better resiliency, better reliability. And, with that, we have to change the way that we approach file storage and how we approach scanning for vulnerabilities in those files that might come to us via feeds from third parties, or that are uploaded directly by end users that come to us from a source that we don't control. So, it was really necessary for us to identify a solution that both solved for these vulnerability scanning needs, as well as enabling us to leverage the capabilities that we get with other aspects of our move to the cloud. Being able to automatically scale based on load, based on need. To ensure that we get the performance that our customers are looking for. >> So, tell me about your journey to the cloud, migrating to the cloud, and how you're using S3. Specifically, what led you to determine the need for the cloud-based AV solution? >> Yeah. So, when we looked to begin moving our applications to the cloud, one of the realizations that we had is that our approach to storing certain types of data, was a bit archaic. We were storing binary files in a database, which is not the most efficient way to do things. And, we were scanning them with the traditional antivirus engines, that would've been scaled in traditional ways. So, as our need grew, we would need to spin up additional instances of those engines to keep up with load. 
And we wanted a solution that was cloud-native and would allow us to scan dynamically, without having to manage the underlying details of how many engines I need to have running for a particular load at a particular time. We also wanted to move that out of the application layer and scan those files behind the scenes. So, scanning when the file's been saved in S3. It allows us to scan and release the file once it's been deemed safe, rather than blocking the user while they wait for that scan to take place. >> Awesome. Well, thanks for sharing that. I got to ask Ed and James the same question. Next is, how does all this factor into audits and self-compliance? Because, when you start getting into this level of sophistication, I'm sure it probably impacts reporting, workflows. Can you guys share the impact on that piece of it? The reporting. >> Yeah, I'll start with a comment, and James will have more applicable things to say. But, we're seeing two things. One is, you don't want to be the vendor whose name is in the news for infecting your customer base. So, that's number one. So you have to put something like this in place and figure that out. The second part is, we do hear that under SOC 2, under PCI, different aspects of it, there are scanning requirements on your data. Traditionally, we've looked at that as endpoint data and the data that you see in your on-prem world. It doesn't translate as directly to cloud data, but it's certainly applicable. And if you want to achieve SOC 2 or you want to achieve some of these other pieces, you have to be scanning your data as well. >> James, what's your take? As a practitioner, you're living it. >> Yeah. That's exactly right.
There are a number of audits that we go through, where this is a question that comes up both from a SOC perspective, as well as our individual customers, who reach out, and they want to know where we stand from a security perspective and a compliance perspective. And, very often, this is a question of "How are you ensuring that the data that is uploaded into the application is safe and doesn't contain any vulnerabilities?" >> James, if you don't mind me asking. I have to kind of inquire, because I can imagine that you have users on your system, but also you have third parties, relationships. How does that impact this? What's the connection? >> That's a good question. We receive data from a number of different locations. From our customers directly, from their users, and from partners that we have, as well as partners that our customers have. And, as we ingest that data, from an implementation perspective, the way we've approached this, there's minimal impact there in each one of those integrations, because everything comes into the S3 bucket and is scanned before it is available for consumption or distribution. But, this allows us to ensure that no matter where that data is coming from, that we are able to verify that it is safe before we allow it into our systems or allow it to continue on to another third party, whether that's our customer or somebody else. >> Yeah. I don't mean to get in the weeds there, but it's one of those things where, you know, this is what people are experiencing right now. You know, Ed, we talked about this before. It's not just siloed data anymore. It's interactive data. It's third party data from multiple sources. This is a scanning requirement. >> Agreed. I find it interesting, too. I think James brings it up. We've had it in previous conversations, that not all data's created equal. Data that comes from third parties that you're not in control of, you feel like you have to scan and other data you may generate internally. 
You don't have to be as compelled to scan that, although it's a good idea. But, as long as you can sift through and determine which data is which, and process it appropriately, then you're in good shape. >> Well, James, you're living the cloud security, storage security situation here. I got to ask you, if you zoom out, not get in the weeds, and look at kind of the boardroom or the management conversation. Tell me about how you guys view the data security problem. I mean, obviously it's important, right? So, can you give us a level of, you know, how important it is for iPipeline and with your customers, and where does this S3 piece fit in? I mean, when you guys look at this holistically, for data security, what's the view? What's the conversation like? >> Yeah. Well, data security is critical. As Ed mentioned a few minutes ago, you don't want to be the company that's in the news because some data was exposed. That's something that nobody has the appetite for. And, so, data security is first and foremost in everything that we do. And that's really where this solution came into play, making sure that we had not only a solution, but a solution that was the right fit for the technology that we're using. There are a number of options. Some of them have been around for a while. But this is focused on S3, which we were using to store these documents that are coming from many different sources. And, you know, we have to take all the precautions we can to ensure that something that is malicious doesn't make its way into our ecosystem or into our customers' ecosystems through us. >> What's the primary use case that you see the value here with these guys? What's the "aha" moment that you had? >> With Cloud Storage Security, specifically, it really goes beyond the security aspects of being able to scan for vulnerable files. There are a number of options, and they're one of those.
But for us, the key was being able to scale dynamically without committing to a particular load, whether that's under-committing or over-committing. As we move our applications from a traditional data center type of installation to AWS, we anticipated a lot of growth over time. And being able to scale up very dynamically, you know, literally moving a slider within the admin console was key to us, to be able to meet our customers' needs without overspending, by building something that was dramatically larger than we needed in our initial rollout. >> Not a bad testimonial there, Ed. I mean. >> I agree. >> This really highlights applications using S3 more in the file workflow for the application in real time. This is where you start to see the rise of ransomware, other issues, and scale matters. Can you share your thoughts and reaction to what James just said? >> Yeah, I think it's critical. I mean, as the popularity of S3 has increased, so has the fact that it's an attack vector now, and people are going after it. Whether that's to plant bad, malicious files, whether it's to replace code segments that are downloaded and used in other applications, it is a very critical piece. And when you look at scale, and you look at the cloud-native capability, there are lots of ways to solve it. You can dig a hole with a spoon, but a shovel works a lot better. And, in this case, you know, we take a simple example like James. They did a weekend migration, so, they've got new data coming in all the time. But, we did a massive migration. 5,000 files a minute being ingested. And, like he said, with a couple of clicks, scale up, process that over a sustained period of time, and then scale back down. So, you know, I've said it before. I said it on the previous one. We don't want to get in the way of someone's workflow. We want to help them secure their data and do it in a timely fashion, that they can continue with their proper processing and their normal customer responses.
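The ingest-and-scan flow Ed and James describe, files landing in S3 at thousands per minute, scanned behind the scenes and released only once deemed safe, can be sketched as an event handler. This is a hypothetical sketch, not Cloud Storage Security's actual implementation; the scan_file callback stands in for a real antivirus engine, and the record layout mirrors S3's event notification format.

```python
import urllib.parse


def handle_s3_upload(event, scan_file):
    """Sketch of event-driven scanning: fires when a file lands in S3,
    so the uploading user is never blocked waiting on the scan.
    `scan_file(bucket, key)` returns True when the object is clean."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        verdict = "clean" if scan_file(bucket, key) else "infected"
        # A real deployment would tag the object or move infected files
        # to a quarantine bucket here; this sketch just records verdicts.
        results.append({"bucket": bucket, "key": key, "verdict": verdict})
    return results


# Example: a stub scanner that flags one known-bad key.
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "statement.pdf"}}}]}
print(handle_s3_upload(event, scan_file=lambda b, k: k != "malware.bin"))
# → [{'bucket': 'uploads', 'key': 'statement.pdf', 'verdict': 'clean'}]
```

Because each event invocation is independent, the scanning tier can fan out and back in with load, which is the "scale up, process, scale back down" behavior described above.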
>> Yeah. Keeping friction low always has to be key. I know you're in the marketplace with your antivirus for S3 on AWS. People can just download it. So, people are interested, go check it out. James, I got to ask you, and maybe Ed can chime in over the top, but it seems so obvious. Data. Secure the data. Why is it so hard? Why isn't this so obvious? What's the problem? Why is it so difficult? Why are there so many different solutions? It just seems so obvious. You know, you got ransomware, you got injection of different malicious payloads. There's a ton of things going on around the data. Why is this? This is so obvious. Why isn't it solved? >> Well, I think there have been solutions available for a long time. The challenge, the difficulty that I see, is that it is a moving target, as bad actors learn new vulnerabilities, new approaches. And as new technology becomes available, that opens additional attack vectors. That's the challenge: keeping up with a changing world, including keeping up with the new ways that people are finding to exploit vulnerabilities. >> Yeah. And you got sensitive data at iPipeline. You do a lot of insurance, wealth management, all kinds of sensitive data, super valuable. You know, it just reminds me of the Sony hack, Ed, years ago. You know, companies are responsible for their own defense. I mean, in cybersecurity, there's no government help for sure. I mean, companies are on the hook, as we mentioned earlier at the top of this interview. This really has highlighted that IT departments have to evolve to large-scale cloud, you know, cloud-native applications, automation, AI, machine learning all built in, to keep up at that scale. But, also, from a defense standpoint, I mean, James, you're out there, you're on the front lines. You got to defend yourself, basically, and you got to engineer it. >> A hundred percent. And just to go on top of what James was saying, I think there are a couple of big factors, and we've seen this.
There are skill shortages out there. There's also just a pure lack of understanding. When we look at Amazon S3, or object storage in general, it's not an executable file system. So, people sort of assume, "Oh, I'm safe. It's not executable. So, I'm not worried about it traversing my storage network." And they also probably have the assumption that the cloud provider, Amazon, is taking care of this for 'em. And, so, it's this "aha" moment, like you mentioned earlier. That you start to think, "Oh, it's not about where the data is sitting, per se, it's about scanning it as close to the storage spot as possible. So, when it gets to the end user, it's safe and secure." And you can't rely on the end user's environment and systems to be in place and up to date to handle it. So, it's really that lack of understanding that drives some of these folks into this, but time after time, we'll walk into customers and they'll say the same thing you said, John. "Why haven't I been doing this all along?" And, it's because they didn't understand that it was such a risk. That's where that blind spot comes in. >> James, just a final note on your environment. What are your goals for the next year? How are things going over there on your side? How do you look at the security posture? What's on your agenda for the next year? How are you guys looking at the next level? >> Yeah, well, our goal as it relates to this is to continue to move our existing applications over to AWS, to run natively there, which includes moving more data into S3 and leveraging the Cloud Storage Security solution to scan that and ensure that there are no vulnerabilities getting in. >> And the ingestion? Are there bottlenecks, log jams? How do you guys see that scaling up? I mean, what's the strategy there? More, just add more S3? >> Well, S3 itself scales automatically for us, and the Cloud Storage Security solution gives us levers to pull to do that.
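The pattern Ed describes, scanning as close to the storage as possible so end users only ever see vetted objects, comes down to a routing decision made on every newly written object. A minimal, hypothetical sketch of that decision; the verdicts and key prefixes are invented for illustration, and detection here is reduced to the industry-standard EICAR test string rather than a real engine:

```python
EICAR_MARKER = b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE"

def scan(data: bytes) -> str:
    """Stand-in for a real scanning engine: flags only the EICAR
    test string that antivirus products recognize for verification."""
    return "infected" if EICAR_MARKER in data else "clean"

def route_object(key: str, verdict: str) -> str:
    """Place a scanned object under a prefix downstream readers can
    trust, so end users never touch unscanned or infected data."""
    if verdict == "clean":
        return f"clean/{key}"        # safe to serve
    if verdict == "infected":
        return f"quarantine/{key}"   # isolated, never served
    return f"review/{key}"           # scan error: hold for a human

# On each upload event, scan and route before anyone downstream reads it.
dest = route_object("uploads/policy.pdf", scan(b"%PDF-1.7 ..."))
# -> "clean/uploads/policy.pdf"
```

The scale lever from the conversation is simply how many of these scan-and-route workers run in parallel.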
As Ed mentioned, we ingested a large amount of data during our initial migration, which created a bottleneck for us as we were preparing to move our users over. We were able to, you know, make an adjustment in the admin console and spin up additional processes entirely behind the scenes, and that broke the log jam. So, I don't see any immediate concerns there with being able to handle the load. >> You know, the terms cloud-native and, you know, hyperscale-native, cloud-native, OneCloud, it's hybrid. All these things are native. We have antivirus native coming soon. And, I mean, what you're basically doing is making it native into the workflows. Security native, and soon there are going to be security clouds out there. We're starting to see the rise of these new solutions. Can you guys share any thoughts or vision around how you see the industry evolving and what's working and what's needed? Ed, we'll start with you. What's your vision? >> So, I think the notion of being able to look at and view the management plane and control that has been where we're at right now. That's what everyone seems to be doing and going after. I think there are niche plays coming up, storage is one of them. But we're going to get to a point where storage is just a blanket term for where you put your stuff. I mean, it kind of already is that, but, in AWS, it's going to be less about S3, less about WorkDocs, less about EBS. It's going to be just storage, and you're going to need a solution that can span all of that, to go along with where we're already at, at the management plane. We're going to keep growing the data plane. >> James, what's your vision for what's needed in the industry? What are the gaps? What's working? And where do you see things going? >> Yeah, well, I think on the security front, specifically, Ed's probably a little bit better equipped to speak to that than I am, since that's his primary focus.
But I see the need for expanded solutions that are cloud-native, that fit nicely with the Amazon technologies, whether that comes from Amazon or other partners, like Cloud Storage Security, to fill those gaps. We're focused on, you know, the financial services and insurance industries. That's our niche. And we look to other partners, like Ed, to help be the experts in these areas. And so that's really what I'm looking for: you know, the experts that we can partner with that are going to help fill those gaps as they come up and as they change in the future. >> Well, James, I really appreciate you coming on sharing your story. Ed, I'll give you the final word. Spend a minute to talk about the company. I know Cloud Storage Security is an AWS partner with the Security Software Competency, and is one of, I think, 16 partners listed in the competency and data category. So, take a minute to explain, you know, what's going on with the company, where people can find more information, how they buy and consume the products. >> Okay. >> Put the plug in. >> Yeah, thank you for that. So, we are a fast-growing startup. We've been in business for two and a half years now. We have achieved our Security Competency. As John indicated, we're one of 16 data protection Security Competency ISV vendors, globally. And our goal is to expand and grow a platform that spans all storage types that you're going to be dealing with, and answer basic questions. "What do I have and where is it? Is it safe to use?" And, "Am I in proper control of it? Am I being alerted appropriately?" You know, so we're building this storage security platform, very laser-focused on the storage aspect of it. And, if people want to find out more information, you're more than welcome to go and try the software out on Amazon Marketplace. That's basically where we do most of our transacting. So, find it there, start a free trial, reach out to us directly from our website.
We are happy to help you in any way that you need it, whether that's storage assessments, figuring out what data is important to you, and how to protect it. >> All right, Ed, thank you so much. Ed Casmer. Founder & CEO of Cloud Storage Security and of course James Johnson, AVP Research & Development, iPipeline customer. Gentlemen, thank you for sharing your story and featuring the company and the value proposition. It's certainly needed. This is season two, episode four. Thanks for joining us. Appreciate it. >> Thanks, John. >> Okay. I'm John Furrier. That is a wrap for this segment of the cybersecurity, season two, episode four. The ongoing series covering the exciting startups from Amazon's ecosystem. Thanks for watching. (gentle outro music)
Joe CaraDonna and Devon Reed, Dell EMC | Dell Technologies World 2020
>> Voiceover: From around the globe, it's theCUBE with digital coverage of Dell Technologies World Digital Experience brought to you by Dell Technologies. >> Welcome to theCUBE's coverage of Dell Technologies World 2020, the Digital Experience this year. I'm Lisa Martin, pleased to be joined by two CUBE alumni from Dell EMC. Please welcome Joe Caradonna, the VP and Cloud Storage CTO. Joe, good to see you again, even though quite socially distant. >> Yeah, thank you, it's great to be here. >> And Devon Reed is also joining us, the Senior Director of Product Management. Devon, how are you? >> I'm good, how are you doing? >> Good. >> Nice to be here, thank you. >> Nice to be chatting with you guys, although very, very socially distant, following rules. It wouldn't be a Dell Technologies World without having you guys on theCUBE, so we appreciate you joining us. So let's dig in. So much has happened in the world since we last spoke with you. But one of the things that happened last year, around a year ago, was the Dell On Demand program was launched. And now here we are nearly a year later, when Michael Dell was just talking about, "Hey, Dell's plan is to go "and deliver everything as a service." We've heard some of your competitors kind of going the same route, some of it spurred by COVID. Talk to us, Devon, we'll start with you, about what this shift to as-a-service means and what it means specifically for storage. >> Yeah, certainly. So first and foremost, what we talked about last year with respect to On Demand, Dell Technologies On Demand, we've had great success with that program. But before I get into what we're doing with as-a-service, I really want to talk about why we're doing the as-a-service. And when we talk to customers and partners, and when we look at the trends in the market, what we're seeing is that customers are more and more wanting to consume technology infrastructure as a service, in an OPEX manner.
And analysts are revising those estimates up almost daily. And what we're seeing is one of the things that's driving that is actually why we're here in this remote session as opposed to being in Vegas, doing this. And it's really the global uncertainty around the pandemic. So it's driving the need to free up cash and consume this infrastructure more as a service. Now, as Michael said... Yeah, as Michael said, we have the broadest set of infrastructure offerings in the market and we are number one in most categories. And we're in the process of building out an offer structure that cuts across all the different infrastructure components. But to get real specific on what we're doing with storage as a service, we are in the process of building out the first true as-a-service offering for our infrastructure, starting with storage. It'll be a private preview as of Q4, by the end of this fiscal year, and generally available in the first half of next year. And what we're doing is taking the infrastructure, the Dell Technologies storage, and we're flipping the business model: as opposed to buying it outright, the customers actually just consume it as a service. So they have a very simple consumption model where they just pick their outcome, they pick their storage service, they pick their performance, they pick their capacity, and we deliver that service to their on-premises site. >> Let me unpack "outcomes" a bit, 'cause I saw that in some of the information online, outcome-driven. What do you mean by that, and can you give us some examples of those outcomes that customers are looking to achieve? >> Yeah, so in today's world, the way people mostly consume infrastructure, or at least storage, is that they say, "I need a storage product." And what the customers do is they work with our sales representatives and say, "I need a XYZ product. "Maybe it's a PowerStore and I need this much capacity.
"I can pick all of the components, "I can pick the number of drives, "the type of drives there are." And that's really from a product perspective. And what we're doing with the, as-a-service, is we're trying to flip the model and really drive to what the business outcome is. So the business outcome here is really, I need block storage, I need this performance level, I need this much capacity. And then we basically ship the infrastructure, we think, that better suits those outcomes. And we're making changes across our entire infrastructure value chain to really deliver these service. So we try to deliver these much quicker for the customer. We actually manage the infrastructure. So it enables customers to spend less time managing their infrastructure and more time actually operating the service, paying attention to their business outcomes. >> Got it, and that's what every customer wants more of is more time to actually deliver this business outcomes and make those course corrections as they need to. Joe, let's talk to you for a bit. Let's talk, what's going on with cloud? The last time we saw you, a lot of change as we talked about, but give us a picture of Dell's cloud strategy. From what you guys are doing on-prem to what you are doing with cloud partners. What is this multi-pronged cloud strategy actually mean? >> Yeah, sure, I mean, our customers want hybrid cloud solutions and we believe that to be the model going forward. And so actually what we're doing is, if you think about it, we're taking the best of public cloud and bringing it on-prem, and we're also taking the best of on-prem and bringing it to the public cloud. So, you know, Devon just talked to you about how we're bringing that public cloud operation model to the data center. But what we've also done is bring our storage arrays to the cloud as a service. And we've done that with PowerStore, we've done that with PowerMax, and we've done that with PowerScale. 
And in the case of PowerScale for Google Cloud, I mean, you get the same performance and capacity scale-out in the cloud as you do on-prem. And the systems interoperate between on-prem and cloud, so it makes for easy, fluid data mobility across these environments. And for the first time it enables our customers to get their data to the cloud in a way that they can bring their high-performance file workloads to the cloud. >> So talk to me a little bit about, you mentioned the PowerScale for Google Cloud service, is that a Dell hardware-based solution? How does that work? >> Yeah, the adoption has been great. I mean, we launched back in May, and since then we've brought on customers in oil and gas, and eCommerce, and in health as well. And we're growing out the regions. We're going to be announcing a new region in North America soon, and we're going to be building out in APJ and EMEA as well. So, customer response has been fantastic, looking forward to growing it. >> Excellent. Devon, back to you, let's talk about some of the things that are going on with PowerProtect DD, some new cloud services there too. Can you unpack that for us? >> So, Joe was talking about how we were taking our storage systems and putting them in the cloud. So let me just back up and kind of introduce real quickly, or reintroduce, our Dell Technologies Cloud Storage Services. That's really our primary storage systems, from Unity XT, the PowerStore, to PowerScale, to ECS, housed in a co-location facility right next to the hyperscalers. And then that enables us to provide a fully managed, multi-cloud service offering to our customers. So what we're doing is we're extending the Dell Technologies Cloud Storage Services to include PowerProtect DD. So we're bringing PowerProtect DD into this managed services offering so customers can use it for cloud long-term retention, backup, archiving, and direct backup from a multicloud environment.
So extending what we've already done with the Dell Technologies Cloud Storage Services. >> So is that almost kind of like a cloud-based data protection solution for those workloads that are running in the cloud: VMs, SaaS applications, physical servers, spiral data, things like that? >> Yeah, there's several use cases. So you could have a primary block storage system on your premises and you could actually be providing direct backup into the cloud. You could have backups that you have on-premises that you could then be replicating, with PowerProtect data domain replication, to the cloud. And you could also have data in AWS, or Azure, or Google that you could be backing up directly to the PowerProtect domain in this service. So there's multiple use cases. >> Got it, all right. Joe, let's talk about some of the extensions of cloud you guys have both been talking about the last few minutes. One of the recent announcements was about PowerMax being cloud-enabled, and that's a big deal, to cloudify something like that. Help us understand the nature of that, the impetus, and what that means now and what customers are able to actually use today. >> Yeah sure, I mean, we launched PowerMax as a cloud service about a year and a half ago with our partner, Faction. And that's for those customers that want those tier zero, enterprise-grade data capabilities in the cloud. And not just one cloud, it also offers multicloud capabilities for both file and block. Now, in addition, at Dell Tech World we're launching additional cloud mobility capabilities for PowerMax, where, let's say you have a PowerMax on-prem, you could actually do snapshot shipping to an object repository. And that could be in AWS, that can be in Azure, or it could be local to our ECS object store. In addition, in the case of Amazon, we go a step further: if you do snapshot shipping into Amazon S3, you can then rehydrate those snapshots directly into EBS.
And that way you can do processing on that data in the cloud as well. >> Give us an idea, Joe, the last few months or so, what have some of your customer conversations been like? I know you're normally in front of customers all the time. Dell Tech World is a great example. I think last year there were about 14,000 folks there; it was huge. And we're all so used to that three-dimensional engagement, more challenging to do remotely, but talk to me about some of the customer conversations that you've had, and how they've helped influence some of the recent announcements. >> Yeah sure, customers... It might sound a little cliche, but cloud is a journey. It's a journey for our customers. It's a journey for us too, as we build out our capabilities to best serve them. But their questions are, "I want to take advantage "of that elastic compute in the cloud." But maybe the data storage doesn't keep up with it. In the case of when we go to PowerScale for Google, the reason why we brought that platform to the cloud is 'cause you can get hundreds of gigabytes per second of throughput through that. And for our customers that are doing things like processing genomic sequencing data, they need that level of throughput, and they want to move those workloads into the cloud. The compute is there, but the storage systems to keep up with it were not. So by us bringing a solution like this to the cloud, now they can do that. So we see that with PowerScale. We see a lot of that with file in the cloud, because the file services in the cloud aren't as mature as some of the other ones, like with block and object. So we're helping fill some of those gaps and getting them to those higher performance tiers. And as I was mentioning, with things like PowerMax and PowerStore, it's extending their on-prem presence into the public cloud. So they can start to make decisions not based on a capability, but more based on the requirements for where they want to run their workloads.
>> And let's switch gears to talking about partners now. Dell has a huge partner ecosystem. We always talk with those folks on theCUBE as well, every year. Devon, from a product management perspective, tell me about some of the things that are interesting to partners and what the advantages are for partners with this shift in how Dell is going to be delivering, from PCs, to storage, to HCI, for example. >> Yeah exactly, so, Joe mentioned that it's really a journey, and Joe talked a lot about how customers are maybe not (indistinct) completely going to a hyperscaler or to a complete public cloud. And what we're hearing is there's a lot of customers that are actually wanting the cloud-like experience, but wanting it on-prem. And we're hearing from our partners almost on a daily basis. I have a lot of partner and customer conversations where they want to be involved in delivering this as a service. Through their customers, they want to maintain that relationship, derive that value, and in some cases even provide the services for them. And that's what we're looking to do. As the largest infrastructure provider with the broadest base of partnerships, we have an advantage there. >> Are there any specific partner certification programs that partners can get into to help start rolling this out? >> At this point, we are trying to build it, but at this point we have nothing to announce here. But that's something that we're actively working on, so stay tuned for that. >> I imagine there will be a lot of virtual conversations at the digital Dell Tech World this year, between the partner community, when all of these things are announced. And you get those brains collectively together, although obviously virtually, to start iterating on ideas and developing things that might be great to programmatize down the road. And, Joe, last question for you, second to last question actually, is this: this year, as we talked about a number of times, everyone's remote, everyone's virtual.
It's challenging to get that level of engagement. We're all so used to being in person and all of the hallway conversations, even, that you have when you're walking around the massive show floor, for example. What can participants and attendees expect from your perspective this year at Dell Technologies World? Will they be able to get the education and that engagement that Dell really wants to deliver? >> Yeah, well, clearly we had to scale things back quite a bit; there's no way around that. But we have a lot of sessions that were designed to inform them of the new capabilities we've been building out, and not just for cloud, but across the portfolio. So I hope they get a lot out of that. We have some interactive sessions in there as well, for some interactive Q and A. And you're right, I mean, a challenge for us is connecting with the customer in this virtual reality. We're all at home, right? The customers are at home. So we've been on Zoom like never before, reaching out to customers to better understand where they want to go, what their challenges are, and how we can help them. So I would say we are connecting. It's a little different and requires a little more effort on everyone's part. We just can't all do it in the same day anymore. It is just a little more spread out. >> Well, then it kind of shows the opportunity to consume things on demand. And as consumers, we sort of have this expectation that we can get anything we want on demand. But you mentioned, Joe, in the second to last question, this is the last one. But you mentioned, everybody's at home. You have to tell us about that fantastic guitar behind you. What's the story? >> Every guitar has a story. I'll just say, for today, look, this is my tribute to Eddie Van Halen. We're going to miss him for sure. >> And I'll have the audience know, I did ask Joe to play us out. He declined, but I'm going to hold him to that for next time, 'cause we're not sure when we're going to get to see you guys in person again.
Joe and Devon, thank you so much for joining me on the program today. It's been great talking to you. Lots of things coming, lots of iterations, lots of influence from the customers, influence from COVID and we're excited to see what is to come. Thanks for your time. >> Both: Thank you so much. >> From my guests, Joe Caradonna and Devon Reed, I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World 2020, the Digital Experience. (soft music)
Tom Spoonemore, VMware and Efri Natel Shay, Dell Technologies | VMworld 2020
(bright music) >> Announcer: From around the globe, it's "theCUBE", with digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stu Miniman, and this is "theCUBE's" coverage of VMworld 2020. Of course, such a broad ecosystem in the VMware environment. Been talking a lot, of course, this year, about what's happened in the Cloud Native space. vSphere 7 has Kubernetes coming into the virtualized environment. And one of those key pieces of doing cloud is you need to make sure data protection still works. And, of course, VMware has a long history working with lots of companies. In this segment, we're going to be digging into the joint VMware and Dell solution for data protection. So, happy to welcome to the program. First, I have, from VMware, Tom Spoonemore. He is a product line manager for Modern Application Platform with VMware. And welcome back to the program, one of our CUBE alumni, Efri Nattel-Shay, who is with Dell Technologies, Director of Data Protection and Cloud Native apps. Efri, welcome back, Tom, welcome to the program. >> Thank you very much, it's good to be here. >> So, Tom, I kind of teed it up in my intro. VMware, for the longest time, for as long as I can remember, we've really talked about that ecosystem, those joint solutions. I remember, back when we started "theCUBE", in 2010, you'd go there and it would be, oh, there's $15, no, $20, for every dollar that you spend on VMware that the ecosystem kind of pulls along. When VMware started building the VMware Cloud Foundation and the VMware cloud solutions, data protection really went along with it. So, the integrations that they've done with vSphere have held them in there as the environment's evolved. Tanzu Kubernetes, there's a lot of new pieces. But I think some of those principles have stayed the same. So, why don't you start us off.
Tell us a little bit, philosophically, how is VMware treating this space, and how does data protection fit into it, and then, Efri, we'll get your take on it, too. >> Yeah, sure, absolutely. So, from the perspective of VMware and the ecosystem, as you say, we want to be very inclusive. We want to bring the ecosystem and our partners along with what we're doing, regardless of what space it is, and in the Modern Applications Platform and Cloud Native tooling, we're very much thinking along the same lines. And as it relates to data protection specifically, Cloud Native is a place where, mainly, it's been thought of as a place for stateless applications, but what we're seeing in people's deployments is more and more stateful applications are beginning to move to Kubernetes and into containers. And so the question then becomes, what do you do for data protection of those applications that are deployed into Kubernetes? And so, with Tanzu, and specifically Tanzu Mission Control, we have included a data protection capability, along with the other capabilities that come with Mission Control, that allows you to provide data protection for your fleet of Kubernetes clusters, regardless of which distribution, regardless of which cloud they're running on, and regardless of how many teams you might have running on a particular cluster or set of clusters. And so, for this reason, we have introduced a data protection capability that is focused around our open source project called Velero, and Mission Control operates Velero in your clusters from a central UI, API, and CLI. That allows you to do data protection, initiating schedules of backups, doing restores, and even migration from cloud to cloud, from a single control point. And part of this vision is not only providing an API that we can handle directly with our own Velero-based implementation, but also opening that up to partners.
And this is where we're working with Dell, specifically, to be able to provide that single API, but yet have Dell, for instance, with their PowerProtect solution, be able to plug in and be a data protection provider underneath Tanzu Mission Control. And so, that's the work that we're doing together to help satisfy this vision that we have for data protection in the Cloud Native space. >> Yeah, agree 100% with Tom. Like Tom has said, when we looked at customer environments three years ago, people talked mainly about stateless applications, but over time, when more storage solutions, persistent data solutions came along, there came the need to not only provision the data, but also protect it, and be able to do backups, and restores, and cyber recovery solutions, and disaster recovery, and the whole set of use cases that allow a full life cycle of data along the Cloud Native set of applications, not just a traditional one. And what we've seen, we're talking, obviously, with a lot of customers, joint customers with VMware, customers that use our storage solutions, as well as others, on-prem and in the cloud. And what they have shown us there is that you have the IT infrastructure people on one hand, which have certain needs, and there is the new set of users, the DevOps people, who are writing applications in a new way, and they need to communicate and they need a solution that fits both of them. So, with VMware, with the community, with Velero, we are introducing a solution that is capable of doing management both for the DevOps people, as well as for the infrastructure team. And, a year ago, we talked about this coming up, and now it's really there, and it's doing great. >> Oh, Efri, I'm so glad you brought up some of those organizational issues, because it's not just, oh, we have some new applications, and, of course, we need to do data protection. Can you bring us inside a little bit? Your customers, are they aware of what they need to do?
Is it central IT that's coming over and telling the DevOps team, hey, don't forget, security, data protection, still super important. How does that engagement go, and what does that change for the Dell field and the channel? >> Yeah, I think that the more successful organizations really have that kind of dialogue. So, the developers are not operating in silos. They're not doing things themselves. They do, for some of the use cases, need to copy data for their own use, but they understand that there are also organizational needs. Someone needs to sign off that the audits pass, that the SLAs are in compliance, that the regulations are met. So, all of these things, someone needs to do them. And there is a mutual recognition that there is a role for each set of people and for each set of use cases. >> Yeah, I would agree with that. One of the things that we're seeing, particularly as you think about Kubernetes as a multitenant kind of platform, what we're seeing is that central IT operations still wants to make sure that backups are happening with stateful applications, but more and more they're relying on and providing self-service capabilities to line of business and DevOps, to be able to back up their applications in the way that's best for those applications. It's a recognition of domain expertise for a particular application. So, what we've done with Mission Control is allowed central IT to define policy. And those policies then give the framework, or guidelines, if you will, that then allow the DevOps teams to make the best choices within their own field of expertise and for their own applications. >> Yeah, and what we've seen is some of the organizations really like full control by central IT, and some customers have told us, don't give anything to the developers, but most of them are asking for some self-service capabilities for the developers. But then, who is setting the policy? Who is saying, okay, I have a gold policy for data protection?
Does it mean I replicate to another site? Does it mean I do long-term retention for a month, or for a year? That is for someone in central IT to set up. So, saying what the policy means, or what it actually is, is the job of central IT, whereas, this application needs application consistency, and it is of gold policy, that oftentimes is the best knowledge and domain expertise of the developer. >> So, Tom, you mentioned Tanzu Mission Control, which is the management solution. Tanzu is a portfolio. Can you help walk us through the relevant pieces here that are part of this joint solution? >> Yeah, sure. So, Tanzu is really a portfolio of applications, or a portfolio of solutions, as you've said. It's really along three main pillars. It's what we call build, run and manage. Tanzu Mission Control fills in, along with our Tanzu Observability and Tanzu Service Mesh, in our manage pillar. The build pillar is more along the lines of supporting developing of modern applications, developing and deploying modern applications. So, many of the technologies that have come from our acquisitions of Pivotal, as well as Bitnami, make up that pillar, and these are technologies that are coming to the fore, and you'll hear more and more about at this VMworld and going forward. Our run pillar is really where you'll find Tanzu Kubernetes Grid. Now, this is our distribution, but it's more than just a distribution of Kubernetes. It's a distribution of Kubernetes, along with all the tools that you would need to be able to deploy modern applications. So, all of these three pillars come together, along with services provided by Pivotal Labs, to really give you a full, multifaceted platform for deploying and operating modern applications. >> Great, and Efri, where are there integrations there? How does the storage fit in has been a discussion we've been having for a few years now when it comes to Kubernetes.
>> Yeah, basically, PowerProtect integrates with all of these levels that Tom has mentioned, starting with the lowest levels of integration. With the storage, VMware has Cloud Native storage solutions, which allow things like incremental snapshots to be taken from the environment. And we're using this mechanism in order to copy data efficiently from the TKG, Tanzu Kubernetes Grid, environment, out of the cluster, into a space-efficient Data Domain as a target site. So, that's a storage integration. Then, there is qualification and support for the various run environments that Tom has mentioned, the Tanzu Kubernetes Grid, and Tanzu Kubernetes Grid Integrated, as well as things that we're working on with VMware in order to enable protection for what has been called Project Pacific, which really allows you very sophisticated capabilities of running multiple Kubernetes clusters, using the Kubernetes cluster API capabilities. So, you can spin up a cluster very, very quickly with VMware. And then, we can take backups of this environment up to a Data Domain target site. And, finally, we've been working with Tom, spending tons of time and effort on the integration between Tanzu Mission Control and PowerProtect. So, allowing cloud, multicloud, multilocation environments to be provisioned and monitored by Tanzu Mission Control, but also protected using PowerProtect. >> Yeah, so, Tom, we talked about supporting the ecosystem, and it's a much faster cadence now than it was in the past. It used to be, it felt like every other year at VMworld, we got together and talked about the major vSphere release. Of course, in the container, in Kubernetes world, we're having a much faster cadence. So, could you just help us understand, what of this is generally available today? We saw vSphere 7 back in the spring. The update, right ahead of VMworld, that really extended Kubernetes beyond just VCF, to be able to be an all vSphere 7 environment.
So, we know some of this is here on the roadmap, so help map this out for us, what's here today from VMware and what the timeline is we expect for all of these pieces we've been discussing. >> Yeah, absolutely. So, Mission Control shipped in March. So we're still relatively new, but as you say, we run Cloud Native ourselves, and so we're releasing new features, new capabilities, literally every week. We have a weekly cadence for release. Our data protection capability was just introduced at the end of June, so it's fairly new, and we are still introducing capabilities, like bring your own storage, doing scheduling of backups, and this kind of thing. You'll see us adding more and more cloud providers. We have been working to open up the platform to make it available to partners. And this is, just generally, with Mission Control, across the board, but specifically, when it comes to Dell, and PowerProtect, the data protection capability, this is something that we are still actively working on, and it is past the architecture stage, but it's probably still a little ways out before we can deliver on it, but we are working on it diligently, and definitely expect to have that in the product, and available, and really providing a basis for integrations with other providers as well. >> Yeah, and in terms of PowerProtect, we have told the audience about a tech preview a year ago, and since then we have shipped a number of releases. We are having a quarterly cadence. So, it has been available for general consumption for quite some time. Talking about the integration layers that we have mentioned before, we are the first stack to protect VMs and Kubernetes and applications using the same platform, the same UI, the same policies, everything looks the same. And we have recently introduced capabilities such as application consistency for a number of applications. The support for TKG is available now.
And, as Tom has said, we are working on further integrations, such as the integration with Tanzu Mission Control from VMware. >> Wonderful, I want to get a final word from both of you. Efri, we'll start with you. We've got this regular cadence coming up. We know we're only a couple of weeks away from DTWE, the Dell Technology World Experience, where, of course, theCUBE will be. What should we look for in the rest of 2020, or any final comments that you have for customers that might be looking at this environment? >> Sure, I think there are two trends that I'm seeing, and they're just getting stronger over the years. The first thing is multicloud, and multicloud means many things to different people, but, basically, every customer that we are speaking to is talking about, I want to run things on-prem, but I also need to run these workloads in the hyperscaler. And I need to move from one hyperscaler region to another, or between hyperscalers, and they want to run this distribution here, and the other distribution there. And there are many combinations of stacks and Database-as-a-Service and other components of the infrastructure that different developers are using on-prem and in the cloud. So, I expect this to go even further, and solutions like PowerProtect and TKG can help customers to do that job, and, of course, Tanzu Mission Control, to monitor and manage this environment. Secondly, I think that protection is going to follow the workloads more. So, the application is no longer just the VM. Obviously, it's becoming many different components that are starting to span across locations and across environments. And again, the protection nature of these is going to change according to where and how these workloads are being provisioned. >> Yeah, and I would say the same thing about Mission Control, very much multicloud-focused. Today it's largely an AWS-focused solution. We're changing to add more flexible storage options, more clouds.
Azure is something that we'll be doing in the short term, Google Cloud Platform and Google Cloud Storage after that, as well as just the ability to use your own on-prem storage for your backup targets. Also, we're going to be focusing on driving more policy-driven backup. So, being able to define policies for groups of clusters, define RTO and RPO for groups of clusters, allowing Mission Control to help determine what the individual backup policy should be for that particular asset. And continuing to work with Dell and other partners to help extend our platform and open it up for other data protection providers. >> Tom and Efri, thanks so much for the updates. Tom, welcome to being a CUBE alumni, and Efri, I'm sure we'll be seeing you and the team in the near future. >> Thank you. >> Thank you so much. >> Stay with us for more coverage from VMworld 2020. I'm Stu Miniman, and as always, thank you for watching theCUBE. (bright music)
Vertica Big Data Conference Keynote
>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference Keynote Session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number who have registered to attend this virtual event. We were determined to balance your health, safety and your peace of mind with the excitement of the Vertica BDC. This is a very unique event. Because as I hope you all know, we focus on engineering and architecture, best practice sharing and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow, and Pure Storage. Our partnerships are so important to us and to everyone in the audience. Because together, we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers. And he'll share the exciting news about our Vertica 10 announcement and how this will benefit our customers. Then you'll hear from Amy Fowler, VP of strategy and solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure Vertica Symphony that plays live every day at AT&T. Here we go, Colin, over to you. >> Colin: Well, thanks a lot, Joy. And, I want to echo Joy's thanks to our sponsors, and so many of you who have helped make this happen.
This is not an easy time for anyone. We were certainly looking forward to getting together in person in Boston during the Vertica Big Data Conference and Winning with Data. But I think all of you and our team have done a great job, scrambling and putting together a terrific virtual event. So really appreciate your time. I also want to remind people that we will make both the slides and the full recording available after this. So for any of those who weren't able to join live, that is still going to be available. Well, things have been pretty exciting here. And in the analytic space in general, certainly for Vertica, there's a lot happening. There are a lot of problems to solve, a lot of opportunities to make things better, and a lot of data that can really make every business stronger, more efficient, and frankly, more differentiated. For Vertica, though, we know that focusing on the challenges that we can directly address with our platform, and our people, and where we can actually make the biggest difference is where we ought to be putting our energy and our resources. I think one of the things that has made Vertica so strong over the years is our ability to focus on those areas where we can make a great difference. So for us as we look at the market, and we look at where we play, there are really three recent and some not so recent, but certainly picking up a lot of the market trends that have become critical for every industry that wants to Win Big With Data. We've heard this loud and clear from our customers and from the analysts that cover the market. If I were to summarize these three areas, this really is the core focus for us right now. We know that there's massive data growth. And if we can unify the data silos so that people can really take advantage of that data, we can make a huge difference. We know that public clouds offer tremendous advantages, but we also know that balance and flexibility is critical. 
And we all need the benefits that machine learning, all the way up to full data science, can bring to every single use case, but only if it can really be operationalized at scale, accurately and in real time. And the power of Vertica is, of course, how we're able to bring so many of these things together. Let me talk a little bit more about some of these trends. So one of the first industry trends that we've all been following probably now for over the last decade, is Hadoop and specifically HDFS. So many companies have invested time, money and, more importantly, people in leveraging the opportunity that HDFS brought to the market. HDFS is really part of a much broader storage disruption that we'll talk a little bit more about, more broadly than HDFS. But HDFS itself was really designed for petabytes of data, leveraging low cost commodity hardware and the ability to capture a wide variety of data formats, from a wide variety of data sources and applications. And I think what people really wanted was to store that data before having to define exactly what structures they should go into. So over the last decade or so, the focus for most organizations is figuring out how to capture, store and frankly manage that data. And as a platform to do that, I think, Hadoop was pretty good. It certainly changed the way that a lot of enterprises think about their data and where it's locked up. In parallel with Hadoop, particularly over the last five years, Cloud Object Storage has also given every organization another option for collecting, storing and managing even more data. That has led to a huge growth in data storage, obviously, up on public clouds like Amazon and their S3, Google Cloud Storage and Azure Blob Storage just to name a few. And then when you consider regional and local object storage offered by cloud vendors all over the world, the explosion of data leveraging this type of object storage is very real.
And I think, as I mentioned, it's just part of this broader storage disruption that's been going on. But with all this growth in the data, in all these new places to put this data, every organization we talk to is facing even more challenges now around the data silo. Sure, the data silos are certainly getting bigger. And hopefully they're getting cheaper per bit. But as I said, the focus has really been on collecting, storing and managing the data. But between the new data lakes and many different cloud object stores, combined with all sorts of data types and the complexity of managing all this, getting that business value has been very limited. This actually takes me to big bet number one for Team Vertica, which is to unify the data. Our goal, and some of the announcements we have made today plus roadmap announcements I'll share with you throughout this presentation. Our goal is to ensure that all the time, money and effort that has gone into storing that data, all the data turns into business value. So how are we going to do that? With a unified analytics platform that analyzes the data wherever it is: HDFS, Cloud Object Storage, external tables in any format, ORC, Parquet, JSON, and of course, our own native ROS Vertica format. Analyze the data in the right place in the right format, using a single unified tool. This is something that Vertica has always been committed to, and you'll see in some of our announcements today, we're just doubling down on that commitment. Let's talk a little bit more about the public cloud. This is certainly the second trend. It's the second wave maybe of data disruption with object storage. And there's a lot of advantages when it comes to public cloud. There's no question that the public clouds give rapid access to compute and storage with the added benefit of eliminating data center maintenance that so many companies want to get out of themselves. But maybe the biggest advantage that I see is the architectural innovation.
The public clouds have introduced so many methodologies around how to provision quickly, separating compute and storage and really dialing in the exact needs on demand, as you change workloads. When public clouds began, it made a lot of sense for the cloud providers and their customers to charge and pay for compute and storage in the ratio that each use case demanded. And I think you're seeing that trend proliferate all over the place, not just up in public cloud. That architecture itself is really becoming the next generation architecture for on-premise data centers, as well. But there are a lot of concerns. I think we're all aware of them. They're out there. Many times, for different workloads, there are higher costs. Especially for some of the workloads that are being run for analytics, which tend to run all the time. Just like some of the silo challenges that companies are facing with HDFS, data lakes and cloud storage, the public clouds have similar types of siloed challenges as well. Initially, there was a belief that they were cheaper than data centers, and when you added in all the costs, it looked that way. And again, for certain elastic workloads, that is the case. I don't think that's true across the board overall. Even to the point where a lot of the cloud vendors aren't just charging lower costs anymore. We hear from a lot of customers that they don't really want to tether themselves to any one cloud because of some of those uncertainties. Of course, security and privacy are a concern. We hear a lot of concerns with regards to cloud and even some SaaS vendors around shared data catalogs, across all the customers and not enough separation. But security concerns are out there, you can read about them. I'm not going to jump on that bandwagon. But we hear about them.
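Colin's point about paying for compute and storage "in the ratio that each use case demanded" is easiest to see with a back-of-envelope model. The prices below are made-up placeholders, not any vendor's actual rates; the point is only the shape of the arithmetic, and how an elastic workload benefits while an always-on analytics workload may not:

```python
# Toy cost model for separated compute and storage. Prices are invented.
COMPUTE_PER_NODE_HOUR = 2.00   # hypothetical $/node-hour
STORAGE_PER_TB_MONTH = 23.00   # hypothetical $/TB-month

def monthly_cost(nodes, hours_per_day, tb_stored):
    """Each workload pays for its own compute/storage ratio."""
    compute = nodes * hours_per_day * 30 * COMPUTE_PER_NODE_HOUR
    storage = tb_stored * STORAGE_PER_TB_MONTH
    return compute + storage

# An elastic workload: 8 nodes running only 6 hours/day over 50 TB at rest.
elastic = monthly_cost(8, 6, 50)
# The "analytics that run all the time" case Colin flags: same cluster, 24x7.
always_on = monthly_cost(8, 24, 50)
```

Under this toy model the elastic workload is several times cheaper than the always-on one, which matches the caveat above: the pay-for-what-you-use ratio shines for bursty work and loses much of its advantage for workloads that never stop.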
And then, of course, I think one of the things we hear the most from our customers, is that each cloud stack is starting to feel even a lot more locked in than the traditional data warehouse appliance. And as everybody knows, the industry has been running away from appliances as fast as it can. And so they're not eager to get locked into another, quote, unquote, virtual appliance, if you will, up in the cloud. They really want to make sure they have flexibility in which clouds, they're going to today, tomorrow and in the future. And frankly, we hear from a lot of our customers that they're very interested in eventually mixing and matching, compute from one cloud with, say storage from another cloud, which I think is something that we'll hear a lot more about. And so for us, that's why we've got our big bet number two. we love the cloud. We love the public cloud. We love the private clouds on-premise, and other hosting providers. But our passion and commitment is for Vertica to be able to run in any of the clouds that our customers choose, and make it portable across those clouds. We have supported on-premises and all public clouds for years. And today, we have announced even more support for Vertica in Eon Mode, the deployment option that leverages the separation of compute from storage, with even more deployment choices, which I'm going to also touch more on as we go. So super excited about our big bet number two. And finally as I mentioned, for all the hype that there is around machine learning, I actually think that most importantly, this third trend that team Vertica is determined to address is the need to bring business critical, analytics, machine learning, data science projects into production. For so many years, there just wasn't enough data available to justify the investment in machine learning. Also, processing power was expensive, and storage was prohibitively expensive. 
But to train and score and evaluate all the different models to unlock the full power of predictive analytics was tough. Today you have those massive data volumes. You have the relatively cheap processing power and storage to make that dream a reality. And if you think about this, I mean with all the data that's available to every company, the real need is to operationalize the speed and the scale of machine learning so that these organizations can actually take advantage of it where they need to. I mean, we've seen this for years with Vertica, going back to some of the most advanced gaming companies in the early days, they were incorporating this with live data directly into their gaming experiences. Well, every organization wants to do that now. And accuracy, replicability and real-time actions are all key to separating the leaders from the rest of the pack in every industry when it comes to machine learning. But if you look at a lot of these projects, the reality is that there's a ton of buzz, there's a ton of hype spanning every acronym that you can imagine. But most companies are struggling, due to separate teams, different tools, silos and the limitations that many platforms are facing, driving down-sampling to get a small subset of the data to try to create a model that then doesn't apply, or compromising accuracy and making it virtually impossible to replicate models and understand decisions. And if there's one thing that we've learned when it comes to data, prescriptive data at the atomic level, being able to show "N of one" as we refer to it, meaning individually tailored data. No matter what it is, healthcare, entertainment experiences like gaming or other, being able to get at the granular data and make these decisions, that scoring applies to machine learning just as much as it applies to giving somebody a next-best-offer. But the opportunity has never been greater.
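The down-sampling trap called out above is easy to demonstrate: with a rare class, a small sample barely sees any positives, so a model trained on it "doesn't apply" to the real population. A deterministic toy example (the event rate and sample size are invented for illustration):

```python
# A rare class: 1 positive in every 1,000 events (0.1% rate), deterministic.
events = [1 if i % 1000 == 0 else 0 for i in range(100_000)]  # 100 positives

sample = events[:500]                     # a naive 0.5% down-sample

full_rate = sum(events) / len(events)     # the true rate: 0.001
sample_positives = sum(sample)            # the sample contains one positive
```

One positive example out of 500 rows is nowhere near enough signal to learn the rare class, which is why operationalizing ML over all the data, rather than a down-sampled subset, matters for accuracy.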
The need to integrate this end-to-end workflow and support the right tools without compromising on that accuracy. Think about it as no downsampling, using all the data, it really is key to machine learning success. Which should be no surprise then why the third big bet from Vertica is one that we've actually been working on for years. And we're so proud to be where we are today, helping the data disruptors across the world operationalize machine learning. This big bet has the potential to truly unlock, really the potential of machine learning. And today, we're announcing some very important new capabilities specifically focused on unifying the work being done by the data science community, with their preferred tools and platforms, and the volume of data and performance at scale, available in Vertica. Our strategy has been very consistent over the last several years. As I said in the beginning, we haven't deviated from our strategy. Of course, there are always things that we add. Most of the time, it's customer driven, it's based on what our customers are asking us to do. But I think we've also done a great job, not trying to be all things to all people. Especially as these hype cycles flare up around us, we absolutely love participating in these different areas without getting completely distracted. I mean, there's a variety of query tools and data warehouses and analytics platforms in the market. We all know that. There are tools and platforms that are offered by the public cloud vendors, by other vendors that support one or two specific clouds. There are appliance vendors, who I was referring to earlier, who can deliver packaged data warehouse offerings for private data centers. And there's a ton of popular machine learning tools, languages and other kits. But Vertica is the only advanced analytic platform that can do all this, that can bring it together. We can analyze the data wherever it is, in HDFS, S3 Object Storage, or Vertica itself.
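"Analyze the data wherever it is" maps onto Vertica's documented external-table support: a CREATE EXTERNAL TABLE ... AS COPY FROM statement over Parquet or ORC files sitting in S3 or HDFS, queried in place without loading. As a small illustration, here is a helper that builds such DDL; the table name, columns and bucket path are invented for the example:

```python
# Sketch: build Vertica external-table DDL over files in object storage.
# Syntax follows Vertica's documented CREATE EXTERNAL TABLE ... AS COPY form;
# the specific table, columns and S3 path below are hypothetical.
def external_table_ddl(table, columns, location, fmt="PARQUET"):
    """Return a CREATE EXTERNAL TABLE statement for files at `location`."""
    cols = ", ".join(f"{name} {ctype}" for name, ctype in columns)
    return (f"CREATE EXTERNAL TABLE {table} ({cols}) "
            f"AS COPY FROM '{location}' {fmt};")

ddl = external_table_ddl(
    "clicks",
    [("user_id", "INT"), ("ts", "TIMESTAMP")],
    "s3://my-bucket/clicks/*.parquet",
)
```

Once such a table is defined, the same SQL engine queries the Parquet files alongside native Vertica tables, which is the "single unified tool" idea in the unify-the-data bet.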
Natively, we support multiple clouds and on-premise deployments. And maybe most importantly, we offer that choice of deployment modes to allow our customers to choose the architecture that works for them right now. It still also gives them the option to change, move and evolve over time. And Vertica is the only analytics database with end-to-end machine learning that can truly operationalize ML at scale. And I know it's a mouthful. But it is not easy to do all these things. It is one of the things that highly differentiates Vertica from the rest of the pack. It is also why our customers, all of you, continue to bet on us and see the value that we are delivering and we will continue to deliver. Here's a couple of examples of some of our customers who are powered by Vertica. It's the scale of data. It's the millisecond response times. Performance and scale have always been a huge part of what we have been about, not the only thing. I think the functionality, all the capabilities that we add to the platform, the ease of use, the flexibility, obviously with the deployment. But if you look at some of the numbers that are under these customers on this slide. And I've shared a lot of different stories about these customers. Which, by the way, it still amazes me every time I talk to one and I get the updates, you can see the power and the difference that Vertica is making. Equally important, if you look at a lot of these customers, they are the epitome of being able to deploy Vertica in a lot of different environments. Many of the customers on this slide are not using Vertica just on-premise or just in the cloud. They're using it in a hybrid way. They're using it in multiple different clouds. And again, we've been with them on that journey throughout, which is what has made this product and frankly, our roadmap and our vision exactly what it is. It's been quite a journey. And that journey continues now with the Vertica 10 release.
The Vertica 10 release is obviously a massive release for us. But if you look back, you can see that we're building on that native columnar architecture that started a long time ago, obviously, with the C-Store paper. We built it to leverage commodity hardware, because it was an architecture that was never tightly integrated with any specific underlying infrastructure. I still remember hearing the initial pitch from Mike Stonebraker about the vision of Vertica as a software-only solution and the importance of separating the company from hardware innovation. And at the time, Mike basically said to me, "There's so much R&D and innovation that's going to happen in hardware, we shouldn't bake hardware into our solution. We should do it in software, and we'll be able to take advantage of that hardware." And that is exactly what has happened. But one of the most recent innovations that we embraced with hardware is certainly that separation of compute and storage. As I said previously, the public cloud providers offered this next generation architecture really to ensure that they can provide the customers exactly what they needed, more compute or more storage, and charge for each, respectively. The separation of compute from storage is a major milestone in data center architectures. If you think about it, it's really not only a public cloud innovation, though. It fundamentally redefines the next generation data architecture for on-premise and for pretty much every way people are thinking about computing today. And that goes for software too. Object storage is an example of a cost-effective means for storing data. And even more importantly, separating compute from storage for analytic workloads has a lot of advantages, including the opportunity to manage much more dynamic, flexible workloads. And more importantly, to truly isolate those workloads from others.
And by the way, once you start having something that can truly isolate workloads, then you can have the conversations around autonomic computing, around setting up some nodes, some compute resources on the data that won't affect any of the other data, to do some things on their own, maybe some self-analytics by the system, etc. A lot of things that many of you know we've already been exploring in terms of our own system data in the product. But it was May 2018, believe it or not, it seems like a long time ago, when we first announced Eon Mode. And I want to make something very clear about Eon Mode. It's a mode, it's a deployment option for Vertica customers. And I think this is another huge benefit that we don't talk about enough. But unlike a lot of vendors in the market who will ding you and charge you for every single add-on, you name it, you get this with the Vertica product. If you continue to pay support and maintenance, this comes with the upgrade. This comes as part of the new release. So any customer who owns or buys Vertica has the ability to set up either Enterprise Mode or Eon Mode, which is a question I know that comes up sometimes. Our first announcement of Eon was obviously for AWS customers, including The Trade Desk and AT&T, most of whom will be speaking here later at the Virtual Big Data Conference. They saw a huge opportunity. Eon Mode not only allowed Vertica to scale elastically with that specific compute and storage that was needed, but it really dramatically simplified database operations, including things like workload balancing, node recovery, compute provisioning, etc. So one of the most popular functions is that ability to isolate the workloads and really allocate those resources without negatively affecting others. And even though traditional data warehouses, including Vertica Enterprise Mode, have been able to do lots of different workload isolation, it's never been as strong as Eon Mode.
Well, it certainly didn't take long for our customers to see that value across the board with Eon Mode, and not just up in the cloud. In partnership with one of our most valued partners, and a platinum sponsor here, as Joy mentioned at the beginning, we announced Vertica Eon Mode for Pure Storage FlashBlade in September 2019. And again, just to be clear, this is not a new product; it's one Vertica with yet more deployment options. With Pure Storage, Vertica in Eon Mode is not limited in any way by variable cloud network latency. The performance is actually amazing when you take the benefits of separating compute from storage and you run it with a Pure environment on-premise. Vertica in Eon Mode has a super smart cache layer that we call the depot. It's a big part of our secret sauce around Eon Mode. And combined with the power and performance of Pure's FlashBlade, Vertica became the industry's first advanced analytics platform that actually separates compute and storage for on-premises data centers. Something that a lot of our customers are already benefiting from, and we're super excited about it. But as I said, this is a journey. We don't stop, we're not going to stop. Our customers need the flexibility of multiple public clouds. So today with Vertica 10, we're super proud and excited to announce support for Vertica in Eon Mode on Google Cloud. This gives our customers the ability to use their Vertica licenses on Amazon AWS, on-premise with Pure Storage, and on Google Cloud. Now, we were talking about HDFS, and a lot of our customers who have invested quite a bit in HDFS as a place, especially to store data, have been pushing us to support Eon Mode with HDFS. So as part of Vertica 10, we are also announcing support for Vertica in Eon Mode using HDFS as the communal storage.
Vertica's own ROS format data can be stored in HDFS, and actually the full functionality of Vertica, its complete analytics, geospatial, pattern matching, time series, machine learning, everything that we have in there, can be applied to this data. And on the same HDFS nodes, Vertica can also analyze data in ORC or Parquet format, using external tables. We can also execute joins between the ROS data and the data the external tables hold, which powers a much more comprehensive view. So again, it's that flexibility to be able to support our customers wherever they need us to support them, on whatever platform they have. Vertica 10 gives us a lot more ways that we can deploy Eon Mode in various environments for our customers. It allows them to take advantage of Vertica in Eon Mode and the power that it brings with that separation, with that workload isolation, on whichever platform they are most comfortable with. Now, there's a lot that has come in Vertica 10. I'm definitely not going to be able to cover everything. But we also introduced complex types, as an example. And complex data types fit very well into Eon as well, in this separation. They significantly reduce the data pipeline and the cost of moving data around, and give much better support for unstructured data, which a lot of our customers have mixed with structured data, of course, and they leverage a lot of the columnar execution that Vertica provides. So you get complex data types in Vertica now, a lot more data, stronger performance. It goes great with the announcement that we made with the broader Eon Mode. Let's talk a little bit more about machine learning. We've actually been doing work in and around machine learning, with various regressions and a whole bunch of other algorithms, for several years. We saw the huge advantage that MPP offered, not just as a SQL engine, as a database, but for ML as well.
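As a rough sketch of what that looks like in practice (the table names and paths here are hypothetical, and exact syntax can vary by version, so treat this as illustrative rather than the speaker's own example), an external table over Parquet files on HDFS can be queried and joined directly against native Vertica data:

```sql
-- Hypothetical example: expose Parquet files sitting on HDFS
-- as a Vertica external table (schema and path are made up).
CREATE EXTERNAL TABLE web_clicks_ext (
    user_id    INT,
    click_time TIMESTAMP,
    url        VARCHAR(2048)
) AS COPY FROM 'hdfs:///data/clicks/*.parquet' PARQUET;

-- Join the external Parquet data against a native (ROS) Vertica table,
-- giving the "more comprehensive view" described in the talk.
SELECT c.customer_name, COUNT(*) AS clicks
FROM customers c
JOIN web_clicks_ext w ON w.user_id = c.user_id
GROUP BY c.customer_name;
```

The key point of the design is that the Parquet data never has to be loaded or copied; Vertica reads it in place while still applying its full SQL engine to the join.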
It didn't take us long to realize that there's a lot more to operationalizing machine learning than just those algorithms. It's data preparation, it's the model training. It's the scoring, the shaping, the evaluation. That is so much of what machine learning and, frankly, data science is about. You know, everybody always wants to jump to the sexy algorithms, but we handle those tasks very, very well. It makes Vertica a terrific platform to do that. But a lot of work in data science and machine learning is done in other tools. I had mentioned that there's just so many tools out there. We want people to be able to take advantage of all that. We never believed we were going to be the best algorithm company or come up with the best models for people to use. So with Vertica 10, we support PMML. We can now import and export PMML models. It's a huge step for us around operationalizing machine learning projects for our customers. Allowing the models to get built outside of Vertica, yet be imported in and then applied to that full scale of data, with all the performance that you would expect from Vertica. We are also more tightly integrating with Python. As many of you know, we've been doing a lot of open source projects with the community, driven by many of our customers, like Uber. And now, alongside Python, we've integrated with TensorFlow, allowing data scientists to build models in their preferred language, to take advantage of TensorFlow, but again, to store and deploy those models at scale with Vertica. I think both these announcements are proof of our big bet number three, and really our commitment to supporting innovation throughout the community by operationalizing ML with that accuracy, performance and scale of Vertica for our customers. Again, there's a lot of steps when it comes to the workflow of machine learning. These are some of them that you can see on the slide, and it's definitely not linear either. We see this as a circle.
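A hedged sketch of what that import-and-score workflow can look like (the model names, file paths, and columns below are invented for illustration; check the Vertica 10 documentation for the exact function signatures before relying on them):

```sql
-- Import a PMML model that was trained outside Vertica,
-- e.g. in scikit-learn or Spark and exported to PMML.
SELECT IMPORT_MODELS('/models/churn_kmeans.pmml'
                     USING PARAMETERS category = 'PMML');

-- Score the full table in-database with the imported model,
-- no sampling or data movement required.
SELECT user_id,
       PREDICT_PMML(age, tenure, monthly_spend
                    USING PARAMETERS model_name = 'churn_kmeans') AS cluster
FROM customers;

-- Similarly, a frozen TensorFlow model can be imported and applied:
SELECT IMPORT_MODELS('/models/tf_churn'
                     USING PARAMETERS category = 'TENSORFLOW');
SELECT PREDICT_TENSORFLOW(age, tenure, monthly_spend
                          USING PARAMETERS model_name = 'tf_churn')
FROM customers;
```

The design choice being highlighted in the talk is that the model is built wherever the data scientist prefers, but the scoring happens inside the database, against all the data.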
And companies that do it well just continue to learn, they continue to rescore, they continue to redeploy, and they want to operationalize all that within a single platform that can take advantage of all those capabilities. And that is the platform, with a very robust ecosystem, that Vertica has always been committed to as an organization and will continue to be. This graphic, many of you have seen it evolve over the years. Frankly, if we put everything and everyone on here, it wouldn't fit on a slide. But it will absolutely continue to evolve and grow as we support our customers where they need the support most. So, again, being able to deploy everywhere, being able to take advantage of Vertica, not just as a business analyst or a business user, but as a data scientist or as an operational or BI person. We want Vertica to be leveraged and used by the broader organization. So I think it's fair to say, and I encourage everybody to learn more about Vertica 10, because I'm just highlighting some of the bigger aspects of it. But we talked about those three market trends: the need to unify the silos, the need for hybrid, multiple cloud deployment options, and the need to operationalize business critical machine learning projects. Vertica 10 has absolutely delivered on those. But again, we are not going to stop. It is our job not to, and this is how Team Vertica thrives. I always joke that the next release is the best release. And, of course, even after Vertica 10, that is also true, although Vertica 10 is pretty awesome. But, you know, from the first line of code, we've always been focused on performance and scale, right. And like any really strong data platform, the optimizer and the execution engine are the two core pieces of that. Beyond Vertica 10, one of the big things that we're already working on is a next generation execution engine. And we're already actually seeing incredible early performance from this.
And this is just one example of how important it is for an organization like Vertica to constantly go back and re-innovate. Every single release, we do the sit-ups and crunches on our performance and scale. How do we improve? And there's so many parts of the core server, there's so many parts of our broader ecosystem. We are constantly looking at how we can go back to all the code lines that we have and make them better in the current environment. And it's not an easy thing to do when you're doing that and you're also expanding into the environments that we are expanding into, to take advantage of the different deployments. Which is a great segue to this slide. Because if you think about today, we're obviously already available with Eon Mode on Amazon AWS and Pure, and actually MinIO as well. As I talked about, in Vertica 10 we're adding Google and HDFS. And coming next, obviously, Microsoft Azure and Alibaba Cloud. So being able to expand into more of these environments is really important for the Vertica team and how we go forward. And it's not just running in these clouds. For us, we want it to be a SaaS-like experience in all these clouds. We want you to be able to deploy Vertica in 15 minutes or less on these clouds. You can also consume Vertica in a lot of different ways on these clouds. As an example, in Amazon, Vertica by the Hour. So for us, it's not just about running, it's about taking advantage of the ecosystems that all these cloud providers offer, and really optimizing the Vertica experience as part of them. Optimization around automation, around self-service capabilities, extending our management console. We now have products like the Vertica Advisor Tool that our Customer Success Team has created to actually use our own smarts in Vertica, to take data from customers that give it to us and help them automatically tune their environment.
You can imagine that we're taking that to the next level, in a lot of different endeavors that we're doing, around how Vertica as a product can actually be smarter, because we all know that simplicity is key. There just aren't enough people in the world who are good at managing data and taking it to the next level. And of course, there are other things that we all hear about, whether it's Kubernetes and containerization. You can imagine that that probably works very well with Eon Mode and separating compute and storage. But innovation happens everywhere. We innovate around our community documentation. Many of you have taken advantage of the Vertica Academy. The numbers there are through the roof in terms of the number of people coming in and certifying on it. So there's a lot of things that are within the core products. There's a lot of activity and action beyond the core products that we're taking advantage of. And let's not forget why we're here, right? It's easy to talk about a platform, a data platform. It's easy to jump into all the functionality, the analytics, the flexibility, how we can offer it. But at the end of the day, somebody, a person, she's got to take advantage of this data, she's got to be able to take this data and use this information to make a critical business decision. And that doesn't happen unless we explore lots of different and, frankly, new ways to get that predictive analytics UI and interface, beyond just the standard BI tools, in front of her at the right time. And so there's a lot of activity, I'll tease you with that, going on in this organization right now about how we can do that and deliver that for our customers. We're in a great position to be able to see exactly how this data is consumed and used, and to start from this core platform that we have and go out from there. Look, I know the plan wasn't to do this as a virtual BDC. But I really appreciate you tuning in. Really appreciate your support.
I think if there's any silver lining to us maybe not being able to do this in person, it's the fact that the reach has actually gone significantly higher than what we would have been able to do in person in Boston. We're certainly looking forward to doing a Big Data Conference in the future. But if I could leave you with anything, know this: since that first release of Vertica, and our very first customers, we have been very consistent. We respect all the innovation around us, whether it's open source or not. We understand the market trends. We embrace those new ideas and technologies, and for us, true north, and the most important thing, is: what does our customer need to do? What problem are they trying to solve? And how do we use the advantages that we have without disrupting our customers? Knowing that you depend on us to deliver that unified analytics strategy, we will deliver that performance and scale, not only today, but tomorrow and for years to come. We've added a lot of great features to Vertica. I think we've said no to a lot of things, frankly, that we just knew we wouldn't be the best company to deliver. When we say we're going to do things, we do them. Vertica 10 is a perfect example of so many of those things that we have heard loud and clear from you, our customers, and we have delivered. I am incredibly proud of this team across the board. I think the culture of Vertica, a customer-first culture, jumping in to help our customers win no matter what, is also something that sets us massively apart. I hear horror stories about support experiences with other organizations. And people always seem to be amazed at Team Vertica's willingness to jump in, or their aptitude for certain technical capabilities, or understanding the business. And I think sometimes we take that for granted. But that is the team that we have as Team Vertica. We are incredibly excited about Vertica 10. I think you're going to love the Virtual Big Data Conference this year.
I encourage you to tune in. Maybe one other benefit is, I know some people were worried about not being able to see different sessions because they were going to overlap with each other. Well, now, even if you can't do it live, you'll be able to do those sessions on demand. Please enjoy the Vertica Big Data Conference here in 2020. Please, you and your families and your co-workers, be safe during these times. I know we will get through it. And analytics is probably going to help with a lot of that, and we already know it is helping in many different ways. So believe in the data, believe in data's ability to change the world for the better. And thank you for your time. And with that, I am delighted to now introduce Micro Focus CEO Stephen Murdoch to the Vertica Big Data Virtual Conference. Thank you, Stephen. >> Stephen: Hi, everyone, my name is Stephen Murdoch. I have the pleasure and privilege of being the Chief Executive Officer here at Micro Focus. Please let me add my welcome to the Big Data Conference, and also my thanks for your support, as we've had to pivot to this being virtual rather than a physical conference. It's amazing how quickly we all reset to a new normal. I certainly didn't expect to be addressing you from my study. Vertica is an incredibly important part of the Micro Focus family. It is key to our goal of trying to enable and help customers become much more data driven across all of their IT operations. Vertica 10 is a huge step forward, we believe. It allows for multi-cloud innovation and genuinely hybrid deployments, lets you begin to leverage machine learning properly in the enterprise, and also allows the opportunity to unify currently siloed lakes of information. We operate in a very noisy, very competitive market, and there are people in that market who can do some of those things. The reason we are so excited about Vertica is we genuinely believe that we are the best at doing all of those things.
And that's why we've announced publicly, and are executing internally, incremental investment into Vertica. That investment is targeted at accelerating the roadmaps that already exist, and getting that innovation into your hands faster. This idea of speed is key. It's not a question of if companies have to become data driven organizations, it's a question of when. So that speed now is really important. And that's why we believe that the Big Data Conference gives a great opportunity for you to accelerate your own plans. You will have the opportunity to talk to some of our best architects, some of the best development brains that we have. But more importantly, you'll also get to hear from some of our phenomenal customers. You'll hear from Uber, from the Trade Desk, from Philips, and from AT&T, as well as many, many others. And just hearing how those customers are using the power of Vertica to accelerate their own plans, I think, is the highlight. And I encourage you to use this opportunity to the full. Let me close by again saying thank you. We genuinely hope that you get as much from this virtual conference as you could have from a physical conference. And we look forward to your engagement, and we look forward to hearing your feedback. With that, thank you very much. >> Joy: Thank you so much, Stephen, for joining us for the Vertica Big Data Conference. Your support and enthusiasm for Vertica is so clear, and it makes a big difference. Now, I'm delighted to introduce Amy Fowler, the VP of Strategy and Solutions for FlashBlade at Pure Storage, who is one of our BDC Platinum Sponsors, and one of our most valued partners. It was a proud moment for me when we announced Vertica in Eon Mode for Pure Storage FlashBlade, and we became the first analytics data warehouse that separates compute from storage for on-premise data centers. Thank you so much, Amy, for joining us. Let's get started. >> Amy: Well, thank you, Joy, so much for having us.
And thank you all for joining us today, virtually, as we may all be. So, as we just heard from Colin Mahony, there are some really interesting trends happening right now in the big data analytics market. From the end of the Hadoop hype cycle, to the new cloud reality, and even the opportunity to help the many data science and machine learning projects move from labs to production. So let's talk about these trends in the context of infrastructure. And in particular, look at why a modern storage platform is relevant as organizations take on the challenges and opportunities associated with these trends. The answer is that the Hadoop hype cycle left a lot of data in HDFS data lakes, or reservoirs, or swamps, depending upon the level of data hygiene, but without the ability to get the value that was promised from Hadoop as a platform rather than a distributed file store. And when we combine that data with the massive volume of data in cloud object storage, we find ourselves with a lot of data and a lot of silos, but without a way to unify that data and find value in it. Now, when you look at the infrastructure data lakes are traditionally built on, it is often direct-attached storage, or DAS. The approach that Hadoop took when it entered the market was primarily bound by the limits of networking and storage technologies: one-gig Ethernet and slow spinning disk. But today, those barriers do not exist. All-flash storage has fundamentally transformed how data is accessed, managed and leveraged. The need for local data storage for significant volumes of data has been largely mitigated by the performance increases afforded by all-flash. At the same time, organizations can achieve superior economies of scale with that segregation of compute and storage. Compute and storage don't always scale in lockstep. Would you want to add an engine to the train every time you add another boxcar? Probably not.
But from a Pure Storage perspective, FlashBlade is uniquely architected to allow customers to achieve better resource utilization for compute and storage, while at the same time reducing the complexity that has arisen from the siloed nature of the original big data solutions. The second and equally important recent trend we see is something I'll call cloud reality. The public clouds made a lot of promises, and some of those promises were delivered. But cloud economics, especially usage-based and elastic scaling without the control that many companies need to manage the financial impact, is causing a lot of issues. In addition, the risk of vendor lock-in, from data egress charges to integrated software stacks that can't be moved or deployed on-premise, is causing a lot of organizations to back off the all-in cloud strategy and move toward hybrid deployments. Which is kind of funny in a way, because it wasn't that long ago that there was a lot of talk about no more data centers. For example, one large retailer, I won't name them, but I'll admit they are my favorite. Several years ago, they told us they were completely done with on-prem storage infrastructure, because they were going 100% to the cloud. But they just deployed FlashBlade for their data pipelines, because they need predictable performance at scale, and the all-cloud TCO just didn't add up. Now, that being said, while there are certainly challenges with the public cloud, it has also brought some things to the table that we see most organizations wanting. First of all, in a lot of cases, applications have been built to leverage object storage platforms like S3. So they need that object protocol, but they may also need it to be fast. And fast object may have been an oxymoron only a few years ago, and this is an area of the market where Pure and FlashBlade have really taken a leadership position.
Second, regardless of where the data is physically stored, organizations want the best elements of a cloud experience. And for us, that means two main things. Number one is simplicity and ease of use. If you need a bunch of storage experts to run the system, that should be considered a bug. The other big one is the consumption model. The ability to pay for what you need, when you need it, and seamlessly grow your environment over time, totally nondisruptively. This is actually pretty huge, and something that a lot of vendors try to solve for with finance programs. But no finance program can address the pain of a forklift upgrade when you need to move to next-gen hardware. To scale nondisruptively over long periods of time, five to 10 years plus, crucial architectural decisions need to be made at the outset. Plus, you need the ability to pay as you use it. And we offer something for FlashBlade called Pure as-a-Service, which delivers exactly that. The third cloud characteristic that many organizations want is the option for hybrid, even if that is just a DR site in the cloud. In our case, that means supporting replication to S3 at AWS. And the final trend, which to me represents the biggest opportunity for all of us, is the need to help the many data science and machine learning projects move from labs to production. This means bringing all the machine learning functions and model training to the data, rather than moving samples or segments of data to separate platforms. As we all know, machine learning needs a ton of data for accuracy. And there is just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You can kind of visualize data analytics as it is traditionally deployed as being on a continuum, with the thing we've been doing the longest, data warehousing, on one end, and AI on the other end.
But the way this manifests in most environments is a series of silos that get built up. So data is duplicated across all kinds of bespoke analytics and AI environments and infrastructure. This creates an expensive and complex environment. Historically, there was no other way to do it, because some level of performance is always table stakes, and each of these parts of the data pipeline has a different workload profile. A single platform to deliver on the multi-dimensional performance that this diverse set of applications requires just didn't exist three years ago. And that's why the application vendors pointed you towards bespoke things like the DAS environments that we talked about earlier. And the fact that better options exist today is why we're seeing them move towards supporting this disaggregation of compute and storage. And when it comes to a platform that is a better option, one with a modern architecture that can address the diverse performance requirements of this continuum and allow organizations to bring the model to the data instead of creating separate silos, that's exactly what FlashBlade is built for. Small files, large files, high throughput, low latency, and scale to petabytes in a single namespace. And this, importantly in a single namespace, is what we're focused on delivering for our customers. At Pure, we talk about it in the context of the modern data experience, because at the end of the day, that's what it's really all about: the experience for your teams in your organization. And together, Pure Storage and Vertica have delivered that experience to a wide range of customers.
From a SaaS analytics company, which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, to a multinational car company, which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous cars, or a healthcare organization, which uses Vertica on FlashBlade to enable healthcare providers to make real-time decisions that impact lives. And I'm sure you're all looking forward to hearing from John Yovanovich from AT&T, to hear how he's been doing this with Vertica and FlashBlade as well. He's coming up soon. We have been really excited to build this partnership with Vertica, and we're proud to provide the only on-premise storage platform validated with Vertica Eon Mode, and to deliver this modern data experience to our customers together. Thank you all so much for joining us today. >> Joy: Amy, thank you so much for your time and your insights. Modern infrastructure is key to modern analytics, especially as organizations leverage next generation data center architectures and object storage for their on-premise data centers. Now, I'm delighted to introduce our last speaker in our Vertica Big Data Conference keynote, John Yovanovich, Director of IT for AT&T. Vertica is so proud to serve AT&T, and especially proud of the harmonious impact we are having in partnership with Pure Storage. John, welcome to the Virtual Vertica BDC. >> John: Thank you, Joy. It's a pleasure to be here. And I'm excited to go through this presentation today, and in a unique fashion, 'cause as I was thinking through how I wanted to present the partnership that we have formed together between Pure Storage, Vertica and AT&T, I wanted to emphasize how well we all work together and how these three components have really driven home my desire for a harmonious, to use your word, relationship. So, I'm going to move forward here. The theme of today's presentation is the Pure Vertica Symphony, live at AT&T.
And if anybody is a Westworld fan, you can appreciate the sheet music on the right-hand side. What I'm going to highlight here, in a musical fashion, is how we at AT&T leverage these technologies to save money, to deliver a more efficient platform, and, actually, just to make our customers happier overall. So as we look back, as early as just maybe a few years ago here at AT&T, I realized that we had many musicians to help the company. Or maybe you might want to call them data scientists, or data analysts. For the theme, we'll stay with musicians. None of them were singing or playing from the same hymn book or sheet music. And so what we had was many organizations chasing a similar dream, but not exactly the same dream. And the best way to describe that, and I think with a lot of people this might resonate in your organizations, is this: how many organizations are chasing a customer 360 view in your company? Well, I can tell you that I have at least four in my company, and I'm sure there are many that I don't know of. That is our problem, because what we see is a repetitive sourcing of data. We see a repetitive copying of data. And there's just so much money being spent. This is where I asked Pure Storage and Vertica to help me solve that problem with their technologies. What I also noticed was that there was no coordination between these departments. In fact, if you look here, nobody really wants to play with finance. Sales, marketing and care, sure, they all copied each other's data. But they actually didn't communicate with each other as they were copying the data. So the data became replicated and out of sync. This is a challenge throughout, not just my company, but all companies across the world. And that is, the more we replicate the data, the more problems we have at chasing or conquering the goal of a single version of truth.
In fact, I kid that at AT&T we have actually adopted the multiple-versions-of-truth theory, which is not where we want to be, but this is where we are. But we are conquering that with the synergies between Pure Storage and Vertica. This is what it leaves us with, and this is where we are challenged: each one of our siloed business units had their own storage, their own dedicated storage, and some of them had more money than others, so they bought more storage. Some of them anticipated storing more data than they really did. Others are running out of space, but can't add any more because their budgets aren't being replenished. So if you look at it from this side view here, we have a limited amount of compute, or fixed compute, dedicated to each one of these silos. And that's because of the wanting to own your own. The other part is that you are limited on, or wasting, space, depending on where you are in the organization. So the synergies aren't just about the data, but actually the compute and the storage, and I wanted to tackle that challenge as well. So I was tackling the data, I was tackling the storage, and I was tackling the compute, all at the same time. So my ask across the company was: can we just please play together? And to do that, I knew that I wasn't going to tackle this by getting everybody in the same room and getting them to agree that we needed one account table, because they would argue about whose account table is the best account table. But I knew that if I brought the account tables together, they would soon see that they had so much redundancy that I could now start retiring data sources. I also knew that if I brought all the compute together, they would all be happy. But I didn't want them to tackle each other. And in fact, that was one of the things that all business units really enjoy.
They enjoy the silo of having their own compute, and more or less being able to control their own destiny. Well, Vertica's subclustering allows just that. This is exactly what I was hoping for, and I'm glad they brought it through. And finally, how did I solve the problem of the single account table? Well, you can when you don't have dedicated storage, and you can separate compute and storage as Vertica in Eon Mode does. We store the data on FlashBlades, which you see on the left and right hand side of our container, which I can describe in a moment. Okay, so what we have here is a container full of compute, with all the Vertica nodes sitting in the middle. Two loader subclusters, we'll call them, sit on the sides, which are dedicated to just putting data onto the FlashBlades, which sit on both ends of the container. Now today, I have two dedicated storage racks, one on the left, one on the right, and I treat them as separate storage racks. They could be one, but I created them separately for disaster recovery purposes, in case one rack were to go down. That being said, there's no reason why I won't add a couple more here in the future, so I can have a, say, five to 10 petabyte storage setup, and I'll have my DR in another container, because the DR shouldn't be in the same container. Okay, so I got them all together, I leveraged subclustering, I leveraged the separation of storage and compute. I was able to convince many of my clients that they didn't need their own account table, that they were better off having one. I reduced latency, and I reduced our data quality issues, AKA ticketing. I was able to expand; I was able to leverage elasticity within this cluster. As you can see, there are racks and racks of compute.
We set up what we'll call the fixed capacity that each of the business units needed. And then I'm able to ramp up and release the compute that's necessary for each one of my clients based on their workloads throughout the day. And while some of the compute, like the instruments to the right, has more or less dedicated itself to particular workloads, all the rest is free for anybody to use. So in essence, what I have is a concert hall with a lot of seats available. If I want to run a 10-chair symphony or an 80-chair symphony, I'm able to do that. And all the while, I can do the same with my loader nodes. I can expand my loader nodes to have their own symphony all to themselves, and not compete with any other workloads of the other clusters. What does that change for our organization? Well, it really changes the way our database administrators actually do their jobs. This has been a big transformation for them. They have actually become data conductors. Maybe you might even call them composers, which is interesting, because what I've asked them to do is morph into less technology and more workload analysis. And in doing so, we're able to write auto-detect scripts that watch the queues and watch the workloads, so that we can help ramp up and trim down the cluster and subclusters as necessary. It has been an exciting transformation for our DBAs, who I now need to classify as something maybe like DCAs. I don't know, I'll have to work with HR on that. But I think it's an exciting future for their careers. And if we bring it all together, our clusters start looking like this, where everything is moving in harmony, we have lots of seats open for extra musicians, and we are able to emulate a cloud experience on-prem. And so, I want you to sit back and enjoy the Pure Vertica Symphony live at AT&T.
(soft music) >> Joy: Thank you so much, John, for an informative and very creative look at the benefits that AT&T is getting from its Pure Vertica symphony. I do really like the idea of engaging HR to change the title to Data Conductor. That's fantastic. I've always believed that music brings people together, and now it's clear that analytics at AT&T is part of that musical advantage. So, now it's time for a short break, and we'll be back for our breakout sessions, beginning at 12 pm Eastern Daylight Time. We have some really exciting sessions planned later today, and then again on Wednesday, as you can see. Now, because all of you are already logged in and listening to this keynote, you already know the steps to continue to participate in the sessions that are listed here and on the previous slide. In addition, everyone received an email yesterday and today, and you'll get another one tomorrow, outlining the simple steps to register, log in and choose your sessions. If you have any questions, check out the emails or go to www.vertica.com/bdc2020 for the logistics information. There are a lot of choices, and that's always a good thing. Don't worry if you want to attend more than one session, or can't listen to the live sessions due to your timezone. All the sessions, including the Q&A sections, will be available on demand, and everyone will have access to the recordings, as well as even more pre-recorded sessions that we'll post to the BDC website. Now, I do want to leave you with two other important sites. First, our Vertica Academy. Vertica Academy is available to everyone, and there's a variety of very technical, self-paced, on-demand training, virtual instructor-led workshops, and Vertica Essentials Certification. And it's all free, because we believe that Vertica expertise helps everyone accelerate their Vertica projects and the advantage that those projects deliver.
Now, if you have questions or want to engage with our Vertica engineering team, we're waiting for you on the Vertica forum. We'll answer any questions or discuss any ideas that you might have. Thank you again for joining the Vertica Big Data Conference Keynote Session. Enjoy the rest of the BDC, because there's a lot more to come.
Ranga Rangachari, Red Hat | Google Cloud Next 2019
>> Announcer: Live from San Francisco, it's theCUBE, covering Google Cloud Next '19. Brought to you by Google Cloud, and its ecosystem partners. >> We're back at Google Cloud Next, at the new, improved Moscone Center. This is day two of theCUBE's coverage of Google's big Cloud show. theCUBE is a leader in live tech coverage, my name is Dave Vellante, I'm here with my co-host Stu Miniman. John Furrier is walking the floor, checking out the booth space. Ranga Rangachari is here, he's the Vice President and General Manager of Cloud Storage and hyper-converged infrastructure at Red Hat. Ranga, good to see you again. >> Hi Dave, hi Stu, good to see you again too. >> Thanks for coming on, this show it's, it's growing nicely, good thing Moscone is new and improved. How's the show going for you? >> Show's going really good. I just had a chance to walk around the booths and a lot of interesting conversations and, the Red Hat booth too, there've been a lot of interesting conversations with customers. >> A lot of tailwinds these days for Red Hat. We talk about that a lot on theCUBE, this whole notion of hybrid cloud, you guys have been on that since the early days. >> Yeah. >> Multi-cloud, omni-cloud, hyper-converged infrastructure, it's in your title. It's like that all the moons are lining up for you guys, you know is it just luck, skill, great predictions powers, what's your take? >> Well, I mean, I think it's a combination of those, but more importantly, it's about listening to our customers. I think that's what gives us, today, the permission to talk to our customers about some of these things they're doing, because when we talk to them, it's not just about solving today's problems, but also where they're headed, and anticipating where they're going, and the ability to meet their needs. So is, I think. >> So the Google partnership, we were talking earlier, it started 10 years ago with the hypervisor. >> Yup. >> And it's really evolved. 
Where is it today, from your perspective? >> Well, it continues to cooperate in the technical community very well, and a couple of data points: one is on Kubernetes, which started four, five years ago, and that's going really strong. But more importantly, as the industry matures, there are what I would call special interest groups that are starting to emerge in the Kubernetes community. One thing that we are paying very close attention to is the storage SIG, which is the ability to federate storage across multiple clouds, and how do you do it seamlessly within the framework of Kubernetes, as opposed to trying to create a hack, or a one-off that some vendors have attempted to do. So we try to take a very holistic view of it, and make sure, I mean, the industry we are in is trying to drive volumes, and volumes drive standards, so I think we pay very, very close attention-- >> And the objective there is leave the data in place if possible, provide secure access and fast access, provide high-speed data movement if necessary, protect the data in motion. That is a complex problem. >> It is, and that's why I think it's very important that the community together solves the problem, not just one vendor. But it's about how do you facilitate, the holy grail is how do you facilitate, data portability and application portability across these hybrid clouds. And a lot of the things that you talked about are part and parcel of that, but what users don't wanna do is stitch them together. They want a simple, easy way. And the most common example that we often get asked about is: can I migrate my data from one cloud to the other, or from on-prem to a public cloud, based on certain policies? That's a prototypical example of how federated storage and other things can help with that.
>> Ranga, bring us inside some of those customer conversations, 'cause we talk on theCUBE, we go back to, customers always say I want multi-vendor, yes, I don't want lock-in, portability is a good thing, but at the end of the day, some of these things, if it's some science experiment or if it's difficult, well, sometimes it's easier just to kind of stick on a similar environment. We know the core of Red Hat: if I build on top of RHEL, then I know it can work lots of places. So where are customers at, and how does that fit in to this whole discussion of multi-cloud? >> So, what I can give you is a perspective on the hybrid cloud: the product strategy that we've been on for the better part of a decade now is around facilitating the hybrid cloud. If you look at the storage nature, the data nature, of the conversations, it's almost two sides of the same coin. Which is, the developers want storage to be invisible. They don't wanna be in the business of stitching their LUNs and their zone masking, all that stuff. But yet at the same time, they want storage to be ubiquitous. So, they want it to be invisible, they want it to be ubiquitous. So that's one of the key themes that we are hearing from our customers.
>> So, in thinking about some of the discussions you're having with customers on their hybrid cloud strategy, specifically, what are those conversations like, what are the challenges that they're having? It's a maturity spectrum, obviously, but what are you seeing at each level of the spectrum, and where are some of those execution, formulation and execution challenges? >> So, as the industry evolves and the technology matures, the conversation change, and 12, 24 months ago it was a dramatically different conversation. It was an all around help me get there. Now the conversation is people really understand, and most of our conversations that we see, and even the other industry players are seeing this, is the conversation starts with on-prem looking out, as opposed to a cloud looking in. So, customers say look I've invested a tremendous amount of assets, intellectual horsepower into building my on-prem infrastructure and make it solid, now give me the degree of freedom for me to move certain workloads to one or many of these public clouds. So that's kind of a huge shift in the conversations we have with the customers. If you click one or a couple of levels below, the conversation talks about things like security as you pointed out. How do you ensure that if I move my workload my overall corporate compliance stuff aren't anywhere compromised. So that's one aspect. The other aspect is manageability. Can it really manage this infrastructure from a proverbial single pane of glass. So now the conversations are less about more theoretical, it's more about I've started the journey help me make this journey successful. >> So when you talk about the perspective of, I've built up this on-prem infrastructure, I've invested a ton it in, and now help me connect, I can see a mindset that would say think cloud first. Of course, the practical reality says I've got all this tactical debt. 
So how much of that is gonna be a potential pitfall down the road for some of these companies, in your view? >> Well, I think it's not so much of a technical debt. In one way you could call it a technical debt, but the other aspect is how do you really leverage the investment that you've made without having to just say well I'm gonna do things differently. So, that's why I think the conversations we have with our customers are mutually beneficial, because we can help them, but the same token they can help us understand where some of the road blocks are. And through our products, through our services, we can help them circumvent or mitigate some of those-- >> And those assets aren't depreciated on the books, they've gotta get a return on them, right? >> So, Ranga, we know that one of the areas that Red Hat and Google end up working a lot together is in the Cloud Native Computing Foundation. >> Yep. >> Bring us up to speed as to where we are with that storage discussion, 'cause I think back to when Docker launched it was oh, it's gonna be wonderful and everything, but we all live through virtualization, and we had to fix networking and storage challenges here, and networking seemed to go a little further along and there's been a few different viewpoints as to how storage should be looked at in the containerized and the Kubernetes SDO world that we're moving towards today. >> So one example that illustrates storage being the center of this is there is a project called Rook.io. If you're familiar with this, think of it as kind of sitting between the storage infrastructure and Kubernetes. And that is taking on a tremendous amount of traction, not just in the community, but even within the CNCF. I could be wrong here, but my understanding it's a project that's in incubation phase right now. 
So we are seeing a lot of industry commitment to that Rook project, and you're gonna see real, live use cases where customers are now able to fulfill the vision of data portability and storage portability across these multiple hybrid clouds. >> So Kubernetes is obviously taking off, although again, it's a maturity level. Some customers are diving in, and others maybe not so much. What are you seeing is some of the potential blockers, how are people getting started? Can you just download the code and go? What are you seeing there? >> That's a very interesting question, because we look at it as projects versus products. And, Kubernetes is a project. Phenomenal amount of velocity, phenomenal amount of innovation. But once you deploy it in your production environment, things like security, things like life cycle management, all those things have to be in place before somebody deploys it. That's why, in OpenShift you've seen the tremendous amount of market acceptance we've have with OpenShift is a proof point that it is kind of the best Kubernetes out there, because it's enterprise ready, people can deploy it, people can use it, people can scale with it, and not be worried about things like life cycle management, things like security, all the things that come into play when you deal with an upstream project. So, what we've seen from a customer basis, people start to dabble, and they'll look at Kubernetes, what's going on, and understand where the areas of innovation are. But once they start to say look I've got it deployed for some serious workloads, they look at a vendor who can provide all the necessary ingredients for them to be successful. >> We're having a good discussion earlier about customer's perspectives, I wanna get as much out of that asset as I possibly can. You said something that interested me. I wanna go back to it. Is customers want options to be able to migrate to various clouds. 
My question is do you sense that that's because they wanna manage their risk, they want an exit strategy? Or, are they actively moving more than once. Maybe they wanna go once and then run in the cloud. Or are you seeing a lot of active movement of that data? >> I think the first order of bit in those discussions that are about the workloads, What workload do they wanna run? And once they decide this is the, for instance, with the Google Cloud, with the MLAI type of workloads, lend themselves very well to the Google Cloud infrastructure. So when a customer says look this is the workload I wanna run on-prem, but I want the elastic capability for me to run on one of these public clouds, often the decision criteria seems to be what workload it is and where's the best place to run it in. And then, you know, the rest of the stuff comes into play. >> So, Ranga, let's step back for a second. I come out of this show, Google Cloud this year, and I'm hearing open, multi-cloud, reminds me of words I've heard going to Red Hat, some every year. Help us to kind of squint through a little bit as to where Red Hat sits in the customer. If I'm the c-suite of an enterprise customer day, where Red Hat fits in the partnership with customers, and where the partners fit into that overall story. >> So, our view is let's look at it customer end. And practically every customer that we talk to wants to embark on an open hybrid cloud storage. And I wanna kind of stress on the open part of it, because it's the easier way to say okay let me go build a hybrid cloud. The more difficult part is how do you facilitate it through open hybrid cloud story. And that's the march, if you will, that we've been on for the last five plus years. And, that business strategy and the technology strategy has not, we've been unwavering in that. And, the partners are and they say we truly believe that for us to be successful, for our customers to be successful, we need an ecosystem of partners. 
And the cloud providers are absolutely a critical ingredient and a critical component of the overall strategy, and I think together, with our partners, and our core technology, and our go-to-market routes, we think we can really solve our customers, we are solving them today, and we think we can continue to solve them over time. >> You talk about open, open has a lot of different definitions. And again it's suspected UNIX used to be open. (laughs) I see that potentially as one, real solid differentiator of Red Hat. I mean, your philosophy on open. What do you see as your differentiators in the marketplace? >> Well, I think the first is obviously open like you said, the second part is, I think I hinted upon it earlier, which is, projects are good. I think they are almost a fountain and of ideas and things, but I think where we spend a tremendous amount of hours of energy is to transform it from the upstream project into a product. And if you go back, Red Hat Linux, I think we've shown that Linux was in the same kind of state of vibe in other ways, 10, 20 years ago. And I think what we've shown to the industry is by being solely committed and focused on make these projects enterprise ready, I think we've shown the market leading the way, and making it successful. So I think for us, the next wave, whether it's Kubernetes, whether it's other things, it's a very similar recipe book, nothing dramatically different, but fundamentally what we want to do is help our customers take advantage of those innovations, but yet not compromise on what they need in their enterprise data centers. >> The recipe book is similar, but you've gotta make bets. You've made some pretty good bets over the years. >> Yep. >> We could debate about OpenStack, but I mean, even there. But that's not an easy thing for an open source company to do. 'Cause you've gotta pick your poison, you have to provide committers, what's the secret sauce there? 
>> Well, I think, first off, I think the number one secret sauce from our perspective is add more technical and intellectual horsepower to these communities. And, not so much for the sake of community, it's about does it solve a real business problem for our customers? That's the way we go about it because in the open source community, I don't even know, hundreds of thousands of open source projects are out there. And we pay, and our office of the CTO pays very close attention to all the projects out there, identify the ones that have promise, not just from our perspective but from customers' perspective, and invest in those areas. And a lot of them have succeeded, so we think we'll do well in that. >> Alright, so, Ranga, one of the biggest announcements this week is Anthos from Google. Wanna get your viewpoint as to where that fits. >> I think it's a good announcement, I haven't read through all the details, but part of it is I think it validates, to a certain extent, what Red Hat has been talking about for the last five, seven years. Which is you need a unified way to deploy, manage, provision your infrastructure, not just on public clouds, but a seamless way to connect to the on-prem. And I think Anthos is a validation of how we've been thinking about the work. So we think it's great. We think it's really good. >> Ranga Rangachari thanks so much for coming back on theCUBE >> Thank you, David! >> It's always a pleasure. >> Thank you again, Stu. >> Have a great Red Hat summit coming up in early May, theCUBE will be there, Stu will be co-hosting. You're watching theCUBE, day two of Google Cloud Next 2019 from Moscone. We'll be right back. (upbeat music)
Dominic Preuss, Google | Google Cloud Next 2019
>> Announcer: Live from San Francisco, it's theCUBE. Covering Google Cloud Next '19. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back to the Moscone Center in San Francisco everybody. This is theCUBE, the leader in live tech coverage. This is day two of our coverage of Google Cloud Next #GoogleNext19. I'm here with my co-host Stu Miniman, and I'm Dave Vellante; John Furrier is also here. Dominic Preuss is here, he's the Director of Product Management, Storage and Databases at Google. Dominic, good to see you. Thanks for coming on. >> Great to be here. >> Gosh, 15, 20 years ago there were like three databases, and now I feel like there's 300. It's exploding, all this innovation. You guys made some announcements yesterday, which we're gonna get into, but let's start with, I mean, data, we were just talking at the open, is the critical part of any IT transformation. Business value, it's at the heart of it. Your job is at the heart of it, and it's important to Google. >> Yes. Yeah, you know, Google has a long history of building businesses based on data. We understand the importance of it, we understand how critical it is. And so, really, that ethos has carried over into Google Cloud Platform. We think about it very much as a data platform, and we have a very strong responsibility to our customers to make sure that we provide the most secure, the most reliable, the most available data platform for their data. And it's a key part of any decision when a customer chooses a hyper cloud vendor. >> So summarize your strategy. You guys had some announcements yesterday really embracing open source. There's certainly been a lot of discussion in the software industry about other cloud service providers who were sort of bogarting open source and not giving back, et cetera, et cetera, et cetera.
How would you characterize Google's strategy with regard to open source, data storage and data management, and how do you differentiate from other cloud service providers? >> Yeah, Google has always been the open cloud. We have a long history in our commitment to open source, whether it be Kubernetes, TensorFlow, Angular, Golang; pick any one of these, we've been contributing heavily back to open source. Google's entire history is built on the success of open source. So we believe very strongly that it's an important part of our success. We also believe that we can take a different approach to open source. We're at a very pivotal point in the open source industry, as these companies are understanding and deciding how to monetize in a hyper cloud world. So we think we can take a fundamentally different approach and be very collaborative and support the open source community, without taking advantage of it or failing to give back. >> So, somebody might say, okay, but Google's got its own operational databases, you got analytic databases, relational, non-relational. I guess Google Spanner kind of fits in between those. It was an amazing product. I remember when that first came out; it was making my eyes bleed reading the white paper on it, but awesome tech. You certainly own a lot of your own database technology and do a lot of innovation there. So, square that circle with regard to partnerships with open source vendors. >> Yeah, I think you alluded to it a little bit earlier: there are hundreds of database technologies out there today, and there's really been a proliferation of new technology, specifically databases, for very specific use cases, whether it be graph or time series, all these other things. As a hyper cloud vendor, we're gonna do the most common things that people need. We're gonna do managed MySQL, Postgres and SQL Server. But for other databases that people wanna run, we want to make sure that those solutions are first class opportunities on the platform.
So we've engaged with seven of the top and leading open source companies to make sure that they can provide a managed service on Google Cloud Platform that is first class. What that means is that as a GCP customer I can choose a Google offered service or a third-party offered service and I'm gonna have the same, seamless, frictionless, integrated experience. So I'm gonna get unified billing, I'm gonna get one bill at the end of the day. I'm gonna have unified support, I'm gonna reach out to Google support and they're going to figure out what the problem is, without blaming the third-party or saying that isn't our problem. We take ownership of the issue and we'll go and figure out what's happening to make sure you get an answer. Then thirdly, a unified experience so that the GCP customer can manage that experience, inside a cloud console, just like they would their Google offered services. >> A fully-managed database as a service essentially. >> Yes, so of the seven vendors, a number of them are databases. But also for Kafka, to manage Kafka or any other solutions that are out there as well. >> All right, so we could spend the whole time talking about databases. I wanna spend a couple minutes talking about the other piece of your business, which is storage. >> Dominic: Absolutely. >> Dave and I have a long history in what we'd call traditional storage. And the dialog over the last few years has been we're actually talking about data more than the storing of information. A few years back, I called cloud the silent killer of the old storage market. Because, you know, I'm not looking at buying a storage array or building something in the cloud. I use storage as one of the many services that I leverage. Can you just give us some of the latest updates as to what's new and interesting in your world. As well as when customers come to Google where does storage fit in that overall discussion?
>> I think the amazing opportunity that we see for large enterprises right now is that today, a lot of the data that they have in their company is in silos. It's not properly documented, they don't necessarily know where it is or who owns it or the data lineage. When we pick all that data up across the enterprise and bring it into Google Cloud Platform, what's so great about it is regardless of what storage solution you choose to put your data in, it's in a centralized place. It's all integrated, then you can really start to understand what data you have, how do I do connections across it? How do I try to drive value by correlating it? For us, we're trying to make sure that whatever data comes across, customers can choose whatever storage solution they want. Whichever is most appropriate for their workload. Then once the data's in the platform we help them take advantage of it. We are very proud of the fact that when you bring data into object storage, we have a single unified API. There's only one product to use. Whether you have really cold data or really fast data, you don't have to wait hours to get the data, it's all available within milliseconds. Now, what we're really excited about is that today we announced a new storage class. So, in Google Cloud Storage, which is our object storage product, we're now gonna have a very cold, archival storage option, that's going to start at $0.12 per gigabyte, per month. We think that that's really going to change the game in terms of customers that are trying to retire their old tape backup systems or are really looking for the most cost efficient, long term storage option for their data. >> The other thing that we've heard a lot about this week is that hybrid and multi-cloud environment. Google laid out a lot of the partnerships. I think you had VMware up on stage. You had Cisco up on stage, I see Nutanix is here. How does that storage, the hybrid multi-cloud, fit together for your world?
>> I think the way that we view hybrid is that every customer, at some point, is hybrid. Like, no one ever picks up all their data on day one and on day two, it's in the cloud. It's gonna be a journey of bringing that data across. So, it's always going to be hybrid for that period of time. So for us, it's making sure that all of our storage solutions support open standards. So if you're using an S3 compliant storage solution on-premise, you can use Google Cloud Storage with our S3 compatible API. If you are doing block, we work with all the large vendors, whether it be NetApp or EMC or any of the other vendors you're used to having on-premise, making sure we can support those. I'm personally very excited about the work that we've done with NetApp around NetApp Cloud Volumes for Google Cloud Platform. If you're a NetApp shop and you've been leveraging that technology and you're really comfortable and really like it on-premise, we make it really easy to bring that data to the cloud and have the same exact experience. You get all the wonderful features that NetApp offers you on-premise in a cloud native service where you're paying on a consumption basis. So, it really takes, kind of, the decision away for the customers. You like NetApp on-premise but you want cloud native features and pricing? Great, we'll give you NetApp in the cloud. It really makes it an easy transition. So, for us it's making sure that we're engaged and that we have a story with all the storage vendors that you're used to using on-premise today. >> Let me ask you a question, going back to the very cold, ice cold storage. You said $0.12 per gigabyte per month, which is kinda in between your other two major competitors. What was your thinking on the pricing strategy there? >> Yeah, basically everything we do is based on customer demand.
So after talking to a bunch of customers, understanding the workloads, understanding the cost structure that they need, we think that that's the right price to meet all of those needs and allow us to basically compete for all the deals. We think that that's a really great price-point for our customers. And it really unlocks all those workloads for the cloud. >> It's dirt cheap, it's easy to store and then it takes a while to get it back, right, that's the concept? >> No, it is not at all. We are very different than other storage vendors or other public cloud offerings. When you drop your data into our system, basically, the trade-off that you're making is saying, I will give you a cheaper price in exchange for agreeing to leave the data in the platform for a longer time. So, basically you're making a time-based commitment to us, at which point we're giving you a cheaper price. But, what's fundamentally different about Google Cloud Storage, is that regardless of which storage class you use, everything is available within milliseconds. You don't have to wait hours or any amount of time to be able to get that data. It's all available to you. So, this is really important: if you have long-term archival data and then, let's say, you get a compliance request or regulatory request and you need to analyze all the data and get to all your data, you're not waiting hours to get access to that data. We're actually, within milliseconds, giving you access to that data, so that you can get the answers you need. >> And the quid pro quo is I commit to storing it there for some period of time, is that what you said? >> Correct. So, we have four storage classes. We have our Standard, our Nearline, our Coldline and this new Archival. Each of them has a lower price point, in exchange for a longer committed time that you'll leave the data in the product. >> That's cool. I think that adds real business value there. So, obviously, it's not sitting on tape somewhere.
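The four-class model described here, where each colder class trades a lower per-GB rate for a longer committed storage time, can be sketched as a small cost calculator. The minimum-duration values below follow Google Cloud Storage's published model (0/30/90/365 days), but the per-GB prices are illustrative placeholders, not a rate card.

```python
# Sketch of the storage-class trade-off: each colder class charges less per
# GB-month but imposes a minimum storage duration; deleting earlier still
# bills for the minimum. Minimum durations follow GCS's published model;
# the per-GB prices are illustrative placeholders only.

CLASSES = {
    # name: (price per GB-month, minimum storage duration in days)
    "STANDARD": (0.020, 0),
    "NEARLINE": (0.010, 30),
    "COLDLINE": (0.004, 90),
    "ARCHIVE": (0.0012, 365),
}

def storage_cost(cls, gb, days_stored):
    """Cost of keeping `gb` in class `cls` for `days_stored`, honoring the
    minimum storage duration (early deletion still bills the remainder)."""
    price, min_days = CLASSES[cls]
    billed_days = max(days_stored, min_days)
    return price * gb * billed_days / 30.0  # bill in ~30-day months

def cheapest_class(gb, days_stored):
    """Pick the class with the lowest total cost for a planned retention."""
    return min(CLASSES, key=lambda c: storage_cost(c, gb, days_stored))
```

Run against a planned retention period, the commitment effect falls out naturally: short-lived data stays in Standard, because Archive's 365-day minimum would bill the full year even if the object is deleted after a week, while year-long archives land in Archive.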
We have a number of solutions for how we store the data. For us, it's indifferent how we store the data. It's all about how long you're willing to tell us it'll be there and that allows us to plan for those resources long term. >> That's a great story. Now, you also have these pay-as-you-go pricing tiers, can you talk about that a little bit? >> For which, for Google Cloud Storage? >> Dave: Yes. >> Yeah, everything is pay-as-you-go and so basically you write data to us and there's a charge for the operations you do and then you're charged for however long you leave the data in the system. So, if you're using our Standard class, you're just paying our standard price. You can either use Regional or Multi-Regional, depending on the disaster recovery and the durability and availability requirements that you have. Then you're just paying us for that for however long you leave the data in the system. Once you delete it, you stop paying. >> So I'm not sure what kind of customer discussions are going on in terms of storage optionality. It used to be just, okay, I got block and I got file, but now you've got all different kinds. You just mentioned several different tiers of performance. What's the customer conversation like, specifically in terms of optionality and what are they asking you to deliver? >> I think within the storage space, there's really three things: there's object, block and file. So, on the block side we have our Persistent Disk product. Customers are asking for better price performance, more performance, more IOPS, more throughput. We're continuing to deliver a higher-performance block device for them and that's going very, very well. For those that need file, we have our first-party service, which is Cloud Filestore, which is our managed NFS. So if you need managed NFS, we can provide that for you at a really low price point. We also partner with, you mentioned Elastifile earlier.
We partner with NetApp, we're partnering with EMC. So all those options are also available for file. Then on the object side, the object API is not POSIX-compliant; it's a very different model. If your workloads can support that model then we give you a bunch of options with the Object Model API. >> So, data management is another hot topic and it means a lot of things to a lot of people. You hear the backup guys talking about data management. The database guys talk about data management. What is data management to Google and what is your philosophy and strategy there? >> I think for us, again, I spend a lot of time making sure that the solutions are unified and consistent across. So, for us, the idea is that if you bring data into the platform, you're gonna get a consistent experience. So you're gonna have consistent backup options, you're gonna have consistent pricing models. Everything should be very similar across the various products. So, number one, we're just making sure that it's not confusing by making everything very simple and very consistent. Then over time, we're providing additional features that help you manage that. I'm really excited about all the work we're doing on the security side. So, you heard Orr's talk about access transparency and access approvals, right. So basically, you can have a unified way to know whether or not anyone, either Google or a third party whose request has come in, has had to access the data for any reason. So we're giving you full transparency as to what's going on with your data. And that's across the data platform. That's not on a per-product basis. We can basically layer in all these amazing security features on top of your data. The way that we view our business is that we are stewards of your data. You've given us your data and asked us to take care of it, right, don't lose it. Give it back to me when I want it and let me know when anything's happening to it.
We take that very seriously and we see all the things we're able to bring to bear on the security side, to really help us be good stewards of that data. >> The other thing you said is I get those access logs in near real time, which is, again, nuanced but it's very important. Dominic, great story, really. I think clear thinking and you, obviously, delivered some value for the customers there. So thanks very much for coming on theCUBE and sharing that with us. >> Absolutely, happy to be here. >> All right, keep it right there everybody, we'll be back with our next guest right after this. You're watching theCUBE live from Google Cloud Next from Moscone. Dave Vellante, Stu Miniman, John Furrier. We'll be right back. (upbeat music)
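The access-transparency idea discussed above, a near-real-time log of every access to your data including provider-initiated ones, can be illustrated with a minimal filter. The record fields below are invented for illustration and do not match the real Access Transparency log schema.

```python
# Minimal sketch of consuming access-transparency-style log entries and
# flagging accesses initiated by the cloud provider rather than the data
# owner. Field names here are hypothetical, not a real log schema.

def provider_accesses(entries):
    """Keep only entries where the provider, not the owner, touched the data."""
    return [e for e in entries if e["principal"] == "provider"]

log = [
    {"principal": "owner", "resource": "bucket/app-data", "reason": "application read"},
    {"principal": "provider", "resource": "bucket/app-data", "reason": "support case"},
]

flagged = provider_accesses(log)
```

The point of the feature is exactly this kind of query: the customer can continuously separate ordinary application traffic from the rare provider-side access and audit the stated reason for each one.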
CUBEConversations Dell EMC Data Protection | February 2019
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi everybody. This is Dave Vellante and welcome to this CUBE conversation. I've been following trends in backup and recovery and data protection for decades and I'll tell you, right now is one of the most exciting eras that I've ever seen, and with me here to talk about some of the trends and some hard news is Beth Phalen. She's the president and general manager of Dell EMC's data protection division. Beth it's great to see you again. Thanks for coming on. >> It's great to be here Dave. It's always good to talk to you. >> So, there's been a subtle change in IT. Even when you go back to, sort of, the downturn in 2008, where IT was largely a support function. It's really now becoming a fundamental enabler. Are you seeing that with your customers? >> Absolutely. The vision of IT being some back office that is segregated from the rest of the company is no longer true. What we find is customers want their application owners to be able to drive data protection and then have that paired with central oversight so they can still have that global overview. >> The other change is, for years data has been this problem that we have to manage. I got so much data. I got to back it up or protect it, move it. It's now become a source of value. Everybody talks about digital transformation. It's all about how you get value from data. >> Yeah. And it's so interesting because it was there all the time. Right? And suddenly people have realized, yes, this is an asset that has a huge impact on our business and on our customers, and again it makes it even more important that they can rely on getting access to that data because they're building their business on it. >> So as the head of the data protection division, it's interesting. Even the parlance has changed. It used to be, when it was just tape, it was backup, and now it's data protection. So the mindset is shifting.
It is, and it's continuing to shift with new threats, like those driving cyber recovery, and other challenges that are out there; protecting data becomes the core of what we are offering our customers. >> So let's talk a little bit more about the catalysts for that change. You got tons of data, you're now able to apply machine intelligence like you never have before and you got cloud which brings scale. So this is changing the needs of customers in the way in which they protect data. >> As customers' data becomes more and more distributed across multiple cloud providers, multiple locations, it's even more important that they can answer the question, where is my data and is it protected? And that they can recover it as quickly as possible. >> And you're seeing data protection strategies and data management strategies supporting things like DevOps and analytics applications. You also have new threats like ransomware. So it's a more fundamental component of cyber. >> Yeah and you will hear us talking a little bit about cyber recovery, the new product that we introduced last year. We can't just think about data protection as backup. We have to think about it as the comprehensive way that customers can get access to their data even if they're attacked. >> So much has changed. Everything has changed. >> The level of innovation that we've been doing has been keeping up with that change. And that's one of the things that I'm most excited about as the president of this division. We've been investing in enhancing the customer experience, and cyber recovery as I mentioned, and expanding into new markets, driving a new level of reliability and resiliency, building on the foundation that we have. And of course expanding into the cloud. So one of the things that hasn't changed is the fundamentals of: I need to get my data back, I need to be trusted. Why is it you guys make a big deal out of being number one? You're number one in all the Gartner Magic Quadrants and so forth.
Why is leadership so important to customers and what are those fundamentals that haven't changed? >> So two questions there. First, leadership is so important because we have more experience protecting data around the globe than anybody else. And that means all environments, right from the multi-petabyte major corporations to the shops that have maybe a terabyte to 24 terabytes. We're involved in it all. So that experience is important. And then those fundamentals you talked about, lowest cost to protect, fastest performance, fastest backups and resiliency, those fundamentals have to be part of any data protection product. >> The way you guys are organized, you are in charge of R&D as well, you talked about innovation before. I wonder if you could talk a little bit more about how your R&D investments are translating into customer value in terms of price performance. So resiliency, speed, cost. What's going on there? >> The biggest thing that I wanna talk about and highlight here is how much our investment in cloud is enabling our customers to continue to have confidence that they can get the same level of digital trust that they've had with us on prem, but now as they expand into the cloud for cloud disaster recovery, long-term retention, data protection in the cloud, that confidence comes with them. And we're doing it in a way that allows them to seamlessly expand into the cloud without having to introduce additional gateways, additional hardware. It becomes an extension of their data protection infrastructure. >> So the cloud operating model is very important here. What are you guys doing for instance, admins, application owners, in terms of enabling self-service, for example? >> We have the broadest application support of any company. And what we're doing is we're integrating directly with those applications. Whether it be Oracle, SAP. You can go down the list. And then of course directly integrating with VMware for the VM admins.
That's not enough though, because if we just did that you wouldn't be able to have one view of how your data protection policies are working. And so we pair that with centralized governance to make sure that the person in charge of data protection for that company still can have confidence that all the right things are happening. >> So what does the data protection portfolio look like? How should we think about that? >> Three simple things: Data Domain, our new integrated appliances and data protection suite. >> Okay. Follow up question on that is, how do you, for customers, abstract the complexity? How are you simplifying their world, especially in this cloud operating model? >> Simplifying comes in multiple stages. You have to simplify the first box-to-backup experience. We've cut that down to an hour and a half, two hours max. From there, you have to make sure the day-to-day tasks are simple. So things like two clicks to do cloud failover, three clicks to failback. Things like a single step to restore a file in a VMware environment and then live movement of that VM to another primary storage array. That kind of targeted customer use case simple process is core to what we've been doing to enhance the customer experience. >> Now, you guys aren't really a public cloud provider so you gotta support multiple clouds. What are you doing there in terms of both cloud support and what are you seeing in multi-cloud? >> Most customers have more than one cloud provider that they're working with. So what we do is we allow the customers, as a specific example, right from within the Data Domain interface, to select which cloud they wanna tier to and then they can also select other cloud providers through the same interface. So, it's not a separate experience. They can focus on the Data Domain but then interact with multiple clouds. >> Awesome. Beth, thanks for taking some time here to set this up. We're gonna hear about some hard news that you guys have today.
We've got some perspectives from IDC on this but right now let's take a look at what the customer says. Keep it right there. (chilled piano music) >> Phoenix Children's is a healthcare organization for kids. Everything that we do is about the kids. So we wanna make sure that all our critical data that a doctor or a nurse needs on the floors to be able to take care of a sick kid, we need to make sure it's available at any time. The data protection software that we're using from Dell EMC with Data Domain gives us that protection. Our critical data are well kept and we can easily recover them. Before we moved to Data Domain we were using Veritas NetBackup and some older technology. Our backup windows were taking upwards of 20 to 24 hours. Moving to Data Domain with de-duplication we can finish our full backups in less than seven hours. The deployment of the data protection software and Data Domain was very easy for us. Our engineers, they had never worked with data protection software or Data Domain before. They were able to do some research, work a little bit with some Dell engineers, and we were able to implement the technology within a month, a month and a half. ECS for Phoenix Children's Hospital is a great technology. Simple to use, easy to manage. The benefits from a user perspective are tremendous. From an IT perspective, I can extract terabytes of data in less than an hour. When we get into a critical situation, we can rely 100% on ECS that we will get the information that the doctor or the nurse needs to take care of the kid. The data protection software and the Data Domain benefits for Phoenix Children's Hospital are great. There is a solution that works seamlessly together. I have no worries that my backups will not run. I have no worries I will not be able to recover critical applications. (chilled piano music) >> We're back with Ruya Barrett who's the vice president of marketing for Dell EMC's Data Protection division. We got some hard news to get into.
Ruya, let's get right into it. What are you guys announcing today? >> We are announcing a basically tremendous push with our data protection family, both in Data Domain and Integrated Data Protection appliances and the software that basically makes those two rock. >> So, you've got a few capabilities that you're announcing. Cloud, performance. Take us through sort of at a high level. What are the three areas that you're focused on in this announcement? >> Exactly. You nailed it Dave. So three areas of announcement: exciting cloud capabilities and cloud expansion. We've been investing in cloud over the last three years and this announcement is just a furthering of those capabilities. Tremendous push around performance for additional use cases and services that customers want. The last one but not least is basically expanded coverage and push into the mid-market space with our Data Domain 3300 and IDPA 4400. >> And this comes in the form of software that I can install on my existing appliances? >> It's all software value that really enables our appliances to do what they do best, to drive efficiency, performance, but it's really the software layer that makes it sane. >> And if I'm a customer I get that software, no additional charges? >> If you have the capabilities, today you'll be able to get the expanded capabilities. No charge. >> Okay. So one of the important areas is cloud. Let's get into some of the cloud use cases. You're focused on a few of those. What are they? >> Cloud has become a really prevalent destination. So when we look at cloud and what customers wanna do with regards to data protection in the cloud, it's really a lot of use cases. The three we're gonna touch on today start with cloud tiering: our capabilities in cloud tiering with long-term archival, so customers can really leverage cloud for long-term archival. The second one is really around cloud disaster recovery. To and from the cloud. So that's a really important use case.
That's becoming really important to our customers. And not, God forbid, for a disaster, but just being able to test their disaster recovery capabilities and resiliency. And the last one is really in-cloud data protection. So those are the three use cases and we have enhancements across all three. >> Let's go deeper into those. So cloud tiering. We think of tiering, oftentimes you remember the big days of tiering, in-box tiering, hot data, cold data. What are you doing in cloud tiering? >> Well, cloud tiering is our way of really supporting object storage, both on premises and in the cloud. And we introduced it about two years ago. And what we're really doing now is expanding that coverage, making it more efficient, giving customers the tools to be able to understand what the costs are gonna be. So one of the announcements is actually a free space estimator tool for our customers that really enables them to understand the impact of taking an application and using long-term retention, using cloud tier, both for their on-premise data protection capacity as well as what they need in the cloud and the cost associated. So that's a big question before customers wanna move data. Second is really broadest coverage. I mean, right now in addition to the usual suspects of AWS, Azure, Dell EMC Elastic Cloud Storage, we now support Ceph, we support Alibaba, we support Google Cloud. So really, how do you build out that multi-cloud deployment that we see our customers wanting to do with regards to their long-term archival needs? So really expanding that reach. So we now have the broadest coverage with regards to archiving in the cloud and using cloud for long-term retention. >> Great. Okay. Let's talk about disaster recovery. I'm really interested in this topic because the customers that we talk to wanna incorporate disaster recovery and backup as part of a holistic strategy. You also mentioned testing. Not enough customers are able to test their DR.
It's too risky, it's too hard, it's too complicated. What are you guys doing in the DR space? >> So one of the things that I think is huge and very differentiated with regards to how we approach, whether it's archive or whether it's DR or in-cloud, is the fact that from an appliance standpoint you need no additional hardware or gateway to be able to leverage the capabilities. One of the things that we introduced, again, is cloud DR over a year ago, and we introduced it across our Data Domain appliances as well as our first entry to the mid-sized companies with the IDPA DP4400. And now what we're doing is making it available across all our models, all our appliances. And all of our appliances now have the ability to do fully orchestrated disaster recovery, either for test use cases or actual disasters, God forbid. What they are able to do is three-click failovers and two-click failbacks from the cloud. So both for failback from the cloud or in the cloud. So these are really big and important use cases for our customers right now. Again, with that, we're expanding use case coverage: we used to support AWS only, now we also support Azure. >> Great. Okay. The third use case you talked about was in-cloud data protection. What do you mean by that and what are you doing there?
So now Data Domain Virtual Edition can cover 96 terabytes of in-cloud capability and capacity. And we've also, again, with that use case, expanded our coverage to include Google Cloud, AWS, Azure. So really expanded our coverage. >> Great. I'm interested in performance as well because everybody wants more performance but are we talking about backup performance, restore performance? What are you doing in that area? >> Perfect. And one of the things, when we talk about performance, one of the big use cases we're seeing that's driving performance is that customers wanna make their backup copies do more. They wanna use it for application test and development, they wanna use it for instant access to their VMs, instant access and restores for their VMs. So performance is being fueled by some additional services that customers wanna see on their backup copies. So basically one of the things that we've done with this announcement is improved our performance across all of these use cases. So for application test of test of development, you can have access to instant VMs. Up to 32 instant access and restore capabilities with VMs. We have improved our cash utilization. So now you can basically support a lot more IOPS, leveraging our cash, enhanced cash, four times as many IOPS as we were doing before. So up to 40,000 IOPS with almost no latency. So tremendous, again, improvement in use cases. Restores. Customers are always wanting to do restores faster and faster. So file restores is no exception to that. So with multi-streaming capability, we now have the opportunity and the capabilities to do file restores two times faster on premise and four times faster from cloud. So again, cloud is a big, everything we do, there's a cloud component to it. And that performance is no exception to that. >> The last thing I wanna touch on is mid-market. So you guys made an announcement this past summer. And so it sounds like you're doubling down on that space. Give us the update. >> Sure. 
So we introduced the Data Domain 3300 and our customers have been asking for a new capacity point. So one of the things we're introducing with this release is an eight terabyte version of Data Domain 3300 that goes and scales up to 32 terabytes. In addition to that, we're supporting faster networking with 10 GigE support as well as virtual tape libraries over Fibre Channel. So virtual tape libraries are also back and we're supporting them with Data Domain 3300. So again, tremendous improvements and capabilities that we've introduced for mid-market in the form of Data Domain 3300 as well as the DP4400 which is our integrated appliance. So, again, how do we bring all that enterprise goodness to a much broader segment of the market in the right form factor and right capacity points. >> Love it. You guys are on a nice cadence. Last summer, we had this announcement, we got Dell Technologies World coming up in May, actually end of April, now May. So looking forward to seeing you there. Thanks so much for taking us through these announcements. >> Yeah, thank you. Thanks for having us. >> You're very welcome. Now, let's go to Phil Goodwin. Phil Goodwin is an analyst at IDC. And IDC has done a ton of research on the economic impact of moving to, sort of, a modern data protection environment; they've interviewed about a thousand customers and they had deep dive interviews with about a dozen. So let's hear from Phil Goodwin of IDC and we'll be right back. (chilled music) >> IDC research shows that 60% of organizations will be executing on a digital transformation strategy by 2020, barely a year away. The purpose of digital transformation is to make the organization more competitive, with faster, more accurate and timely information driving business decisions. If any digital transformation effort is to be successful, data availability must be a foundational part of the effort.
Our research also shows that 48.5%, or nearly half, of all digital transformation projects involve improvements to the organization's data protection efforts. Purpose-built backup appliances, or PBBAs, have been the cornerstone of many data protection efforts. PBBAs provide faster, more reliable backup with fewer job failures than traditional tape infrastructure. More importantly, they support faster data restoration in the event of loss. Because they have very high data de-duplication rates, sometimes 40 to one or more, organizations can retain data onsite longer at a lower overall cost, thereby improving data availability and TCO. PBBAs may be configured as a target device or disk-based appliance that can be used by any backup software as a backup target, or as integrated appliances that include all hardware and software needed for fast, efficient backups. The main customer advantages are rapid deployment, simple management and flexible growth options. The Dell EMC line of PBBAs is a broad portfolio that includes Data Domain appliances and the recently introduced Integrated Data Protection Appliances. Dell EMC Data Domain appliances have been in the PBBA market for more than 15 years. According to IDC market tracker data as of December 20th, 2018, Dell EMC with Data Domain and IDPA currently holds a 57.5% market share of PBBA appliances for both target and integrated devices. Dell EMC PBBAs have support for cloud data protection including cloud long-term retention, cloud disaster recovery and protection for workloads running in the cloud. Recently, IDC conducted a business value study among Dell EMC data protection customers. Our business value studies seek to identify and quantify real-world customer experiences and the financial impact of specific products. This study surveyed more than 1000 medium-sized organizations worldwide as well as provided in-depth interviews with a number of them. We found several highlights in the study, including a 225% five-year ROI.
In numerical terms, this translated to $218,928 of ROI per 100 terabytes of data per year. We also found a 50% lower cost of operating a data protection environment, a 71% faster data recovery window, 33% more frequent backups and 45% more efficient data protection staff. To learn more about IDC's business value study of Dell EMC data protection and measurable customer impact, we invite you to download the IDC white paper titled The Business Value of Data Protection in IT Transformation, sponsored by Dell EMC. (bouncy techno music) >> We're back with Beth Phalen. Beth, thanks again for helping us with this session and taking us through the news. We've heard about, from a customer, their perspective, some of the problems and challenges that they face, we heard about the hard news from Ruya. Phil Goodwin at IDC gave us a great overview of the customer research that they've done. So, let's bring it home. What are the key takeaways of today? >> First and foremost, this market is hot. It is important and it is changing rapidly. So that's number one. Data protection is a very dynamic and exciting market. Number two is, at Dell EMC, we've been modernizing our portfolio over the past three years and now we're at this exciting point where customers can take advantage of all of the strength we've put into multi-cloud environments, into commercial environments, for cyber recovery. So we've expanded where people can take the value from our portfolio. And I would just want people to know that if they haven't taken a look at the Dell EMC data protection portfolio recently, it's time to take another look. We appreciate all of our customers and what they do for us. We have such a great relationship with our customer base. We wanna make sure that they know what's coming, what's here today and how we're gonna work with them in the future. >> Alright. Well, great. Congratulations on the announcement. You guys have been hard at work. It is a hot space. A lot of action going on.
Where can people find more information? >> Go back to dellemc.com, it's all there. >> Great. Well, thank you very much Beth. >> Thank you Dave. >> And thank you for watching. We'll see you next time. This is Dave Vellante from theCUBE. (chilled music)
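Phil Goodwin's de-duplication point above is, at bottom, simple arithmetic: at a 40-to-1 reduction rate, the same physical disk holds 40 times the logical backup data, which is what lets organizations retain more restore points onsite at a lower overall cost. A back-of-the-envelope sketch of that arithmetic (the appliance cost used here is invented for illustration and is not from the IDC study):

```python
def effective_logical_capacity(physical_tb, dedup_ratio):
    """Logical (pre-deduplication) data that fits on a given amount of
    physical disk at a given reduction rate."""
    return physical_tb * dedup_ratio

def cost_per_logical_tb(appliance_cost, physical_tb, dedup_ratio):
    """Effective cost per terabyte of protected (logical) data."""
    return appliance_cost / effective_logical_capacity(physical_tb, dedup_ratio)

# Illustrative numbers only: 96 TB of physical capacity at the 40:1
# rate mentioned in the segment holds 3,840 TB of logical backup data,
# so a hypothetical $100,000 appliance works out to roughly $26 per
# logical terabyte protected.
logical_tb = effective_logical_capacity(96, 40)
cost = cost_per_logical_tb(100_000, 96, 40)
```

The same arithmetic is why cost-per-terabyte comparisons between PBBAs and raw disk hinge almost entirely on the reduction rate actually achieved for a given workload.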
Ram Venkatesh, Hortonworks & Sudhir Hasbe, Google | DataWorks Summit 2018
>> Live from San Jose, in the heart of Silicon Valley, it's theCUBE, covering DataWorks Summit 2018. Brought to you by Hortonworks. >> We are wrapping up Day One of coverage of DataWorks here in San Jose, California on theCUBE. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We have two guests for this last segment of the day. We have Sudhir Hasbe, who is the director of product management at Google, and Ram Venkatesh, who is VP of Engineering at Hortonworks. Ram, Sudhir, thanks so much for coming on the show. >> Thank you very much. >> Thank you. >> So, I want to start out by asking you about a joint announcement that was made earlier this morning about using some Hortonworks technology deployed onto Google Cloud. Tell our viewers more. >> Sure, so basically what we announced was support for the Hortonworks DataPlatform and Hortonworks DataFlow, HDP and HDF, running on top of the Google Cloud Platform. So this includes deep integration with Google's cloud storage connector layer as well as a certified distribution of HDP to run on the Google Cloud Platform. >> I think the key thing is a lot of our customers have been telling us they like the familiar environment of the Hortonworks distribution that they've been using on-premises, and as they look at moving to cloud, like in GCP, Google Cloud, they want the similar, familiar environment. So, they want the choice to deploy on-premises or Google Cloud, but they want the familiarity of what they've already been using with Hortonworks products. So this announcement actually helps customers pick and choose, like whether they want to run the Hortonworks distribution on-premises, they want to do it in cloud, or they want to build this hybrid solution where the data can reside on-premises, can move to cloud, and build these common, hybrid architectures. So, that's what this does. >> So, HDP customers can store data in the Google Cloud.
They can execute ephemeral workloads, analytic workloads, machine learning in the Google Cloud. And there's some tie-in between Hortonworks's real-time or low latency or streaming capabilities from HDF in the Google Cloud. So, could you describe, at a full sort of detail level, the degrees of technical integration between your two offerings here? >> You want to take that? >> Sure, I'll handle that. So, essentially, deep in the heart of HDP, there's the HDFS layer that includes the Hadoop-compatible file system, which is a pluggable file system layer. So, what Google has done is they have provided an implementation of this API for the Google Cloud Storage Connector. So this is the GCS Connector. We've taken the connector and we've actually continued to refine it to work with our workloads, and now Hortonworks is actually bundling, packaging, and making this connector available as part of HDP. >> So bilateral data movement between them? Bilateral workload movement? >> No, think of this as being very efficient when our workloads are running on top of GCP. When they need to get at data, they can get at data that is in the Google Cloud Storage buckets in a very, very efficient manner. So, since we have fairly deep expertise on workloads like Apache Hive and Apache Spark, we've actually done work in these workloads to make sure that they can run efficiently, not just on HDFS, but also on the cloud storage connector. This is a critical part of making sure that the architecture is actually optimized for the cloud. So, as our customers are moving their workloads from on-premise to the cloud, it's not just functional parity, but they also need sort of the operational and the cost efficiency that they're looking for as they move to the cloud. So, to do that, we need to enable this fundamental disaggregated storage pattern. See, on-prem, the big win with Hadoop was we could bring the processing to where the data was.
In the cloud, we need to make sure that we work well when storage and compute are disaggregated and they're scaled elastically, independent of each other. So this is a fairly fundamental architectural change. We want to make sure that we enable this in a first-class manner. >> I think that's a key point, right. I think what cloud allows you to do is scale the storage and compute independently. And so, with storing data in Google Cloud Storage, you can like scale that horizontally and then just leverage that as your storage layer. And the compute can independently scale by itself. And what this is allowing customers of HDP and HDF is store the data on GCP, on the cloud storage, and then just use the scale, the compute side of it with HDP and HDF. >> So, if you'll indulge me to name another Hortonworks partner for just a hypothetical. Let's say one of your customers is using IBM Data Science Experience to do TensorFlow modeling and training, can they then inside of HDP on GCP, can they use the compute infrastructure inside of GCP to do the actual modeling which is more compute intensive and then the separate decoupled storage infrastructure to do the training which is more storage intensive? Is that a capability that would be available to your customers? With this integration with Google? >> Yeah, so where we are going with this is we are saying, IBM DSX and other solutions that are built on top of HDP, they can transparently take advantage of the fact that they have HDP compute infrastructure to run against. So, you can run your machine learning training jobs, you can run your scoring jobs and you can have the same unmodified DSX experience whether you're running against an on-premise HDP environment or an in-cloud HDP environment. Further, that's sort of the benefit for partners and partner solutions.
From a customer standpoint, the big value prop here is that customers, they're used to securing and governing their data on-prem in their particular way with HDP, with Apache Ranger, Atlas, and so forth. So, when they move to the cloud, we want this experience to be seamless from a management standpoint. So, from a data management standpoint, we want all of their learning from a security and governance perspective to apply when they are running in Google Cloud as well. So, we've had this capability on Azure and on AWS, so with this partnership, we are announcing the same type of deep integration with GCP as well. >> So Hortonworks is that one pane of glass across all your product partners for all manner of jobs. Go ahead, Rebecca. >> Well, I just wanted to ask about, we've talked about the reason, the impetus for this. With the customer, it's more familiar for customers, it offers the seamless experience. But can you delve a little bit into the business problems that you're solving for customers here? >> A lot of times, our customers are at various points on their cloud journey, that for some of them, it's very simple, they're like there's a broom coming by and the datacenter is going away in 12 months and I need to be in the cloud. So, this is where there is a wholesale movement of infrastructure from on-premise to the cloud. Others are exploring individual business use cases. So, for example, one of our large customers, a travel partner, so they are exploring their new pricing model and they want to roll out this pricing model in the cloud. They have on-premise infrastructure, they know they have that for a while. They are spinning up new use cases in the cloud typically for reasons of agility. So, typically many of our customers, they operate large, multi-tenant clusters on-prem. That's nice for very scalable compute for running large jobs.
But, if you want to run, for example, a new version of Spark, you have to upgrade the entire cluster before you can do that. Whereas in this sort of model, what they can say is, they can bring up a new workload and just have the specific versions and dependency that it needs, independent of all of their other infrastructure. So this gives them agility where they can move as fast as... >> Through the containerization of the Spark jobs or whatever. >> Correct, and so containerization as well as even spinning up an entire new environment. Because, in the cloud, given that you have access to elastic compute resources, they can come and go. So, your workloads are much more independent of the underlying cluster than they are on-premise. And this is where sort of the core business benefits around agility, speed of deployment, things like that come into play. >> And also, if you look at the total cost of ownership, really take an example where customers are collecting all this information through the month. And, at month end, you want to do closing of books. And so that's a great example where you want ephemeral workloads. So this is like do it once in a month, finish the books and close the books. That's a great scenario for cloud where you don't have to on-premises create an infrastructure, keep it ready. So that's one example where now, in the new partnership, you can collect all the data through the on-premises if you want throughout the month. But, move that and leverage cloud to go ahead and scale and do this workload and finish the books and all. That's one, the second example I can give is, a lot of customers collecting, like they run their e-commerce platforms and all on-premises, let's say they're running it. They can still connect all these events through HDP that may be running on-premises with Kafka and then, what you can do is, in-cloud, in GCP, you can deploy HDP, HDF, and you can use the HDF from there for real-time stream processing. 
So, collect all these clickstream events, use them, make decisions like, hey, which products are selling better, should we go ahead and give, how many people are looking at that product, or how many people have bought it. That kind of aggregation and real-time at scale, now you can do in-cloud and build these hybrid architectures that are there. And enable scenarios where in the past, to do that kind of stuff, you would have to procure hardware, deploy hardware, all of that. Which all goes away. In-cloud, you can do that much more flexibly and just use whatever capacity you have. >> Well, you know, ephemeral workloads are at the heart of what many enterprise data scientists do. Real-world experiments, ad-hoc experiments, with certain datasets. You build a TensorFlow model or maybe a model in Caffe or whatever and you deploy it out to a cluster and so the life of a data scientist is often nothing but a stream of new tasks that are all ephemeral in their own right but are part of an ongoing experimentation program that's, you know, they're building and testing assets that may or may not be deployed in the production applications. That's, you know, so I can see a clear need for that, well, that capability of this announcement in lots of working data science shops in the business world. >> Absolutely. >> And I think coming down to, if you really look at the partnership, right. There are two or three key areas where it's going to have a huge advantage for our customers. One is analytics at-scale at a lower cost, like total cost of ownership, reducing that, running at-scale analytics. That's one of the big things. Again, as I said, the hybrid scenarios. Most customers, enterprise customers have huge deployments of infrastructure on-premises and that's not going to go away. Over a period of time, leveraging cloud is a priority for a lot of customers but they will be in these hybrid scenarios.
And what this partnership allows them to do is have these scenarios that can span across cloud and on-premises infrastructure that they are building and get business value out of all of these. And then, finally, we at Google believe that the world will be more and more real-time over a period of time. Like, we already are seeing a lot of these real-time scenarios with IoT events coming in and people making real-time decisions. And this is only going to grow. And this partnership also provides the whole streaming analytics capabilities in-cloud at-scale for customers to build these hybrid plus also real-time streaming scenarios with this package. >> Well it's clear from Google what the Hortonworks partnership gives you in this competitive space, in the multi-cloud space. It gives you that ability to support hybrid cloud scenarios. You're one of the premier public cloud providers, as we all know. And clearly now that you've had the Hortonworks partnership, you have that ability to support those kinds of highly hybridized deployments for your customers, many of whom I'm sure have those requirements. >> That's perfect, exactly right. >> Well a great note to end on. Thank you so much for coming on theCUBE. Sudhir, Ram, thank you so much. >> Thank you, thanks a lot. >> Thank you. >> I'm Rebecca Knight for James Kobielus, we will have more tomorrow from DataWorks. We will see you tomorrow. This is theCUBE signing off. >> From sunny San Jose. >> That's right.
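The "pluggable file system" integration Ram describes in the segment above is, in the open-source world, wired up through a handful of Hadoop configuration properties that map the gs:// scheme onto the GCS connector classes. A sketch of that wiring (property names follow the public hadoop-connectors project; the rendering into spark-submit flags is illustrative, not a quote of either company's tooling):

```python
# Hadoop properties that plug the GCS connector into the
# Hadoop-compatible file-system layer, so gs://bucket/... paths resolve.
# Names per the open-source hadoop-connectors project; the
# service-account setting is one common authentication option.
GCS_HADOOP_CONF = {
    "fs.gs.impl": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem",
    "fs.AbstractFileSystem.gs.impl": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
    "google.cloud.auth.service.account.enable": "true",
}

def as_spark_submit_flags(hadoop_conf):
    """Render Hadoop properties as spark-submit --conf flags.

    Spark copies any property prefixed with 'spark.hadoop.' into the
    Hadoop Configuration its jobs see, which is how a Hive or Spark
    workload ends up reading gs:// paths through the connector.
    """
    return [
        f"--conf spark.hadoop.{key}={value}"
        for key, value in sorted(hadoop_conf.items())
    ]

flags = as_spark_submit_flags(GCS_HADOOP_CONF)
```

In an HDP deployment these properties would typically live in core-site.xml rather than on the command line; the point is just that the connector slots in underneath existing Hive and Spark jobs without code changes, which is what makes the disaggregated storage-and-compute pattern discussed above transparent to workloads.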
Day Two Kickoff | Veritas Vision 2017
>> Announcer: Live from Las Vegas, it's theCUBE. Covering Veritas Vision 2017. Brought to you by Veritas. (peppy digital music) >> Veritas Vision 2017 everybody. We're here at The Aria Hotel. This is day two of theCUBE's coverage of Vtas, #VtasVision, and this is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with Stuart Miniman who is my cohost for the week. Stu, we heard Richard Branson this morning. The world-renowned entrepreneur Sir Richard Branson came up from the British Virgin Islands where he lives. He lives in the Caribbean. And evidently he was holed up during the hurricane in his wine cellar, but he was able to make it up here for the keynote. So, great keynote, we'll talk about that a little bit. We saw on Twitter that he actually stopped by the Hitachi event, Hitachi NEXT for women in tech, a little mini event that they had over there. So, pretty cool guy. Some of the takeaways: he talked a lot about- well, first of all, welcome to day two. >> Thanks, Dave. Yeah, and people are pretty excited that sometimes they bring in those marquee guests, someone that's going to get everybody to say, "Okay, wait, it's day two. I want to get up early, get in the groove." Some really interesting topics, I mean talking about, thinking about the community at large, one of the things I loved he talked about. I've got all of these, I've got hotels, I've got different things. We draw a circle around it. Think about the community, think about the schools that are there, think about if there's people that don't have homes. All these things to, giving back to the community, he says we can all do our piece there, and talking about sustainable business. >> As far as, I mean we do a lot of these, as you know, and as far as the keynote speakers go, I thought he was one of the better ones. Certainly one of the bigger names.
Some of the ones that we've seen in the past that I think are comparable, Bill Clinton at Dell World 2012 was pretty happening. >> There's a reason that Bill Clinton is known as the orator that he is. >> Yeah, so he was quite good. And then Robert Gates, both at ServiceNow and Nutanix, Condi Rice at Nutanix, both very impressive. Malcolm Gladwell, who's been on theCUBE, and Nate Silver, who's also been on theCUBE, again, very impressive. Thomas Friedman we've seen at the IBM shows. The author, the guy who wrote the Jobs book was very, very strong, come on, help me. >> Oh, yeah, Walter Isaacson. >> Walter Isaacson was at Tableau, so you've seen some- >> Yeah, I've seen Elon Musk also at the Dell show. >> Oh, I didn't see Elon, okay. >> Yeah, I think that was the year you didn't come. >> So I say Branson, from the ones I've seen, I don't know how he compared to Musk, was probably the best I think I've ever seen. Very inspirational, talking about the disaster. They had really well-thought-out and well-produced videos that he sort of laid in. The first one was sort of a commercial for Richard Branson and who he was and how he's, his passion for changing the world, which is so genuine. And then a lot of stuff on the disaster in the British Virgin Islands, the total devastation. And then he sort of went into his passion for entrepreneurs, and what he sees as an entrepreneur is he sort of defined it as somebody who wants to make the world a better place, innovations, disruptive innovations to make the world a better place. And then had a sort of interesting Q&A session with Lynn Lucas. >> Yeah, and one of the lines he said, people, you don't go out with the idea that, "I'm going to be a businessman." It's, "I want to go out, I want to build something, I want to create something."
I love one of the early anecdotes that he said when he was in school, and he had, what was it, a newsletter or something he was writing against the Vietnam War, and the school said, "Well, you can either stay in school, or you can keep doing your thing." He said, "Well, that choice is easy, buh-bye." And when he was leaving, they said, "Well, you're either going to end up in jail or be a millionaire, we're not sure." And he said, "Well, what do ya know, I ended up doing both." (both laughing) >> So he is quite a character, and just very understated, but he's got this aura that allows him to be understated and still appear as this sort of mega-personality. He talked about, actually some of the interesting things he said about rebuilding after Irma, obviously you got to build stronger homes, and he really sort of pounded on reducing the reliance on fossil fuels, and can't be the same old, same old, basically calling for a Marshall Plan for the Caribbean. One of the things that struck me, and it's a tech audience, generally a more liberal audience, he got some fond applause for that, but he said, "You guys are about data, you don't just ignore data." And one of the data points that he threw out was that the Atlantic Ocean at some points during Irma was 86 degrees, which is quite astounding. So, he's basically saying, "Time to make a commitment to not retreat from the Paris Agreement." And then he also talked about, from an entrepreneurial standpoint and building a company, that taking note of the little things, he said, makes a big difference. And talking about open cultures, letting people work from home, letting people take unpaid sabbaticals, he did say unpaid. And then he touted his new book, Finding My Virginity, which is the sequel to Losing My Virginity. So it was all very good. Some of the things to be successful: you need to learn to learn, you need to listen, sort of an age-old bromide, but somehow it seemed to have more impact coming from Branson.
And then, actually, Lucas asked one of the questions that I put forth, which was what's his relationship with Musk and Bezos? And he said he actually is quite friendly with Elon, and of course they are sort of birds of a feather, all three of them, with the rocket ships. And he said, "We don't talk much about that, we just sort of-" specifically in reference to Bezos. But overall, I thought it was very strong. >> Yeah Dave, what was the line I think he said? "You want to be friends with your competitors but fight hard against them all day, go drinking with them at night." >> Right, fight like crazy during the day, right. So, that was sort of the setup, and again, I thought Lynn Lucas did a very good job. He's, I guess in one respect he's an easy interview 'cause he's such a- we interview these dynamic figures, they just sort of talk and they're good. But she kept the conversation going and asked some good questions and wasn't intimidated, which you can be sometimes by those big personalities. So I thought that was all good. And then we turned into- which I was also surprised and appreciative that they put Branson on first. A lot of companies would've held him to the end. >> Stu: Right. >> Said, "Alright, let's get everybody in the room and we'll force them to listen to our product stuff, and then we can get the highlight, the headliner." Veritas chose to do it differently. Now, maybe it was a scheduling thing, I don't know. But that was kind of cool. Go right to where the action is. You're not coming here to watch 60 Minutes, you want to see the headline show right away, and that's what they did, so from a content standpoint I was appreciative of that.
Yesterday we talked a bit about the Veritas HyperScale, so that is, they've got the HyperScale for OpenStack, they've got the HyperScale for containers, and then filling out the product line is the Veritas Access, which is really their scale-out NAS solution, including, they did one of the classic unveils. It was a little odd for me, Veritas being a software company, to be like, "Here's an appliance "with a Veritas bezel." >> Here's a box! >> Partnership with Seagate. So they said very clearly, "Look, if you really want it simple, "and you want it to come just from us, "and that's what you'd like, great. "Here's an appliance, trusted supplier, "we've put the whole thing together, "but that's not going to be our primary business, "that's not the main way we want to do things. "We want to offer the software, "and you can choose your hardware piece." Once again, knocking on some of those integrated hardware suppliers with the 70 point margin. And then the last one, one of the bigger announcements of the show, is the Veritas Cloud Storage, which they're calling object storage with brains. And one thing we want to dig into: those brains, what is that functionality, 'cause object storage from day one always had a little bit more intelligence than the traditional storage. Metadata is usually built in, so where is the artificial intelligence, machine learning, what is that knowledge that's kind of built into it, because I find, Dave, on the consumer side, I'm amazed these days as how much extra metadata and knowledge gets built into things. So, on my phone, I'll start searching for things, and it'll just have things appear. I know you're not fond of the automated assistants, but I've got a couple of them in my house, so I can ask them questions, and they are getting smarter and smarter over time, and they already know everything we're doing anyway. >> You know, I like the automated assistants. 
We have, well, my kid has an Echo, but what concerns me, Stu, is when I am speaking to those automated assistants about, "Hey, maybe we should take a trip "to this place or that place," and then all of a sudden the next day on my laptop I start to see ads for trips to that place. I start to think about, wow, this is strange. I worry about the privacy of those systems. They're going to, they already know more about me than I know about me. But I want to come back to those three announcements we're going to have David Noy on: HyperScale, Access, and Cloud Object. So some of the things we want to ask that we don't really know is the HyperScale: is it Block, is it File, it's OpenStack specific, but it's general. >> Right, but the two flavors: one's for OpenStack, and of course OpenStack has a number of projects, so I would think you could be able to do Block and File but would definitely love that clarification. And then they have a different one for containers. >> Okay, so I kind of don't understand that, right? 'Cause is it OpenStack containers, or is it Linux containers, or is it- >> Well, containers are always going to be on Linux, and containers can fit with OpenStack, but we've got their Chief Product Officer, and we've got David Noy. >> Dave: So we'll attack some of that. >> So we'll dig into all of those. >> And then, the Access piece, you know, after the apocalypse, there are going to be three things left in this world: cockroaches, mainframes, and Dot Hill RAID arrays. When Seagate was up on stage, Seagate bought this company called Dot Hill, which has been around longer than I have, and so, like you said, that was kind of strange seeing an appliance unveil from the software company. But hey, they need boxes to run on this stuff. It was interesting, too, the engineer Abhijit came out, and they talked about software-defined, and we've been doing software-defined, is what he said, way before the term ever came out. 
It's true, Veritas was, if not the first, one of the first software-defined storage companies. >> Stu: Oh yeah. >> And the problem back then was there were always scaling issues, there were performance issues, and now, with the advancements in microprocessor, in DRAM, and flash technologies, software-defined has plenty of horsepower underneath it. >> Oh yeah, well, Dave, 15 years ago, the FUD from every storage company was, "You can't trust storage functionality "just on some generic server." Reminds me back, I go back 20 years, it was like, "Oh, you wouldn't run some "mission-critical thing on Windows." It's always, "That's not ready for prime time, "it's not enterprise-grade." And now, of course, everybody's on the software-defined bandwagon. >> Well, and of course when you talk to the hardware companies, and you call them hardware companies, specifically HPE and Dell EMC as examples, and Lenovo, etc. Lenovo not so much, the Chinese sort of embraced hardware. >> And even Hitachi's trying to rebrand themselves; they're very much a hardware company, but they've got software assets. >> So when you worked at EMC, and you know when you sat down and talked to the guys like Brian Gallagher, he would stress, "Oh, all my guys, all my engineers "are software engineers. We're not a hardware company." So there's a nuance there, it's sort of more the delivery and the culture and the ethos, which I think defines the software culture, and of course the gross margins. And then of course the Cloud Object piece; we want to understand what's different from, you know, object storage embeds metadata in the data and obviously is a lower cost sort of option. Think of S3 as the sort of poster child for cloud object storage. So Veritas is an arms dealer that's putting their hat in the ring kind of late, right? There's a lot of object going on out there, but it's not really taking off, other than with the cloud guys. So you got a few object guys around there. 
Cleversafe got bought out by IBM, Scality's still around doing some stuff with HPE. So really, it hasn't even taken off yet, so maybe the timing's not so bad. >> Absolutely, and love to hear some of the use cases, what their customers are doing. Yeah, Dave, if we have but one critique, saw a lot of partners up on stage but not as many customers. Usually expect a few more customers to be out there. Part of it is they're launching some new products, not talking very much about the products they've had in there. I know in the breakouts there are a lot of customers here, but would have liked to see a few more early customers front and center. >> Well, I think that's the key issue for this company, Stu, is that, we talked about this at the close yesterday, is how do they transition that legacy install base to the new platform. Bill Coleman said, "It's ours to lose." And I think that's right, and so the answer for a company like that in the playbook is clear: go private so you don't have to get exposed to the 90-day shot clock, invest, build out a modern platform. He talked about microservices and modern development platform. And create products that people want, and migrate people over. You're in a position to do that. But you're right, when you talk to the customers here, they're NetBackup customers, that's really what they're doing, and they're here to sort of learn, learn about best practice and see where they're going. NetBackup, I think, 8.1 was announced this week, so people are glomming onto that, but the vast majority of the revenue of this company is from their existing legacy enterprise business. That's a transition that has to take place. Luckily it doesn't have to take place in the public eye from a financial standpoint. So they can have some patient capital and work through it. Alright Stu, lineup today: a lot of product stuff. We got Jason Buffington coming on to give the analyst perspective. So we'll be here all day. Last word? 
>> Yeah, and end of the day with Foreigner, it feels like the first time we're here. Veritas feels hot-blooded. We'll keep rolling. >> Alright, luckily we're not seeing double vision. Alright, keep it right there everybody. We'll be back right after this short break. This is theCUBE, we're live from Veritas Vision 2017 in Las Vegas. We'll be right back. (peppy digital music)
Dave Nettleton, Google | Veritas Vision 2017
>> Narrator: Live from Las Vegas, it's theCUBE, covering Veritas Vision 2017. Brought to you by Veritas. (techno music) >> Welcome back to Veritas Vision 2017. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my cohost, Stu Miniman. Dave Nettleton is here. He's the group product manager at Google. Dave, thanks for coming on theCUBE. >> Thank you, really excited to be here. >> Alright, let's talk storage and cloud. So Google Cloud Platform, we were at your show in March. Kind of the second coming out party. Diane Green at the helm. Obviously you guys are making serious moves in the enterprise. Give us the update overall and then we'll get into the storage piece. >> Yeah. Well as you say, over the last couple of years a big focus for Google has actually been shifting and focusing on enterprise customers. I think Gartner reflects that about a trillion dollars of IT spend is going to be affected by the cloud over the next three to five years. And Google has some amazing assets that it's developed over the last 10 or 15 years that we can bring to bear that will really help meet enterprise customers' needs, help them where they are, and really help transform their businesses for the future. So we're excited about that. >> So how's that going? One of the big thrusts that we heard in March was, and we saw it, you guys have made some moves bringing in people from enterprise companies. In particular, you came from Microsoft. See a lot of guys from Cisco. We saw a lot of guys running around from EMC. Diane herself from VMware, bringing a lot of that enterprise DNA. How is the patient assimilating with those organs? >> Yeah, actually that's been one of the most exciting parts I think of the journey, has been watching the team come together over the last year or two. As you say, bringing together that pool of talent that has created new businesses in the past, it's amazing to see that talent group come together. 
Diane is doing an amazing job bringing the team together and building out all of the sales functions and other parts of the business that we need for the enterprise. Building out the partner ecosystem, as well, is obviously super critical. And when you marry that together with the technology assets that Google has, it really is giving customers unprecedented levels of capabilities in the cloud to operate their business in new, more efficient ways. >> So Google is really well known for kind of the analytics piece of the business. Look at all the pieces that have spun out of what Google has done. I'm a networking guy by background. I said when GCP was launched, "Google's network is second to none." Best network. I really understood it when the whole wave of SDN came out. Storage on the other hand, one of those foundational pieces, but it's not the first thing that comes to mind. So give us a little bit of a pedigree of the group, what you're building, what differentiates Google from the other infrastructure as a service and cloud players. >> Yeah, and actually you teed it up beautifully, because one of our big differentiators in storage is actually our ability to leverage the network. So, let me talk you through that a little bit. So Google internally has been building out massive, scalable storage systems for years to power the rest of Google. And as we take those to our enterprise customers we find that we're able to leverage that core infrastructure together with global assets like our network. Two parts of the network actually I talk about. One is our wide area network. That allows us to actually not only store data in regions around the world, but distribute that content through hundreds of points of presence direct to customers very, very quickly. 
Inside of our data centers we have software defined networks that allow us to separate out compute and storage to really help us then scale these independently so that we can give massive flexibility and cost savings and pass that through to our customers. And how this shows up in our products, perhaps the best example is if you take something like Google Cloud Storage, which is our object storage product, that product is very differentiated in the industry in that it provides a single API that will meet use cases from global content serving for customers like Spotify and Vimeo who want to stream media content around the world, streaming news, web, media, videos, all the way through to archival storage. Last year we launched our Coldline storage class, and this is unique in the industry because it is archival storage that's online, and it has the same API and access as all of the rest of Google Cloud Storage. So I can take a single piece of data, a video for example, I could be streaming it out to customers around the world globally, and then after a month or two I might decide that I want to archive it. I can archive that down to our colder storage class, and if a customer wants to set it up again they have instant access to it. >> What we're hearing from customers, and something we heard in the keynotes here at the Veritas show, is customers' cloud strategy is rather fragmented, and by that I mean they're not all in on one particular spot. Certain companies say that. How does that impact your relationship with customers on storage? How do you interact with their SaaS environment, their on premises solutions, as well as what you have inside Google? >> Yeah. I think fundamentally we believe the world is going to evolve to sort of a multi cloud world, and that includes both on premises and public clouds. And as part of that our strategy is to be, be the most open. And by being the most open that means we need to help customers be portable with their workloads. 
We need to help them bring their workloads to the cloud for when that's appropriate, but also if it's appropriate to take it back to say on premises to enable them to do that in a very first class way, as well. And we think what will happen is some customers will go all in on a particular cloud. There will be particular use cases and platform capabilities that will be very differentiated that they want to go all in on, and others will take a more portfolio approach. And then partners, such as Veritas and others, are great for helping customers through their information map helping manage that overall portfolio. >> Could you explain that portability? Is Kubernetes a piece of it? Is that the primary piece of it? And maybe explain a little bit more how Veritas fits in, too. >> Yeah, so the overall ecosystem is evolving. Kubernetes is obviously a huge part of that, that environment, for being able to portably move your compute around. In terms of relationship with Veritas, you know, for me it's all about helping customers solve the problems that they have and meet customers where they are. And if customers are leveraging multiple clouds, either because they use investor breed solutions through acquisitions, etc., they need the ability to be able to manage their data across all of those environments. And someone like Veritas with information map is a key partner for us in helping customers meet and manage their needs. >> So what does that mean for storage? So containers obviously for the application portability, mobility. Kubernetes is sort of Google's little lever. Everybody wants to do Kubernetes and you guys are front and center there. So that gives you credibly in the cloud world, not that you didn't have it before, but everybody now wants to belly up to you on that. What does that mean for storage? Is that just sort of like an ice breaker for you guys? Are there other things that you're doing specific to storage to take advantage of your expertise there? 
>> Yeah, we want to make sure that customers have a really great integrated experience as they build out their application platforms. So we're always working with them to better define and understand their needs and build that out. It is a fast emerging, fast evolving space. APIs are still evolving fast. Different layers of the stack are evolving fast. So we continue to work with customers and just meet their needs through partnerships and also first party platform. >> And as you move up the stack sort of beyond the networking storage and compute into even database, Google has got some amazing database technologies. Are you doing specific things in storage to take advantage of that, making things run faster or more available or recover faster? Can you talk about that a little bit? >> Yeah. The underlying infrastructure at Google powers a lot of our external facing services. So we actually are able to reap very interesting benefits by managing on a single shared TI, technical infrastructure, that we have at Google. But as that surfaces up to customers we have to make sure obviously that they can use it in the ways that best meet their needs. But we want to make sure that we integrate their solutions as easy as possible. So for example, Google Cloud Storage (mumbles) talking about is really well integrated with Dataproc, which is our managed Hadoop product for running big data workloads, and also with something like BigQuery, which is our massively scalable data warehousing solution. So, I can store a lot of my own structured data in Google Cloud Storage and then leverage my entire analytics portfolio to operate over that. And again, a key part of that is the separation of computer networking that we were talking about. When storage is separate from compute and we've used that very powerful software defined network, then that lets us spin up thousands of nodes in something like BigQuery to operate over data and make a very seamless experience for customers. 
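Earlier in the conversation, Nettleton described archiving a globally served object down to the Coldline class while it keeps the same API and access path. That workflow maps onto a bucket lifecycle rule. The sketch below is illustrative, not something from the interview: it follows Cloud Storage's documented lifecycle-configuration schema, and the 30-day threshold is a hypothetical choice.

```python
import json

# Illustrative only: a Cloud Storage lifecycle configuration that moves
# objects to the COLDLINE storage class 30 days after creation. The object
# keeps the same name and access path; only its storage class (and with it
# the price point) changes, which is the "instant access to archives"
# behavior described above.
lifecycle = {
    "rule": [
        {
            "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
            "condition": {"age": 30},  # days since the object was created
        }
    ]
}

# The JSON document below is the shape a lifecycle configuration file takes.
print(json.dumps(lifecycle, indent=2))
```

Because Coldline sits behind the same API as the hotter storage classes, nothing downstream has to change when a rule like this fires; a later read of the archived object simply succeeds against the colder class.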
>> So Stu kind of touched on it before. People talk about Google and Google Cloud, they point to two things. Obviously the Google app suite, okay, boom. We're a customer. We love it. Everybody is familiar with it. And the other is data, the data king. And they kind of put you in those two boxes. Are you comfortable with that? Is that fair? Is that really the brand that you want? Are you trying to extend that? I wonder if you can comment. >> Yeah. Obviously our strengths have been in analytics and machine learning, and we find that customers are really looking for ways to add new value to their business. But we also want to make sure that we're a very trusted provider offering very high levels of service. And it's not just the capabilities but overall TCO. We want to make it much easier for people to develop new applications on the platform. We talked a little bit about some of our open capabilities, but just in general we want to make it easy for customers to get the best value out of their cloud. So you'll see us doing more and more of that. Things we've done have been like being able to create custom VM images. You can dial up your memory and size, which gives you a lot of flexibility to really just hone in and solve the problems that you have. >> So help us square a circle there. When you talk to the cloud, we'll call pure cloud folks, people that, you know, born in the cloud, they developed cloud from day one, no legacy infrastructure. You talk to those guys they're like, "Wow, TCO advantages "from a developer advantage, the speed, etc." When you talk to the legacy enterprise guys they'll tell you, "Oh it's expensive in that cloud. "A lot of people moving back from the cloud." Now of course we know the cloud growth is astronomical. The enterprise growth is flat at best. But those are two exact polar opposites. Which is the truth? >> I mean the truth is it depends on what you need, right? 
We think cloud will be a huge disruptor to IT spend over the next several years, it already is. Wind back five or 10 years ago, I don't think people would even be thinking we'd be having the conversations that we have today. People were like, "Security, "I'm not even sure this cloud thing. "Seems like a shared colo facility to me. "I don't think I want to go near that." And it's taken us awhile collectively as an industry to educate really what the cloud is, that it's actually a much more integrated set of services that helps people up level what it is that they can do. But you know, one of the biggest challenges we still face in the industry is just education, skills. You know, it takes time to learn new skills. It's encouraging developers, working with partners, providing solutions to IT that make it much more turnkey for them to use solutions so they don't have to learn deep developer skills or super high end data science skills to get value out of their data. >> One of the hot button topics at this show has been GDPR. How does Google fit into the discussion? How are you helping customers get ready for that? >> Yeah, well obviously we're very well aware of GDPR and are working really hard to make sure that we're going to be meeting the requirements for our customers as we move forward. We take security and compliance incredibly seriously. So yes, expect us to see see us having full GDPR compliance, and then working with partners to make sure that customers can get the confidence that they need for their business. >> So Dave, as a storage technology guy, what are the big trends that you're tracking as it relates to storage that sort of are driving Google's thinking? >> Yeah, great question. So ... So, you know, more and more data is going to be coming out. Like data has traditionally been siloed. People haven't known where their data is. More and more of that data is now going to be shared within a single environment, and it's not just going to be in the cloud. 
That data is going to reach both onto on premises and also all the way out to the edge. IoT is going to be a huge generator of data. Being able to gather that data, manage that data, provide rich analytics over that data with machine learning and then push that intelligence back out to the edge so that actually data that's produced can just be analyzed right there is going to be super important. I love to say that data is the fuel for analytics and ML, and that fuel is going to be not just in the cloud, on prem, and all the way to the edge and managing that. It's going to be super, super, super interesting. I think network again. Network, once you start to bring low latency networks to your storage you can actually start to do really new and interesting things with your data that you'd never thought of before. If your data, if you can't access it quickly, your data is dark to you. It might as well not be there, right? >> Have things like ... How have things like Flash affected sort of bottlenecks and you mentioned the network. People talk about the network is now the new bottleneck. How is that shaping your thinking? >> Yeah, so storage trends continue, densities get higher, speeds get faster. That's a trend that's been continuing. We've been tracking it, continuing to track it. For me that just means then people will store more data and look to get more value out of that data. Sort of like the latent value of, the latent value of your data is often a function of how quickly you can run machine learning and analytics over that data and get value out of it. And you know we can do things now to analyze data faster than ever before. I was just thinking of an example the other day. I was running a query myself to look at storage usage. It's something I do regularly. And I ran the query and looked at the results. "Oh, that's cool." And then I was like, "Oh, "how many rows of data am I querying here?" And I run that query. 
Oh, that was like several billion rows of data that I just analyzed in like four seconds. I have no idea how much compute power was run up in the background to meet that query, but that's the power that these new capabilities will enable over that data. >> Dave, how are customers doing with ... Kind of the thing I want to poke at is, in their own data centers, utilization is usually abysmal. And the biggest problem we have is when you adopt a new technology you do it the old way. How are they doing at really taking advantage of cloud, getting utilization, utility? I'm sure if they go all serverless and per-microsecond billing it would be much better, but how are they doing? >> Well, so one of the beauties of the cloud is of course that it's a pay as you go model, right? And with storage and compute being disaggregated we see customers can provision storage, pay per gig as they go, and then when they need to run compute they just pay for the compute as they need it. They can shape custom compute instances in GCP, so they only pay for the compute that they need. When they finish they can shut them down. And if you're running something like for example a Hadoop workload where traditionally you were provisioning large amounts of compute and storage, sizing for maximum capacity, you no longer need to think about that anymore. You can just store data super cheaply. When you want to run a large 100, 1,000, 10,000 node Hadoop cluster over that data no problem. You spin it up. It spins up in under a minute. Run huge amounts of compute, shut it down, and you're done. And actually what we're finding is that like this is leading ... People are now having to ask new questions of how they manage cost controls in their business, because this is an incredible power that you can give to businesses, but they also want their controls to say, "Hey yeah, don't do that too often, "or if you do I want to manage it "and manage the cost and controls "for departments inside of organizations." 
So, we're building out the capabilities to help customers with that. >> Last question. Veritas, we're here. What do you look for in a partner like Veritas? What do you want from the Veritas partnership? >> So Veritas is a fantastic partner for us. They really help us do the two things that we strive for, which is meet customers where they are today and help them transform their business for the future. So our integration with NetBackup really helps customers in the enterprise just use existing products that they know and love and in a very turnkey way use the cloud. That helps them manage the costs and meet a lot of demands they have in their IT environments today super easily, so we love that. It also empowers them to do new things in the future. So the integration with information map we love. Helps customers identify new opportunities in their data and add new value to their business. >> Great, Dave Nettleton, Google, we'll leave it there. Thanks very much for coming on theCUBE. >> Thank you very much, been a pleasure. >> Alright, we'll keep it right there, buddy. Stu and I will be back with our next guest. This is Veritas Vision 2017. You're watching theCUBE. (techno music)
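The economics Nettleton sketches in that exchange, storage billed continuously per gigabyte while compute is billed only for the time an ephemeral cluster actually exists, can be illustrated with a toy cost model. Every rate below is a made-up placeholder, not a real cloud price; the point is only the shape of the calculation.

```python
# Toy pay-as-you-go model for disaggregated storage and compute.
# Rates are hypothetical placeholders, NOT real cloud prices.
STORAGE_RATE_PER_GB_MONTH = 0.01   # data stays resident all month
COMPUTE_RATE_PER_NODE_HOUR = 0.05  # billed only while the cluster runs

def monthly_cost(stored_gb, cluster_nodes, cluster_hours):
    """Cost of keeping data resident plus running a cluster for some hours."""
    storage = stored_gb * STORAGE_RATE_PER_GB_MONTH
    compute = cluster_nodes * cluster_hours * COMPUTE_RATE_PER_NODE_HOUR
    return storage + compute

# 100 TB resident for the month; a 1,000-node cluster spun up for a
# 2-hour job and then shut down, versus the same cluster kept on 24x7.
ephemeral = monthly_cost(100_000, 1_000, 2)
always_on = monthly_cost(100_000, 1_000, 24 * 30)

print(f"ephemeral: ${ephemeral:,.2f}")  # -> ephemeral: $1,100.00
print(f"always-on: ${always_on:,.2f}")  # -> always-on: $37,000.00
```

With storage cheap and compute ephemeral, the "sizing for maximum capacity" habit he mentions disappears: the same data supports a burst of thousands of nodes without paying for them between jobs.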
Jyothi Swaroop, Veritas | Veritas Vision 2017
>> Announcer: Live from Las Vegas, it's theCUBE! Covering Veritas Vision 2017. Brought to you by Veritas. >> Welcome back to the Aria in Las Vegas, everybody. This is theCUBE, the leader in live tech coverage. We go out to the events and extract the signal from the noise. We're here at Veritas Vision 2017, #VtasVision. Jyothi Swaroop is here. He's the vice president of product and solutions marketing at Veritas. Jyothi, welcome to theCUBE. Good to see you. >> Thanks, Dave. I'm an officially an alum, now? >> A CUBE alum, absolutely! >> Two times! Three more times, we'll give you a little VIP badge, you know, we give you the smoking jacket, all that kind of stuff. >> Five or six times, you'll be doing the interviews. >> I'm going to be following you guys around, then, for the next three events. >> So, good keynote this morning. >> Jyothi: Thank you. >> Meaty. There was a lot going on. Wasn't just high-level concepts, it was a lot of high-level messaging, but then, here's what we've done behind it. >> No, it's actually the opposite. It's a lot of real products that customers are using. The world forgets that Veritas has only been out of Symantec, what, 20 months? Since we got out, we were kind of quiet the first year. That was because we were figuring our strategy out, investing in innovation and engineering, 'cause that's what Carlyle, our board, wants for us to do is invest in innovation and engineering, and build real products. So we took our time, 18 to 20 months to build these products out, and we launched them. And they're catching on like wildfire in the customer base. >> Jyothi, Bill came on and talked about, he made a lot of changes in the company. Focused it on culture, innovation, something he's want. What brought you? You know, a lot of places you could've gone. Why Veritas, why now? >> Well, Bill is one of the reasons, actually. 
I mean, if you look at his history and what he's done with different companies over the years, and how the journey of IT, as he put it during his keynote, he wants to make that disruption happen again at Veritas. That was one. Two was just the strategy that they had. Veritas has a Switzerland approach to doing business. Look, it's granted that most Fortune 500 or even midmarket customers have some sort of a Cloud project going on. But what intrigued me the most, especially with my background, coming from other larger companies is, Veritas was not looking to tie them down or become a data hoarder, you know what I mean? It's just charge this massive dollar per terabyte and just keep holding them, lock them into a storage or lock them into a cloud technology. But, we were facilitating their journey to whichever cloud they wanted to go. It was refreshing, and I still remember the first interview with Veritas, and they were talking about, "Oh, we want to help move customers' data "into Azure and AWS and Google," and my brain from previous storage vendors is going, "Hang on a minute. "How are you going to make money "if you're just going to move all of this data "to everyone else?" But that's what is right for the customer. >> Okay, so, how are you going to make money? >> Well, it's not just about the destination, right? Cloud's a journey, it's not just a destination. Most customers are asking us, "On average, we adopt three clouds," is what they're telling us. Whether it's public, private, on-prem, on average, they have about three separate clouds. What they say is, "Jyothi, our struggle is to move "an entire virtual business service "from on-prem to the Cloud." And once we've moved it, let's say Cloud A is suddenly expensive or is not working out for them. To get out of that cloud and move it to Cloud B is just so painful. It's going to cost me tons of money, and I lost all of the agility that I was expecting from Cloud A, anyway. 
If you have products like VRP from Veritas, for example, where we could move an entire cloud business service from Cloud A to Cloud B, and guess what. We can move it back onto on-prem on the fly. That's brilliant for the customers. Complete portability. >> Let's see. The portfolio is large. Help us boil it down. How should we think about it at a high level? We only have 20 minutes, so how do we think about that in 15, 20 minutes? >> I'll focus on three tenets. Our 360 data management wheel, if you saw at the keynote, has six tenets. The three tenets I'll focus on today are visibility, portability, and last, but definitely not the least, storage. You want to store it efficiently and cost-effectively. Visibility, most of our customers that are getting on their cloud journey are already in the Cloud, somewhere. They have zero visibility, almost. Like, "What applications should I move into the Cloud? "If I have moved these applications, "are they giving me the right value? "Because I've invested heavily in the Cloud "to move these applications." They don't know. 52% of our customers have dark data. We've surveyed them. All that dark data has now been moved into some cloud. Look, cloud is awesome. We have partnered up with every cloud vendor out there. But if we're not making it easy for customers to identify what is the right data to move to the Cloud, then they lost half the battle even before they moved to the Cloud. That's one. We're giving complete visibility with the Info Map connectors that we just announced earlier on in the keynote. >> That's matching the workload characteristics with the right sort of platform characteristics, is that right? >> Absolutely. You could be a Vmware user, you're only interested in VM-based data that you want to move, and you want role-based access into that data, and you want to protect only that data and back it up into the Cloud. We give you that granularity. It's one thing to provide visibility. 
It's quite another to give them the ability to have policy-driven actions on that data. >> Jyothi, just take us inside the customers for that. Who owns this kind of initiative? The problem in IT, it's very heterogeneous, very siloed. You take that multi-cloud environment, most customers we talk to, if they've got a cloud strategy, the ink's still drying. It's usually because, well, that group needed this, and somebody needed this, and it's very tactical. So, how do I focus on the information? Who drives that kind of need for visibility and manages across all of these environments? >> That's a great question, Stu. I mean, we pondered around the same question for about a year, because we were going both top-down and bottoms-up in the customer's organization, and trying to find where's our sweet spot. What we figured is, it's not a one-strategy thing, especially with the portfolio that we have. 80% of the time, we are talking to the CIOs, we are talking to the CXOs, and we're coming down with their digital transformation strategy or their cloud transformation strategy, they may call it whatever they want. We're coming top-down with our products, because when you talk visibility, a backup admin, he may not jump out of his seat the first thing. "Visibility's not what I care about, "the ease of use of this backup job "is what I care about, day one." But if you talk to the CIO, and I tell him, "I'll give you end-to-end visibility "of your entire infrastructure. "I don't care which cloud you're in." He'll be like, "I'm interested in that, "'cause I may not want to move 40% of this data "that I'm moving to Cloud A today. "I want to keep it back, or just delete it." 'Cause GDPR in Europe gives the citizens the right to delete their data. Doesn't matter which company the data's present in. The citizen can go to that company and say, "You have to delete my data." How will you delete the data if you just don't know where the data is? 
>> It's in 20 places in 15 different databases. Okay, so that's one. You had said there were three areas that you wanted to explore. >> The second one is, again, all about workload data and application portability. Over the years, we had storage lock-ins. I'm not going to name names, but historically, there are lots of storage vendors that tend to lock customers into a particular type of storage, or to the company, and they just get caught up in that stacked refresh every three years, and you just keep doing that over and over again. We're seeing more and more of cloud lock-in start to happen. You start migrating all of this into one cloud service provider, and you get familiar with the tools and widgets that they give you around that data, and then all of a sudden you realize this is not the right fit, or I'm moving too much data into this place and it's costing me a lot more. I want to not do this anymore, I want to move it to another local service provider, for example. It's going to cost you twice as much as it did just to move the data into the Cloud in the first place. With VRP, Veritas Resiliency Platform, we give our customers literally a few mouse clicks, if you watched the demo onstage. Literally, with a few mouse clicks, you identify the data that you want to move, including your virtual machines and your applications, and you move them as a business service, not just as random data. You move it as an entire business service from Cloud A to Cloud B. >> Jyothi, there's still physics involved in this. There's many reasons why with lock-in, you mentioned, kind of familiarity. But if I have a lot of data, moving it takes a lot of time as well as the money. How do we handle that? >> It goes back to the original talk track here about visibility. If you give the customer the right amount of visibility, they know exactly what to move. 
If the customer has 80 petabytes of data in their infrastructure, they don't have to move all 80 petabytes of it, if we are able to tell them, "These are the 10 petabytes that you need to move, "based on what Information Map is telling you." They'll only move those 10 petabytes, so the workload comes down drastically, because they're able to visualize what they need to move. >> Stu: Third piece of storage? >> Third piece of storage. A lot of people don't know this, but Veritas was the first vendor that launched the software-defined storage solution. Back in the VOS days, Veritas, Oracle, and Sun Microsystems, we had the first file system that would be this paper over rocks, if you will, that was just a software layer. It would work with literally SAN/DAS, anything that's out there in the market, it would just be that file system that would work. And we've kept that DNA in our engineering team. Like, for example, Abhijit, who leads up our engineering, he wrote the first cluster file system. We are extending that beyond just a file system. We're going file, block, and object, just as any other storage vendor would. We are certifying on various commodity hardware, so the customers can choose the hardware of their choice. And not just that. The one thing we're doing very differently, though, is embedding intelligence close to the metadata. The reason we can do that is, unlike some of the classic storage vendors, we wrote the storage from the ground up. We wrote the code from the ground up. We could extract, if you look at an object, it has object data and metadata. So, metadata standard, it's about this long, right? It's got all these characters in it. It's hard to make sense of it unless you buy another tool to read that object and digest it for the customer. But what if you embed intelligence next to the metadata, so storage is not dumb anymore? It's intelligent, so you avoid the number of layers before you actually get to a BI product. 
I'll just give you a quick example in healthcare. We're all wearing Apple Watches and FitBits. The data is getting streamed into some object store, whether it's in the Cloud or on-prem. Billions of objects are getting stored even right now, with all the Apple Watches and FitBits out there. What if the storage could predictively, using machine learning and intelligence, tell you predictively you might be experiencing a stroke right on your watch, because your heartbeats are X and your pulse is Y? Combining all of the data and your history, based on the last month or last three months, I can tell you, "Jyothi, you should probably go see the doctor "or do something about it." So that's predictive, and it can happen at the storage layer. It doesn't have to be this other superficial intelligence layer that you paid millions of dollars for. >> So that analytic capability is really a feature of your platform, right? I mean, others, Stu, have tried it, and they tried to make it the product, and it really isn't a product, it's a byproduct. And so, is that something I could buy today? Is that something that's sort of roadmap, or, what's the reaction been from customers? >> The reaction has been great, both customers and analysts have just loved where we're going with this. Obviously, we have two products that are on the truck today, which are InfoScale and Access. InfoScale is a block-based product and Access is a file-based product. We also have HyperScale, which was designed specifically for modern workloads, containers, and OpenStack. That has its own roadmap. You know how OpenStack and containers work. We have to think like a developer for those products. Those are the products that are on the truck today. What you'll see announced tomorrow, I hope I'm not giving away too much, because Mike already announced it, is Veritas Cloud Storage. That's going to be announced tomorrow, and we're going to go deep into that. 
Veritas Cloud Storage will be this on-prem, object-based storage which will eventually become a platform that will also support file and block. It's just one single, software-defined, highly-intelligent storage system for all use cases. Throw whatever data you want at it. >> And the line on Veritas, the billboards, no hardware agenda. Ironic where that came from. Sometimes you'll announce appliances. What is that all about, and when do you decide to do that? >> Great question. You know, it's all about choice. It's the cliched thing to say, I know, but Veritas, most people don't know this, has a heavy channel revenue element to what we do. We love our partners and channel. Now, if you go to the channel that's catering to midmarket customers, or SMBs, they just want the easy button to storage. Their agility, I don't have five people sitting around trying to piece all of this together with your software and Seagate's hardware and whatever else, and piece this together. I just want a box, a pizza box that I can put in my infrastructure, turn it on, and it just works, and I call Veritas if something goes wrong. I don't call three different people. This is for those people. Those customers that just want the easy button to storage or easy button to back up. >> To follow up on the flip side, when you're only selling software, the knock on software of course is, I want it to be fast, I want it to be simple, I need to be agile. How come Veritas can deliver these kinds of solutions and not be behind all the people that have all the hardware and it's all fully baked-in to start with? >> Well, that's because we've written these from the ground up. When you write software code from the ground up, I mean, I'm an engineer, and I know how hard it is to take a piece of legacy code that's baked in for 10, 20 years. It's almost like adding lipstick, right? 
It just doesn't work, especially in today's cloud-first world, where people are in the DevOps situation, where apps are being delivered in five, 10, 15 minutes. Every day, my app almost gets updated on the phone every day? That just doesn't work. We wrote these systems from the ground up to be able to easily be placed onto any hardware possible. Now, again, I won't mention the vendor, but in my previous lives, there were a lot of hardware boxes and the software was written specifically for those hardware configurations. When they tried to software-define it forcefully, it became a huge challenge, 'cause it was never designed to do that. Whereas at Veritas, we write the software layer first. We test it on multiple hardware systems, and we keep fine-tuning it. Our ideal situation is to sell the software, and if the customer wants the hardware, we'll ship them the box. >> One of the things that struck me in the keynote this morning was what I'll call your compatibility matrix. Whether it was cloud, somebody's data store, that really is your focus, and that is a differentiator, I think. Knocking those down so you can, basically, it's a TAM expansion strategy. >> Oh, yeah, absolutely. I mean, TAM expansion strategy, as well as helping the customer choose what's best for them. We're not limiting their choices. We're literally saying, we go from the box and dropboxes of the world all the way to Dell EMC, even, with Info Map, for example. We'll cover end-to-end spectrum because we don't have a dollar-per-terabyte or dollar-per-petabyte agenda to store this data within our own cloud situation. >> All right, Jyothi, we got to leave it there. Thanks very much for coming back on theCUBE. It's good to see you again. >> Jyothi: No, it's great to be here. >> All right, keep it right there, everybody. We'll be back with our next guest. We're live from Veritas Vision 2017. This is theCUBE. (fast electronic music)
A.J. Wineski, Shazam ITS, Inc. & Matt Waxman, Dell EMC Data Protection - Dell World 2017
>> Voiceover: Live from Las Vegas, it's theCUBE. Covering Dell EMC World 2017. Brought to you by Dell EMC. >> Okay, welcome back, everyone. We are live here in Las Vegas for Dell EMC World 2017. theCUBE's 8th year of coverage of what was once EMC World, now it's Dell EMC World. The first official show of the combination of the two companies. I'm John Furrier with SiliconANGLE. My cohost this week for three days of wall-to-wall coverage, Paul Gillin. And our next guests are Matt Waxman, Vice President of Product Management, Dell EMC Data Protection, and A.J. Wineski, who's the UNIX and Microsoft Technologies Manager at Shazam ITS. Welcome to theCUBE, good to see you guys. >> Thanks for having us. >> Thank you. >> So data protection on stage, it's hot. I mean, it is the hottest category, both on the startup side but also customers, as they go to the cloud, are rethinking the four-wall strategy of data management, data protection. Why is, is it the cloud? What's the, why is it so hot? >> Yeah, I think you nailed it. It is very hot. It's, backup is not boring. I think customers like A.J. are talking about simplifying, automating, getting to the cloud, and so we oughtta modernize data protection. Our announcements this week were all about how we're doing that. We had a great announcement around a new appliance that's a turnkey solution, out of the box, time to value less than three hours. That's the agility that customers are really looking for. And of course our cloud data protection's evolved a lot. Great new use cases, disaster recovery now for the cloud, great use case. >> Matt, A.J., I want to get your thoughts in a second, but Matt, first talk about the dynamics that the customers are facing right now, because really there's two worlds that exist now, pure cloud native, born in the cloud. Completely different paradigm for backup and recovery, data protection, all on this scheme that has to be architected. 
And then companies that are moving quickly that had a Data Domain, had pre-existing apps that have been doing great, but now have to be architected for that cloud, hybrid cloud. Those are the two hot areas. Can you just break that down real quick? >> Yeah, yeah, you know, I think you have a good framework there. Right, there are customers who will go through a re-platforming, and think about how they can move their application and its existing ecosystem into the cloud. That's where we've seen a lot of traction. We would call that "lift and shift." You know, move the application as is. And then this cloud native space is really different. It's developer-centric. It's thinking about "How do you cater to "the application developer who wants to build "off of a modern tool-set?" And there it's all about microservices, it's API-driven. You know, it's a-- - [John] Programmable infrastructure. >> Absolutely. >> John: Programmable backup. >> Exactly, right? That's what makes a text-- >> Alright, A.J., the proof is in the pudding, when you sit there and you look at that scenario, programmable, being agile, automations all coming down the pike, what's it look like for you?
>> On other products; what I hired them for in the beginning, and now since that's happened, I'm able to use a lot more of those resources for the projects we should be using them for. We don't have to worry about backups like we used to. I don't have to worry at night, "Did it back up? "Did it not? Did my essential databases "get backed up to tape?" I don't have to worry about that anymore, it's done automatically. >> What was that transition like for you? Going from tape to cloud? >> Painful. It was because we were having to move everything that was on tape on to ECS. Takes a while to redo that. Finally we decided at one point that after this period, no longer are we going to be writing to tape, we're going to write everything to ECS. Just became too painful. So once that transition was done, once we made a decision that we were no longer going to tape, it was easy. >> How about the cost? I mean, you now have an operational cost instead of a capital cost in your backup equipment. Over the long-term, is this a better, a lower cost happen for you? >> Oh, much better. We're saving $350,000 a year just in backups. And over the five-year TCO of that product, it's $2.7 million that we are saving over five years for that product alone. We're a small non-profit organization that we can then, in turn, turn around and give our customers some of that money back because we're not having to charge them so much for some of the backups that we have to do. >> Matt, talk about the dynamic, you mentioned developers. This comes back down to the developer angle because, just a scenario, data is becoming the life-blood for developers, and providing that data available in that kind of infrastructure's code way, or data as code, as we say, the DataOps world, if there is one yet. But I'm a developer, okay, I want the data from the application, from an hour ago, not two weeks ago, or those backup windows used to be a hindrance to that agility. >> Yeah, yeah. 
>> How is that progressing, and where is that going in terms of making that totally developer-centric infrastructure? >> Yeah, I mean, I'd answer that on two fronts. I think there's the cloud-native view of that where, you know, what those developers are looking for is inherent protection. They don't want to have to worry about it. Regardless of their app framework, regardless of the size of their app. But at the same time you also have database sizes that are growing so dramatically. I mean, when I was here even two years ago, I remember talking to customers who had databases that were over a hundred terabytes was like, 1 out of 10. Now I talk to 6 out of 10, hundred, two hundred terabyte infrastructures. At a certain point you can't back up anymore. And you have to go to the more transformative-- >> And the time alone, the time is killer too. >> Absolutely, absolutely. And so customers are replicating, and how do you put the same sort of controls around replication to get the levels of data protection that you expect? >> Well we're in a world where people are, customers are collecting everything now, they're saving everything. And they don't have to save everything necessarily. They don't find out until they start to use it. Is data protection becoming more of a service, a filtering service also, of how you, of what data you really need to back up? >> Yeah, I think that gets into the whole notion of data management. And that whole space is, "How can you "leverage the information out of the data, "as opposed to just managing the infrastructure?" And through automation, we're going to enable our customers to get there. Automate the infrastructure to the point where it's completely turnkey. Set a policy, set an SLA, and go. And at that point, you're managing the metadata. Analytics become really important. We've got a really cool new offering called Enterprise Copy Data Analytics. It's a SAS-based solution. 
Literally log on to our website and you enter your serial number, you're off and running. Analytics, predictive recommendations, based off of machine learning. That, to me, is the transition-- >> Is that managing your copies, you mean? >> That will give you visibility into your copies, that will give you visibility into your protection levels, and it'll actually score you so you have a very simple way to understand where you're weak, where you're not. >> So this is A.J.'s point about staff efficiency. You have that machine learning, like an automated way, what used to be crawling through log data, looking at stuff, pushing buttons, and provisioning (laughs). I mean, do you see that impact on your end? >> Oh, it's huge on our end. Because in the past, our database administrators would have to write something, and if a developer needed a backup copy of that database, it took potentially days, if not weeks, depending upon the size of that, to get it from tape. Or to go back to the old tape set to do that. Now, with ECS and DD Boost, it's instantaneous. They can restore that instantaneously to where the developers need it. It's a tremendous, tremendous savings for us. >> Some recent research I've seen says that there's still a sizable minority of customers who are concerned about the private security and the integrity of their data in the cloud. Does that, is that an issue for you? >> It is. We're heavily regulated through different regulations 'cause we're in the financial services industry, so we have PCI compliance, we have FFIEC compliance, SOC compliance. That's huge. And making sure that that data is protected at all times, is encrypted from end to end, is encrypted in transmission. Those are all things that the Dell EMC Suites give us. >> Talk about your data environment, because the data industry's growing, and I remember calling up Dave Velante years ago in 2010, 2011. 
The companies that were selling data stuff weren't really data companies, they were selling software. And a lot of the innovation came from, we call "data full" companies. They actually had a ton of data to deal with. They had the data lakes piling up. And they had to figure it out along the way. You guys have a lot of data. >> A.J.: We do. >> Can you insight into how big the data size coming in, because Tier 2 data is very valuable. You have data lakes going to be more intelligent, and that comes another factor into the architectural question. >> Yeah, we, the amount of data we collect is enormous, and we're just starting to get into the analytics of that and how can we use that data to better serve our customers, and how can we better advertise and pull our customers in to us to provide those services for us. The data, I mean, we're doing over 90 million transactions a month is what we're coming through our system. And-- >> John: So you're data full. You're full of data. >> Oh yeah, we're full of data. (laughs) And so there's just a tremendous amount of stuff that comes through us, and that data used for analytics is very powerful for us to be able to turn around and provide services to our customers. >> Matt, talk about the dynamic of, as you get into more analytics, this brings up where the data world's going, and this where kind of the data protection question is. Okay, all this data's coming in, you got some automation in there, you got some wrangling, you got some automation stuff now, analytics surfaces the citizen analysts now decided to start poking and touching the data. Okay, so now policy's the-- how do you back that up? So you have now multiple touch points on the data. Does that impact the data protection scheme and architecture? >> Yeah, I think it does. You know, fundamentally there's going to be a shift from the traditional backup admin role. And not just managing the policy, but also managing the data itself. 
To a role that's more centric around managing the policy. And compliance against it. As you go to decentralized environments and centers of data as opposed to data centers, you need to rethink the whole model and-- >> John: Data center. Data. Center. >> Exactly. >> John: Not server center. >> Right. >> It's the data center. (laughs) >> Paul: As you look-- >> And data's got mass, right, so it doesn't move very easily. >> As you move to a more distributed model in an "Internet of things" type of environment, how will that affect data protection? You have to re-architect your service? >> We have been on a journey to transform data protection. We last year talked about some new offerings in that space with our Copy Data Management and Analytics solution. And that's really oriented towards that decentralized model. It's a different approach. It's not your traditional combine-your-data-path- and-your-control-path, it's truly a decentralized distributed model. >> Paul and I were talking on the intro today with Peter Burris, our head of research at Wikibon, and we know about the business value of data, and not to spare you the abstract conversation we had, we were talking about how the valuation of companies is based on the data that they have, and data under-management might be a term that we're fleshing out, but the question specifically comes back down to the protection and security of the data. I mean, you look at the market capitalization of Yahoo on that hack that they had, I think you mentioned the Yahoo hack, really killed the value of the company. So the data will become instrumental in the valuation, so if that's the case, if you believe that, then you got to believe that the protection is going to be super important, and that there's going to be real emphasis on data management policies and also the value of that data. You guys talk about that in your world? Do you guys think about that holistically, and can you share some insight into that conversation?
>> Yeah, I mean, I think that comes back to your very first point about "data protection is hot." It's hot because there are a lot more threats out there, and of course there's that blurry line a little bit between security and data protection sometimes, but absolutely, if you look at regulations, if you look at things like GDPR in the EU, this is going to drive an increased focus on data protection. And that's where we're focusing. >> John: And IoT doesn't make this thing any easier. >> Absolutely not. >> John: (laughs) He shook his head like, "Yeah, I know." ATMs will be devices, wearables will be using analytics to share security data and movement data of people. >> Yeah. And so, for us, security is one of the top priorities, it has to be. You look at what's happened with Target and Sony and Yahoo and all the other breaches. That keeps me up at night. And being sure that, >> John: I can imagine. >> being sure that we have a stable backup is integral to our system, especially with some of the recent ransomware threats and things like that. >> Paul: Yeah, going to ask you about that. >> That's scary stuff. And one way to be sure that you are protected from that is being sure that you have, number one, a good security system, but number two, you have a good backup. >> Over half of companies now have been hit by ransomware. Is there a service, a type of service that you have specifically for companies that are worried about that? >> Yeah, we have, I think A.J. said it very well, it's a layered approach. You have to have security, you have to have backups. We have a solution called Isolated Recovery, which is all about helping our customers create a vaulted, air-gap solution as the next level of protection. And some of the largest firms out there are leveraging it today to do exactly that. It's your data. You got to get it off prem, you got to get it into a vaulted area, you got to get it off the network.
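The layered idea discussed here, a good backup plus a way to know ransomware hasn't silently encrypted it, can be sketched minimally. This is not Dell EMC's Isolated Recovery product; the function names and the hash-check approach below are an invented illustration of one piece of the pattern: record a content fingerprint when the vaulted copy is taken, then verify the copy against it before relying on it.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Content hash recorded at the moment the vaulted copy is made."""
    return hashlib.sha256(data).hexdigest()


def backup_is_intact(copy: bytes, recorded: str) -> bool:
    """True only if the copy still matches its recorded fingerprint."""
    return fingerprint(copy) == recorded


original = b"policy records, May 2017"
recorded = fingerprint(original)

# An untouched vaulted copy verifies...
assert backup_is_intact(b"policy records, May 2017", recorded)

# ...while a copy scrambled by ransomware does not.
assert not backup_is_intact(b"\x00scrambled\x00", recorded)
```

The air gap itself, keeping the vaulted copy off the network, is what stops an attacker from rewriting both the data and the recorded fingerprint together.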
>> Matt, A.J., thanks so much for sharing the insight on the data protection, great customer reference, great testimonial there on the products. Congratulations. Final question. Your take on the show, it's the first year, big story is Dell EMC World, as a customer are you kind of like, "Mmm, good, it's looking good off the tee, "middle of the fairway, you know?" >> No, I'm impressed. I was really kind of skeptical coming in last year when it was announced and "What is this going to mean?" and things like that, and just seeing this year the integration of all the technologies with VMware and the Dell desktops, laptops, the server line, the VxRail, VxRack, and all the other suites that Dell EMC products offer, it's refreshing to me as a customer knowing that now I have that one call for just about anything in the IT world. >> As they say in IT, "one throat to choke, "single pane of glass." We're kind of going back down, congratulations on the solution. >> Matt: Thanks very much. >> Data protection, data center, they call it for a reason, the data center, you got to protect it. It's theCUBE, bringing you all the data here from Dell EMC World 2017, I'm John Furrier with Paul Gillin with SiliconANGLE Media. We'll be right back with more, stay with us. (upbeat tech music)
Paul Scott-Murphy, WANdisco - Google Next 2017 - #GoogleNext17 - #theCUBE
>> Narrator: You are Cube Alumni. Live from Silicon Valley, it's the Cube. Covering Google Cloud Next 17. >> Welcome back to the Cube's coverage of Google Next 2017. Having a lot of conversations as to how enterprises are really grappling with cloud. You know, move from on premises to public cloud, multi-cloud, hybrid-cloud, all those pieces in between. Happy to welcome to the program a first time guest, Paul Scott-Murphy who's the vice president of product management at WANdisco, thanks so much for joining us. >> Yeah, thanks very much, it's great to be here and join your program. >> Alright, so you know, Paul, I think a lot of our audience probably is familiar with WANdisco, we've had many of your executives on, really dug into your environment for the last few years, usually see you guys a lot of not only the big data shows, we've got Strata coming up next week, last time I did an interview with you guys was at AWS re:Invent. So you know, WAN, replication, data, all those things put together, you've got a big bucket of big data in cloud. Tell us a little bit about kind of your background, your role at the company. >> Okay. So I've been at WANdisco now for about two and a half years. I previously worked for TIBCO Software for a decade. Working out of Asia-Pacific, held the CTO role there for APJ. And joined WANdisco two and half years ago, just as we were entering into the big data market with our replication capabilities. I now run product management for the company and work out of our headquarters here in the Bay area. >> Stu Miniman: Great. And connect with us you know, what you guys are doing at Google, what's the conversations you're having with customers that are attending. >> Yeah, so Google is definitely one of the key strategic partners for WANdisco, obviously particularly in the Cloud space for us. 
We're hosting a booth here for the conference and using that as an opportunity to speak to other vendors and the customers that we have attending the Google conference. Particularly around what we're doing for replication between on premises and cloud environments, and how we support Google Cloud Dataproc and Google Cloud Storage as well. >> Can you help unpack for us a little bit, where are your customers, give us a sense of the customers, you know they're saying hey, I want to start using this cloud stuff, how are they figuring out what applications stay on premises, what goes to the public cloud, and that data piece is a challenging thing, moving data is not easy, there's a whole data gravity piece that fits into it, maybe you can help walk us through some of the scenarios. >> Yeah, as we're progressing the technology, we're certainly finding a broader and broader range of customers getting interested in what they can do around data replication. The sorts of organizations that we deal with primarily are those who are looking to leverage both on premises and cloud infrastructure. All those who are moving from a situation where they've been toying with these environments and moving into production-ready scenarios where the demands of enterprise-level SLAs or availability, or the needs around disaster recovery, backup and migration use cases become a lot more dominant for them. The organizations that we work with typically are larger organizations; we deal a lot with retail, with financial services, telecommunications, with research institutions as well. All of whom have larger needs around taking advantage of cloud infrastructure.
Of course they all share the same challenge: the availability of their data, where it's sourced from, isn't always necessarily in the cloud. Taking advantage of cloud infrastructure then requires them to think about how they make their information available both to their on premises systems and to the cloud environment where they can run perhaps larger analytic workloads against it, or use the cloud services that they would otherwise not have access to. >> One of the challenges we've seen is when we've got kind of that hybrid or multi-cloud environment, you know, manages my data, kind of the holes, you know, orchestrating pieces and getting my arms around how I take care of it and leverage it can be challenging. Is that something you guys help with or are there other partners that get involved, how are customers helping to sort out and mature these environments? >> Yeah it's a big question of course, you've touched on the management of data as a whole and what that means, and how organizations handle that. WANdisco's role in supporting organizations with those challenges is in ensuring that when they need to take advantage of more than one environment or when they need their data to be available in more than one place, they can do that seamlessly and easily. What we purport to do and what we encourage our customers to do with our technology is rather than keeping one copy of data on premises and using it solely there, or copying your data to another location in order that you can act upon it there, we treat those environments as the same and say well, have the best of both worlds. Have your data available in each location, let your applications use it at local speed and do that without regard to the need for retaining a workflow by which you exchange data between environments. WANdisco's technology can take care of all of that, and to do so it has to do some very smart things under the covers, around consistency and making it work across wide-area networks.
Makes it particularly suited to cloud environments where we can leverage those underlying capabilities in conjunction with the scale of the cloud, which is a native home for data at scale. >> Can you give us some, you know, where do you see customers kind of in this maturation, Dan Green made a statement that today 5% of the data is in the public cloud, so what are some of those barriers that are stopping people from getting more data in the cloud, is it something that we will just see a massive adoption of data in the cloud, or what's your guys' viewpoint as to where data's going to live, how that movement is happening. >> Yeah, I think longer term the economic advantages of using cloud environments are undeniable. The cost advantages of hosting information in the cloud and the benefits that come from the scalability of those environments are certainly far surpassing the capabilities that organizations can invest in themselves through their own data centers. So that natural migration of data to the cloud is a common theme that we see across all sorts of organizations. But as many people say, data has gravity, and if the majority of your application information resides today in your own environments or in environments outside of the cloud, whether that's internet connected devices, or in points of ingest that reside outside of cloud environments, there's a natural tendency for data to remain in place where it's either ingested or created. What you need to do to better take advantage of cloud environments then is the ability to easily access that data from cloud infrastructure. So the sorts of organizations that are looking to that are those with either burgeoning problems around consuming data at multiple points. They might operate environments that span multiple continents. They might have jurisdictional restrictions around where their data can reside but need to control its flow between separate environments as well.
So WANdisco can certainly help with all of those problems, the underlying replication technology that we bring to bear is very well suited to it. But we are a part of the overall solution. We're not the full answer to everything. We certainly deal very well with replication and we believe we cover that very well. >> I'm curious, when you talk about kind of the dispersion of data and where it's being created, of course edge-use cases for things like IoT are quite a hot topic at the moment. Is that something you guys are touching on yet, gets involved in discussions, you know, where does that sit? >> Yeah, definitely. The interesting thing about WANdisco's approach to data replication is that we base it on this foundation of consistency. And using a mathematically proven approach to distributed consensus to guarantee that changes made in one environment are represented in others equally, regardless of where those changes occur. Now whether you apply that to batch-based data storage, streaming environments, or other forms of ingest is relatively irrelevant, as long as you have that same underlying capability to guarantee consistency regardless of where changes occur. If you're talking about IoT environments where you naturally have infrastructure sitting outside of the cloud, and this is the type of infrastructure that needs to reside out of the cloud, right, your edge points where data are captured, where you're consuming information or generating it from devices, perhaps from an automotive vehicle or from an embedded device, some sort of sensor array, whatever that happens to be, these are the types of environments where it means you're generating data outside of the cloud. So if you're looking to use that inside of the cloud itself, you need some way of moving data around, and you need to do that with some degree of consistency between those environments to make sure you're not just challenged with extra copies of information.
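The consensus-based replication described here can be sketched loosely. WANdisco's actual engine (a proprietary, Paxos-derived distributed coordination engine) is far more involved, so everything below is an invented illustration of just the principle Paul states: replicas converge because they all apply the same agreed-upon order of operations, no matter where each change originated. The class names and the single in-process coordinator standing in for real consensus are assumptions for the sketch.

```python
class Replica:
    """One copy of the data set, e.g. an on-premises cluster or a cloud bucket."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def apply(self, op):
        """Apply one agreed-upon change (here, a key/value write)."""
        key, value = op
        self.store[key] = value


class CoordinatedLog:
    """Toy stand-in for distributed consensus: it fixes a single global
    order of operations and delivers that order to every replica."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.log = []

    def propose(self, op):
        # In a real system, consensus (e.g. Paxos) decides this operation's
        # position in the global order before any replica applies it.
        self.log.append(op)
        for replica in self.replicas:
            replica.apply(op)


on_prem = Replica("on-premises")
cloud = Replica("cloud")
log = CoordinatedLog([on_prem, cloud])

# Changes can originate in either environment...
log.propose(("sensor/42", "reading-a"))  # captured at the edge
log.propose(("model/v1", "weights"))     # produced in the cloud

# ...yet every replica converges to the same state.
assert on_prem.store == cloud.store
```

A real engine must also handle concurrent proposals, node failures, and wide-area latency, the hard parts consensus exists to solve, but the convergence guarantee it provides is the one this toy models.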
>> The other really interesting topic around data that's being discussed at the Google Cloud event is artificial intelligence, machine learning, I'm curious, are your customers involved in that, where do you see that kind of on the radar today? >> Yeah, it's obviously an absolutely critical part of where the IT industry in general is going, and the type of solution that's fed off data. These systems are better as your data set grows. The more information you have, the better they work, and the more capable they become. It's certainly an aspect of how well machine learning techniques and artificial intelligence approaches have been adopted in the industry, and the rapid rate of change in that side of IT is driving a lot of the demand for increasing access to data sets. We see some of our customers using that for really interesting things. You might've seen some of the recent news around our involvement in a research project led through the University of Sheffield, looking to use data sets captured from a variety of research institutions and medical environments to solve the problem of identifying and responding to dementia. And it's a great outcome from that type of environment, through which machine learning techniques are being applied across data sets. What you find though is that because there's a large set of institutions sharing access to data, no single data set is sufficient to support those outcomes, regardless of what intelligence you can place against the machine learning models that you build up. So by enabling the ability to bring those data sets together, have them available in a single location, being the cloud, where larger models can be assessed against the data sets, means much better outcomes for those types of environments. >> Okay.
Paul, in your role of product management, we've been through some of the hot buzz terms out there, how do you help the company identify those trends, focus on the ones that are important to your customers, and the kind of feedback loops that you get from them. >> I guess a lot of work, in the end, is how we do it, but we need to listen to customers directly of course, understand what they're looking to do with their information systems. What they're aiming for. Their goals at a business level, what type of value that they want to get out of their data, and how they're approaching that. That's really critical. We also need to look to the industry in general. We're obviously in a very rapidly changing environment where technologies, the organizations that build IT systems, are increasingly adopting new approaches and building systems that simply weren't available just days ago. You look at the announcements from Google of late around their video intelligence APIs as a service, their image APIs as well, all new capabilities that organizations today now have access to. So bringing those things together, understanding where the general IT trends are, how that applies to our customers, and what WANdisco can do with the unique value that we bring is really key to the product management role. >> Alright, and Paul, you've been at the show, curious, any cool things you saw, interesting customer conversations that might give our audience a flavor of what's going on, why 10 thousand people are excited to be at the event. >> Yeah well it is a very exciting event, just the scale of these types of events run by Google and similar organizations is something in itself to behold. We're really excited to be a part of that. The things that are really interesting for me out of the show tend to be where we see customers or opportunities coming to us, identifying challenges that they can't address without the type of technology that we bring to bear.
Those tend to be areas where either they're looking to do migration from on premises systems into the cloud which is obviously very strong interest for Google themselves, they need to bring customers in to take better advantage of the services that they have. WANdisco can play a strong role in that. We're seeing a lot of interesting things around the edge too, so all of the ways in which data can be used are always exciting and interesting to see. The combination of technologies like artificial intelligence, like virtual reality, the type of work that WANdisco does also, is certainly going to bring forward I think a new wave of applications and systems that we just hadn't considered even a few years ago. >> Yeah. Lots of really interesting things. There's personal assistants at home and personal assistants that are listening. Okay Google, subscribe to SiliconANGLE on Youtube. We'll be back with lots more coverage here from the Cube, talking about Google Next 2017. You're watching the Cube.