Juan Loaiza, Oracle | Building the Mission Critical Supercloud


 

(upbeat music) >> Welcome back to Supercloud 2, where we're gathering a number of industry luminaries to discuss the future of cloud services. We'll be focusing on various real-world practitioners today, their challenges and their opportunities, with an emphasis on data, self-service infrastructure, and how organizations are evolving their data and cloud strategies to prepare for that next era of digital innovation. And we really believe that support for multiple cloud estates is a first step of any Supercloud. In that regard, Oracle surprised some folks with its Azure collaboration for the Oracle database and Exadata database services. To discuss the challenges of developing a mission-critical Supercloud, we welcome Juan Loaiza, who's the executive vice president of Mission Critical Database Technologies at Oracle. Juan, you're a many-time CUBE alum, so welcome back to the show. Great to see you. >> Great to see you, and happy to be here with you. >> Yeah, thank you. So a lot of people felt that Oracle was resistant to multicloud strategies and preferred to really have everything run just on the Oracle cloud infrastructure, OCI. Maybe that was a misperception, maybe you guys were misunderstood, or maybe you had a change of heart. Take us through the decision to support multiple cloud platforms. >> No, we've supported multiple cloud platforms for many years, so I think that was probably a misperception. Oracle database, we partnered up with Amazon very early on in their cloud, when they had kind of the first cloud out there. And we had Oracle database running on their cloud. We have backup, we have a lot of stuff running. So, yeah, part of the philosophy of Oracle has always been that we partner with every platform. We're very open; we started with SQL and APIs. As we develop new technologies, we push them into the SQL standard. So that's always been part of the ecosystem at Oracle. That's how we think we get an advantage, by being more open.
I think if we try to create this isolated little world, it actually hurts us and hurts customers. So for us it's a win-win to be open across the clouds. >> So Supercloud is this concept that we put forth to describe a platform, or some people think it's an architecture; if you have an opinion, I'd love to hear it. But it provides a programmatically consistent set of services that's hosted on heterogeneous cloud providers. And so we look at the Oracle database service for Azure as fitting within this definition. In your view, is this accurate? >> Yeah, I would broaden it. I'd say it's a little bit more than that. We just think that services should be available from everywhere, right? I mean, it's a little bit like if you go back to the pre-internet world, there were things like AOL and CompuServe, and those were kind of islands. If you were on AOL, you really didn't have access to anything on CompuServe, and vice versa. And the cloud world has evolved a little bit like that. We just think that's the wrong model. It shouldn't be that way; these clouds are part of the world, and they need to be interconnected like all the rest of the world. It's been that way a long time with telephones, internet, everything; everything's interconnected. Everything should work seamlessly together. So that's what we believe: if you're running, let's say, an application in one cloud and you want to use a service from another cloud, it should be completely simple to do that. It shouldn't be, I can only use what's in AOL or CompuServe or whatever else. It should not be isolated. >> Well, we've got a long way to go before that Nirvana exists, but one example is the Oracle database service with Azure. So what exactly does that service provide? I'm interested in how consistent the service experience is across clouds. Did you create a purpose-built PaaS layer to achieve this common experience? Or is it off-the-shelf Terraform? Is there unique value in the PaaS layer?
Let's dig into some of those questions. I know I just threw six at you. >> Yeah, I mean, what we're trying to do is very simple. Which is, for example, starting with the Oracle database, we want to make that seamless to use from anywhere you're running. Whether it's on-prem, on some other cloud, anywhere else, you should be able to seamlessly use the Oracle database, and it should look like the internet. There's no friction. There's not a lot of hoops you've got to jump through just because you're trying to use a database that isn't local to you. So it's pretty straightforward. And in terms of things like Azure, it's not easy to do, because all these clouds have a lot of kind of very unique technologies. So what we've done at Oracle is we've said, "Okay, we're going to make Oracle database look exactly like if it was running on Azure." That means we'll use the Azure security systems, the identity management systems, the networking; there's things like monitoring and management. So we'll push all these technologies. For example, when we have a monitoring event or we have alerts, we'll push those into the Azure console. So as a user, it looks to you exactly as if that Oracle database was running inside Azure. Also, the networking is a big challenge across these clouds. So we've basically made that whole thing seamless. We create a super high bandwidth network between Azure and Oracle. We make sure that's extremely low latency, under two milliseconds round trip. It's all within the local metro region. So it's very fast, very high bandwidth, very low latency. And we take care of establishing the links and making sure that it's secure and all that kind of stuff. So at a high level, it looks to you like the database is local; even the look and feel of the screens.
It's the Azure colors, it's the Azure buttons, it's the Azure layout of the screens, so it looks like you're running there, and we take care of all the technical details underlying that, of which there are a lot, and which has taken a lot of work to make it work seamlessly. >> And the magic of that abstraction, Juan, does it happen at the PaaS layer? Could you take us inside that a little bit? Is there intelligence in there that helps you deal with latency, or are there any kind of purpose-built functions for this service? >> You could think of it as... I mean, it happens at a lot of different layers. It happens at the identity management layer, it happens at the networking layer, it happens at the database layer, it happens at the monitoring layer, at the management layer. So all those things have been integrated. It's not one thing that you just go and do. You have to integrate all these different services together. You can access files in Azure from the Oracle database. Again, that's completely seamless. It's just like if it was local to our cloud; you get your Azure files in your kind of S3 equivalent. So yeah, it's not one thing. There's a whole lot of pieces to the ecosystem. And what we've done is we've worked on each piece separately to make sure that it's completely seamless and transparent, so you don't have to think about it, it just works. >> So you kind of answered my next question, which is about the technical hurdles. It sounds like the technical hurdle is that integration across the entire stack. That's the sort of architecture that you've built. What was the catalyst for this service? >> Yeah, the catalyst is just fulfilling our vision of an open cloud world. Like I said, Oracle, from the very beginning, has believed in open standards. Customers should be able to have choice; customers should be able to use whatever they want from wherever they want.
And we saw that, you know, in the new world of cloud, that had broken down. Everybody had their own authentication system, management system, monitoring system, networking system, configuration system. And it became very difficult. There was a lot of friction to using services across clouds. So we said, "Well, okay, we can fix that." It's work, it's a significant amount of work, but we know how to do it, so let's just go do it and make it easy for customers. >> So Oracle's main focus is really on mission-critical workloads. You talked about this low latency network, but you still have physical distances, so how are you managing that latency? What's the experience been for customers across Azure and OCI? >> Yeah, it's a good point. I mean, latency can be an issue. So the good thing about clouds is we have a lot of cloud data centers. We have dozens and dozens of cloud data centers around the world. And Azure has dozens and dozens of cloud data centers. And in most cases, they're in the same metro region, because there are kind of natural metro regions within each country that you want to put your cloud data centers in. So most of our data centers are actually very close to the Azure data centers. There's kind of northern Virginia, there's London, there's Tokyo, Seoul, et cetera; I mean, there are natural places where everybody puts their data centers. And so that's the real key. That allows us to put in a very high bandwidth and low latency network. The real problems with latency come when you're trying to go a long physical distance. If you're trying to connect across the Pacific, or across the country, or something like that, then you can get in trouble with latency. Within the same metro region, it's extremely fast. It tends to be around one, at the highest two milliseconds; that's round trip through all the routers and connections and gateways and everything else.
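The round-trip numbers Juan quotes are easy to check for yourself. Here's a minimal sketch, not Oracle's tooling; it times a small TCP request/response against a loopback echo server, and the host and port are placeholders you would swap for your own cross-cloud endpoints:

```python
import socket
import threading
import time

def run_echo_server(server_sock):
    # Accept one connection and echo bytes back until the client closes.
    conn, _ = server_sock.accept()
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data)

def measure_rtt_ms(host, port, samples=5):
    # Time a small request/response over an established TCP connection,
    # roughly mimicking one database round trip. Return the best sample.
    with socket.create_connection((host, port)) as sock:
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            sock.sendall(b"ping")
            sock.recv(1024)
            timings.append((time.perf_counter() - start) * 1000)
    return min(timings)

# Demo against a local echo server; replace with a real remote endpoint
# to measure an actual metro-region round trip.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()
rtt = measure_rtt_ms("127.0.0.1", port)
print(f"best round trip: {rtt:.3f} ms")
```

On loopback this will be far under the two-millisecond budget; against a data center in the same metro region you would expect something close to the figure quoted above.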
With everything taken into consideration, what we guarantee is that it's always less than two milliseconds, which is a very low latency time. So that tends to not be a problem, because it's extremely low latency. >> I was going to ask you about that, less than two milliseconds. So, earlier in the program we had Jack Greenfield, who runs architecture for Walmart, and he was explaining what we call their Supercloud, and it runs across Azure, GCP, and their on-prem. They have this thing called the triplet model. So my question to you is, in those situations where you're guaranteeing that less than two milliseconds, do you have situations where you're bringing, you know, Exadata Cloud@Customer on-prem to achieve that? Or is this just across clouds? >> Yeah, in this case, we're talking public cloud data center to public cloud data center. >> Oh, okay. >> So Azure public cloud data center to Oracle public cloud data center. They're in the same metro region. We set up the connections, we do all the technology to make it seamless. And from a customer point of view, they don't really see the network. Also, remember that SQL is actually designed to have very low bandwidth and latency requirements. It's a language. So you don't go to the database and say, do this one little thing for me. You send it a SQL statement that can actually access lots of data within the database. So the real latency requirement of a SQL database is within the database. I need to access all that data fast, so I need very fast access to storage, very fast access across nodes. That's what Exadata gives you. But you send one request, and that request can do a huge amount of work and then return one answer. And that's kind of the design point of SQL. So SQL inherently has low bandwidth requirements; it was used back in the eighties, when we had 10 megabit networks, and the biggest companies in the world ran on that back then. So right now we're talking hundreds of gigabits.
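Juan's point about SQL's bandwidth profile is easy to illustrate: one short statement crosses the network, the heavy scanning happens inside the database engine, and a single answer comes back. A quick sketch using Python's built-in sqlite3 as a stand-in for any SQL database:

```python
import sqlite3

# In-memory database standing in for a remote database server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Load a million rows: the "lots of data" that lives inside the database.
db.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    ((float(i % 100),) for i in range(1_000_000)),
)

# One tiny request crosses the "network"; all the scanning and summing
# happens inside the engine, and a single number comes back.
query = "SELECT SUM(amount) FROM orders WHERE amount > 50"
(total,) = db.execute(query).fetchone()
print(f"bytes sent: {len(query)}, answer: {total}")
```

The request is under fifty bytes and the reply is one value, even though a million rows were scanned; that asymmetry is why a sub-two-millisecond link between clouds is enough for database traffic.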
So it's really not much of a challenge. When you're designed to run on 10 megabit, to say, okay, I'm going to give you 10,000 times what you were designed for, it's a pretty low hurdle to jump. >> What about the deployment models? How do you handle this? Is it a single global instance across clouds, or do you sort of instantiate in each; you've got Exadata in Azure and Exadata in OCI? What does the deployment model look like? >> It's pretty straightforward. The customer decides where they want to run their application and database. So there are natural places where people go. If you're in Tokyo, you're going to choose the local Tokyo data centers for both Microsoft and Oracle. If you're in London, you're going to do that. If you're in California, you're going to choose maybe San Jose, something like that. So a customer just chooses. We both have data centers in that metro region. So they create their service on Azure, and then they go to our console, which looks just like an Azure console, and say, all right, create me a database. And then we choose the closest Oracle data center, which is generally a few miles away, and then it all gets created. So from a customer point of view, it's very straightforward. >> I'm always in awe about how simple you make things sound. All right, what about security? You talked a little bit before about identity access, how you're sort of abstracting the Azure capabilities away so that you've simplified it for your customers. But are there any other specific security things that you need to do? How much did you have to abstract the underlying primitives of Azure or OCI to present that common experience to customers? >> Yeah, so there are really two big things. One is the identity management. Like, my name is X on Azure and I have this set of privileges. Oracle has its own identity management system, right? So what we didn't want is that you have to kind of bridge these things yourself. It's a giant pain to do that.
So we do what we call federating across these identity management systems. So you put your credentials into Azure, and then you automatically get to use the exact same credentials and identity in the Oracle cloud. So again, you don't have to think about it, it just works. And then the second part is the whole bridging of the network. So within a cloud you generally have a virtual network that's private to your company. And so at Oracle, we bridge the private network that you created in, for example, Azure to the private network that we create for you in Oracle. So it's still a private network, without you having to do a whole bunch of work. It's just like if you were in your own data center; other people can't get into your network. So it's secured at the network level, and it's secured at the identity management and encryption level. And again, we did a lot of work to make that seamless for customers, and they don't have to worry about it, because we did the work. That's really as simple as it gets. >> That's what Supercloud's supposed to be all about. All right, we were talking earlier about sort of the misperception around multicloud, and your view of open, I think, which is you run the Oracle database wherever the customer wants to run it. So you've got this database service across OCI and Azure; customers today run Oracle database in AWS; you've got HeatWave, MySQL HeatWave, that you announced on AWS; Google touts a bare metal offering where you can run Oracle on GCP. Do you see a day when you extend an OCI-Azure-like situation across multiple clouds? Would that bring benefits to customers, or will the world of database generally remain largely fenced, with maybe a few exceptions like what you're doing with OCI and Azure? I'm particularly interested in your thoughts on egress fees as maybe one of the reasons that there is a barrier to this happening, and why maybe these stovepipes exist today and in the future. What are your thoughts on that?
>> Yeah, we're very open to working with everyone else out there. Like I said, we've always been big believers that customers should have choice and you should be able to run wherever you want. So that's been kind of a founding principle of Oracle. We have the partnership we did with Azure, we're open to doing other partnerships, and you're going to see other things coming down the pipe. On the topic of egress: yeah, the large egress fees, it's pretty obvious what goes on with that. Various vendors like to have large egress fees because they want to keep things kind of locked into their cloud. So it's not a very customer-friendly thing to do. And I think everybody recognizes that it's really trying to kind of coerce, or put a lot of friction on, moving data out of a particular cloud. And that's not what we do. We have very, very low egress fees. So we don't really do that, and we don't think anybody else should do that. But I think customers, at the end of the day, will win that battle. They're going to have to go back to their vendor and say, well, I have choice in clouds, and if you're going to impose these limits on me, maybe I'll make a different choice. So that's ultimately how these things get resolved. >> So do you think other cloud providers are going to take a page out of what you're doing with Azure and provide similar solutions? >> Yeah, well, I think customers want it. I mean, I've talked to a lot of customers; this is what they want, right? There's really no doubt. No customer wants to be locked into a single ecosystem. There's nobody out there that wants that. And as for the competition, when customers start seeing an open ecosystem evolving, they're going to be like, okay, I'd rather go there than the closed ecosystem, and that's going to put pressure on the closed ecosystems. So that's the nature of competition. That's what ultimately will tip the balance on these things.
>> So Juan, even though you have this capability of distributing a workload across multiple clouds, as in our Supercloud premise, it's still something that's relatively new. It's a big decision that many people might consider somewhat of a risk. So I'm curious, who's driving the decisions for your initial customers? What do they want to get out of it? What's the decision point there? >> Yeah, I mean, this is generally driven by customers that want a specific technology in a cloud. As for the risk, I haven't seen a lot of people worry too much about the risk. Everybody involved in this is a very well known, very reputable firm. I mean, Oracle's been around for 40 years. We run most of the world's largest companies. I think customers understand we're not going to build a solution that's going to put their technology and their business at risk. And the same thing with Azure and others. So I don't see customers too worried that this is a risky move, because it's really not. And you know, everybody understands networking; at the end of the day, networking works. I mean, how does the internet work? It's a known quantity. It's not like it's some brand new invention. What we're really doing is breaking down the barriers to interconnecting things. Automating 'em, making 'em easy. So there's not a whole lot of risk here for customers. And like I said, every single customer in the world loves an open ecosystem. It's just not a question. If you ask a customer, would you rather run your technology or your business on a closed ecosystem or an open ecosystem? It's kind of not even worth asking the question. It's a no-brainer. >> All right, so we've got to go. My last question: what do you think of the term "Supercloud"? Do you think it'll stick? >> We'll see. There are a lot of terms out there, and it's always fun to see which terms stick. It's a cool term. I like it, but the public actually decides what sticks and what doesn't. It's very hard to predict.
>> Yeah, well, it's been a lot of fun having you on, Juan. Really appreciate your time, and always good to see you. >> All right, Dave, thanks a lot. It's always fun to talk to you. >> You bet. All right, keep it right there. More Supercloud 2 content from theCUBE community. Dave Vellante for John Furrier. We'll be right back. (upbeat music)

Published Date: Jan 12, 2023



Tim Burlowski, Veritas | CUBE Conversation, June 2020


 

(bright upbeat music) >> Reporter: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're coming to you today from our Palo Alto studios, talking about a really important topic. And that's data. And as we hear over and over and over, data is the new oil. Data is the new currency. Data is driving business decisions. Data drives AI. Data drives machine learning. Data is increasingly important. And we're still kind of waiting for it to show up on balance sheets. Which is kind of implied in a lot of the big valuations that we see in companies that are built on data. But one of the important things about data is taking care of it. And we're excited to have our next guest here to talk about some of the things you need to think about, and best practices in securing your data. Backing up your data, protecting your data. We're joined today by Tim Burlowski. He is the senior director of Product Management at Veritas, joining us remotely. I believe you're in Minnesota. Tim, great to see you. >> Yep, thanks for having me. >> Absolutely, so let's just jump into it. So all we hear about is data these days. It's such an important topic, and it's growing exponentially. And it's structured and it's unstructured. And it's so core to the business. Are you making database decisions? Are you getting enough data to drive your AI and your machine learning algorithms? I mean, data is only exploding. You've been in this business for a long, long time. I wonder if you can share your perspective when you hear these things: more data is going to be created in the next 15 minutes than in the entire history of man before us. I'm making that up, but it's been quite an explosion. >> I know, yeah, I know where you're coming from. And frankly, I don't even put that in my presentation anymore.
Because it's a lot like saying gravity exists, and things that you drop out of a window will fall to the ground. Everyone's heard it. Everyone's aware of it. The numbers are just so staggering, you don't even know what to do with them. Like, how many iPhones could you stack to the moon and back, and then to Saturn? It doesn't make sense. But the truth is, we are seeing an explosion. Everyone knows it. We have to manage it better. Now for us, a lot of what we do is in this data protection space, where we want to make sure that data is protected and always available. All of the data that's been created, and the growth in mission-critical applications (it's no longer seven to 20 mission-critical applications, it's hundreds and hundreds of mission-critical applications), means you have to be ready with a recent recovery if necessary. And you need to provide that data back to the consumer as quickly as you possibly can, because you've got people waiting on it. We've all got our apps on our phone, where we're looking at our bank account 24/7. We don't wait until a teller appears at nine a.m. anymore. It's not the world we live in. >> Right, I'm just curious if you've got some tailwinds, given you've been in this market for a very long time, in terms of people finally realizing that their data is really more of an asset than a liability. The investments to gather it, protect it, analyze it, and have it ready to restore if there's some problem are a positive investment towards revenue and strategic importance to the company, as opposed to kind of a back-office IT function that we're kind of taking care of because we have to. >> Boy, that one really varies a lot by company. I see companies taking shortcuts and outsourcing, and then suddenly you'll see them in the news, and they discover that they had a major outage for a couple of days. And suddenly practices change very, very quickly.
The relatively comprehensive, sturdy and reliable infrastructure that people run today sometimes lulls people into false security. And then you see a major airline with a multi-day outage, and you go, hmm, I think we missed a few steps in the process. So it sometimes takes those rude awakenings. But the companies who are really taking it seriously are starting to practice pruning their data, examining their data for PII so they meet various compliance regimes in various states and countries, and starting to think about their backup stream as really being, how do we get a fast recovery? Instead of, how do I make a copy which I will never use again? They're really starting to drive a more efficient IT operation when it comes to data protection. >> That's an interesting take, in reference to having some issue. Because we do a lot of stuff around security, which is related to, but not equal to, this conversation. And one of the topics in security is that most people have already been breached. It's just a function of how fast can you find out, how fast can you minimize the damage, and how fast can you move on? And why are they breaching? They're breaching to get the data. So I would imagine, with this constant reading in the newspaper of who was breached here, there and everywhere, pretty much every day, that's got to be a huge driver in terms of people kind of upping their game, and the sophistication of the way they really think about data protection. >> It is, and I'll tell you, I've had the misfortune, I would say, of talking to customers who are in the middle of recovering from a major ransomware or malware attack. And it's a very difficult proposition. What customers often discover is they haven't practiced enough; they don't have enough of a DR plan in place. We are certainly rising to the occasion. Our products are sort of the last thing that often stands between the customer and losing their data completely.
And so we're looking at a number of technology innovations that will enable them to store their data on immutable devices, and for the backup infrastructure to be completely aware of that, which we'll be announcing later this summer. We're very excited about that. Of course, from the perspective of our appliance portfolio, we've always provided a couple of extra layers of security, with intrusion detection and intrusion prevention right out of the box. Because we know the backup infrastructure becomes this collection of the very most important data in your infrastructure. Because that's the thing you back up, and you want to restore if there's ever any sort of man-made disaster or otherwise. >> Right. So I want to shift gears a little bit, and talk about kind of the evolution of the infrastructure scene, if you will. With the rise of public clouds, with Amazon and Google and Microsoft, for sure. And then obviously hybrid into the data center. Lots of talk at HPE Discover this week about kind of going from edge to cloud, with the data center in the middle. So the environment in which these applications live and run, and where the data is relative to those applications, has evolved dramatically over the last, well, you probably have a much better perspective on the timeline than I do, five years, 10 years. But it continues to accelerate, in this kind of application-centric world versus kind of an infrastructure-centric world. Just curious to get your take on the kind of challenges that presents to your company, and what you guys are trying to accomplish. And how do you see that continuing to evolve and get, not simpler, but more complex over time? >> That is a very astute acknowledgement of what's going on in the industry. And I often say the industry's getting weirder. I would have thought at some point we'd sort of have Linux and Windows, and a couple of database vendors. And the truth is that database vendors exploded.
And it's not just Linux anymore. It's containers. And it might be a container based on CentOS, and it might be a container running in the cloud, or it might be a simple function, like a Lambda function, running on nothing in AWS. And so this whole world has gotten a lot stranger. From my perspective, I think the biggest change for Veritas has been a renewed focus on APIs that we make public to customers, so that we can glue and stitch these systems together. Now, of course, it doesn't replace the deep integration we do with companies like VMware, with Docker, as well as the container ecosystem around OpenShift and some of those technologies. But from our perspective, we've had to be a little bit more prolific in what we support. And the truth is, it's all files, it's all objects, it's all things we've done before. But they just keep bubbling up in new and different ways. >> Right, but what's interesting though, is you touch on all kinds of stuff there, with Kubernetes and clouds and containers. A lot of it's kind of ephemeral, right? The whole idea of a cloud-based infrastructure is that you can bring it up and bring it down as you need it. You can adjust it as you go. And literally turn it off when you don't need it, and bring it back up. And then you add to that serverless, and this kind of increasing atomization of all the different parts of compute. Kind of an interesting thing for you guys to try to back up, as these things are created and destroyed. We hear these crazy stories of automating Kubernetes to spin up tons of these things at a time and then bring them back down. And then I'm curious, too: within that is also the open source kind of challenge, in continuing to have evolution in open source technologies, APIs, et cetera. So it is getting weirder and weirder, on a number of fronts, as you guys continue to evolve with the market.
>> Absolutely, and I'll tell you, you have to think about all technologies as being on a bridge. As I remind people, we have washing machines. They work really well, but washboards still exist, even though it's a technology from the 18th century or beforehand. Now, they may be rare, but they still do exist. My point in this is, people need a bridge. Most enterprises run on an amazing amount of technology they've developed as a stack over the last 10 to 15 years. And they can't immediately rewrite that and put it all in a cloud container. So we're actually seeing a lot of use of containers and Kubernetes with fairly heavy application stacks. When you think about something as heavy as, say, Oracle inside of a container, you can understand that that's a big lift for a container. And it's not ephemeral at all. Then it reaches out to storage that has that persistent value. And that's where we come in. 'Cause we want to make sure that persistent storage is always protected, and easily available to the customer for any recovery needs. >> That's great. So I want to shift gears a little bit, Tim, to talk about regulations and compliance. 'Cause regulatory requirements drive a lot of behavior and activity, and really oftentimes are ahead of maybe the business prerogative to do things like provide backups, provide quick and easy access. Because you need it for a public Freedom of Information Act request, or you need it for some type of court activity. So I wonder if you can kind of talk about how the regulatory environment continues to evolve over time. And how does that impact what you guys are doing in the marketplace? >> Great question. The biggest place it's affected us is customers are starting to think about privacy, and where do I have data which relates to personally identifiable information. And that's really driven a lot by the European regulations around GDPR. Then we're seeing the California Privacy Act come in.
And a number of other states are considering legislation in this area. In some ways, it's actually been a good news story for data protection and data management. Because people are starting to say, I should identify where the data is, I should figure out where the PII is. And I should make sure I'm actually using my backups for the right purposes. Which is something we've always believed in. We've always thought, hey, Mr. Customer, I see you're backing up an Oracle database for 10 years. What are you going to do with it in 10 years? Are you going to install Oracle 7 and reboot it? It doesn't really add up to me. So, how can you get to a true archive for that data you really need to archive? And then for your backup set, how can you keep it lean and mean, and just keep it for the length of time you actually need it? Which for many customers could be as little as 14, 15 days, maybe six months, maybe a year. But it's often not those extreme retentions people were thinking of when they were building their tape-based infrastructure 10 years ago. >> Right, that's funny. 'Cause as you mentioned, what I'm also thinking of is big data, right, in this constant kind of conversation. In the Big Data world, they keep everything forever, with the hopes that at some point in time there may be a different algorithm or a different kind of process you might run on that data, that you didn't think about. Right, kind of schema on read versus schema on write. But to your point, is that necessarily something that has to be backed up? It sounds like a lot of kind of policy-driven activity that then drives the software to define what to back up, what you don't back up, how you back it up, how long you keep it. And a lot of kind of business decisions as opposed to technology decisions. >> Absolutely, and that's been on the back of the price of storing a bit of data, which has declined over the last 10 years an average of 15 percent year over year, for a very long time.
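The retention arithmetic described above, keeping backups only for the window you actually need rather than a decade, comes down to a simple date comparison. A toy Python sketch (the function and parameter names are invented for illustration, assuming simple daily full backups; real products layer grandfather-father-son schedules, legal holds, and archive tiers on top):

```python
from datetime import date, timedelta

def expired_backups(backup_dates, retention_days=15, today=None):
    """Return the backup dates that fall outside the retention window."""
    today = today or date.today()
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in backup_dates if d < cutoff)

# Daily fulls from June 1 to June 28, evaluated with a 15-day window on June 29:
backups = [date(2020, 6, 1) + timedelta(days=i) for i in range(28)]
to_prune = expired_backups(backups, retention_days=15, today=date(2020, 6, 29))
assert to_prune[0] == date(2020, 6, 1)    # oldest backup is past retention
assert to_prune[-1] == date(2020, 6, 13)  # everything before June 14 expires
```

Data that genuinely needs multi-year keeping would be moved to an archive tier before it ever falls out of this window, which is the distinction drawn in the conversation.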
So people have ignored the problem. But the truth is, when you're really working at scale, there's a tremendous amount of waste. And we've identified for customers, using our data analytics technology, millions of dollars of cost savings, where they were both storing files on expensive primary tier-one storage, and backing up that same bit of information every single week. Even though it hadn't changed, or hadn't been read, in seven-plus years, and they couldn't find an owner for the information in the company. They literally didn't know why they had it. And I think people are starting to consider that. Especially in budget-constrained times. >> Right, it's so funny, right? Sometimes it's such a simple answer. A friend one time had a startup, and he was doing contract management. This is 20 years ago. And I was like, how do you manage the complexity of contracts inside software? Again, 20 years ago. And he said, Jeff, that's not it at all. We just need to know, like, where is the contract? Who signed it, and when does it expire? And they built the business on answering simple questions like that. It's sometimes the simple stuff that's the hard stuff. I want to shift gears a little bit, Tim, on what Veritas does in the market in terms of still having appliances. I'm sure a lot of people are like, wait, appliances? Why are we still using appliances? This is a software-defined world. And everything just runs on x86 architecture. You guys still have appliances, so tell us a little bit about the why. And some of the benefits of having kind of a dedicated hardware-software piece of equipment, versus just a pure software solution that sits on anybody's box.
We obviously still support old school Unix. We certainly have enormous investment in the x86 world, both on Windows and various Linux flavors. And of course, you can run that same software in the cloud. And of course, you can run it inside of a virtualized infrastructure. So we always like to preserve choice. Now, why did we create the appliance business? It's frankly because customers asked us to. The thing that made storing backups on disk affordable was this technology known as deduplication. Which at its heart is just a fancy kind of compression that's very, very good at copies of data, where there are a lot of blocks that have been seen before. And so we don't store them if we've seen them before. We simply store the ones that are new and fresh. So from our perspective, customers said, "we want this technology." And the market really moved away from general-purpose solutions on servers to do that. Because it was very hard to build something that could have very high throughput, very high memory, and at the same time could give excellent support for random access reads, when the customer actually needed to read that data. And so we created purpose-built appliances as a result. And what we discovered in the process was, there were a lot of pieces that were actually fairly hard in the enterprise. So when a customer would describe the purchasing process of their typical solution before appliances, they would talk about filing tickets with the server team. Filing tickets with the storage team. Filing tickets with the security team. And sometimes taking six or nine months to get a piece of equipment ready to install the backup software on the floor. Whereas with ours, they placed an order, it showed up on the dock, and as soon as it was in the rack, they were ready to go and working independently. Now, while we have a great and thriving appliance business, which we're very, very proud of, we always preserve choice at Veritas.
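The deduplication idea described above, store a block only the first time it is seen, can be sketched in a few lines of Python. This is a deliberately minimal illustration, not how Veritas implements it; production systems use variable-size, content-defined chunking and persistent indexes rather than fixed 4 KB blocks and an in-memory dict:

```python
import hashlib

def dedup_store(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep each unique block once.

    Returns (store, recipe): store maps block hash -> block bytes, and
    recipe is the ordered list of hashes needed to rebuild the stream.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # only new, unseen blocks consume space
            store[digest] = block
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Rebuild the original stream from the recipe (conceptually, a restore)."""
    return b"".join(store[h] for h in recipe)

# 12 KB of input whose blocks repeat: only two unique 4 KB blocks are stored.
backup = b"A" * 8192 + b"B" * 4096
store, recipe = dedup_store(backup)
assert rehydrate(store, recipe) == backup
assert len(store) == 2
```

The random-access-read difficulty Tim mentions follows directly from this layout: a restore chases the recipe through blocks scattered across the store, which is why purpose-built hardware with generous memory helps.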
And even though that's the business I represent, I would make sure our customers always understand that we're interested in the best platform for the customer. So that's our basic perspective. If you want to go deeper, let me know where you have questions. (chuckles) >> Well, I'm curious on the process when there's a failure, when there's an attack, when there's ransomware, whatever, when you need to go back to your backup. What are some of the things that your approach enables? Or what are kind of the typical stumbling blocks that are the hardest things to overcome, that people miss when they're planning for that, or thinking about it? That kind of rear their ugly heads when the time comes that, oh, I guess we need to go back to a backup version. >> Yeah, and I'll break that in two: the disaster recovery or restore process, and then also the process of backup. So when you think about that disaster recovery, I'll use ransomware as that piece of it. Because that's the real kind of disaster, when you're looking at equipment in the infrastructure which has been wiped clean. That's a worst-case scenario for most IT managers. When you think about that situation, we've built into our appliances, first of all, a hardened Linux OS. Meaning we've shrunk down that OS as much as we possibly could. Second, we've added role-based access protection, to make sure that you simply can't log in and perform activities which you're not privileged to perform. And then we have intrusion prevention software and intrusion detection software, to ensure that even for those zero-day attacks that we may not even be aware of when we release our software, the system is hardened. Of course, you have firewalls and STIG rules. STIG rules are a DoD standard for hardening Linux-based devices. So we've got a hardened device. And I was talking to a customer in a different part of the world this week, where they described having a data center where everything had been wiped.
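The role-based access protection described here reduces to deny-by-default permission checks: an action is allowed only if the role explicitly grants it. A toy Python sketch (the role and action names are invented for illustration, not NetBackup's actual model):

```python
# Each role grants an explicit, minimal set of actions (least privilege).
ROLE_PERMISSIONS = {
    "backup_operator": {"run_backup", "view_jobs"},
    "restore_admin": {"run_restore", "view_jobs"},
    "security_admin": {"manage_users", "view_audit_log"},
}

def authorize(role, action):
    """Allow an action only if the role explicitly grants it; deny by default."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("backup_operator", "run_backup")
assert not authorize("backup_operator", "run_restore")  # least privilege holds
assert not authorize("unknown_role", "run_backup")      # unknown roles get nothing
```

The point of the deny-by-default shape is exactly the one made in the conversation: even a logged-in attacker can't perform activities the role was never privileged to perform.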
And there was one thing left there: their NetBackup appliances. And they were then able to take that and use it for the restore. Because that was a real vault for their data. Now, the flip side is, that's a rare day. So that is truly a black swan event. When you think about day to day, when we're running a data protection operation, really think about speed of backup. And for us, being able to take something that's neatly tuned, where the hardware, the operating system, the tuning, the NetBackup software are all configured out of the box and ready to go, and the data protection folks can independently drive that, is a great value. Because essentially, you have Lego-style building blocks, where you can order a device and it always performs the same. And three years from now, you don't have to redesign it, and take your expensive IT staff and ask them to figure out what's the best solution. We've just got another one off the shelf for you, another series in the model. >> Right. >> Now, as you said earlier, the world's getting weirder. It definitely is. So we'll be branching off into what kind of appliances we offer. And you'll see some announcements later in the year where we'll be offering some reference architecture approaches, which will be a little different than what we offer today. Just to meet the customer demand that's out there. >> Yeah, that's great. I mean, 'cause as you said, it's all about customer choice, and meeting the customer where they want to meet. But before I let you go, this has been a pretty interesting conversation. I want to get your perspective as someone who's been in the business for a really long time, as you look at opportunities around machine learning and artificial intelligence. And you look at, I'm going to steal your line about things getting weirder, and use it over and over.
But as they continue to get weirder and weirder, where do you see kind of the evolution as you sit back? Not necessarily in the next six months or so. But where do you see growth opportunities and places you want to go, that are still out in front of you, even though you've been doing this for many, many years? >> Well, that's a great question. So this is yet another wave. And that's often how I look at it. Meaning, there's a wave of Unix. There's a wave of Windows. There's a wave of virtualization. And each of these technologies brought some real shifts to our environment. I think, from my perspective, the next big wave is dealing with ransomware, and some of these compliance requirements we talked about earlier. And then I can't get away from this big data and AI piece. My son's studying computer science in college, and that's a weekly conversation for us: what's new on that front? Because I think we're going to see a lot more technology developed there. We are just truly at the beginning of that curve. And frankly, when I think about the companies I work with, they have a tremendous amount of data. But that's really only going to increase as they realize they can actually develop value from it. And as you mentioned, the first thing, once it shows up on the balance sheet, suddenly everyone's going to get very excited about that. >> Yeah, it's so funny, right? 'Cause it basically does show up on the balance sheet of Facebook, and it shows up on the balance sheet of Google. But it's just not a line item. And I keep waiting for the tipping point to happen, where that becomes a line item on the balance sheet. Because increasingly, that is arguably the most important asset. Or certainly the information and learning that goes around that data. >> You're right. And frankly, it's an insurable asset at this point. You can go to a company in a number of commercial settings and get ransomware insurance, for instance.
So people are definitely recognizing the value of it, if they're willing to insure it. >> Right, right. All right, Tim. Well, thank you very much for stopping by, and giving us an update. Really interesting times in kind of taking care of business, and really the core of the business, which is the data inside the business. So, important work. And thanks for taking a few minutes. >> All right, thanks. I'll be glad to be back anytime you want me. >> All right, he's Tim. I'm Jeff. You're watching theCUBE. Thanks for watching. We'll see you next time. (upbeat music)

Published Date : Jun 29 2020

Tim Burlowski, Veritas | CUBE Conversation, June 2020


 

(bright upbeat music) >> Reporter: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world. This is a theCUBE conversation. >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We're coming to you today from our Palo Alto studios, talking about a really important topic. And that's data. And as we hear over and over and over, right, data is the new oil. Data is the new currency. Data is driving business decisions. Data drives AI. Data drives machine learning. Data is increasingly important. And we're still kind of waiting for it to show up on balance sheets. Which is kind of implied in a lot of the big valuations that we see in companies that are built on data. But one of the important things about data is taking care of it. And we're excited to have our next guest here to talk about some of the things you need to think about, and best practices in securing your data, backing up your data, protecting your data. We're joined today by Tim Burlowski. He is the senior director of Product Management at Veritas, joining us remotely. I believe you're in Minnesota. Tim, great to see you. >> Yep, thanks for having me. >> Absolutely, so let's just jump into it. So all we hear about is data these days. It's such an important topic, and it's growing exponentially. And it's structured and it's unstructured. And it's so core to the business. And are you making data-based decisions? And are you getting enough data to drive your AI and your machine learning algorithms? I mean, data is only exploding. You've been in this business for a long, long time. I wonder if you can share your perspective when you hear these things: more data is going to be created in the next 15 minutes than in the entire history of mankind before us. I'm making that up, but it's been quite an explosion. >> I know, yeah, I know where you're coming from. And frankly, I don't even put that in my presentation anymore.
Because it's a lot like saying gravity exists, and things that you drop out of a window will fall to the ground. Everyone's heard it. Everyone's aware of it. The numbers are just so staggering, you don't even know what to do with them. Like, how many iPhones could you stack to the moon and back, and then to Saturn? It doesn't make sense. But the truth is, we are seeing an explosion. Everyone knows it. We have to manage it better. Now, for us, a lot of what we do is in this data protection space, where we want to make sure that data is protected and always available. All of the data that's been created, and the growth in mission-critical applications. It's no longer seven to 20 mission-critical applications. It's hundreds and hundreds of mission-critical applications. That means you have to be ready with a recent recovery if necessary. And you need to provide that data back to the consumer as quickly as you possibly can, because you've got people waiting on it. We've all got our apps on our phone, where we're looking at our bank account 24/7. We don't wait until a teller appears at nine a.m. anymore. It's not the world we live in. >> Right. I'm just curious if you've got some tailwinds, in terms of, you've been in this market for a very long time, in terms of people finally realizing that their data is really more of an asset than a liability. The investments to gather it, protect it, analyze it, have it ready, refresh it if there's some problem. It's a positive investment towards kind of revenue and strategic importance to the company, as opposed to kind of a back-office IT function, where we're kind of taking care of business because we have to.
But that one really varies a lot by company. I see companies taking shortcuts and outsourcing, and then suddenly you'll see them in the news, and they discover that they had a major outage for a couple of days. And suddenly practices change very, very quickly. The relatively comprehensive, sturdy and reliable infrastructure that people run today sometimes lulls people into a false sense of security. And then you see a major airline with a multi-day outage. And you go, hmm, I think we missed a few steps in the process. So it sometimes takes those rude awakenings. But the companies who are really taking it seriously are starting to practice pruning their data, examining their data for PII so they meet various compliance regimes in various states and countries, and starting to think about their backup stream really being, how do we get a fast recovery? Instead of, how do I make a copy which I will never use again? They're really starting to drive a more efficient IT operation when it comes to data protection. >> No, it's an interesting take, in reference to having some issue. Because we do a lot of stuff around security, which is related to, but not equal to, this conversation. And one of the topics in security is that most people have already been breached. It's just a function of how fast can you find out, how fast can you minimize the damage, and how fast can you move on. Why are they breaching? They're breaching to get the data. So I would imagine, with this constant reading in the newspaper of who was breached here, there and everywhere, pretty much every day, that's got to be a huge driver in terms of people kind of upping their game and the sophistication of the way they really think about data protection. >> It is, and I'll tell you, I've had the misfortune, I would say, of talking to customers who are in the middle of recovering from a major ransomware or malware attack. And it's a very difficult proposition. And what customers often discover is they haven't practiced enough, they don't have enough of a DR plan in place. We are certainly rising to the occasion. Our products are sort of the last thing that often stands between the customer and losing their data completely.
And so we're looking at a number of technology innovations, that will enable them to store their data on immutable devices. And for the backup infrastructure, to be completely aware of that. Which we'll be announcing later this summer. Which we're very excited about. Of course, from our perspective of our appliance portfolio, we've always provided a couple of extra layers of security against intrusion detection, and intrusion prevention right out of the box. Because we know the backup infrastructure becomes this collection of the very most important data in your infrastructure. Because that's the thing you back up. And you want to restore. If there's ever any sort of manmade disaster or otherwise. >> Right. So I want to shift gears a little bit, and talk about kind of the evolution of the infrastructure kind of scene. If you will. With the rise of public clouds, with Amazon and Google and Microsoft, is sure. And then obviously, you tried into a data center. Lot of talk about HP discover, this week kind of going from edge to cloud and data center in the middle. So the environment in which these applications live, and these applications run, and where the data is, relative to those applications. Is evolved dramatically over the last, you probably have a much better time perspective than I do. Five years, 10 years. But it continues to accelerate, in this kind of Application-Centric World versus, kind of an Infrastructure Centric World. Just curious to get your take on, The kind of the challenges that presents to your company, and what you guys are trying to do to accomplish. And how do you see that continuing to evolve and get, not simpler but more complex over time? >> That is a very astute acknowledgement of what's going on in the industry. And I often call it the industry's getting weirder. I would have thought at some point, we'd sort of have Linux and Windows, and a couple of database vendors. And the truth is that database vendors exploded. 
And it's not just Linux anymore. It's containers. And it might be a container based on CentOS. And it might be container running in the cloud. Or it might be a simple function, like a lambda function running on nothing in AWS. And so this whole world has gotten a lot stranger. From my perspective, I think the biggest change for Veritas, has been a renewed focus on API's that we make public to customers, in ways that we can glue and stitch these systems together. Now, of course, it doesn't replace the deep integration, we do with companies like VMware, with Docker, as well as the the container ecosystem around. Open shift and some of those technologies. But from our perspective, we've had to be a little bit more prolific, in what we support. And the truth is, it's all files, it's all objects, it's all things we've done before. But they just keep bubbling up in new and different ways. >> Right, but what's interesting though, is you touch on all kinds of stuff there with Kubernetes and clouds and in containers. Is a lot of it's kind of ethereal, right? The whole idea of of a cloud-based infrastructure, is that you can bring it up and bring it down as you need it. You can adjust it as you go. And literally turn it off when you don't need it. And bring it back up. And then you add to that serverless. And this kind of increasing atomization, of all the different parts of compute. Kind of an interesting thing for you guys, to try to back up as these things are created and destructed. We hear these crazy stories of, automating Kubernetes to spin up tons of these things at a time and then bring them back back down. And then I'm curious too. Within that is also the open source. kind of challenge in continuing to have evolution in open source technologies, API's, et cetera. So it is getting weirder and weirder, on a number of fronts as you guys continue to evolve with the market. 
>> Absolutely, and all I'll tell you, you have to think about all technologies as being on a bridge. As I remind people, we have washing machines. They work really well but washboards still exist, even though it's a technology from 18th century, or beforehand. Now, they may be used as still do exist. Now, my point in this is, people need a bridge. Most enterprises run on an amazing amount of technology, they've developed as a stack over the last 10 to 15 years. And they can't immediately rewrite that, and put it all in a cloud container. So we're actually seeing a lot of use of containers, and Kubernetes with fairly heavy application stacks. When you think about something as heavy as, all have Oracle inside of a container. You can understand that, that's a big lift for container. And it's not ephemeral at all. Then it reaches out to storage, that has that persistence value. And that's where we come in. 'Cause we want to make sure that persistent storage, is always protected. And easily available to the customer for any recovery needs. >> Is great, so I want to shift gears a little bit Tim, to talk about regulations and compliance. 'Cause, regulatory requirements drive a lot of behavior and activity, and really oftentimes, are ahead of maybe the business prerogative to do things like provide backups, provide quick and dirty, quick and easy access. Because you needed it for, a public Freedom of Information Act request. Or you need it for some type of court type of activity. So I wonder if you can kind of talk about, how the regulatory environment, continues to evolve over time. And how does that impact, what you guys are doing in the marketplace? >> Great question. The biggest place is It's affected us, is customers are starting to think about privacy. And where do I have data which relates to, personally identifying information. And that's really driven a lot, by the European regulations around GDPR. Then we're seeing the California Privacy Act come in. 
And a number of other states are considering legislation in this area. In some ways, it's actually been a good news story for data protection and data management. Because people are starting to say, I should identify where the data is, I should figure out where the PII is. And I should make sure, I'm actually using my backups for the right purposes. Which is something we've always believed in. We've always thought, Hey, Mr. Customer, I see you're backing up an Oracle database for 10 years. What are you going to do with it in 10 years? Are you going to install Oracle seven and reboot it? It doesn't really add up to me. So, how can you get to a true archive, for that data you really need archive? And then for your backup set, how can you keep it lean and mean. And just keep it for the length of time you actually need it? Which for many customers, could be as little as 14, 15 days, maybe six months, maybe a year. But it's often not those extreme retentions people were thinking of, when they were building their tape based infrastructure 10 years ago. >> Right, that's funny. 'Cause as you mentioned, also I'm thinking of, is big data. Right in this constant kind of conversation. In the Big Data world is they keep everything forever, with the hopes that at some point in time, there may be a different algorithm or a different kind of process, you might run on that, but you didn't think about. Right kind of scheme on read versus scheme on right. But to your point, is that necessarily something that has to be backed up, but it sounds like a lot of, kind of policy driven activity. Than to drive the software to define what to back up, what you don't back up, how you back it up, how long you back it up? And a lot of kind of business decisions as opposed to technology decisions. >> Absolutely, that's been on the back of, the price of storing a bit of data, has declined over the last 10 years. An average 15 percent year over year. For a very long time. 
So people have ignored the problem. But the truth is, when you're really working at scale, there's a tremendous amount of waste. And we've identified for customers, using our data analytics technology. Millions of dollars of cost savings, where they were, both had storing files on, expensive primary tier one storage. And they were backing up those same, that same bit of information every single week. Even though it hadn't changed, or hadn't been read in seven plus years, and they couldn't find an owner for the information in the company. They literally didn't know why they had it. And I think people are starting to consider that. Especially in budget constraint times. >> Right, it's so funny, right? Sometimes it's such a simple answer, a friend one time had a startup, and he was doing contract management. This is 20 years ago. And I was like, how do you manage the complexity of contracts inside software. Again 20 years ago. And he said, Jeff, that's not it at all. We just need to know like, where is the contract? who signed it and when does it expire? And they built the business, on answering simple questions like that. It's sometimes the simple stuff that's the hard stuff. I want to shift gears a little bit Tim, on what bear toss dude in the market in terms of still having appliances? I'm sure a lot of people like weight appliances. Why are we still using appliances? This is a software defined world. And everything just runs on x86 architecture. You guys still have appliances, tell us a little bit about the why. And some of the benefits of having, kind of a dedicated hardware, software piece of equipment, versus just a pure software solution that sits on anybody's box. >> That's a great question. Thanks for asking. When I think about that world, you have to understand Veritas at its core is absolutely a software company. We build software and we preserve the choice and how the customer implements. When I say we preserve choice. 
We obviously still support old-school Unix. We certainly have enormous investment in the x86 world, both on Windows and various Linux flavors. And of course, you can run that same software in the cloud. And of course, you can run it inside of a virtualized infrastructure. So we always like to preserve choice. Now, why did we create the appliance business? It's frankly because customers asked us to. The thing that made storing backups on disk affordable was this technology known as deduplication. Which at its heart is just a fancy kind of compression that's very, very good at copies of data, where there are a lot of blocks that have been seen before. And so we don't store them if we've seen them before. We simply store the ones that are new and fresh. So from our perspective, customers said, "we want this technology." And the market really moved away from general-purpose solutions on servers to do that. Because it was very hard to build something that could have very high throughput, very high memory, and at the same time could give excellent support for random-access reads when the customer actually needed to read that data. And so we created purpose-built appliances as a result. And what we discovered in the process was, there were a lot of pieces that were actually fairly hard in the enterprise. So when a customer would describe the purchasing process of their typical solution before appliances, they would talk about filing tickets with the server team. Filing tickets with the storage team. Filing tickets with the security team. And sometimes taking six or nine months to get a piece of equipment ready to install the backup software on the floor. Whereas with ours, they placed an order, it showed up on the dock, and as soon as it was in the rack, they were ready to go, working independently. Now, while we have a great and thriving appliance business we're very, very proud of, we always preserve choice at Veritas.
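The deduplication idea Tim sketches, store a block only if it hasn't been seen before and keep references otherwise, can be shown with a toy content-addressed store. This is a minimal illustration of the general technique, not Veritas's implementation; the fixed block size and SHA-256 fingerprinting are assumptions.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: keeps one copy of each unique block."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}  # digest -> block bytes, stored exactly once

    def write(self, data: bytes) -> list:
        """Split data into blocks; store only blocks not seen before.
        Returns the recipe (list of digests) needed to read it back."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # new and fresh: store it
                self.blocks[digest] = block
            recipe.append(digest)           # seen before: just reference it
        return recipe

    def read(self, recipe: list) -> bytes:
        """Each block is looked up by digest, which is why random-access
        read performance matters so much for the underlying hardware."""
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore(block_size=4)
r1 = store.write(b"AAAABBBBAAAA")  # two unique blocks, three references
r2 = store.write(b"AAAACCCC")      # one new block; "AAAA" is deduplicated
```

The same trick is what made weekly full backups of mostly unchanged data affordable on disk: only the new blocks consume new capacity.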
And even though that's the business I represent, I would make sure our customers always understand that we're interested in the best platform for the customer. So that's our basic perspective. If you want to go deeper, let me know where you have questions. (chuckles) >> Well, I'm curious on the process when there's a failure, when there's an attack, when there's ransomware, whatever. When you need to go back to your backup. What are some of the things that your approach enables, or what are kind of the typical stumbling blocks that are the hardest things to overcome? That people miss when they're planning for that, or thinking about it. That kind of rear their ugly heads when the time comes that, oh, I guess we need to go back to a backup version. >> Yeah, and I'll break that into the disaster recovery or restore process, and then also the process of backup. So when you think about disaster recovery, I'll use ransomware as that piece of it. Because that's the real kind of disaster, when you're looking at equipment in the infrastructure which has been wiped clean. That's a worst-case scenario for most IT managers. When you think about that situation, we've built into our appliances, first of all, a hardened Linux OS. Meaning we've shrunk down that OS as much as we possibly could. Second, we've added role-based access protection, to make sure that you simply can't log in and perform activities which you're not privileged to perform. And then we have intrusion protection software and intrusion detection software, to ensure that even for those zero-day attacks that we may not even be aware of when we release our software, the system is hardened. Of course, you have firewalls and STIG rules; STIG rules are a DoD standard for hardening Linux-based devices. So we've got a hardened device. And I was talking to a customer in a different part of the world this week, where they described having a data center where everything had been wiped.
And there's one thing left there: their NetBackup appliances. And they were then able to take that and use it for the restore. Because that was a real vault for their data. Now, the flip side is, that's a rare day. That is truly a black swan event. When you think about day to day, when we're running a data protection operation, really think about speed of backup. And for us, being able to take something that's neatly tuned, where the hardware, the operating system, the tuning, the NetBackup software is all configured out of the box and ready to go, and the data protection folks can independently drive that, is a great value. Because essentially, you have Lego-style building blocks, where you can order a device and it always performs the same. And three years from now, you don't have to redesign it, and take your expensive IT staff and ask them to figure out what's the best solution. We've just got another one off the shelf for you, another series in the model. >> Right. >> Now, as you said earlier, the world's getting weirder. It definitely is. So we'll be branching off in what kind of appliances we offer. And you'll see some announcements later in the year where we'll be offering some reference architecture approaches, which will be a little different than what we offer today. Just to meet the customer demand that's out there. >> Yeah, that's great. I mean, 'cause as you said, it's all about customer choice, and meeting the customer where they want to meet. But before I let you go, this is a pretty interesting conversation. I want to get your perspective as someone who's been in the business for a really long time. And as you look at opportunities around machine learning and artificial intelligence, and you look at kind of... I'm going to steal your line about things getting weirder, and use it over and over.
But as things continue to get weirder and weirder, where do you see kind of the evolution, as you sit back, not necessarily in the next six months or so, but where do you see growth opportunities and places you want to go that are still out in front of you, even though you've been doing this for many, many years? >> Well, that's a great question. So this is yet another wave, and that's often how I look at it. Meaning, there's a wave of Unix. There's a wave of Windows. There's a wave of virtualization. And each of these technologies brought some real shifts to our environment. I think, from my perspective, the next big wave is dealing with ransomware, and some of these compliance requirements we talked about earlier. And then I can't get away from this big data and AI piece. My son's studying computer science in college, and that's a weekly conversation for us: what's new on that front? Because I think we're going to see a lot more technology developed there. We are truly just at the beginning of that curve. And frankly, when I think about the companies I work with, they have a tremendous amount of data. But that's really only going to increase as they realize they can actually develop value from it. And as you mentioned, the first time it shows up on the balance sheet, suddenly everyone's going to get very excited about that. >> Yeah, it's so funny, right? 'Cause it basically does show up on the balance sheet of Facebook, and it shows up on the balance sheet of Google. But it's just not a line item. And I keep waiting for the tipping point to happen where that becomes a line item on the balance sheet. Because increasingly, that is arguably the most important asset. Or certainly the information and learning that goes around that data.
So people are definitely recognizing the value of it if they're willing to insure it. >> Right, right. All right, Tim. Well, thank you very much for stopping by and giving us an update. Really interesting times in kind of taking care of business, and really the core of the business, which is the data inside the business. So, important work. And thanks for taking a few minutes. >> All right, thanks. I'll be glad to be back anytime you want me. >> Alright, he's Tim, I'm Jeff. You're watching theCUBE. Thanks for watching. We'll see you next time. (upbeat music)

Published Date : Jun 24 2020


Sharad Singhal, The Machine & Michael Woodacre, HPE | HPE Discover Madrid 2017


 

>> Man: Live from Madrid, Spain, it's the Cube! Covering HPE Discover Madrid, 2017. Brought to you by Hewlett Packard Enterprise. >> Welcome back to Madrid, everybody, this is The Cube, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host, Peter Burris, and this is our second day of coverage of HPE's Madrid Conference, HPE Discover. Sharad Singhal is back, Director of Machine Software and Applications at HPE Labs. >> Good to be back. And Mike Woodacre is here, a distinguished engineer from Mission Critical Solutions at Hewlett Packard Enterprise. Gentlemen, welcome to the Cube, welcome back. Good to see you, Mike. >> Good to be here. >> Superdome Flex is all the rage here! (laughs) At this show. You guys are happy about that? You were explaining off-camera that this is the first jointly-engineered product from SGI and HPE, so you hit a milestone. >> Yeah, and I came into Hewlett Packard Enterprise just over a year ago with the SGI acquisition. We were already working on our next-generation in-memory computing platform. We basically hit the ground running, integrated the engineering teams immediately when we closed the acquisition so we could drive through the finish line, and with the product announcement just recently, we're really excited to get that out into the market. It really represents the leading in-memory computing system in the industry. >> Sharad, high-performance computing has always been big data, needing big memories, lots of performance... How has, or has, the acquisition of SGI shaped your agenda in any way or your thinking, or advanced some of the innovations that you guys are coming up with? >> Actually, it was truly like a meeting of the minds when these guys came into HPE. We had been talking about memory-driven computing, the machine prototype, for the last two years. Some of us were aware of it, but a lot of us were not aware of it. These guys had been working essentially in parallel on similar concepts.
Some of the work we had done, we were thinking in terms of our road maps, and they were looking at the same things. Their road maps were looking incredibly similar to what we were talking about. As the engineering teams came together, we brought both the Superdome X technology and the UV300 technology together into this new product that Mike can talk a lot more about. From my side, I was talking about the machine and the machine research project. When I first met Mike and started talking to him about what they were doing, my immediate reaction was, "Oh wow, wait a minute, this is exactly what I need!" I was talking about something where I could take the machine concepts and deliver products to customers in the 2020 time frame. With the help of Mike and his team, we are now able to do essentially something where we can take the benefits we are describing in the machine program and make those ideas available to customers right now. I think to me that was the fun part of this journey here. >> So what are the key problems that your team is attacking with this new offering? >> The primary use case for the Superdome Flex is really high-performance in-memory database applications; typically SAP HANA is sort of the industry-leading solution in that space right now. One of the key things with the Superdome Flex, you know, Flex is the operative word, it's the flexibility. You can start with a small four-socket, three-terabyte building block, and then you just connect these boxes together. The memory footprint just grows linearly. The latency across our fabric stays constant as you add these modules together. We can deliver up to 32 processors, 48 terabytes of in-memory data in a single rack. So it's really the flexibility, sort of a pay-as-you-grow model. As their needs grow, they don't have to throw out the infrastructure. They can add to it.
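The pay-as-you-grow arithmetic Mike describes is linear: every chassis added contributes the same sockets and memory, with fabric latency staying constant. A back-of-the-envelope sketch using the entry figures from the conversation (four sockets, three terabytes per module); note that eight entry-level modules give 32 sockets but only 24 TB, so the quoted 48 TB top end implies denser memory per chassis, which is an inference on my part rather than a quoted spec.

```python
SOCKETS_PER_MODULE = 4   # entry building block mentioned in the conversation
ENTRY_TB_PER_MODULE = 3

def scaled_capacity(modules: int, tb_per_module: int = ENTRY_TB_PER_MODULE):
    """Linear scale-up: each added chassis contributes the same sockets
    and memory; no forklift upgrade of the existing infrastructure."""
    return SOCKETS_PER_MODULE * modules, tb_per_module * modules

entry = scaled_capacity(8)       # 32 sockets at entry memory density
max_cfg = scaled_capacity(8, 6)  # assumed 6 TB/chassis to reach the 48 TB figure
```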
>> So when you take a look ultimately at the combination, we talked a little bit about some of the new types of problems that can be addressed, but let's bring it practical to the average enterprise. What can the enterprise do today, as a consequence of this machine, that they couldn't do just a few weeks ago? >> So it sort of builds on the modularity, as Lance explained. If you ask a CEO today, "what's my database requirement going to be in two or three years?" they're like, "I hope my business is successful, I hope I'm gonna grow my needs," but I really don't know where that side is going to grow, so the flexibility to just add modules and scale up the capacity of memory to bring that- so the whole concept of in-memory databases is basically bringing your online transaction processing and your data-analytics processing together. So then you can do this in real time and instead of your data going to a data warehouse and looking at how the business is operating days or weeks or months ago, I can see how it's acting right now with the latest updates of transactions. >> So this is important. You mentioned two different things. Number one is you mentioned you can envision- or three things. You can start using modern technology immediately on an extremely modern platform. Number two, you can grow this and scale this as needs follow, because Hana in memory is not gonna have the same scaling limitations that you know, Oracle on a bunch of spinning discs had. >> Mike: Exactly. >> So, you still have the flexibility to learn and then very importantly, you can start adding new functions, including automation, because now you can put the analytics and the transaction processing together, close that loop so you can bring transactions, analytics, boom, into a piece of automation, and scale that in unprecedented ways. That's kind of three things that the business can now think about. Have I got that right? >> Yeah, that's exactly right. 
It lets people really understand how their business is operating in real time, look for trends, look for new signatures in how the business is operating. They can basically build on their success and basically having this sort of technology gives them a competitive advantage over their competitors so they can out-compute or out-compete and get ahead of the competition. >> But it also presumably leads to new kinds of efficiencies because you can converge, that converge word that we've heard so much. You can not just converge the hardware and converge the system software management, but you can now increasingly converge tasks. Bring those tasks in the system, but also at a business level, down onto the same platform. >> Exactly, and so moving in memory is really about bringing real time to the problem instead of batch mode processing, you bring in the real-time aspect. Humans, we're interactive, we like to ask a question, get an answer, get on to the next question in real time. When processes move from batch mode to real time, you just get a step change in the innovation that can occur. We think with this foundation, we're really enabling the industry to step forward. >> So let's create a practical example here. Let's apply this platform to a sizeable system that's looking at customer behavior patterns. Then let's imagine how we can take the e-commerce system that's actually handling order, bill, fulfillment and all those other things. We can bring those two things together not just in a way that might work, if we have someone online for five minutes, but right now. Is that kind of one of those examples that we're looking at? >> Absolutely, you can basically- you have a history of the customers you're working with. In retail when you go in a store, the store will know your history of transactions with them. They can decide if they want to offer you real time discounts on particular items. 
They'll also be taking in other data, weather conditions to drive their business. Suddenly there's going to be a heat wave, I want more ice cream in the store, or it's gonna be freezing next week, I'm gonna order in more coats and mittens for everyone to buy. So taking in lots of transactional data, not just the actual business transaction, but environmental data, you can accelerate your ability to provide consumers with the things they will need. >> Okay, so I remember when you guys launched Apollo. Antonio Neri was running the server division, you might have had networking to him. He did a little reveal on the floor. Antonio's actually in the house over there. >> Mike: (laughs) Next door. There was an astronaut at the reveal. We covered it on the Cube. He's always been very focused on this part of the business of the high-performance computing, and obviously the machine has been a huge project. How has the leadership been? We had a lot of skeptics early on that said you were crazy. What was the conversation like with Meg and Antonio? Were they continuously supportive, were they sometimes skeptical too? What was that like? >> So if you think about the total amount of effort we've put in the machine program, and truly speaking, that kind of effort would not be possible if the senior leadership was not behind us inside this company. Right? A lot of us in HP labs were working on it. It was not just a labs project, it was a project where our business partners were working on it. We brought together engineering teams from the business groups who understood how projects were put together. We had software people working with us who were working inside the business, we had researchers from labs working, we had supply chain partners working with us inside this project. A project of this scale and scope does not succeed if it's a handful of researchers doing this work. We had enormous support from the business side and from our leadership team. 
I give enormous thanks to our leadership team for allowing us to do this, because it's an industry thing, not just an HP Enterprise thing. At the same time, with this kind of investment, there's clearly an expectation that we will make it real. It's taken us three years to go from "here is a vague idea from a group of crazy people in labs" to something which actually works and is real. Frankly, the conversation in the last six months has been, "okay, so how do we actually take it to customers?" That's where the partnership with Mike and his team has become so valuable. At this point in time, we have a shared vision of where we need to take the thing. We have something where we can onboard customers right now. We have something where, frankly, even I'm working on the examples we were talking about earlier today. Not everybody can afford a 16-socket giant machine. The Superdome Flex allows my customer, or anybody who is playing with an application, to start small, with something that is reasonably affordable, and try that application out. If that application is working, they have the ability to scale up. This is what makes the Superdome Flex such a nice environment to work in for the types of applications I'm worrying about. When we had started this program, people would ask us, "when will the machine product be available?" From day one, we said, "the machine product will be something that might become available to you in some form or another by the end of the decade." Well, suddenly with Mike, I think I can make it happen right now. It's not quite the end of the decade yet, right? So I think that's what excited me about this partnership we have with the Superdome Flex team. The fact that they had the same vision and the same aspirations that we do. It's a platform that allows my current customers with their current applications, like Mike described within the context of, say, SAP HANA, a scalable platform they can operate now.
It's also something that allows them to evolve towards the future and start putting new applications on it that they haven't even thought about today. Those were the kinds of applications we were talking about. It makes it possible for them to move into this journey today. >> So what is the availability of Superdome Flex? Can I buy it today? >> Mike: You can buy it today. Actually, I had the pleasure of installing the first early-access system in the UK last week. We've been delivering large-memory platforms to Stephen Hawking's team at Cambridge University for the last twenty years, because they really like the in-memory capability to allow them, as they say, to be scientists, not computer scientists, in working through their algorithms and data. Yeah, it's ready for sale today. >> What's going on with Hawking's team? I don't know if this is fake news or not, but I saw something come across that said he says the world's gonna blow up in 600 years. (laughter) I was like, uh-oh, what's Hawking got going now? (laughs) That's gotta be fun working with those guys. >> Yeah, I know, it's been fun working with that team. Actually, what I would say, following up on Sharad's comment, it's been really fun this last year, because I'd sort of been following the machine from outside when the announcements were made a couple of years ago. Immediately when the acquisition closed, I was like, "tell me about the software you've been developing, tell me about the photonics and all these technologies," because boy, I can now accelerate where I want to go with the technology we've been developing. Superdome Flex is really the first step on the path. It's a better product than either company could have delivered on their own. Now over time, we can integrate other learnings and technologies from the machine research program. It's a really exciting time. >> Excellent. Gentlemen, I always loved the SGI acquisition. Thought it made a lot of sense.
Great brand, kind of put SGI back on the map in a lot of ways. Gentlemen, thanks very much for coming on the Cube. >> Thank you again. >> We appreciate you. >> Mike: Thank you. >> Thanks for coming on. Alright everybody, we'll be back with our next guest right after this short break. This is the Cube, live from HPE Discover Madrid. Be right back. (energetic synth)

Published Date : Nov 29 2017


Randy Meyer & Alexander Zhuk | HPE Discover 2017 Madrid


 

>> Announcer: Live from Madrid, Spain. It's the Cube. Covering HP Discover Madrid 2017. Brought to you by Hewlett Packard Enterprise. >> Good afternoon from Madrid everybody. Good morning on the East Coast. Good really early morning on the West Coast. This is the Cube, the leader in live tech coverage. We're here day one at HPE Discover Madrid 2017. My name is Dave Velonte, I'm here with my cohost Peter Berse. Randy Meyers here is the Vice President and General Manager of the Mission Critical business unit at Hewlett Packard Enterprise. And he's joined by Alexander Zhuk, who is the SAP practice lead at Eldorado. Welcome to the Cube, thanks for coming on. >> Thanks for having us. >> Thank you. >> Randy we were just reminiscing about the number of times you've been on the Cube, consecutive years, it's like the Patriots winning the AFC East it just keeps happening. >> Or Cal Ripkin would probably be you. >> Me and Tom Brady. >> You're the Cal Ripken of the Cube. So give us the update, what's happening in the Mission Critical Business unit. What's going on here at Discover. >> Well, actually just lots of exciting things going on, in fact we just finished the main general session keynote. And that was the coming out party for our new Superdome Flex product. So, we've been in the Mission Critical space for quite some time now. Driving the HANA business, we've got 2500 customers around the world, small, large. And with out acquisition last year of SGI, we got this fabulous technology, that not only scales up to the biggest and most baddest thing that you can imagine to the point where we're talking about Stephen Hawking using that to explore the universe. But it scales down, four sockets, one terabyte, for lots of customers doing various things. 
So I look at that part of the Mission Critical business, and it's just so exciting to take technology and watch it scale in both directions, to the biggest problems that are out there, whether they are commercial and enterprise, and Alexander will talk about lots of things we're doing in that space, or even high-performance computing now, so we've kind of expanded into that arena. So, that's really the big news, Superdome Flex coming out, and really expanding that customer base. >> Yeah, Superdome Flex, any memory in that baby? (laughing) >> 32 sockets, 48 terabytes if you want to go that big, and it will get bigger and bigger over time as we get more density there. And we really do have customers in the commercial space using that. I've got customers that are building massive ERP systems, massive data warehouses to address that kind of memory. >> Alright, let's hear from the customer. Alexander, first of all, tell us about your role, and tell us about Eldorado. >> I'm responsible for SAP basis and infrastructure. I'm working in Eldorado, which is one of the largest consumer electronics networks in Russia. We have more than 600 shops all over the country, in more than 200 cities and towns, and have more than 16,000 employees. We have more than 50,000 stock keeping units, and are processing over three and a half million orders.
We got stock replenishment reports nine times faster. We got 50 minute sales reports every hour, instead of two hours. May I repeat this? >> No, it makes sense. So, the move to HANA was really precipitated by a need to get more data faster, so in memory allows you to do that. What about the infrastructure platform underneath, was it always HP at the time, that was 2011. What's HP's role, HPE's role in that, HANA? >> Initially we were on our business system in Germany, primarily on IBM solutions. But then according to the law requirements, we intended to go to Russia. And here we choose HP solutions as the main platform for our HANA database and traditional data bases. >> Okay Data residency forced you to move this whole solution back to Russia. If I may, Dave, one of the things that we're talking about and I want to test this with you, Alexander, is businesses not only have to be able to scale, but we talk about plastic infrastructure, where they have to be able to change their work loads. They have to be able to go up and down, but they also have to be able to add quickly. As you went through the migration process, how were you able to use the technology to introduce new capabilities into the systems to help your business to grow even faster? >> At that time, before migration, we had strong business requirements for our business growing and had some forecasts how HANA will grow. So we represented to our possible partners, our needs, for example, our main requirement was the possibility to scale up our CRM system up to nine terabytes memory. So, at that time, there was only HP who could provide that kind of solution. >> So, you migrated from a traditional RDBMS environment, your data warehouse previously was a traditional data base, is that right? And then you moved to HANA? >> Not all systems, but the most critical, the most speed critical system, it's our business warehouse and our CRM system. >> How hard was that? 
So, the EDW and the CRM, how difficult was that migration? Did you have to freeze code? Was it a painful migration? >> Yes, from the application point of view it was very painful, because we had to change everything. Some of our reports had to be completely changed and reviewed, and we had to adapt some ABAP code for the new database. Also, we ran into some HANA-level troubles, because the product was still very young. >> Early days of HANA. I think it was announced in 2011, maybe 2012... (laughing) >> That's one of the things for most customers that we talk to: it's a journey. You're moving from a tried and true environment that you've run for years, but you want the benefits of in-memory, of speed, of massive data that you can use to change your business. But you have to plan that. It was a great point. You have to plan that it's gonna scale up, some things might have to scale out, and at the same time you have to think about the application migration, the data migration, the data residency rules; different countries have different rules on what has to be where. And I think that's one of the things we try to take into account as HPE when we're designing systems. I want to let you partition them. I want to let you scale them up or down depending on the workload that's there. Because you don't just have one, you have BW and CRM, you have development environments, test environments, staging environments. The more we can make those look similar and give you flexibility, the easier that is for customers. And then I think it's incumbent on us also to make sure we support our customers with knowledge, service, expertise, because it really is a journey. But you're right, 2011 was the Wild West. >> So, give us the HPE HANA commercial. Everybody always tells us, we're great at HANA, we're best at HANA. What makes HPE best at HANA, different with HANA? 
>> What makes us best at HANA? One, we're all in on this. We have a partnership with SAP, and we're designing for the large scale, as you said, that nobody else is building up into this space. Lots of people are building one-terabyte things, okay. But when you really want to get real, when you want to get to 12 terabytes, when you want to get to 24, to 48, we're not only building systems capable of that, we're doing co-engineering and co-innovation work with SAP to make that work, to test that. I put systems on site in Walldorf, Germany, to allow them to go do that. We'll go diagnose software issues in the HANA code jointly, and say, here's where you're stressing that, and how we can go leverage that. You couple that with our services capability, and our move towards letting you consume HANA in a lot of different ways. There will be some of it that you want on premise, in house, and there will be some things where you say that part of it might want to be in the Cloud. My answer to all of those things is yes. How do I make it easy to fit your business model, your business requirements, and the way you want to consume things economically? How do I allow you to say yes to that? 2,500 customers, more than half of the installed base of all HANA systems worldwide, reside on Hewlett Packard Enterprise. I think we're doing a pretty good job of enabling customers to say that's a real choice that we can go forward with, not just today, but tomorrow. >> Alexander, are you doing things in the Cloud? I'm sure you are; what are you doing in the Cloud? Are you doing HANA in the Cloud? >> We don't have a traditional Cloud, so to say; we have a private Cloud. Due to certain circumstances, we own all the hardware ourselves. Now it's operated by our partner. Two companies are responsible for all the layers, from the hardware layer, service contracts, and hardware maintenance to basic operating system support and SAP support. 
>> So, if you had to do it all over again, what might you do differently? What advice would you give to other customers going down this journey? >> My advice is, first, choose the right team and the right service provider. Because when you go to a solution, a technical overview, an architectural overview, you should get confirmation from the vendor. It should be confirmed by HP, and it should be confirmed by SAP. There is also the financial question of how to fund the whole thing. And we got all of these things from HP and our service partner. >> Right, we'll give you the last word. >> So, one, it's an exciting time. We're watching this explosion of data happening. I believe we've only just scratched the surface. Today, we're looking at tens of thousands of SKUs for a customer, and looking at the velocity of that going through a retail chain. But every device that we have is gonna have a sensor in it, it's gonna be connected all the time. It's gonna be generating data to the point where you say, I'm gonna keep it, and I'm gonna use it, because it's gonna let me take real-time action. Some day they will be able to know that the mobile phone they care about is in their store, and pop up an offer to a customer that's exactly meaningful to do that. That confluence of sensor data, location data, all the things that we will generate over time. The ability to take action on that in real time, whether it's fix a part before it fails, or create a marketing offer for the person that's already in the store that allows them to buy more. That allows us to search the universe, in search of how we all got here. That's what's happening with data. It is exploding. We are at the very front edge of what I think is gonna be transformative for businesses and organizations everywhere. It is cool. I think the advent of in-memory, data analytics, real time, it's gonna change how we work, it's gonna change how we play. 
Frankly, it's gonna change humankind when we watch some of these researchers doing things on a massive level. It's pretty cool. >> Yeah, and the key is being able to do that wherever the data lives. >> Randy: Absolutely. >> Gentlemen, thanks very much for coming on the Cube. >> Thank you for having us. >> You're welcome, great to see you guys again. Alright, keep it right there, everybody. Peter and I will be back with our next guest right after this short break. This is the Cube, we're live from HPE Discover Madrid 2017. We'll be right back. (upbeat music)

Published Date : Nov 28 2017
