Michael Foster, Red Hat | CloudNativeSecurityCon 23
(lively music) >> Welcome back to our coverage of Cloud Native Security Con. I'm Dave Vellante, here in our Boston studio. We're connecting today, throughout the day, with Palo Alto on the ground in Seattle. And right now I'm here with Michael Foster with Red Hat. He's on the ground in Seattle. We're going to discuss the trends in containers and security and everything that's going on at the show in Seattle. Michael, good to see you, thanks for coming on. >> Good to see you, thanks for having me on. >> Lot of market momentum for Red Hat. The IBM earnings call the other day announced OpenShift is a billion-dollar ARR. So it's quite a milestone, and it's not often, you know. It's hard enough to become a billion-dollar software company, and then to actually have a billion-dollar product alongside. So congratulations on that. And let's start with the event. What's the buzz at the event? People talking about shift left, obviously supply chain security is a big topic. We've heard a little bit, or quite a bit, about AI. What are you hearing on the ground? >> Yeah, so the last event I was at that I got to see you at was three months ago, at KubeCon, and the talk was supply chain security. Nothing has really changed on that front, although I do think that the conversation, let's say with the tech companies versus what customers are actually looking at, is slightly different just based on the market. And, like you said, thank you for the shout-out to a billion-dollar OpenShift, and ACS is certainly excited to be part of that. We are seeing more of a consolidation, I think, especially in security. The money's still flowing into security, but people want to know what they're running. We've had some tremendous growth in the last couple years, and now it's, okay, let's get a hold of the containers, the clusters that we're running, let's make sure everything's configured. They want to start implementing policies effectively and really get a feel for what's going on across all their workloads, especially with the bigger companies. I think bigger companies allow some flexibility in the security applications that they can deploy. They can have different groups that manage different ones, but in the mid to low market, you're seeing a lot of consolidation, a lot of companies that want basically one security tool to manage them all, so to speak. And I think that the features need to somewhat accommodate that. We talk supply chain; I think most people continue to care about network security, vulnerability management, shifting left, and enabling developers. That's the general trend I see. I still really need to get some hands-on demos and see some people that I haven't seen in a while. >> So a couple things on that, 'cause, I mean, we talk about the macroeconomic climate all the time. We do a lot of survey data with our partners at ETR, and their recent data shows that in terms of cost savings, for those who are actually cutting their budgets, they're looking to consolidate redundant vendors. So, that's one form of consolidation. The other theme, of course, is there are so many tools out in the security market that consolidating tools is something that can help simplify, but then at the same time, you see opportunities open up, like IoT security. And so, you have companies that are starting up to just do that. So, there are these countervailing trends. I often wonder, Michael, will this ever end? It's like the universe, ever expanding, this tooling. What are your thoughts? >> I mean, I completely agree.
It's hard to balance trying to grow the company in a time like this while at the same time trying to secure it all, right? So you're seeing the consolidation, but some of these applications and platforms need to make some promises to say, "Hey, we're going to move into this space." Right? So when you have, like, Red Hat, who wants to come out with edge devices and help manage the IoT devices, well then, you have a security platform that can help you do that, that's built in. Then the messaging's easy. When you're trying to do that across different cloud providers and move into IoT, it becomes a little bit more challenging. And so I think that, and don't take my word for this, with some of those IoT startups, you might see some purchasing in the next couple years, in order to facilitate those cloud platforms being able to expand into that area. To me it makes sense, but I don't want to hypothesize too much from the start. >> But I do. We just did our predictions post, and for security we put up the chart of candidates, and there are dozens, and dozens, and dozens. Some are very well funded, but, I mean, you've seen down rounds everywhere. Many of these companies have raised over a billion dollars, and it's like, uh-oh, okay, so they're probably okay, maybe. But for a lot of smaller firms, there are just too many tools in the marketplace, and it seems like there is misalignment there, you know, kind of a mismatch between what customers would like to have happen and what actually happens in the marketplace. And that just underscores, I think, the complexities in security. So I guess my question is, how do you look at cloud native security, and what's different from traditional security approaches? >> Okay, I mean, that's a great question, and it's something that we've been talking to customers about for the last five years. And, really, it's just a change in mindset. Containers are supposed to unleash developer speed, and if you don't have a security tool to help do that, then you're basically going to inhibit developers in some form or another. It's managing that, while also giving your security teams the ability to tell the message of: we are being more secure. You know, we're limiting vulnerabilities in our cluster. We are seeing progress because containers, you know, have a shorter life cycle, and there is security and speed. Having that conversation with the C-suites is a little different, especially when they might be used to virtual machines and managing things through that lens. I mean, if it works, it works, from a developer's standpoint, but you're not taking advantage of those containers and the developer speed, so that's the difference. Now, the first challenge is making that pitch. The second challenge is making that pitch scale: you can onboard your developers and get your containers up and running, but then as you bring in new groups, as you move over to Kubernetes or you get into more container workloads, how do you onboard your teams? How do you scale? And I tend to see a general trend of a big investment needed for about two years to make that container shift, and then the security tools come in and really blossom, because once that core separation of responsibilities happens in the organization, the security tools are able to accelerate the developer workflow and not inhibit it. >> You know, I'm glad you mentioned separation of responsibilities.
We go to a lot of shows, as you know, with theCUBE, and many of them are cloud shows. And on the one hand, cloud has obviously made the world more interesting and better in so many different ways, even security, but it's like new layers are forming. You've got the cloud, you've got the shared responsibility model, so the cloud is like the first line of defense. And then you've got the CISO, who is relying heavily on devs, you know, the whole shift left thing. So we're asking developers to do a lot, and then you're kind of behind them. I guess you have audit as, like, the last line of defense. But my question to you is, how can software developers really ensure that the cloud native tools they're using are secure? What steps can they take to improve security, and specifically, what's Red Hat doing in that area? >> Yeah, well, I would actually move away from that being the developer's responsibility. I think the job is the operators' and the security people's: give developers the tools and the ability to see the vulnerabilities they're introducing. Let's say signing their images, actually verifying that the images that are run in the cloud are the ones that they built. That can all be done, and it can be done open source. So we have a DevSecOps validated pattern that Red Hat's pushed out, and it's all open source tools in the cloud native space. You can sign your builds and verify them at runtime, and make sure that you're doing all of that for free, as one option. But in general, I would say that the hope is that you give the developer the information to make responsible choices, and that there's a dialogue between your security, operations, and developer teams. But security, we should not be pushing that onto the developer. And so I think with ACS and our tool, the goal is to get in and say, "Let's set some reasonable policies, have a conversation, let's get a security liaison," let's say in the developer team, so that we can make some changes over time. And the more we can automate that, and the more we can build and have that conversation, the better. I won't say the more secure your clusters, but I think the further you are on your path to securing your environment. >> How much talk is there at the event about recent high-profile incidents? We heard, you know, Log4j, of course, was mentioned in the keynote. Somebody, I think, yelled out from the audience, "We're still dealing with that." But when you think about these incidents, looking back, what lessons do you think we've learned from these events? >> Oh, I mean, I would say, if you have an approach where you're managing your containers, managing their age, and using containers to accelerate, so let's say no images that are older than 90 days, for example, you're going to avoid a lot of these issues. And so I think people that are still dealing with that aspect haven't set up the proper, let's say, disclosure between teams, and update strategy, and so on. So, with Log4j, if it's still around, you know, something's missing there, but in general you want to be able to respond quickly, and to do that you need the tools and policies to be able to tell people how to fix that issue. I mean, the Log4j fix was out seven days after, so your developers should have been well aware of that. Your security team should have been sending the messages out. And I remember even fielding all the calls, all the fires that we had to put out when that happened. But yeah.
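To make the "no images older than 90 days" policy Michael describes concrete, here is a minimal sketch; the image names and build timestamps are invented for illustration, and a real check would pull creation dates from the registry rather than a hard-coded map.

```python
# Flag container images whose build date exceeds a 90-day cutoff.
# Image names and timestamps below are invented for illustration.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)

images = {
    "registry.example.com/shop/frontend:1.4.2": "2022-11-20T08:30:00+00:00",
    "registry.example.com/shop/batch:0.9.1": "2022-07-15T12:00:00+00:00",
}

now = datetime.now(timezone.utc)
for image, built in images.items():
    age = now - datetime.fromisoformat(built)
    verdict = "OK" if age <= MAX_AGE else f"STALE ({age.days} days old)"
    print(f"{image}: {verdict}")
```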
>> I thought Brian Behlendorf's talk this morning was interesting, 'cause he was making an attempt to say, "Hey, here's some things that you might not be thinking about that are likely to occur." And I wonder if you could comment on them and give us your thoughts as to how the industry generally, and maybe Red Hat specifically, are thinking about dealing with them. He mentioned ChatGPT or other GPTs to automate spear phishing. He said the identity problem is still not fixed. Then he talked about free riders sniffing repos, essentially for known vulnerabilities that are slow to fix. He talked about regulations that might restrict shipping code. So these are things that are on the radar, but, you know, we're kind of putting out yesterday's fire. What are your thoughts on those sorts of potential issues that we're facing, and how are you guys thinking about them? >> Yeah, that's a great question, and I think it's twofold. One, it's brought up in front of a lot of security leaders in the space for them to be aware of it, because security is a constant battle, a constant war that's being fought. ChatGPT lowers the barrier of entry for a lot of, let's say, would-be hackers or people like that to understand systems and create, let's say, simple manifests to leverage Kubernetes or leverage a misconfiguration. So as the barrier drops, we as a security team, as a security organization, need to be able to respond and have our own tools to be able to combat that, and we do. So a lot of it is just making sure that we shore up our barriers and that people are aware of these threats. The harder part, I think, is educating the public, and that's why you tend to see the supply chain trend be a little bit ahead of the implementation. For example, SBOMs and signing and attestation: I think that's still, you know, a year, two years away from becoming, let's say, commonplace, especially in something like a production environment. Again, so, you know, stay bleeding edge, make sure that you're aware of these issues, and we'll be constantly coming to these calls and filling you in on what we're doing, to make sure that we're up to speed. >> Yeah, so I'm hearing from folks like yourself that, when you think of the future of cloud native security, we're going to see continued emphasis on better integration of security into DevSecOps. You're pointing out it's really the ops piece, that runtime, that we really need to shore up. You can't just put it on the shoulders of the devs. And, you know, using security-focused tools and best practices, of course you hear a lot about that, and the continued drive toward automation. My question is, automation, machine learning: where are we in that maturity cycle? How much of that is being adopted? Sometimes folks embrace automation, but it brings unknown, unintended consequences. Are folks embracing that heavily? Are there risks associated with that, or are we kind of through that knothole, in your view? >> Yeah, that's a great question. I would compare it to something like a smart home. You know, we sort of hit a wall. You can automate so much, but it has to actually be useful to your teams. So when we're going in and deploying ACS and using a cloud service, one, you know, you want something that's a service that you can easily set up.
And then the other thing is you want to start in inform mode. So you can't just automate everything. Even if you're doing runtime enforcement, you need to make sure it's very, very targeted to exactly what you want, and then you have to keep checking it, because people start new workloads and people get onboarded every week or month. So it's finding that balance between policies, where you can inform the developer and the operations teams and give them the information to act, and, worst case, you can step in as a security team to stop it. You know, during the onboarding of our ACS cloud service, we have an early access program, and I get on calls, and it's not even the security team, it's the operations team that starts with the security product. And sometimes it's just, "Hey, how do I set this policy so my developers will find this vulnerability, like a Log4Shell, and I just want to send 'em an email, right?" And these are, you know, they have the tools and they can do that. And so it's nice to see the operations side take on some security. They can automate it, because maybe you have a NetSec security team that doesn't know Kubernetes or containers as well. So that shared responsibility is really useful. And then, again, it's making that automation targeted. Even though runtime enforcement is a constant thing that we talk about, where we see it in the wild, where people are properly setting up admission controllers and it's acting, it's, again, very targeted: databases, kubectl exec, things that we all know are basically a no-go in production. >> Thank you for that. My last question: I want to go to the hardest part, 'cause you're talking to customers all the time, and you guys are working on the hardest problems in the world. What is the hardest aspect of securing, and I'm going to come back to the software supply chain, the hardest aspect of securing the software supply chain from the perspective of a security pro, a software engineer, a developer, a DevSecOps pro? And then part B of that is, how are you attacking that specifically at Red Hat? >> Sure. So as a developer, it's managing vulnerabilities with updates. As an operations team, it's keeping all the clusters up to date, because you have a bunch of different teams working in the same environment. From a security team, let's say, it's getting people to listen to you, because there are a lot of things that need to be secured, and it's communicating that and getting actionable data to the people who make the decisions. That's hard. From a C-suite, it's getting the buy-in, because it's really hard to justify the dollars and cents of security when security is constantly having to have these conversations with developers. So for ACS, you know, we want to be able to give the developer those tools. We also want to build the dashboards and reporting so that people can see their vulnerabilities drop over time, and also so that they're able to respond quickly, because really that's where the dollars and cents are made in the product. It's that a Log4Shell comes out, you get immediately notified when the feeds are updated, and you have a policy in action so you can respond to it. So I can go to my CISOs and say, "Hey, look, we're limiting vulnerabilities. And when this came out, the developers stopped it in production, and we were able to update it with the next release." Right? Like, that's your bread and butter. That's the story that you want to tell.
Again, it's a harder story to tell, but it's easy when you have the information to be able to justify the money that you're spending on your security tools. Hopefully that answered your question. >> It does. That was awesome. I mean, you've got data, you've got communication, you've got the people; obviously there are skillsets, and you have, of course, tooling and technology as a big part of that. Michael, really appreciate you coming on the program, sharing what's happening on the ground in Seattle, and can't wait to have you back. >> Yeah, awesome. Thanks again for having me. >> Yeah, our pleasure. All right. Thanks for watching our coverage of Cloud Native Security Con. I'm Dave Vellante. I'm in our Boston studio. We're connecting to Palo Alto. We're connecting on the ground in Seattle. Keep it right there for more coverage. Be right back. (lively music)
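A footnote on the targeted admission-controller enforcement Michael described above: the sketch below shows only the decision logic a validating admission webhook might apply to deny exec into production pods. A real webhook wraps this in an HTTPS server registered with the API server, and the "prod-" namespace convention here is an invented assumption, not anything Red Hat or ACS prescribes.

```python
# Decision logic for a Kubernetes validating admission webhook that blocks
# `kubectl exec` into pods in production namespaces. Hedged sketch only:
# the HTTPS server and TLS plumbing around it are omitted.
def review(admission_review: dict) -> dict:
    """Build an AdmissionReview response for an incoming request."""
    req = admission_review["request"]
    is_exec = (
        req.get("resource", {}).get("resource") == "pods"
        and req.get("subResource") == "exec"
    )
    denied = is_exec and req.get("namespace", "").startswith("prod-")
    response = {"uid": req["uid"], "allowed": not denied}
    if denied:
        response["status"] = {"message": "exec into production pods is blocked"}
    return {"apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response}

# Abridged example of the request shape the API server sends:
sample = {"request": {"uid": "d4e1", "namespace": "prod-payments",
                      "resource": {"resource": "pods"}, "subResource": "exec"}}
print(review(sample)["response"]["allowed"])  # False
```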
Jed Dougherty, Dataiku | AWS re:Invent 2022
(bright music) >> Welcome back to Vegas, guys and girls. We're pleased that you're watching theCUBE. We know you've been with us. This is our fourth day. We know you've been with us since day one. Why wouldn't you be? Lisa Martin here. As I mentioned, day four of theCUBE's coverage of AWS re:Invent. There are north of 55,000 people that have been at this event this week. We're hearing hundreds of thousands online. It really feels like old times, which is awesome. We're pleased to welcome back a gentleman from Dataiku who's actually new to theCUBE, but Dataiku is not. Jed Dougherty is here, the VP of Platform Strategy. Thanks for joining me today, Jed. >> Oh, I'm so happy to be here. >> Talk a little bit, for anybody that isn't familiar with Dataiku, tell the audience a little bit about the technology, what you guys do. >> Dataiku is an end-to-end data science and machine learning platform. We take everything from data ingestion, pipelining of that data, bringing it all together, something that's useful for building models, deploying those models, and then managing your ML ops workflow. So, really all the way across. And we sit on top of, basically, tons of the AWS stack, as well as lots of the partners that are here today. >> Okay, got it. >> Snowflake, Databricks, all that. >> Got it. So one of the things that, it was funny, I think it was Adam's keynote Tuesday morning. I didn't time it, I watched it, but one of my guests said to me earlier this week that Adam spent exactly 52 minutes talking about data. >> Yeah. >> 52 minutes. Obviously, we can't come to an event like this without talking about data. Every company these days has to be a data company, whether it's my grocery store or a retailer or a hospital. And so- >> Jed: It is the lifeblood of every modern company. >> It is, but you have to be able to access it. You have to be able to harness it, access it, derive insights from it, and be able to act on it faster than the competitors that are waiting, like, right back here. One of the things Adam Selipsky talked about with our boss, John Furrier, who's the co-CEO of theCUBE, they had a sit-down about a week before re:Invent. John always gets a preview of the show, and Adam said, you know, he thinks the role of data analyst is going to go away. Or at least the term, because with the data democratization that needs to happen, putting data in the hands of all the business users, every business user, whether in technology or marketing or ops or finance, is going to have to analyze data to do their jobs. >> Could not agree more. >> Are you hearing that from customers? >> 100%. >> Yeah. >> I was just at the CTO Summit of Bank of America two weeks ago out in California, and their CTO had a statistic: 60,000 technologists in Bank of America, all asking data-type questions. You can have the best team of data scientists in the world, and they do. They have some of the best data scientists in the world there. And this team of data scientists could answer any one of the questions that those 60,000 people might have, but they can't answer all of them, right? You need those people to be able to answer their own questions. I don't know if the term data analyst is going away. I think, yeah, everybody's just going to have to become a bit more of one.
Just like how Excel taught everybody how to use the spreadsheet, in the future, in the next five, 10 years, the democratization of AI means that tools like Dataiku and other data science tools are going to teach everybody how to analyze data. >> Talk about Dataiku as a facilitator of that democratization, giving, like, the citizen technologist who might be in finance the ability to do that. >> So, a lot of data science tools are aimed at your hardcore coder, right? Somebody who wants to be sitting at a notebook writing (indistinct) or something like that and running models on some big fancy Spark server. Dataiku is still going to be running models on some big fancy Spark server, but we're really obfuscating the challenge of writing code away from the user. So we target low-code, no-code, and high-code users all working together in a collaborative platform. So we really do believe that there is always going to be a place for data scientists. That role is not going away. You will always need hardcore coders to take on those moonshot, very challenging topics. But for everyday AI, anybody should be able to do this, and it should be open to anybody. >> Right. >> Jed: We really aim to facilitate that. >> I would love to hear some feedback. You know, this is day four of the show, as I was saying, and day four is packed. I mean, energy-level-wise, guys, it is the same as it was when we started here on Friday night. But I'd love to hear, Jed, from your perspective, some of the customer conversations that you've had. What are some of the challenges? They're coming to you saying, "Jed, Dataiku, help us eradicate these challenges so we can transform our business." >> What I'm hearing from customers and partners and AWS here is, over and over, we don't want to buy tools anymore. We want to buy solutions. We want a vertical solution that's pre-built for our industry. And we want it to be, not necessarily click-and-run out of the box, but we want a template that we can build off of quickly. And I've heard that customers are also looking to understand how tools can be packaged together. You've got how many booths here? 1,000 booths? >> Yes, easily. >> You have 1,000 different products being talked about right behind us. Customers need to know which of these products are friends with each other and how they fit together, so that when they purchase a suite of tools to do their jobs, it's all going to work naturally together. I think this is a really vital concept for GSIs as well. GSIs need to understand how to package sets of tools together to deliver a full solution to clients. I think 10 years ago, five years ago, AWS was in the business of selling servers in the cloud. Basically, you would buy an EC2 instance and install whatever software you wanted on it. I don't know that they're still in that business, but customers don't want to buy servers from AWS anymore. They want to buy solutions. >> Right. >> Rent, whatever. >> Yeah. (chuckles) >> That is the big repeated message that I've heard here.
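For a sense of what the high-code side of that low-code/no-code/high-code collaboration looks like, here is a hedged, recipe-style sketch. It assumes Dataiku's in-platform `dataiku` Python package, and the dataset names and columns are invented for illustration.

```python
# Inside a Dataiku code recipe: pull a managed dataset as a DataFrame,
# transform it, and write it back, while low-code users work on the same
# flow visually. Dataset names and columns below are invented.
import dataiku

orders = dataiku.Dataset("orders_raw").get_dataframe()
orders["order_value"] = orders["quantity"] * orders["unit_price"]

enriched = dataiku.Dataset("orders_enriched")
enriched.write_with_schema(orders)
```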
One of the things that we say about theCUBE is, we extract the signal from the noise. How does Dataiku get past the noise? How do you get up the stack to really impact customers, so they understand the value that you're delivering? >> I think that data science and ML sound like a very complicated topic, but our value prop is relatively simple, and we appeal both to your end users, who are excited to learn about how data science works and how they can leverage these tools in their day-to-day jobs, as well as to IT. IT right now, at major organizations, wants to be able to build a full stack that makes sense. And the big choices they're making right now are around infrastructure. Where am I going to run my compute? So they're choosing between Snowflake or Databricks or a native AWS compute solution, right? And so they make this big choice around compute, and then they realize, "Oh, how many of our users across our organization are actually able to leverage this big compute choice?" Oh, maybe 100, maybe 200. That's not incredibly useful for what we've just decided to completely stand behind. Dataiku, all of a sudden, opens that up to thousands of users across your organization. So it makes IT feel empowered by being able to help more people, and it makes users feel empowered by being able to use a great tool and start answering their own questions. >> And where are your customer conversations these days? As we look at AI and ML, emerging technologies, so many customers and companies know we have to go in this direction. We have to have AI to speed the business. Are you seeing more of the conversations still in IT, or are they actually going up the stack? >> (chuckles) It's a great question. When you're going into large organizations, there are two sales motions, right? There's convincing the business users that this is a great thing, and then convincing IT that it's not going to be too painful. You always have to go to both places. IT doesn't want to take on a boondoggle, or an albatross, I don't remember the word, but something that they're going to have to deal with for the next 10 years and then eventually dismantle and pull apart. I think a lot of IT got very scared about big data platforms and solutions because of Hadoop. To be honest, Hadoop was incredibly powerful, but maybe not as mature a technology as IT would've liked it to be, from a maintenance and administration standpoint. So yes, you will always have to sell to IT and help IT feel comfortable with the platform. But no, the conversations that I want to have are the use case conversations with a Chief Data Officer, Chief Revenue Officer, Chief Marketing Officer. That's who I really want to convince that this is going to be a worthwhile opportunity. >> And what are some of the key use cases that Dataiku is tackling in the market these days? >> So, two of the biggest verticals that I work with personally are finance and pharmaceuticals. In finance, we are closely embedded with wealth management organizations. So, a lot of that is around customer retention, churn: relatively obvious, simple concepts, but ones where it's worth a lot of money. In pharma, we work both on the supply side, doing supply chain optimization, ensuring the right drugs get to the right places at the right time, as well as on the business and marketing side, ensuring that your ad spend is correctly distributed across different advertising platforms.
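The churn use case Jed mentions has a standard supervised-learning shape. Here is a minimal sketch with synthetic data and invented feature names; it is not Dataiku's pipeline, just an illustration of the problem setup.

```python
# Minimal churn-prediction sketch on synthetic data. Feature names
# (tenure, logins, fee sensitivity) are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.integers(1, 120, n),   # tenure_months
    rng.uniform(0, 50, n),     # logins_per_month
    rng.uniform(0, 1, n),      # fee_sensitivity
])
# Synthetic label: low engagement plus high fee sensitivity means churn.
y = ((X[:, 1] < 10) & (X[:, 2] > 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")
```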
>> So if you're working with a financial organization, I want to understand, from a consumer, from the end user's perspective, although obviously this technology impacts the end user who's trying to do a transaction: what's in it for me? As the end user, I don't know that Dataiku is under the hood. >> You'd never know. >> Which is good. I shouldn't have to worry about the technology. >> Jed: You shouldn't have to worry about that at all. >> What's in it for the end user customer? What are they gaining from this? >> So, from a very end-user perspective, think about when you logged onto your Bank of America or Chase app five or 10 years ago; maybe you didn't even have it on your phone five years ago. Or when you logged into your account online. We do 95% of our banking online right now, right? I go into a physical location, what, once every six months or something, to get a cashier's check? I don't know. The experience that you're getting, and the amount of information you're getting back about your spending habits, where your money is going, what your credit score is, all of these things are being driven by these big data organizations inside the banks. Also, and this is a little creepier, any type of promotional email, or the types of things that you get feedback on when you use your credit card, and the offers that you get through that, are all being personalized to you through the information that these banks are collecting about your spending habits. >> Yeah, but we want that as a consumer. We want the personalization. >> Yeah, of course. We want it to be magic slash not creepy. (laughs) >> Right, I want them to recommend the best card for me. >> Right. >> The next best thing. >> It's good for me, it's good for them. >> Don't serve me up something that I've already bought. That always bugs me when I'm like, I already bought that. >> I get that all the time. I'm like, yeah, I have that card already. It's in my wallet. Why are you telling me? >> We only have a couple of minutes left, Jed, but talk to me about, from a platform strategy perspective, what's next for Dataiku and AWS? >> So we are making a major transition right now, and it's core to our platform. For a long time, the way that we've installed Dataiku is, we help our customers install it on their AWS account, so it runs inside their tenant. This is very comfortable for, for example, large banking clients and pharma clients that have personally identifiable information, all that kind of thing. They own everything. However, as we were talking about before, we're really moving from providing a tool to providing solutions. And part of that is obviously a move to SaaS. So two years ago we released a SaaS offering. We've been expanding it more and more, and this year we want to be pushing SaaS first. So Dataiku Online should be the first option when new customers move on. And that is a huge platform shift. It means making sure that we have the right security in place. It means making sure that we have the right scaling in place, that we have 24-7 support. All this has been a big challenge. A big, fascinating challenge, actually, to put together. >> Awesome. Last question for you. Say you get a brand new DeLorean, I hear they're coming back, and you want to put, you really, really want to put a bumper sticker on it, 'cause why not? And it's about Dataiku, and it's like a sizzle reel kind of thing. >> A sizzle reel, all right. >> Yeah. What does it say? >> Extraordinary people, everyday AI. >> Wow.
Drop the mic, Jed. That was awesome. Thank you so much for coming on the program. We really appreciate the update on Dataiku, what you guys are doing for customers, your specialization in solutions for verticals. Awesome stuff. We'll have to have you back. >> Thank you so much. >> All right, my pleasure. >> Bye-bye. >> For my guest, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage. (bright music)
Shireesh Thota, SingleStore & Hemanth Manda, IBM | AWS re:Invent 2022
>> Good evening, everyone, and welcome back to Sparkly Sin City, Las Vegas, Nevada, where we are here with theCUBE covering AWS re:Invent for the 10th year in a row. John Furrier has been here for all 10. John, we are in our last session of day one. How does it compare? >> I just graduated high school 10 years ago. It's exciting to be here. It's been a long time. We've gotten a lot older. >> Your brain is complex. You've been a lot in there. >> All good. This is what's going on: this next segment is wrapping up day one, which is like the kickoff. Monday's a great day. I mean, Tuesday's coming tomorrow, big day. The announcements are all around the kind of next gen, and you're starting to see partnering and integration as a huge part of this next wave, 'cause with APIs at the cloud, next-gen cloud is going to be deep engineering integration, and you're going to start to see business relationships and business transformation scale horizontally, not only across applications but across companies. This has been going on for a while; we've been covering it. This next segment is going to be one of those things that we're going to look back at as something that's going to happen more and more. >> Yeah, I think so. It's what we've been talking about all day. Without further ado, I would like to welcome our very exciting guests for this final segment: Shireesh from SingleStore, thank you for being here, and we also have Hemanth from IBM Data and AI. Y'all are partners, and have been partners for about a year. I'm going to go out on a limb, only because of their legacy, and suspect that a few more people might know what IBM does versus what SingleStore does. So why don't you just give us a little bit of background, so everybody knows what's going on? >> Yeah, so SingleStore is a relational database. It's a foundational relational system, but the thing that we do best is what we call real-time analytics. So we have these legacy systems which do operations or analytics, and if you wanted to bring them together, like most applications want to, it's really a big hassle. You have to build an ETL pipeline, you have to duplicate the data. It's really faulty systems all over the place, and you won't get the insights really quickly. SingleStore is trying to solve that problem elegantly, by having an architecture that brings both operational and analytics in one place. >> Brilliant. >> You guys had big funding, and now you're expanding: MemSQL, now SingleStore. Databases, $46 billion. Again, databases: we've been saying this on theCUBE for 12 years. Databases have been great, and recently, not one database will rule the world. We know that; everyone knows that. Databases, data, code, cloud scale: this is the convergence now of all of that coming together, where data, at this re:Invent, is the theme. Everyone will be talking about end-to-end data, new kinds of specialized services, faster performance, new kinds of application development. This is the big part of why you guys are working together. Explain the relationship, how you guys are partnering and engineering together. >> Yeah, absolutely. So at IBM, we are mainly into hybrid cloud and AI, and one of the things we are looking at is expanding our ecosystem, right? Because we have gaps, and as opposed to building everything organically, we want to partner with the likes of SingleStore, which has unique capabilities that complement what we have.
Because at the end of the day, customers are looking for an end-to-end solution that solves business problems. And they are very good at real-time data analytics and HTAP, right? Because we have transactional databases, analytical databases, and data lakes, but HTAP is a gap that we currently have, and by partnering with them we can essentially address the needs of our customers. What we also plan to do is integrate our products and solutions with theirs, so that we can deliver a solution to our customers. >> This is why I was saying earlier, I think this is a telltale sign of what's coming: a lot of use cases where people are partnering. Right now you've got the clouds, a bunch of building blocks. If you put it together yourself, you can build a durable system, very stable. If you want an out-of-the-box solution, you can get that pre-built, but you really can't optimize. It breaks, you've got to replace it. Engineering systems together at a high level is a little bit different, not just buying something out of the box. You guys are working together. This is kind of an end-to-end dynamic that we're going to hear a lot more about at re:Invent from the CEOs. But you guys are doing it across companies, not just with AWS. Can you guys share this new engineering business model use case? Do you agree with what I'm saying? Do you think John's crazy? I mean, either way: you've got out-of-the-box, or engineer it yourself, but now, when people do joint engineering projects, right, they're different. >> Yeah, no, I mean, you know, I think our partnership is a testament to what you just said, right? When you think about how to achieve real-time insights, the data comes into the system, and the customers and new applications want insights as soon as the data comes into the system. So what we have done is basically build an architecture that enables that. We have our own storage and query engine, indexing, et cetera, and so we've innovated in our indexing, in our database engine, but we want to go further than that. We want to be able to exploit the innovation that's happening at IBM. A very good example is, for instance, we have a native connector with Cognos, their BI dashboards, right? To reason over data very natively. So we built a hyper-efficient system that moves the data very efficiently. Another very good example is embedded AI. IBM, of course, has built an AI chip, and they have advanced quite a bit into embedded AI, custom AI. So what we have done, and this is a true marriage between the engineering teams here, is make sure that the data in SingleStore can natively exploit that kind of goodness. So we have taken their libraries. If you have data in SingleStore, let's imagine you have Twitter data and you want to do sentiment analysis, you don't have to move the data out, train the model outside, et cetera. We just have the pre-built embedded AI libraries already. So it's a pure engineering marriage there that kind of opens up a lot more insights than just simple analytics. >> And cost, by the way, too. Moving data around. >> Another big theme. Yeah. >> And latency and speed is everything about SingleStore, and you know, it couldn't have happened without this kind of a partnership. >> So you've been at IBM for almost two decades, you don't look it, but you're nearly 17 years in. And maybe it hasn't, so feel free to educate us:
How has IBM's approach to AI and ML evolved, and how is it looking to involve partnerships in the ecosystem as a collaborative, raise-the-water-level-together force? >> Yeah, absolutely. So I think when we initially started AI, if you recollect, Watson was at the forefront of AI. We started the whole journey. I think our focus was more on end solutions, both horizontal and vertical: Watson Health, which is more vertically focused, and we were also looking at Watson Assistant and Watson Discovery, which were more horizontally focused. That whole strategy evolved over a period of time. Now we are trying to be more open. For example, this whole embeddable AI that Shireesh was talking about: it's essentially making the guts of our AI libraries available for partners and ISVs to build their own applications and solutions. We've been using them historically within our own products for the past few years, but now we are making them available. >> How big of a shift is that? Do you think we're seeing a more open and collaborative ecosystem in the space in general? >> Absolutely. Because if you think about it, in my opinion, everybody is moving towards AI, and that's the future. And you have two options: either you build it on your own, which is going to require a significant amount of time, effort, investment, and research, or you partner with the likes of IBM, which has been doing it for a while and has the ability to scale to the requirements of all the enterprises and partners. So you have that option, and some companies are choosing to do it on their own, but I believe there's a huge amount of opportunity where people are looking to partner and source what's already available, as opposed to investing from scratch. >> Classic buy-versus-build analysis for them to figure out, yeah, how to get into the game. >> And why reinvent the wheel when we're all trying to do things at not just scale, but orders of magnitude faster and more efficiently than before? It makes sense to share, but it does feel like a bit of a paradigm shift in the culture of competition versus how we're going to creatively solve these problems. There's room for a lot of players here, I think. >> You know, I wanted to ask, if you don't mind me jumping in on that. So, okay, I get that people buy or build: I'll use existing or build my own. The decision point on that, to your point about the path to AI, is: do I have the core competency? The skills gap is a big issue. So, okay, take theCUBE: if we had AI, we'd take it, 'cause we don't have any AI engineers around yet to build out on all the linguistic data we have. So we might use your AI, but we might also want to have a core competency. How do companies get that core competency going while using and partnering with AI? What do you guys see as a way for them to get going? Because I think some people probably want to have a core competency of AI. >> Yeah, so I think, again, I want to distinguish between a solution, which requires core competency (you need expertise on the use case, and expertise on your industry vertical and your customers), and the foundational components of AI, which are agnostic to that core competency, right? Because you take the foundational piece and then you further train it and refine it for your specific use case.
So we are not saying that we are experts in all the industry verticals. What we are good at is the foundational components, which is what we want to provide. >> Got it. >> Yeah, that's the hard, deep, heavy lift. >> Yeah. And I can give some color to that question from our perspective, right? When we think about what our core competency is, it's about databases, right? But there's a symbiotic relationship between data and AI, you know; they really move each other. >> You need, they kind of can't have one without the other. >> Right. And so the question is, how do we make sure that we expand that relationship, where our customers can operationalize their AI applications closer to the data, not move the data somewhere else, do the modeling and training somewhere else, and deal with multiple systems, et cetera? And this is where this kind of cross-engineering relationship helps. >> Awesome. And then I think companies are going to want to have that baseline foundation and then start hiring and learning. It's like driving the car: you get the keys when you're ready to go. >> Yeah, I'll give you a simple example, right? >> I want that turnkey lifestyle. We all do. >> Yeah. Let me just give you a quick analogy, right? For example, you can basically make the engine and the car on your own, or you can source the engine and make the car. So it's an option that you can decide on. The same thing with airplanes as well, right? Whether you want to make the whole thing, or whether you want to source from someone who is already good at doing that piece. >> Or even create a new alloy, for that matter. I mean, you can take that analogy all the way down. >> Right. >> Is there a structural change in how companies are laying out their architecture in this modern era, as we start to see this next-gen cloud emerge? Security teams becoming much more focused, data teams, building into DevOps, into the developer pipeline; we're seeing that trend. What do you guys see in the modern data stack kind of evolution? Is there a data solutions architect coming? Do they exist yet? Is that what we're going to see? Is it data-as-code automation? How do you guys see this landscape of the evolving persona? >> I mean, if you look at the modern data stack as it is defined today, it is too detailed, and there are way too many layers, right? There are at least five different layers. You've got to have, like, a storage layer, you replicate to do real-time insights, and then there's a query layer, visualization, and then AI, right? So you have too many ETL pipelines in between, too many services, too many choke points, too many failures. >> Right. ETL, that's the dirty three-letter word. >> Say no to ETL. >> Adam Selipsky, that's his quote, not mine. We hear that. >> Yeah. I mean, there are different names for it. They don't call it ETL; we call it replication, whatnot. But the point is, it's hassle. >> Data is getting more hassle. >> More hassle, yeah. The data is ultimately getting replicated in the modern data stack, right? And that's kind of one of our theses at SingleStore, which is that you have to converge, not hyper-specialize, and convergence is possible in certain areas, right? When you think about operational and analytics as two different aspects of the data pipeline, it is possible to bring them together.
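As a concrete aside on that convergence: SingleStore speaks the MySQL wire protocol, so a stock Python MySQL driver can serve an operational write and an analytical aggregate against the same live table, with no ETL hop in between. The host, credentials, and schema below are placeholders, not a real deployment.

```python
# Hedged HTAP sketch: one table, a transactional insert and an analytical
# aggregate, over the MySQL wire protocol. Connection details are fake.
import pymysql

conn = pymysql.connect(host="svc-example.singlestore.com", user="app",
                       password="...", database="retail")
with conn.cursor() as cur:
    # Operational side: record a sale as it happens.
    cur.execute("INSERT INTO orders (sku, qty, price) VALUES (%s, %s, %s)",
                ("SKU-42", 3, 19.99))
    conn.commit()

    # Analytical side: aggregate over the same table, no pipeline between.
    cur.execute("SELECT sku, SUM(qty * price) FROM orders GROUP BY sku")
    for sku, revenue in cur.fetchall():
        print(sku, revenue)
conn.close()
```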
And we have done it. We have a lot of proof points; our customer stories speak to it, and that is one area of convergence. We need to see more of it. The relationship with IBM is sort of another step of convergence, wherein, in the final phase, operational analytics comes together, and can we take analytics, visualization with reports and dashboards, and AI together? This is where Cognos and embedded AI come in together, right? So we believe in SingleStore, which is really convergence. >> One single path. >> A shocking, a shocking tie-back there. So, obviously, you know, one of the things we love to joke about on theCUBE, 'cause we like to goof on the old enterprise, is that they solve complexity by adding more complexity. That's old thinking. The new thinking is: put it under the covers, abstract away the complexities, and make it easier. That's right. So how do you guys see that? Because this end-to-end story is not getting less complicated. It's actually, I believe, increasing in complexity. However, there are opportunities in doing it. >> Faster to put it under the covers, or put it under the hood. What do you guys think about how this new complexity gets managed in this new data world we're going to be coming into? >> Yeah, so I think you're absolutely right. The world is becoming more complex, technology is becoming more complex, and I think there is a real need, and it's not just coming from us, it's also coming from the customers, to simplify things. So our approach around AI is exactly that, because we are essentially providing libraries; just like you have Python libraries, you now have AI libraries that you can go infuse and embed deeply within applications and solutions. So it becomes integrated and simple from the customer's point of view. From a user point of view, it's very simple to consume, right? So that's what we are doing, and I think SingleStore is doing that with data, simplifying data, and we are trying to do that with the rest of the portfolio, specifically AI. >> It's no wonder there's a lot of synergy between the two companies. John, do you think they're ready for the Instagram challenge? >> Yes, they're ready. >> Uh-oh. >> I think they're ready. So we're doing a bit of a challenge, a little 30 seconds off the cuff. What's the most important takeaway? Think of it as your thought leadership sound bite from AWS. >> 2023, on an Instagram reel, I'm scrolling. That's the Instagram. It's... >> Your moment to stand out. Yeah, exactly. Shireesh, you look like you're ready to rock. Let's go for it. You've got that smile; I'm going to let you go. >> Oh, goodness. You know, there's this quote from astrophysics: space tells matter how to move, and matter tells space how to curve. They have that kind of a relationship. I see the same between AI and data, right? They need to move together. And so AI is possible only with the right data, and data is meaningless without good insights through AI. They really have that kind of relationship, and you'll see a lot more of that happening in the future. The future of data and AI are combined, and that's going to accelerate a lot faster. >> Shireesh, well done. Wow. Thank you. I am very impressed. That's a tough act to follow. You ready for it, though? Let's go. >> Absolutely. Yeah. So, just to add to what Shireesh said, I think there's a quote from Rob Thomas, one of our leaders at IBM: there's no AI without IA.
Essentially, there's no AI without information architecture, which is essentially data. But I want to add one more thing. There's a lot of buzz around AI. I mean, we are talking about simplicity here. AI, in my opinion, is three things and three things only: either you use AI to predict the future, for forecasting; or you use AI to automate things, whether simple, mundane tasks or complex tasks, depending on how exactly you want to use it; and third is to optimize. So: predict, automate, optimize. Anything else is buzz. >> Okay. >> Brilliantly said. Honestly, I think you both probably hit the 30-second time mark that we gave you there, and the enthusiasm, I loved your hunger on that. You were born ready for that kind of pitch. I think they both nailed it. >> They nailed it. Nailed it. Well done. >> I think that about sums it up for us. One last closing note and opportunity for you: you have a version 8.0 product coming out soon, December 13th, if I'm not mistaken. You want to give us a quick 15-second preview of that? >> Super excited about this. This is one of our major releases. So we are evolving the system on multiple dimensions, on enterprise and governance and programmability. So there are certain features that some of our customers are aware of. We have made huge performance gains in our JSON access. We've made it easy for people to consume Wasm on-prem and on hybrid architectures. There are multiple other things that we're going to put out on our site. So it's coming out on December 13th. It's a major next phase of our system. >> And real quick, Wasm is the WebAssembly moment. >> Correct. >> We were pioneers in that: we embed Wasm inside the engine. So you could run complex modules that are written in, could be C, could be Rust, could be Python, and instead of writing the SQL as a stored procedure, you could now run those modules inside. >> I wanted to get that out there, because at KubeCon we covered that. >> Savannah: A hot topic. >> Like a blanket. We covered it like a blanket. >> Wow. >> On that glowing note, thank you both so much for being here with us on the show. We hope to have both SingleStore and IBM back on plenty more times in the future. Thank all of you for tuning in to our coverage here from Las Vegas, Nevada, at AWS re:Invent 2022, with John Furrier. My name is Savannah Peterson. You're watching theCUBE, the leader in high-tech coverage. We'll see you tomorrow.
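A postscript on the Wasm exchange above: a standalone sketch of compiling logic to WebAssembly and calling it from a host runtime, using the `wasmtime` Python package and an inline WAT module. This illustrates the general mechanism Shireesh describes, not SingleStore's in-engine API, which isn't shown in the interview; the module and exported function are invented, and the API shape assumes a current wasmtime-py release.

```python
# Load a tiny WebAssembly module and call its exported "add" function.
# Assumes the `wasmtime` package; the WAT module below is invented.
from wasmtime import Engine, Instance, Module, Store

WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
module = Module(engine, WAT)            # accepts WAT text or .wasm bytes
instance = Instance(store, module, [])
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # 5
```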
SUMMARY :
In the last session of day one at AWS re:Invent 2022, John Furrier and Savannah Peterson talk with Shireesh Thota of SingleStore and Hemanth Manda of IBM about the partnership between the two companies. They discuss the convergence of data and AI, embedding AI libraries the way developers consume Python libraries, real-time analytics and sentiment analysis on data in SingleStore, and simplifying an increasingly complex modern data stack. Hemanth frames AI as three things only, predict, automate, and optimize, and Shireesh previews SingleStore's 8.0 release, coming December 13th, with major JSON performance gains and Wasm support inside the engine.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
December 13th | DATE | 0.99+ |
Shireesh Thota | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Adam Celeste | PERSON | 0.99+ |
Rob Thomas | PERSON | 0.99+ |
46 billion | QUANTITY | 0.99+ |
12 years | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
three things | QUANTITY | 0.99+ |
15 second | QUANTITY | 0.99+ |
Python | TITLE | 0.99+ |
10th year | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
32nd time | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
32nd | QUANTITY | 0.99+ |
single store | QUANTITY | 0.99+ |
Tuesdays | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
SingleStore | ORGANIZATION | 0.98+ |
Single store | QUANTITY | 0.98+ |
Hemanth Manda | PERSON | 0.98+ |
Dre | PERSON | 0.97+ |
eight | QUANTITY | 0.96+ |
two option | QUANTITY | 0.96+ |
day one | QUANTITY | 0.96+ |
one more thing | QUANTITY | 0.96+ |
one database | QUANTITY | 0.95+ |
two different aspects | QUANTITY | 0.95+ |
Mondays | DATE | 0.95+ |
IBM Data | ORGANIZATION | 0.94+ |
10 | QUANTITY | 0.94+ |
about a year | QUANTITY | 0.94+ |
CICE | ORGANIZATION | 0.93+ |
three letter | QUANTITY | 0.93+ |
today | DATE | 0.93+ |
one place | QUANTITY | 0.93+ |
Watson | TITLE | 0.93+ |
One last | QUANTITY | 0.92+ |
Cognos | ORGANIZATION | 0.91+ |
Watson Assistant | TITLE | 0.91+ |
nearly 17 years | QUANTITY | 0.9+ |
Watson Health | TITLE | 0.89+ |
Las Vegas, Nevada | LOCATION | 0.89+ |
aws | ORGANIZATION | 0.86+ |
one area | QUANTITY | 0.86+ |
SQL | TITLE | 0.86+ |
One single path | QUANTITY | 0.85+ |
two decades | QUANTITY | 0.8+ |
five different layers | QUANTITY | 0.8+ |
re:Invent 2022 | EVENT | 0.77+ |
JSON | TITLE | 0.77+ |
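Editorial note: the ENTITIES tables in this document follow the category set and confidence format of Amazon Comprehend's entity detection (PERSON, ORGANIZATION, LOCATION, QUANTITY, and so on). The pipeline that actually generated these tables is not specified anywhere here, so treat the following boto3 snippet as a plausible reconstruction rather than the documented tooling; the sample text is invented.

```python
# Sketch: producing an entity table like the one above with Amazon Comprehend.
# This is one plausible way to get entity/category/confidence triples,
# not necessarily the pipeline used for this document.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = ("Savannah Peterson and John Furrier covered AWS re:Invent 2022 in "
        "Las Vegas with guests from IBM and SingleStore.")

resp = comprehend.detect_entities(Text=text, LanguageCode="en")

print("Entity | Category | Confidence |")
print("---|---|---|")
for ent in resp["Entities"]:
    print(f"{ent['Text']} | {ent['Type']} | {ent['Score']:.2f} |")
```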
Daniel Rethmeier & Samir Kadoo | Accelerating Business Transformation
(upbeat music) >> Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We've got two great guests, one videoing in from Germany, one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS Showcase: Accelerating Business Transformation. Here in the Showcase with Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect, global AWS synergy at VMware. Guys, you guys are working together, you're the key players in this relationship as it rolls out and continues to grow. So welcome to theCUBE. >> Thank you, greatly appreciate it. >> Great to have you guys both on. As you know, we've been covering this since 2016, when Pat Gelsinger, then CEO of VMware, and Andy Jassy, then CEO of AWS, did this. It kind of got people by surprise, but it really kind of cleaned out the positioning in the enterprise for the success of VM workloads in the cloud. VMware's had great success with it since, and you guys have the great partnerships. So this has been like a really strategic, successful partnership. Where are we right now? You know, years later, we've got this whole inflection point coming. You're starting to see this idea of higher-level services, more performance coming in at the infrastructure side, more automation, more serverless, and AI. I mean, it's just getting better and better every year in the cloud. Kind of a whole 'nother level. Where are we? Samir, let's start with you on the relationship. >> Yeah, totally. So I mean, there's several things to keep in mind, right? So in 2016, right, that's when the partnership between AWS and VMware was announced. And then less than a year later, that's when we officially launched VMware Cloud on AWS. Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware. Day in, day out, as far as advancing VMware Cloud on AWS. You know, even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there's been advancements. You know, whether it's security focus, whether it's platform focus, whether it's networking focus, there's been modifications along the way, even storage, right, more recently. One of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers. So there's hundreds of VMware and AWS engineers working together on this solution. And then factor in even our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant daily basis. We're working together with our customers at the end of the day too. Then we're looking to even offer and develop jointly engineered solutions specific to VMware Cloud on AWS, and even with VMware to other platforms as well. Then the other thing comes down to is where we have dedicated teams around this at both AWS and VMware. So even from solutions architects, even to our sales specialists, even to our account teams, even to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well. And then I think one of the key things to keep in mind comes down to we have nearly 600 channel partners that have achieved VMware Cloud on AWS service competency.
So think about it from the standpoint: there's 300 certified or validated technology solutions that are now available to our customers. So that's even innovation right off the top as well. >> Great stuff. Daniel, I want to get to you in a second on this principal architect position you have. In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot of folks at VMware Explore this year, formerly VMworld, talking about how the workloads on IT have been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Are your customers taking advantage of that new shift? You got AIOps, you got ITOps changing a lot, you got a lot more automation, edge is right around the corner. This is like a complete transformation from where we were just five years ago. What's your thoughts on the relationship? >> So first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud on AWS, we are also enabling each other mutually. So AWS learns from us about the VMware technology, where VMware people learn about the AWS technology. We are also enabling our channel partners, and we are working together on customer projects. So we have regular assemblies, globally and also virtually, on Slack and the usual suspect tools, working together and listening to customers. That's very important. Asking our customers, where are their needs? And we are driving the solution in the direction that our customers get the best benefits out of VMware Cloud on AWS. And over time, we have really evolved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. So for example, we just added the I4i host, which is ideal for workloads that require a lot of CPU power, such as, you mentioned it, AI workloads. >> Yeah, so I want to get into specifically the customer journey and their transformation. You know, we've been reporting on SiliconANGLE and theCUBE in the past couple weeks in a big way that the ops teams are now the new devs, right? I mean, that sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing. You know, with open source, a lot of great things are changing. Can you share specifically what customers are looking for when you say, as you guys come in and assess their needs, what are they doing, what are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some highlights there? >> That's a great point, because originally, VMware and AWS came from very different directions when it comes to speaking to people and customers. So for example, AWS is very developer focused, whereas VMware has a very great footprint in the ITOps area. And usually these are very different teams, groups, different cultures, but it's getting together. However, we always try to address the customer needs, right? There are customers that want to build up a new application from scratch and build resiliency, availability, recoverability, scalability into the application. But there are still a lot of customers that say, "Well, we don't have all of the skills to redevelop everything, to refactor an application to make it highly available.
So we want to have all of that as a service: recoverability as a service, scalability as a service. We want to have this from the infrastructure." That was one of the unique selling points for VMware on-premise, and now we are bringing this into the cloud. >> Samir, talk about your perspective. I want to get your thoughts, and not to take a tangent, but we had covered AWS re:MARS, actually it was Amazon re:MARS, machine learning, automation, robotics, and space, which was really kind of the confluence of industrial IoT, software, physical. And so when you look at the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code, automation, you know, "Hey Alexa, deploy a Kubernetes cluster." Yeah, I mean, that's coming, right? So we're seeing this kind of operating automation meets higher-level services, meets workloads. Can you unpack that and share your opinion on what you see there from an Amazon perspective and how it relates to this? >> Yeah. Yeah, totally, right? And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware Cloud on AWS, yes, it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you want to leverage any of the native AWS services, so any of the 200-plus AWS services, you have that option to do so. So that's going to give you that power to do certain things, such as, for example, like how you mentioned with IoT, even with utilizing Alexa, or if there's any other service that you want to utilize, that's the joining point between both of the offerings right off the top. Though with digital transformation, right, you have to think about where it's not just about the technology, right? There's also where you want to drive growth in the underlying technology, even in your business. Leaders are looking to reinvent their business, they're looking to take different steps as far as pursuing a new strategy, maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right? They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. >> Okay. >> Then also- >> Oh, go ahead, finish your thought. >> No, no, no, I was going to say what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that vSphere admin that's used to their on-premises environment. Now with VMware Cloud on AWS, you have that ability to leverage the cloud, but the investment that you made in certain things as far as automation, even with monitoring, even with logging, you still have that methodology where you can utilize that in VMware Cloud on AWS too. >> Daniel, I want to get your thoughts on this because at Explore and after the event, as we prep for KubeCon and re:Invent coming up, the big AWS show, I had a couple conversations with a lot of the VMware customers and operators, and it's like hundreds of thousands of users and millions of people talking about and piqued on VMware, interested in VMware. The common thread was, one person said, "I'm trying to figure out where I'm going to put my career in the next 10 to 15 years."
And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm going to be in the next cloud, but there's no like role yet. Architects, is it solution architect, SRE? So you're starting to see the psychology of the operators, who now are going to try to make these career decisions. Like, what am I going to work on? And then it's kind of fuzzy, but I want to get your thoughts. How would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity? And what's going to happen? >> So digital transformation definitely is a huge change for many organizations, and leaders are perfectly aware of what that means. And that also means, to some extent, concerns with your existing employees. Concerns about, do I have to relearn everything? Do I have to acquire new skills and trainings? Is everything worthless I learned over the last 15 years of my career? And the answer is, to make digital transformation a success, we need to talk not just about technology, but also about process, people, and culture. And this is where VMware really can help, because if you are applying VMware Cloud on AWS to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment, you can use the same management and monitoring tools. If you have written, and many customers did this, hundreds of scripts that automate tasks, and if you know how to troubleshoot things, then you can use all of that in VMware Cloud on AWS. And that gives not just leaders, but also the architects at customers, the operators at customers, the confidence in such a complex project. >> The consistency, very key point, gives them the confidence to go. And then now that once they're confident, they can start committing themselves to new things. Samir, you're reacting to this because on your side, you've got higher-level services, you've got more performance at the hardware level. I mean, a lot of improvements. So, okay, nothing's changed, I can still run my job, now I've got goodness on the other side. What's the upside? What's in it for the customer there? >> Yeah, so I think what it comes down to is they've already been so used to or entrenched with that VMware admin mentality, right? But now extending that to the cloud, that's where now you have that bridge between VMware Cloud on AWS to bridge that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud. But if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you want to utilize any other AWS service in conjunction with that VM that resides maybe on-premises or even in VMware Cloud on AWS, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you want to expand on the skills, you certainly have that capability to do so. >> Great stuff, I love that. Now that we're peeking behind the curtain here, I'd love to have you guys explain, 'cause people want to know what goes on behind the scenes. How does innovation happen? How does it happen with the relationships?
Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? Do you guys just have a Zoom meeting, do you guys fly out, do you write code, do you ship things? I mean, I'm making it up, but you get the idea. How does it work? What's going on behind the scenes? >> So we hope to get together in person more frequently, but of course we had some difficulties over the last two to three years. So we are very used to Zoom conferences and Slack meetings. You always have to have the time difference in mind if you are working globally together. But what we try, for example: we have regular assemblies now, also in person, geo-based, so for EMEA, for the Americas, for APJ. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it, always to share and to contribute to our community. >> What's interesting, you know, as events are coming back, Samir, before you weigh in on this, I'll comment: as theCUBE's been going back out to events, we're hearing comments like, "What pandemic? We were more productive in the pandemic." I mean, developers know how to work remotely and they've been on all the tools there, but then they get in person, they're happy to see people, but no one's really missed a beat. I mean, it seems to be very productive, you know, workflow, not a lot of disruption. More, if anything, productivity gains. >> Agreed, right? I think one of the key things to keep in mind is, even if you look at AWS's, and even Amazon's, leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like Daniel mentioned earlier, right? We might have meetings at different time zones, maybe it's in person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation in VMware Cloud on AWS as well. But one of the key things to keep in mind is, yes, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology, we've been able to still communicate, work with our customers, even with VMware in between, with AWS and whatnot, we had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware Cloud on AWS Outposts, that was something that customers have been asking for. We've been able to leverage the feedback and then continue to drive innovation even around VMware Cloud on AWS Outposts. So even with the on-premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low-latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >> In our last segment we did here on this Showcase, we listed the accomplishments, and they were pretty significant. I mean, geo, you got the global rollouts of the relationship. It's just really been interesting, and people can reference that, we won't get into it here. But I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Because again, I think right now, we're at an inflection point more than ever. What can people expect from the relationship, and what's coming up with re:Invent? Can you share a little bit of kind of what's coming down the pike?
>> So one of the most important things we have announced this year, and we will continue to evolve in that direction, is independent scaling of storage. That absolutely was one of the most important items customers asked for over the last years. Whenever you require additional storage to host your virtual machines, in VMware Cloud on AWS you usually have to add additional nodes. Now we have three different node types with different ratios of compute, storage, and memory. But if you only require additional storage, you always have to also get additional compute and memory, and you have to pay for it. And now, with two solutions which offer choice for the customers, like FSx for NetApp ONTAP and VMware Cloud Flex Storage, you have two cost-effective opportunities to add storage to your virtual machines. And that offers opportunities for other instance types, maybe, that don't have local storage. We are also very, very keen looking forward to announcements, exciting announcements, at the upcoming events. >> Samir, what's your reaction take on what's coming down on your side? >> Yeah, I think one of the key things to keep in mind is we're looking to help our customers be agile and even scale with their needs, right? So with VMware Cloud on AWS, that's one of the key things that comes to mind, right? There are going to be announcements, innovations, and whatnot with upcoming events. But together, we're able to leverage that to advance VMware Cloud on AWS. To Daniel's point, storage, for example, even with host offerings. And then even with decoupling storage from compute and memory, right? Now you have the flexibility where you can do all of that. So look at it from the standpoint where now, with 21 regions where we have VMware Cloud on AWS available as well, customers can utilize that as needed, when needed, right? So it comes down to, you know, transformation will be there. Yes, there are maybe going to be cases where workloads have to be adapted, where they're utilizing certain AWS services, but you have that flexibility and option to do so. And I think with the continuing events, that's going to give us the options to even advance our own services together. >> Well, you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a kind of current situation, kind of future evolutionary thing that you haven't seen before. I want to get both of your reactions to it. And we've been bringing this up in the open conversations on theCUBE: in the old days, going back a generation, you had ecosystems. VMware had an ecosystem, AWS had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships, and they do business together and they sell each other's products or do some stuff. Now it's more about architecture, 'cause we're now in a distributed large-scale environment where the roles of ecosystems are intertwining, and you guys are in the middle of two big ecosystems. You mentioned channel partners, you both have a lot of partners on both sides, they come together. So you have this now almost a three-dimensional or multidimensional ecosystem interplay. What's your thoughts on this? Because it's about the architecture. Integration is a value, not so much innovation only. You got to do innovation, but when you do innovation, you got to integrate it, you got to connect it.
So how do you guys see this as an architectural thing? Do you start to see more technical business deals? >> So we are removing dependencies from individual ecosystems and from individual vendors. So a customer no longer has to decide on one vendor and then face a very expensive and high-effort project to move away from that vendor, which ties customers even closer to specific vendors. We are removing these obstacles. So with VMware Cloud on AWS, moving to the cloud, firstly, it's not a dead end. If you decide at one point in time, because of latency requirements or maybe some compliance requirements, that you need to move back into on-premise, you can do this. If you decide you want to stay with some of your services on-premise and just run a couple of dedicated services in the cloud, you can do this, and you can manage it through a single pane of glass. That's quite important. So cloud is no longer a dead end, it's no longer a binary decision, whether it's on-premise or the cloud. It is the cloud. And the second thing is, you can choose the best of both worlds, right? If you are migrating virtual machines that have been running in your on-premise environment to VMware Cloud on AWS, by the way, in a very, very fast, cost-effective, and safe way, then you can later on enrich these virtual machines with services that are offered by AWS, more than 200 different services ranging from object-based storage, load balancing, and so on. So it's an endless, endless possibility. >> We call that supercloud, in the way that we're generically defining it, where everyone's innovating, but yet there's some common services. But the differentiation comes from innovation, where the lock-in is the value, not some spec, right? Samir, this is kind of where cloud is right now. You guys are not commodity, Amazon's completely differentiating, but there's some commodity things happening. You got storage, you got compute, but then you've got now advances in all areas. But partners innovate with you on their terms. >> Absolutely. >> And everybody wins. >> Yeah, I 100% agree with you. I think one of the key things, you know, as Daniel mentioned before, is where it's a cross-education, where there might be someone who's more proficient on the cloud side with AWS, maybe more proficient with VMware's technology. But then for partners, right, they bridge that gap as well, where they come in and they might have a specific niche or expertise, where their background, where they can help our customers go through that transformation. So then that comes down to, hey, maybe I don't know how to connect to the cloud, maybe I don't know what the networking constructs are, maybe I can leverage that partner. That's one aspect to go about it. Now maybe you migrated that workload to VMware Cloud on AWS. Maybe you want to leverage any of the native AWS services, or even just off the top, 200-plus AWS services, right? But it comes down to that skill set, right? So again, solutions architecture: at the end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >> I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done, with keeping the consistency. I mean, Daniel, you nailed that, right? I mean, you don't have to do anything.
You still run vSphere the way you've been working with it, and now do new things. This is kind of a cultural shift. >> Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud. Simply run them, and at the same time, they can free up resources to develop new innovations and grow their business. >> Awesome. Samir, thank you for coming on. Daniel, thank you for coming on from Germany. >> Thank you. >> Oktoberfest, I know it's evening over there, weekend's here. And thank you for spending the time. Samir, I'll give you the final word. AWS re:Invent's coming up. We're preparing, we're going to have an exclusive with Adam; we'll do a curtain raiser and a little preview. What's coming down on your side with the relationship, and what can we expect to hear about what you've got going on at re:Invent this year, the big show? >> Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have, for example, specific sessions, both that VMware's driving and then also that AWS is driving. We even have what are called chalk talks, and then even workshops, right? So even with the customers, the attendees who are there, whatnot, if they're looking to sit and listen to a session, yes, that's there, but if they want to be hands-on, that is also there too. So personally for me, as an IT background, been in the sysadmin world and whatnot, being hands-on, that's one of the key things that I personally am looking forward to. But I think that's one of the key ways just to learn and get familiar with the technology. >> Yeah, and re:Invent's an amazing show for the in-person. You guys nail it every year. We'll have three sets this year at theCUBE, and it's becoming popular. We have more and more content. You guys have got live streams going on, a lot of content, a lot of media. So thanks for sharing that. Samir, Daniel, thank you for coming on this part of the Showcase episode of really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)
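An editorial sketch of the consistency point made in this interview: the same script that inventories an on-prem vCenter works against a VMware Cloud on AWS SDDC vCenter, and, because the SDDC sits next to an AWS VPC, the same environment can call native AWS services. This assumes the open-source pyvmomi and boto3 packages; the hostnames, credentials, and bucket name are placeholders, not real endpoints.

```python
# Sketch: one vSphere automation function, two environments.
# Only the endpoint and credentials change between on-prem and the SDDC.
import ssl
import boto3
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_vms(host, user, pwd):
    ctx = ssl._create_unverified_context()  # lab-only; verify certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        return [vm.name for vm in view.view]
    finally:
        Disconnect(si)

# Same function, two vCenters (placeholder addresses):
on_prem = list_vms("vcenter.corp.example", "administrator@vsphere.local", "...")
vmc = list_vms("vcenter.sddc-12-34.example", "cloudadmin@vmc.local", "...")

# And from the same environment, a native AWS service call, e.g. pushing
# an inventory report to S3 (placeholder bucket):
boto3.client("s3").put_object(
    Bucket="my-inventory-bucket", Key="vmc-vms.txt", Body="\n".join(vmc))
```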
SUMMARY :
John Furrier talks with Samir Kadoo of AWS and Daniel Rethmeier of VMware about the VMware Cloud on AWS partnership, announced in 2016 and jointly engineered ever since, with hundreds of shared engineers and nearly 600 channel partners holding the service competency. They cover how consistency of tools, skills, and scripts gives ITOps teams the confidence to move to the cloud while still reaching 200-plus native AWS services through the connected VPC, the newly announced independent scaling of storage via FSx for NetApp ONTAP and VMware Cloud Flex Storage, availability across 21 regions and on Outposts, and what to expect from the partnership at re:Invent.
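The storage decoupling called out above is concrete enough to sketch. The following is an illustrative boto3 snippet for provisioning an Amazon FSx for NetApp ONTAP file system of the kind that can back a supplemental SDDC datastore; the capacity numbers and subnet ID are placeholders, and the final attach step happens through VMware Cloud tooling, which this sketch does not cover.

```python
# Hypothetical sketch: provision an FSx for NetApp ONTAP file system that a
# VMware Cloud on AWS SDDC could mount as external NFS storage.
# Region, subnet ID, and capacity values are made-up placeholders.
import boto3

fsx = boto3.client("fsx", region_name="us-west-2")

response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                    # GiB of SSD storage
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 128,           # MB/s
    },
    Tags=[{"Key": "purpose", "Value": "vmc-supplemental-datastore"}],
)

fs_id = response["FileSystem"]["FileSystemId"]
print(f"Created file system {fs_id}; attach it to the SDDC "
      "as an NFS datastore from the VMware Cloud console.")
```

The point of the snippet is the decoupling itself: storage grows here without adding SDDC hosts, so you stop paying for compute and memory you don't need.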
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Daniel | PERSON | 0.99+ |
Samir | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Daniel Rethmeier | PERSON | 0.99+ |
Maryland | LOCATION | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Samir Kadoo | PERSON | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Adam | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
21 regions | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
two solutions | QUANTITY | 0.99+ |
Accelerating Business Transformation with VMware Cloud on AWS
So again, it's solution architecture. At the end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >>I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever you can take advantage of each other's ecosystems and partners and technologies and change how things get done, while keeping the consistency. I mean, Daniel, you nailed that, right? You don't have to do anything. You still run vSphere the way you've been working with it, and now you can do new things. This is kind of a cultural shift. >>Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. We give them a very simple and easy way to move workloads to the cloud, simply run them, and at the same time free up resources to develop new innovations and grow their business. >>Awesome. Samir, thank you for coming on. Daniel, thank you for coming on from Germany, Oktoberfest, I know it's evening over there and your weekend's here, and thank you for spending the time. Samir, I'll give you the final word: AWS re:Invent is coming up, we're preparing, we're going to have an exclusive with Adam, but before that we do a curtain raiser, a dual preview. What's coming down on your side with the relationship, and what can we expect to hear about what you've got going on at re:Invent this year, the big show? >>Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have specific sessions, both that VMware is driving and that AWS is driving, and we even have what I call chalk talks, and workshops as well. So for the customers, the attendees who are there, if they're looking to sit and listen to a session, yes, that's there. But if they want to be hands-on, that is there too. Personally for me, with an IT background, having been in the sysadmin world, being hands-on is one of the key things I'm personally looking forward to, and I think that's one of the key ways to learn and get familiar with the technology. >>re:Invent's an amazing show for the in-person experience. You guys nail it every year. We'll have three sets this year on the Cube; it's becoming popular, with more and more content. You guys have live streams going on, a lot of content, a lot of media, so thanks for sharing that. Samir, Daniel, thank you for coming on this part of the showcase episode, really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with the Cube, thanks for watching. >>Hello everyone. Welcome to this Cube showcase, accelerating business transformation with VMware Cloud on AWS. It's a solution innovation conversation with two great guests: Fred Wurden, VP of commercial services at AWS, and Narayan Bharadwaj, who's the VP and general manager of cloud solutions at VMware. Gentlemen, thanks for joining me on this showcase. >>Great to be here. >>Hey, thanks for having us on. It's a great topic. >>You know, we've been covering this VMware Cloud on AWS since the launch, going back, and it's been amazing to watch the evolution, from people saying, oh, it's the worst thing I've ever seen, what does this mean?
And the press were kind of not really on board with the vision. But as it played out, as you guys had announced together, it did work out great for VMware, it did work out great for AWS, and it continues two years later. I just want to get an update from you guys on where you see this going, multiple years in. Where is the evolution of the solution as we are right now, coming off VMware Explore just recently and going into re:Invent, which is only a couple weeks away and feels like tomorrow? As we prepare, with a lot going on, where are we with the evolution of the solution? >>I mean, the first thing I want to say is that 2016 was a seminal moment in the history of IT, right? When Pat Gelsinger and Andy Jassy came together to announce this, and I think, John, you were there at the time, I was there, it was a great moment. We launched the solution in 2017, the year after that, at VMworld, back when we called it VMworld. I think we have gone from strength to strength. One of the things that has really mattered to us is that we have learned from AWS along the way, this notion of working backwards. So we really focused on customer feedback as we built a service offering that is now five years old. Pretty remarkable journey. In the first years we tried to get across all the regions; that was a big focus, because there was so much demand for it. In the second year we went really hard on enterprise-grade features. We invented this pretty awesome feature called stretched clusters, where you could stretch a vSphere cluster, using vSAN and NSX, across two AZs in the same region. Pretty phenomenal four-nines availability that applications started to get with that particular feature. And we kept moving forward with all kinds of integration: AWS Direct Connect, Transit Gateways, our own advanced networking capabilities. Along the way, on disaster recovery, we punched out two new services just focused on that. And then more recently we launched our Outposts partnership. We were up on stage at re:Invent, again with Pat and Andy, announcing AWS Outposts and the VMware flavor of that, VMware Cloud on AWS Outposts. There has been significant growth in our federal sector as well, with our FedRAMP High certification more recently. So all in all, we are super excited. We're five years old, the customer momentum is really strong, and we are scaling the service massively across all geos and industries. >>That's a great update. And one of the things you mentioned was the advantages you guys got from that relationship. This has kind of been the theme for AWS since I can remember, from day one: Fred, you guys do the heavy lifting, as you always say, for the customers. VMware comes on board, takes advantage of AWS, and doesn't miss a beat, continues to move the workloads that everyone's using, you know, vSphere, and these are big workloads on AWS. What's the AWS perspective on this? How do you see it? >>Yeah, it's pretty fascinating to watch how fast customers can actually transform and move when you take the skill set that they're familiar with and the advanced capabilities that they've been using on-prem, and then overlay them on top of the AWS infrastructure that's evolving quickly and building out new hardware and new instances, which we'll talk about.
But that combined experience between both of us, on a jointly engineered solution, to bring the best security and the best features that really matter for those workloads, drives a lot of efficiency and speed for the customer. So it's been well received, and the partnership is stronger than ever, from an engineering standpoint and from a business standpoint. And obviously it's been very interesting to look at just how we stay day one in terms of looking at new features and responding to what customers want. So I'm pretty excited about just seeing the transformation and the speed at which customers can move to VMC. >>Yeah, and that's a great value proposition. We've been talking about that in context too: anyone building on top of the cloud can have their own supercloud, as we call it, if they take advantage of all the CapEx and investment Amazon has made and AWS continues to make in performance, IaaS and PaaS, all great stuff. I have to ask you both, as you see this going to the next level: what are some of the differentiators you see around the service compared to other options on the market? What makes it different? What's the combination? You mentioned jointly engineered; what are some of the key differentiators of the service compared to others? >>Yeah, I think one of the key things Fred talked about is this jointly engineered notion, right from day one. We were early adopters of the AWS Nitro platform, the reinvention of EC2 back five years ago, and so we have had a very strong engineering partnership at that level. I think from a VMware customer standpoint, you get the full software-defined data center, compute, storage and networking, on EC2 bare metal across all regions. You can scale that elastically up and down. It's pretty phenomenal just having that consistency globally, on AWS EC2 global regions. Now the other thing that's a real differentiator for us, which customers tell us about, is this whole notion of a managed service. This was somewhat new to VMware, but we took away the pain of the undifferentiated heavy lifting, where customers had to provision and rack-and-stack hardware, configure the software on top, and then upgrade the software and apply the security patches on top. We took away all of that pain as customers transitioned to VMware Cloud on AWS. In fact, my favorite story from last year is from when we were all going through the Log4j debacle; the whole industry was going through that, right? My favorite proof point from customers was that before they even raised the issue to us, we sent them a notification saying: we have already patched all of your systems, no action needed from you. The customers were super thrilled. I mean, these are large banks and many other customers around the world, super thrilled that they had to take no action on a pretty incredible industry challenge that we were all facing. >>Narayan, that's a great point. The whole managed service piece brings up the security; you're kind of teasing at it, but there are always vulnerabilities that emerge when you're doing complex logic, and as you grow your solutions, there are more bits. You know, Fred, we were commenting before we came on camera: there are more bits than ever before, at the physics layer as well as in the software. So you never know when there's going to be a zero-day vulnerability out there. It just happens. We saw one with Fortinet this week that came out of the woodwork.
But moving fast on those patches is huge. This brings up the whole support angle. I wanted to ask you how you guys are doing that as well, because when we talk to customers on the Cube about this, there's a real easy understanding of what the cloud means to them with VMware, now with AWS. But the question that comes up, that we want more clarity on, is: how do you guys handle support together? >>Well, what's interesting about this is that it's done mutually. We have dedicated support teams on both sides that work together pretty seamlessly to make sure that if there's an issue at any layer, including all the way up into the app layer, as you think about some of the other workloads like SAP, we'll go end to end and make sure that we support the customer regardless of where the particular issue might be for them. And on top of that, we look at where we're improving reliability as a first order of principle between both companies. So from an availability and reliability standpoint, it's top of mind, and no matter where the particular item might land, we're going to go help the customer resolve it. That works really well. >>On the VMware side, what's been the feedback there? What are some of the updates? >>Yeah, I think, look, VMware owns and operates the service, but we have a phenomenal backend relationship with AWS. Customers call VMware for the service, for any issues, and then we have an awesome relationship with AWS on the backend for support issues or any hardware issues, plus the behind-the-scenes management that we jointly do: all of the hard problems that customers don't have to worry about. On the front end, we also have a really good group of solution architects across the two companies who help to really explain the solution and do complex things like cloud migration, which is much, much easier with VMware Cloud on AWS. You know, we are presenting that easy button to the public cloud in many ways. And so we have a whole technical audience across the two companies working with customers every single day. >>You know, I've got a list here of some of the innovations you mentioned: the stretched clustering, getting the geos working, advanced networking, disaster recovery, FedRAMP and public sector certifications, Outposts, all good. You guys are checking the boxes every year; you've got a good accomplishments list there on the VMware-AWS side of this relationship. The question I'm interested in is: what's next? What recent innovations are you doing, and what investments are you making? What's on the list this year, and what items will be next year? How do you see the new things? People want to know what's next. They don't want to see stagnant growth; they want to see more action, as cloud continues to scale and modern applications, cloud native, more containers and more CI/CD pipelining with modern apps, put more pressure on the system. What's new? What are the new innovations? >>Absolutely. And as a five-year-old service offering, innovation is top of mind for us every single day. So just to call out a few recent innovations that we announced in San Francisco at VMware Explore: first of all, our new platform, i4i.metal. It's Ice Lake based, and it's pretty awesome.
It's the latest and greatest of all the speeds and feeds that we would expect from VMware and AWS at this point in our relationship. We also announced two different storage options, again in this notion of working from customer feedback, allowing customers even more price reductions: really take that storage and park it externally, separate from compute. So, two different storage offerings there. One is AWS FSx for NetApp ONTAP, which brings our NetApp partnership into the equation as well and really engages that NetApp base; we're really excited about this offering. And the second storage offering is VMware Cloud Flex Storage, VMware's own managed storage offering. Beyond that, we have done a lot of other innovations as well. I really want to talk about VMware Cloud Flex Compute, where previously customers could only scale by hosts, and a host is 36 to 48 cores, give or take. With VMware Cloud Flex Compute, we are now allowing a resource-defined compute model, where customers can get exactly the vCPU, memory and storage that maps to their applications, however small they might be. So this notion of granularity is really a big innovation that we are launching in the market this year. And then last but not least, ransomware. Of course it's a hot topic in the industry, and we are seeing many, many customers ask for this, so we are happy to announce a new ransomware recovery capability with our VMware Cloud DR solution. There's a lot of innovation there, in the way we are able to do machine learning and make sure the workloads that are recovered from snapshots and backups are actually safe to use. So there's a lot of differentiation on that front as well. There are a lot of networking innovations too, with Project Northstar, for the ability to have Layer 4 through Layer 7 services, and new SaaS services in that area as well. And keep in mind that the service already supports managed Kubernetes for containers; it's built in to the same clusters that have virtual machines. So this notion of a single service with a great TCO for VMs and containers sits at the heart of our offering. >>The networking side certainly is a hot area to keep innovating on; every year it's the same conversation: better, faster networking, more options there. The Flex Compute is interesting. If you don't mind me getting a quick clarification, could you explain resource-defined versus hardware-defined? Because this is kind of what we saw coming out of Explore, that notion of resource-defined versus hardware-defined. What does that mean? >>Yeah, I mean, I think we have been super successful with this hardware-defined notion; we scale by the hardware unit that we present as software-defined data centers, and that's been super successful. But customers wanted more. Especially customers in different parts of the world wanted to start even smaller and grow even more incrementally, lowering their costs even more. And so this is where resource-defined starts to be very interesting, as a way to think about it: here's my bag of resources, sized exactly for what the customer requests, say five virtual machines and five containers, and then as utilization grows, we elastically grow it behind the scenes through policies. So it's a whole different dimension, a whole different service offering that adds value, and customers are comfortable.
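To ground the storage decoupling Narayan and Daniel describe: supplemental storage such as Amazon FSx for NetApp ONTAP is provisioned on its own, independently of the SDDC hosts. Below is a minimal sketch of what that might look like with Python's boto3 SDK; the region, subnet IDs, capacity and throughput values are illustrative assumptions, not details from this conversation.

    import boto3

    # Sketch: provision an FSx for NetApp ONTAP file system that could back
    # supplemental storage. Region, subnets, capacity and throughput below
    # are hypothetical placeholders.
    fsx = boto3.client("fsx", region_name="us-west-2")

    resp = fsx.create_file_system(
        FileSystemType="ONTAP",
        StorageCapacity=1024,                # GiB, scales independently of host count
        SubnetIds=["subnet-0aaaa1111", "subnet-0bbbb2222"],
        OntapConfiguration={
            "DeploymentType": "MULTI_AZ_1",  # spans two availability zones
            "ThroughputCapacity": 512,       # MB/s
            "PreferredSubnetId": "subnet-0aaaa1111",
        },
        Tags=[{"Key": "purpose", "Value": "vmc-supplemental-storage"}],
    )
    print(resp["FileSystem"]["FileSystemId"], resp["FileSystem"]["Lifecycle"])

The point mirrors Daniel's earlier one: capacity grows on its own, so you are not buying compute and memory just to get disks.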
Customers can go from one to the other; they can go back to that host-based model if they so choose, and there's a jump-off point across these two different economic models. >>It's kind of cloud flexibility right there; I like the name. Let's get into some examples of customers, if you don't mind; we have some time, and I want to unpack a little bit of what's going on with the customer deployments. One of the things we've heard again on the Cube from customers is that they like the clarity of the relationship and they love the cloud positioning of it. And then what happens is they lift and shift the workloads, and it feels great, it's just like running VMware on AWS, and then they start consuming higher-level services; that next level of adoption happens because it's in the cloud. So can you guys take us through some recent examples of customer wins or deployments where they're using VMware Cloud on AWS, how they're getting started, and then how they progress once they're there? How does it evolve? Can you just walk us through a couple of use cases? >>Sure. There are a couple. One, it's pretty interesting that, like you said, as there are more and more bits, you need better and better hardware and networking. And we're super excited about the i4i and the capabilities there, in terms of doubling or tripling what we're doing, with lower variability on latency and improved speeds across the board. But as for what customers are doing with it: take the college in New Jersey that is accelerating its deployment, onboarding over 7,400 students over a six-to-eight-month period, and they've really realized a ton of savings. What's interesting is where and how they can grow onto additional native services too. Connectivity to any other service is available as they start to move and migrate into this. The options there, obviously, are tied to all the innovation that we have across our services, whether it's containerized, with what they're doing with Tanzu, or with any other container or services within AWS. So there are some pretty interesting scenarios where that data, or the processing, which is moved quickly and with full compliance, whether it's in healthcare or a regulated business, is then allowed to consume and use things, for example Textract, or any other really cool service that has monthly and quarterly innovations. So there are things you simply could not do before that are coming out, saving customers money and letting them build innovative applications on top of their current app base in a rapid fashion. So I'm pretty excited about it. There are a lot of examples; I probably don't have time to go into too many here. But that's actually the best part: listening to customers and seeing how many net-new services and new applications they are actually building on top of this platform. >>Narayan, what's your perspective from the VMware side? You guys now have a lot of headroom to offer customers, with Amazon's higher-level services and whatever's homegrown that's being rolled out, and you now have a lot of hybrid too. So what's your take on what's happening with customers?
>>I mean, it's been phenomenal, the customer adoption of this. Banks and many other highly sensitive verticals have been running production-grade, tier-one applications on the service over the last five years. And I have a couple of really good examples. S&P Global is one of my favorites: a large firm, they merged with IHS Markit, a big consolidation. Both companies were using VMware Cloud on AWS in different ways, and one of their use cases was: how do I respond to these global opportunities without having to invest in physical data centers? And then, how do I migrate and consolidate all my data centers across the globe, of which there were many? One specific example for this company was how they migrated 1,000 workloads to VMware Cloud on AWS in just six weeks. Pretty phenomenal, if you think about everything that goes into a cloud migration process: people, process and technology. And the beauty of the technology, going from VMware point A to VMware point B, is that it's the lowest-cost, lowest-risk approach to adopting VMware Cloud on AWS. So that's one of my favorite examples. There are many other examples across other verticals that we continue to see. The good thing is we are seeing rapid expansion across the globe: we're constantly entering new markets with additional regions and progressing our roadmap there. >>Yeah, it's great to see. I mean, the data center migrations go from many, many months to weeks. It's interesting to see some of those success stories. So congratulations. >>One of the other interesting, fascinating benefits is the sustainability improvement, in terms of being green. The efficiency gains that we have, both in current-generation and new-generation processors, and everything that we're doing to make sure that when a customer can be elastic, they're also saving power, which is really critical in a lot of regions worldwide at this point in time. They're seeing those benefits. If you're running really inefficiently in your own data center, that is just not a great use of power. So the actual calculators and the benefits to these workloads are pretty phenomenal, just in being more green, which I like. We all need to do our part there, and this is a big part of it here. >>It's a huge point about the sustainability, Fred, and I'm glad you called that out. The other one I would say is supply chain issues, another constraint you see: I can't buy hardware. And the third one is really obvious, but no one really talks about it: security, right? I remember interviewing Stephen Schmidt at AWS many years ago, this was like 2013, and at that time people were saying the cloud's not secure. And he's like: listen, it's more secure in the cloud than on premise. And if you look at the security breaches, it's all about the on-premise data center vulnerabilities, not so much the hardware. So there's a lot you've got to stay current on, and the isolation there is hard. So I think the security and the supply chain, Fred, are other ones. Do you agree? >>I absolutely agree. It's hard to manage supply chain nowadays. We put a lot of effort into that, and I think we have a great ability to forecast and make sure that we can lean in, have the resources that are available, and run them more efficiently.
Yeah, and then, like you said on the security point, security is job one. It is the only P1. And if you think of how we build our infrastructure, from Nitro all the way up, and how we respond and work with our partners and our customers, there's nothing more important. >>And Narayan, your point earlier about the managed service patching and being on top of things means it's really going to get better. All right, final question. I really want to thank you for your time on this showcase; it's really been a great conversation. Fred, you made a comment earlier, and I want to end with a kind of curveball and put you both on the spot. We're talking about a new modern shift; we're seeing another inflection point, and we've been documenting it. It's almost like cloud is hitting another inflection point, with application and open-source growth significantly at the app layer, continuing to put a lot of pressure and innovation on the infrastructure side. So the question for each of you to answer is: what's the same and what's different in today's market? It's kind of like we want more of the same here, but also things have changed radically, and for the better. What's changed for the better, and what's still the same kind of thing hanging around that people are focused on? Can you share your perspective? >>I'll tackle it. You know, businesses are complex, and they're often unique; that's the same. What's changed is how fast you can innovate. The ability to combine managed services and new innovative services and build new applications is so much faster today, leveraging world-class, elastic hardware that you don't have to worry about. You could not do that even five or ten years ago to the degree you can today, especially with innovation. So innovation is accelerating at a rate that most people can't even comprehend, given the set of services that are available to them. It's really fascinating to see what a one-pizza team of engineers can actually go develop in a week. It is phenomenal. So I'm super excited about this space, and it's only going to continue to accelerate. That's my take. >>All right, you've got a lot of platform to compete on, and a lot to build on. Narayan, your side: what's your answer to that question? >>I think we are seeing a lot of innovation with the new applications that customers are constantly building. What we see is this whole notion of how you go from the desktop to production to the secure supply chain, and how we can truly build on the agility that developers desire, and build in all the security and the pipelines to energize that path to production quickly and efficiently. I think we are at the very start of that journey. Of course, we have invested in Kubernetes, a means to an end, but there's so much more beyond that happening in the industry. And I think we're at the very, very beginning of the enterprise transformation that many of our customers are going through, and we are inherently part of it. >>Yeah. Well, gentlemen, I really appreciate it; we're seeing the same thing. It's more of the same here: solving these complexities with abstractions, whether it's higher-level services with large-scale infrastructure at your fingertips, infrastructure as code, infrastructure to be provisioned, serverless, all the good stuff happening, Fred, with AWS on your side.
And we're seeing customers resonate with this idea of being an operator again, being a cloud operator and developer. So DevOps is kind of changing too, all for the better. Thank you for spending the time, and we're seeing, again, that traction with the VMware customer base and AWS getting along great together. So thanks for sharing your perspectives. >>I appreciate it. Thank you. >>Okay, thank you, John. >>Okay, this is the Cube and AWS VMware showcase, accelerating business transformation: VMware Cloud on AWS, a jointly engineered solution bringing innovation to the VMware customer base, going to the cloud and beyond. I'm John Furrier, your host. Thanks for watching. >>Hello everyone. Welcome to this special Cube presentation of accelerating business transformation with VMC on AWS. I'm John Furrier, host of the Cube. We have Ashish Dhawan, director of global sales and go-to-market for VMware Cloud on AWS. This is a great showcase and should be a lot of fun. Ashish, thanks for coming on. >>Hi John. Thank you so much. >>So VMware Cloud on AWS has been well documented as this big success for VMware and AWS. As customers move their workloads into the cloud, the IT operations of VMware customers are signaling a lot of change. This is changing the landscape globally, on cloud migration and beyond. What's your take on this? Can you open this up with the most important story around VMC on AWS? >>Yes, John. The most important thing for our customers today is how they can safely and swiftly move their IT infrastructure and applications to the cloud. Now, VMware Cloud on AWS is a service that allows all vSphere-based workloads to move to the cloud safely, swiftly and reliably. Banks can move their core banking platforms, insurance companies can move their core insurance platforms, telcos can move their OSS and BSS platforms, and government organizations are moving their citizen engagement platforms using VMC on AWS, because this is one platform that allows them to move their VMware-based platforms very fast. Migrations can happen in a matter of days instead of months, extremely securely. It's a VMware-managed service; it's very secure and highly reliable, as it gets the reliability of the underlying infrastructure along with it. So it's a win-win from our customers' perspective. >>You know, we reported on this big news in 2016, with Andy Jassy and Pat Gelsinger at the time. A lot of people said it was a bad deal. It turned out to be a great deal, because not only could VMware customers actually have a cloud, migrate to the cloud and do it safely, which was their number-one concern because they didn't want disruption to their operations, but they could also position themselves for what's beyond just shifting to the cloud. So I have to ask you, since you've got your finger on the pulse here: what are we seeing in the market when it comes to migrating and modernizing in the cloud? Because that's the next step. They go to the cloud, you guys have done that and are doing it, and then they go: I've got to modernize, which means kind of upgrading or refactoring. What's your take on that? >>Yeah, absolutely. Look, the first step is to help our customers assess their infrastructure, their licensing and their entire IT operations. Once we've done the assessment, we then create their migration plans. A lot of our customers are at that inflection point: they're looking at their real estate, their data center real estate, and they're looking at their contracts with colocation vendors.
They really want to exit their data centers, right? And VMware Cloud on AWS is a perfect solution for customers who want to exit their data centers: migrate those applications onto the AWS platform using VMC on AWS, get rid of the additional real estate overheads and power overheads, and be socially and environmentally conscious by doing that as well, right? So that's the migration story, but to your point, it doesn't end there. Modernization is a critical aspect of the entire customer journey as well. Once customers have migrated their IT applications and infrastructure to the cloud, they get access to all the modernization services that AWS has. They can connect easily to our data lake services, to our AI/ML services, to custom databases, right? They can decide which applications they want to keep and which applications they want to refactor. They can make decisions on containerization, make decisions on serverless computing, once they've come to the cloud. But the most important thing is to take that first step: exit the data centers, come to AWS using VMC on AWS, and then a whole host of modernization options becomes available to them. >>Yeah, I've got to say, we had this right in this story, because you just pointed out a big thing: the first order of business is to make sure to leverage the on-prem investments those customers made, then migrate to the cloud, where they can maintain their applications, their data and the infrastructure operations that they're used to, and then be in position to start getting modern. So I have to ask you, how are you guys specifically, or how is VMware Cloud on AWS, addressing these needs of the customers? Because what happens next is something that needs to happen faster, and sometimes the skills might not be there, because if they're running old-school IT ops, now they've got to come in and jump in: they're going to use a data cloud, they're going to want to use all kinds of machine learning, and there's a lot of great goodness going on above the stack there. As you move up to the higher-level services, it's a no-brainer, obviously, but these aren't yesterday's higher-level services in the cloud. So how is this being addressed? >>Absolutely. I think you hit upon a very important point, and that is skills, right? Our customers are operating some of their most critical applications, as I just mentioned, core banking, core insurance, et cetera; most of the core applications that our customers have across industries, even large-scale ERP systems, are actually sitting on VMware's vSphere platform right now. When a customer wants to migrate these to the cloud, one of the key bottlenecks they face is skill sets. They have the trained manpower for these core applications, but for the higher-level services they may not, right? So the first order of business is to help them ease this migration pain as much as possible, by not forcing them to upskill immediately. And VMware Cloud on AWS does exactly that. I mean, you don't have to do anything; you don't have to build a new skill set for doing this. Their existing skill sets suffice. But at the same time, it gives them the leeway to build that skills roadmap for their team, and AWS is invested in that, right? We want to help them build those skills in the higher-level services, be it AI/ML, be it data lake and analytics. We want to invest in them, and we help our customers through that.
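To make the migrate-then-modernize idea concrete: once a vSphere workload is running in VMware Cloud on AWS, it can call native AWS services directly over the connected VPC. Here is a minimal, hedged sketch in Python with boto3, using Amazon Textract as the example service; the bucket and file names are hypothetical placeholders, not real resources.

    import boto3

    # Hypothetical modernization step: a migrated app pushes a scanned
    # document to S3 and extracts its text with Amazon Textract.
    # Bucket and file names are placeholders.
    s3 = boto3.client("s3")
    textract = boto3.client("textract")

    s3.upload_file("statement.png", "example-migrated-app-bucket", "statement.png")

    result = textract.detect_document_text(
        Document={"S3Object": {"Bucket": "example-migrated-app-bucket",
                               "Name": "statement.png"}}
    )
    lines = [block["Text"] for block in result["Blocks"]
             if block["BlockType"] == "LINE"]
    print("\n".join(lines))

The skills point above still applies: the virtual machine itself is operated exactly as before; only the new, additive functionality touches the AWS SDK.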
So ultimately, the goal of making them data-driven is front and center. >>I want to get into some of the use cases and success stories, but I want to just reiterate and hit back on your point about the skills. Because if you look at what you guys have done at AWS, and Andy Jassy used to talk about this all the time when I would interview him, and last year Adam was saying the same thing: you guys do all the heavy lifting. But if you're a VMware customer, user or operator, you're used to things; you don't have to relearn to be a cloud architect. Now you're already in the game. So this is almost an instant path to cloud skills for the VMware audience. There are hundreds of thousands of VMware architects and operators that now instantly become cloud architects, literally overnight. Can you respond to that? Do you agree with that? And then give an example. >>Yes, absolutely. If you have skills on the VMware platform, migrating to AWS using VMware Cloud on AWS is absolutely possible. You don't have to really change the skills; the operations are exactly the same, and the management systems are exactly the same. So you don't really have to change anything, but you get the advantage of access to all the other AWS services. You are instantly able to integrate with other AWS services, and you become a cloud architect immediately, right? You are able to solve some of the critical problems that your underlying IT infrastructure has, immediately, using this. And I think that's a great value proposition for our customers using this service. >>And just one more point; I want to get into something that's really kind of inside baseball, or a nuance. VMC, or VMware Cloud on AWS, means something. Could you take a minute to explain what 'on AWS' means? Because it's not just hosting and using Amazon for a workload; being 'on AWS' means something specific in your world. What does VMC on AWS mean? >>Yes, this is a great question, by the way. 'On AWS' means that VMware's vSphere platform, which is an iconic enterprise virtualization software with a disproportionately high market share across industries, gets the goodness of the underlying AWS infrastructure. When we wanted to create a cloud product along with VMware, that was our aim for the platform, right? And therefore, when we created this VMware Cloud solution, it literally uses the AWS platform underneath, and that's why it's called VMware Cloud on AWS: it uses the wide portfolio of our regions across the world and the strength of the underlying infrastructure, the reliability and sustainability that it offers. And therefore this product is called VMC on AWS. >>It's a distinction I think is worth noting, and it does reflect engineering and levels of integration that go well beyond just having a SaaS app, basically platform as a service or PaaS services. So I just want to make sure that's clear. Now, supercloud, we'll talk about that a little bit in another interview. But I've got to get one more question in before we get into the use cases and customer success stories, and it's this: in most of the VMware world, in that IT world, when you heard migration, people would go: oh my God, that's going to take months. And when I hear about moving stuff around and doing cloud native, the first reaction people might have is complexity.
So, two questions for you before we move on to the next track: complexity, and how long these migrations take. Is it easy? Is it hard? I mean, the knee-jerk reaction is months; you're very used to that. If they're dealing with Oracle or other old-school vendors, the old guard would take a year to move stuff around. So can you comment on complexity and speed? >>Yeah. So the first thing is complexity. And what makes anything complex is if you're required to acquire new skill sets, or if you're required to manage something differently. As far as VMware Cloud on AWS goes, on both these aspects you don't have to do anything, right? You don't have to acquire new skill sets; your existing IT operations skill sets on VMware's platforms are absolutely fine, and you don't have to manage it any differently than how you're managing your IT infrastructure today. So on both these aspects it's exactly the same, and therefore it is absolutely not complex as far as VMware Cloud on AWS is concerned. And the other thing is speed. This is where the huge differentiation is. You have seen that large banks and large telcos have now moved their workloads literally in days instead of months, because of VMware Cloud on AWS. A lot of the time customers come to us with specific deadlines, because they want to exit their data centers on a particular date, and VMware Cloud on AWS is called upon to do that migration, right? So speed is absolutely critical. The reason is, again, exactly the same: because you are using exactly the same platform and the same management systems, and the same people are available to you, you're able to migrate quickly, right? I would just reference that recently we got an award from President Zelensky of Ukraine for migrating their entire digital infrastructure, and that happened very swiftly because they were using VMware Cloud on AWS. >>That's been a great example. I mean, that one's political, but the economic advantage of getting out of the data center could be national security. You mentioned Ukraine; obviously you see the bombing and death over there. So clearly that's a critical crown jewel for running their operations, which is world mission critical. So, great stuff. I love the speed thing; I think that's a huge one. Let's get into some of the use cases. The first one I wanted to talk about is one we just hit on: data center migration. It could be for financial reasons, in a downturn, or for market growth. People can make money by shifting to the cloud, either saving money or making money; you win on both sides. It's almost recession-proof, if you will. Cloud is. So, use case number one: data center migration. Take us through what that looks like, give an example of a success, take us through a day in the life of a data center migration in a couple of minutes. >>Yeah. You know, I can give you an example of a large bank that decided to migrate all of their data centers outside their existing infrastructure. And they had a set timeline, right? They were coming up on a renewal, and they wanted to make sure that this set timeline was met. We did a complete assessment of their infrastructure.
We did a complete assessment of their IT applications, and more than 80% of their IT applications were on the underlying vSphere platform. And we thought that the right solution for them, in the timeline that they wanted, was VMware Cloud on AWS. And obviously, as a large bank, they wanted to do it safely and securely, and they wanted it completely managed, and therefore VMware Cloud on AWS ticked all the boxes as far as that was concerned. I'll be happy to report that the large bank has now moved most of their applications to AWS, exiting three of their data centers, and they'll be exiting 12 more very soon. So that's a great example of a large bank exiting data centers. There's another corollary to that: not only did they manage to exit their data centers and, of course, become more agile, but they also met their sustainability goals. Their board of directors had given them goals to be carbon neutral by 2025. They found out that 35% of all their carbon footprint was in their data centers, and if they moved their IT infrastructure to the cloud, they would severely reduce that carbon footprint, from 35% down to 17 to 18%, right? And that met their sustainability targets and their commitment to being carbon neutral as well. >>And they shift that burden to you guys; you take on that heavy lifting, and you guys have a sustainability story, which is a whole other showcase in and of itself. >>We can. Exactly. And because of the scale of our operations, we are able to work on that really well as well. >>All right. So, love the data center migration; I think that's got real proof points. You've got: I can save money, and I can then move and position my applications into the cloud, for that reason and a lot of other reasons. But now it gets into what you mentioned earlier: okay, data center migration is clearly a use case, and you laid out some successes, and I'm sure there are a zillion others. But then the next step comes. Now you've got cloud architects being minted every day, and you've got managed services and higher-level services. What happens next? Can you give us an example of the use case of the modernization around the next-gen workloads, next-gen applications? We're starting to see things like data clouds, not data warehouses; it's going to be all kinds of clouds. These next-gen apps are pure digital transformation in action. Take us through a use case of how you guys make that happen, with a success story. >>Yes, absolutely. And this is an amazing success story; the customer here is S&P Global Ratings. As you know, S&P Global Ratings is the world leader as far as global credit ratings are concerned. And for them, the last couple of years have been tough as far as hardware procurement is concerned, right? The pandemic really upended the supply chain, and it was taking a lot of time to procure hardware, configure it in time, make sure it's reliable, and then distribute it across the wide variety of offices and locations that they have. And they came to us. Again, we did a fairly large, comprehensive assessment of their IT infrastructure and their licensing contracts, and we also found that VMware Cloud on AWS was the right solution for them.
>>So we migrated all their applications, and as soon as we did, they got access to our higher-level services: our analytics services, our machine learning services, our artificial intelligence services, which have been critical for their growth. And that is really helping them get towards their next level of modern applications. Now, going forward, they will have the choice to really think about which applications they want to refactor and which applications they want to keep as they are; that is a choice in front of them. But VMware Cloud on AWS really gave them the opportunity to first migrate, and then move towards modernization with speed. >>You know, the speed of a startup is always kind of the Silicon Valley story, where people can make massive changes in 18 months, whether that's a pivot or a new product. You see that in the startup world. Now, in the enterprise, you can see the same thing. I noticed behind you on your whiteboard you've got a slogan that says: are you thinking big? I know Amazon likes to think big, but you also work backwards from the customers, and I think this modern application thing is a big deal, because the mindset has always been constrained. Back before they moved to the cloud, most IT and on-premise data center shops were slow: you've got to get the hardware, you've got to configure it, you've got to stand it up, make sure all the software is validated on it, load a database, load the OSs. I mean, yeah, it got easier with scripting and whatnot, but when you move to the cloud, you have more scale, which means more speed, which opens up their capability to think differently and build product. What are you seeing there? Can you share your opinion on that epiphany of: wow, things are going fast, I've got more time to actually think about maybe doing a cloud-native app, or transforming this or that? >>Well, ultimately we want our customers to utilize most of our modern services. Applications should be microservices-based; when desired, they should use serverless technology; they should not have monolithic relational database contracts; they should use custom databases; they should use containers when needed, right? So ultimately, we want our customers to use these modern technologies to make sure that their IT infrastructure, their licensing, their entire IT spend is completely native to cloud technologies, and they work with the speed of a startup. But it's important for them to get to that first step, right? So that's why we create this journey for our customers, where we help them migrate, give them time to build the skills, then help them modernize, and take our partners along with us to make sure that we can address the needs of our customers. That's what our customers need today, and that's what we are working backwards from. >>Yeah, and I think that opens up some big ideas. I'll just say that, you know, I was joking the other night with someone here in Palo Alto around serverless, and I said: soon you're going to hear words like architecture-less.
And that's a criticism on one hand, but you might say: hey, if you don't really need an architecture, you know, storage-less? I mean, at the end of the day, infrastructure as code means developers can do all the IT in their coding cycles and then make the operations cloud-based. And I think this is kind of where I see the dots connecting. Final thought here: take us through what you're thinking around how this new world is evolving. I mean, architecture-less is kind of a joke, but the point is, you have to have some sort of architecture; you just don't have to overthink it. >>Totally. No, that's a great thought, by the way. I know it's a joke, but it's a great thought, because at the end of the day, what do the customers really want? They want outcomes, right? Why did serverless technology come about? It was because there was an outcome that they needed. They didn't want to get stuck with the real estate of a server; they wanted to use compute when they needed to, right? Similarly, what you're talking about is the outcome-based desire of our customers, and that's exactly where the world is going, right? Cloud really enforces that. We are actually working backwards from a customer's outcome, and using the breadth and depth of our services to deliver those outcomes, right? And most of our services are on that path. With VMware Cloud on AWS, the outcome is to migrate, then to modernize, but it doesn't stop there: use our native services to get the business outcomes. So I think that's exactly what we are going through. >>Ashish, you're the director of global sales and go-to-market for VMware Cloud on AWS. I want to thank you for coming on, but I'll give you the final minute: give a plug, explain what VMware Cloud on AWS is, why it's great, why people should engage with you and the team, and what ultimately this path looks like for them going forward. >>Yeah. At the end of the day, we want our customers to have the best path to the cloud, right? The best path to the cloud is making sure that they migrate safely, reliably and securely, as well as with speed, right? And then use that cloud platform to utilize AWS's native services to make sure that they modernize their IT infrastructure and applications. Ultimately, we want our customers to get the best out of it; the whole application experience is enhanced tremendously by using our services. And I think that's exactly what we are working towards: VMware Cloud on AWS is helping our customers in that journey towards migrating and modernizing, whether they want to exit a data center or whether they want to modernize their applications. It's an essential first step that we want to help our customers with. >>Ashish Dhawan, director of global sales and go-to-market for VMware Cloud on AWS, with AWS, sharing his thoughts on accelerating business transformation on AWS. This is a showcase; we're talking about the future path, and we're talking about use cases with success stories from customers. Ashish, thank you for spending time today on this showcase. >>Thank you, John. I appreciate it. >>Okay, this is the Cube, special coverage, special presentation of the AWS showcase. I'm John Furrier, thanks for watching.
SUMMARY :
Great to have you and Daniel Re Myer, principal architect global AWS synergy Greatly appreciate it. You're starting to see, you know, this idea of higher level services, More recently, one of the things to keep in mind is we're looking to deliver value Then the other thing comes down to is where we Daniel, I wanna get to you in a second. lot of CPU power, such as you mentioned it, AI workloads. composing, you know, with open source, a lot of great things are changing. So we want to have all of that as a service, on what you see there from an Amazon perspective and how it relates to this? And you know, look at it from the point of view where we said this to leverage a cloud, but the investment that you made and certain things as far How would you talk to that persona about the future And that also means in, in to to some extent, concerns with your I can still run my job now I got goodness on the other side. on the skills, you certainly have that capability to do so. Now that we're peeking behind the curtain here, I'd love to have you guys explain, You always have to have the time difference in mind if we are working globally together. I mean it seems to be very productive, you know, I think one of the key things to keep in mind is, you know, even if you look at AWS's guys to comment on, as you guys continue to evolve the relationship, what's in it for So one of the most important things we have announced this year, Yeah, I think one of the key things to keep in mind is, you know, we're looking to help our customers You know, we have a product, you have a product, biz dev deals happen, people sign relationships and they do business And this, you guys are in the middle of two big ecosystems. You can do this if you decide you want to stay with some of your services But partners innovate with you on their terms. I think one of the key things, you know, as Daniel mentioned before, You still run the fear, the way you working on it and And if, if you look, not every, And thank you for spending the time. So personally for me as an IT background, you know, been in CIS admin world and whatnot, thank you for coming on on this part of the showcase episode of really the customer successes with VMware we're kind of not really on board with kind of the vision, but as it played out as you guys had announced together, across all the regions, you know, that was a big focus because there was so much demand for We invented this pretty awesome feature called Stretch clusters, where you could stretch a And I think one of the things that you mentioned was how the advantages you guys got from that and move when you take the, the skill set that they're familiar with and the advanced capabilities that I have to ask you guys both as you guys see this going to the next level, you know, having a very, very strong engineering partnership at that level. put even race this issue to us, we sent them a notification saying we And as you grow your solutions, there's more bits. the app layer, as you think about some of the other workloads like sap, we'll go end to What's been the feedback there? which is much, much easier with VMware cloud aws, you know, they wanna see more action, you know, as as cloud kind of continues to And you know, separate that from compute. And the second storage offering for VMware cloud Flex Storage, VMware's own managed storage you know, new SaaS services in that area as well. 
If you don't mind me getting a quick clarification, could you explain the Drew screen resource defined versus But we, you know, because it it's in the cloud, so, So can you guys take us through some recent examples of customer The, the options there obviously are tied to all the innovation that we So there's things that you just can't, could not do before I mean, it's been phenomenal, the, the customer adoption of this and you know, Yeah, it's great to see, I mean the data center migrations go from months, many, So the actual calculators and the benefits So there's a lot you gotta to stay current on, Yeah, and then like you said on the security point, security is job one. So the question is for you guys each to Leveraging world class hardware that you don't have to worry production to the secure supply chain and how can we truly, you know, Whether it's, you know, higher level services with large scale Thank you so I'm John Furrier, host of the Cube. Can you open this up with the most important story around VMC on aws? platform that allows you to move it, move their VMware based platforms very fast. They go to the cloud, you guys have done that, So that's the migration story, but to your point, it doesn't end there, So as you move with the higher level services, So the first order of business is to help them ease Because if you look at what you guys have done at aws, the advantages that you get access to all the other AWS services. Could you take a minute to explain what on AWS on AWS means that, you know, VMware's vse platform is, I mean, you know, the knee jerk reaction is month, And you know, what makes what the same because you are using the exactly the same platform, the same management systems, which is, you know, you know, world mission critical. decided to migrate, you know, their, So that's a great example of, of, of the large bank exiting data And that they, and they shift that to you guys. And, and cause of the scale of our, of our operations, we are able to, We're starting to see, you know, things like data clouds, And for them, you know, the last couple of years have been tough as far as hardware procurement is concerned, And, and that really is helping them, you know, get towards their next level You gotta get the hardware, you gotta configure it, you gotta, you gotta stand it up, most of our modern services, you know, applications should be microservices based. I mean, architecturals kind of a joke, but the point is, you know, the end of the day, you know, what do the customers really want? I wanna thank you for coming on, but I'll give you the final minute. customers, customer get the best out of, you know, utilizing the, One director of global sales and go to market with VMware cloud on neighbors. I'm John Furrier, thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Samir | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Maryland | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Adam | PERSON | 0.99+ |
Daniel | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
Daniel Rethmeier | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
Fred | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Stephen Schmidt | PERSON | 0.99+ |
Danielle | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
2025 | DATE | 0.99+ |
San Francisco | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
36 | QUANTITY | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
two questions | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Nora | PERSON | 0.99+ |
Andy Goldstein & Tushar Katarki, Red Hat | KubeCon + CloudNativeCon NA 2022
>>Hello everyone and welcome back to Motor City, Michigan. We're live from theCUBE and my name is Savannah Peterson. Joined this afternoon with my co-host John Furrier. John, how you doing? >>Doing great. This next segment's gonna be awesome, about application modernization and scaling. This is what's gonna power the next generation software revolution. It's gonna be >>Fun. You know, it's kind of been a theme of our day today is scale. And when we think about the complex orchestration platform that is Kubernetes, everyone wants to scale faster, quicker, more efficiently, and our guests are here to tell us all about that. Please welcome Tushar and Andy, thank you so much for being here with us. You were on the Red Hat OpenShift team. Yeah. I suspect most of our audience is familiar, but just in case, let's give 'em a quick one-liner pitch so everyone's on the same page. Tell us about OpenShift. >>I'll take that one. OpenShift is our Kubernetes platform, our Kubernetes distribution. You can consume it as a self-managed platform or you can consume it as a managed service on public clouds. And so we just call it all OpenShift. So it's basically Kubernetes, but you know, with a CNCF ecosystem around it to make things easier. >>So what does being at KubeCon mean for you? How does it feel to be here? What's your initial takes? >>Exciting. I'm having a fantastic time. I haven't been to KubeCon since San Diego, so it's great to be back in person and see old friends, make new friends, have hallway conversations. It's great as an engineer trying to work in this ecosystem, just being able to be in the same place with these folks. >>And you gotta ask, before we came on camera, you're like, this is like my sixth KubeCon. We were like, we're at seven, you know. But that's a lot of KubeCons. >>It is, yes. >>So take us through the status. >>For sure. >>Where we are now. Compare and contrast your first KubeCon, just scope it out. What's the magnitude of change? If you had to put a pin on that, because there's a lot of new people coming in, they might not have seen where it's come from, and how we got here is maybe not how we're gonna get to the next level. >>I've seen it grow tremendously since the first one I went to, which I think was Austin several years ago. And what's great is seeing lots of new people interested in contributing and also seeing end users who are trying to figure out the best way to take advantage of this great ecosystem that we have. >>Awesome. And on the product management side, you get the keys to the kingdom with Red Hat OpenShift, which has been successful. Congratulations by the way. >>Thank you. >>We watched that grow and really position right on the wave. It's going great. What's the update on the product? Kind of, you're in a good position right now. >>Yeah, no, we're feeling good about it. It's all about our customers. Obviously the fact that, you know, we have thousands of customers using OpenShift as the cloud native platform, the container platform, we're very excited. The great thing about them is that, I mean, you can go to OpenShift Commons, which is kind of a user group that we run on the first day, like on Tuesday we ran it. I mean you should see the number of just case studies that our customers went through there, you know? And it is fantastic to see that. I mean it's across so many different industries, across so many different use cases, which is very exciting.
One of the things we've been reporting here on theCUBE before, but here it's more important, is just that if you take digital transformation to its conclusion, the IT department and developers, they're not a department to serve the business. They are the business. >>Yes. >>That means that the developers are deciding things, yeah, and running the business through their code. Yeah. Okay. If that takes place, you're gonna have scale. And we also said on many CUBE segments, certainly at Red Hat Summit and other ones, the cloud is a distributed computer, it's distributed computing. So you guys are focusing on this project, Andy, that you're working on, kcp. >>Yes. >>Which is a Kubernetes platform for >>Control planes. >>Control planes. Yes. Take us through, what's the focus, why is that important, and why does that relate to the mission of developers being in charge and large scale? >>Sure. So a lot of times when people are interested in developing on Kubernetes and running workloads, they need a cluster of course. And those are not cheap. It takes time, it takes money, it takes resources to get them. And so we're trying to make that faster and easier for end users and everybody involved. So with kcp, we've been able to take what looks like one normal Kubernetes and partition it. And so everybody gets a slice of it. You're an administrator in your little slice and you don't have to ask for permission to install new APIs and they don't conflict with anybody else's APIs. So we're really just trying to make it super fast and make it super flexible. So everybody is their own admin. >>So the developer basically looks at it as a resource blob. They can do whatever they want, but it's shared and provisioned. >>Yes. It's like they have their own cluster, but you don't have to go through the process of actually provisioning a full cluster. >>And what's the alternative? What's the benefit, and what was the alternative to this? >>So the alternative, you spin up a full cluster, which you know, maybe that's three control plane nodes, you've got multiple workers, you've got a bunch of virtual machines or bare metal, or maybe you take- >>How much time does that take? Just ballpark. >>Anywhere from five minutes to an hour. You can use cloud services, yeah, GKE, EKS and so on. >>Keep banging away. You're configuring. Yeah. >>Those are faster. Yeah. But it's still like, you still have to wait for that to happen, and it costs money to do all of that too. >>Absolutely. And it's complex. Why do something that's been done, if there's a tool that can get you a couple steps down the path? Which makes a ton of sense. Something that we think about a lot when we're talking about scale. You mentioned earlier, Tushar, when we were chatting before the cameras were live, scale means a lot of different things. Can you dig in there a little bit? >>Yeah, I mean, so when we talk about scale, we are talking about, from a user perspective, you know, there are more users, there are more applications, there are more workloads, there are more services being run on Kubernetes now, right? And OpenShift. So that's one dimension of this scale. The other dimension of the scale is how do you manage all the underlying infrastructure, the clusters, the namespaces, and all the observability data, et cetera. So that's at least two levels of scale.
And then obviously there's a third level of scale, which is, you know, there is scale across not just different clouds, but also from cloud to the edge. So there is that dimension of scale. So there are several dimensions of this scale. And the one that, again, we are focused on here really is the first one that I talked about: the user. And when I say user, it could be a developer, it could be an application architect, or it could be an application owner who wants to develop applications for Kubernetes and wants to publish those APIs, if you will, and make them discoverable, and then somebody consumes them. So that's the scale we are talking about here. >>What are some of the enterprise... you guys have a lot of customers, we've talked to you guys before many, many times on other subjects, Red Hat, I mean you guys have all the customers. Yeah. Enterprise, they've been there, done that. And you know, they're savvy. Yeah. But the cloud is a whole 'nother ballgame. What are they thinking about? What's the psychology of the customer right now? Because now they have a lot of choices. Okay, we get it, we're gonna re-platform, refactor apps, we'll keep some legacy on premises for whatever reasons. But cloud pretty much is gonna be the game. What's the mindset right now of the customer base? Where are they in their psyche? Not the executive, but more the operators or the developers? >>Yeah, so I mean, first of all, different customers are at different levels of maturity, I would say, in this. They're all on a journey, is how I like to describe it. And in this journey, I mean, I see customers who are really the tip of the spear. You know, they have containerized everything. They're cloud native, you know, they use the best of tools, I mean automation, you know, complete automation, quick deployment of applications, and life cycle of applications, et cetera. So that's kind of one end of this spectrum. >>Advanced. >>The advanced, you know, and I don't have any specific numbers here, but I'd say there are quite a few of them, and we see that. And then there is kind of the middle, who are, I would say, familiar with containers. They know what app modernization, what a cloud application means. They might have tried a few. So they are in the journey. They want to get there. They have some other kinds of issues, organizational or talent and so on and so forth, kinds of issues to get there. And then there is definitely the cohort, what I would call the laggards, still. And there's lots of them. But I think, you know, Covid has certainly accelerated a lot of that. I hear that. And there is definitely, you know, more... the psychology is definitely more towards what I would say public cloud. But I think we are early also in the other trend that I see, which is kind of, okay, public cloud, great, right? So people are going there, but then there is the so-called edge also. Yeah. That is, for various reasons, you gotta have kind of a regional presence, an edge presence. And that's kind of the next thing taking off here. And we can talk more about it. >>Yeah, let's talk about that a little bit, because as we know, we're very excited about edge here at theCUBE. Yeah. What types of trends are you seeing as that space emerges a little bit more firmly?
>>Yeah, so I mean, when we talk about edge, you could talk about edge as retail locations, right? >>Could be so many things. Edge is everywhere. Everywhere, right? It's all around us. Quite literally. Even in space. >>Exactly. In fact, you mentioned space. Kubernetes and OpenShift are actually running in space, believe it or not, you know. So that's the edge, right? So we have industrial edge, we have telco edge, we have 5G, then we have, you know, automotive edge now, and retail edge, and more, right? And space, you know. So it's very exciting there. So the reason I tag back to that question that you asked earlier is that that's where customers are. So cloud is one thing, but now they gotta also think about: whatever I do in the cloud, how do I bring it to the edge? Because that's where my end users are, my customers are, and my data is, right? So that's the- >>And I think Kubernetes has brought that attention to the laggards. We had Lockheed Martin on yesterday, which is an incredible real example of Kubernetes at the edge. It's just an incredible story. We covered it, also wrote a story about it. So compelling. 'Cause it makes it real. Yes. And Kubernetes is real. So then the question is developer productivity. Okay, things are starting to settle in. We've got kcp scaling clusters, things are happening. What about the tool chains? And how do I develop now that I've got scale of development, more code coming in? I mean, we are speculating that in the future there's so much code in open source that no one has to write code anymore. Yeah. At some point it's just gluing things together. So the developers need to be productive. How are we gonna scale the developer equation and eliminate the complexity of tool chains and environments? WebAssembly is super hyped up at this show. I don't know why, but sounds good. No one can tell me why, but I can kind of connect the dots. But this is a big thing. >>Yeah. And it's fitting that you ask about like no code. So we've been working with our friends at Crossplane and have integrated with kcp the ability to, no code, take a whole bunch of configuration and say, I want a database. I want to be a provider of databases. I'm in an IT department, there's a bunch of developers, they don't wanna have to write code to create databases. So I can just take my configuration and make it available to them. And through some super cool new easy to use tools that we have, as a developer you can just say, please give me a database, and you don't have to write any code. I don't have to write any code to maintain that database. I'm actually using community tooling out there to get that spun up. So there's a lot of opportunities out there. >>So that's ease of use, check. What about a large enterprise that's got multiple tool chains, and you start having security issues? Does that disrupt the tool chain capability? Like there's all those now weird examples emerging, not weird, but like real plumbing challenges. How do you guys see that evolving with Red Hat? >>Yeah, I mean, talking about that, right? The secure software supply chain is a huge concern for everyone, especially after some of the things that have happened in the past few years. >>Massive theme here at the show. Yeah.
And just within the community, we're all a little more aware, I think, even than we were before. >>Yeah. Yeah. And I think, so to step back, I mean, it's not just even about, you know, runtime vulnerability scanning. Oh, that's important, but that's not enough, right? So we are talking about, okay, how did that container, or how did that workload, get there? What is that workload? What's the provenance of this workload? How did it get created? What is in it? You know, and how do I make sure that there are no unsafe attack surfaces there? And so that's the software supply chain, and where Red Hat is very heavily invested. And as you know, with RHEL we kind of have roots in the secure operating system. And one of the reasons why RHEL, which is the foundation of everything we do at Red Hat, is chosen is because of security. So OpenShift has always been secure out of the box with things like SCC and role-based access control, which we added very early in the product. And now if you kind of bring that forward, you know, now we are talking about the complete software supply chain security. And this is really about, right, how from the moment the developer writes code and checks it into a Git repository, from there on, how do you build it? How do you secure it at each step of the process? How do you sign it? And we are investing and contributing to the community with things like Cosign and Sigstore, which is the Sigstore project. And so that secures the supply chain. And then you can use things like Argo CD, and then finally we can deploy it onto the cluster itself. And then we have things like ACS, which can do vulnerability scanning, which is a container security platform.
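The signing step Tushar describes can be made concrete with a short sketch. To be clear, this is a hedged illustration and not Red Hat's actual pipeline: it assumes the Cosign CLI from the Sigstore project is installed and that a key pair was created with `cosign generate-key-pair`; the image reference is a made-up placeholder, and the surrounding build and deploy steps (Argo CD, ACS) are omitted.

```python
# A minimal sketch, assuming the Cosign CLI is installed and a local key
# pair (cosign.key / cosign.pub) already exists. The image ref is invented.
import subprocess

IMAGE = "registry.example.com/team/app:1.0"

# Sign the image after the build step, so consumers can check its provenance.
subprocess.run(["cosign", "sign", "--key", "cosign.key", IMAGE], check=True)

# Verify the signature later, e.g. as a gate before the image is deployed.
subprocess.run(["cosign", "verify", "--key", "cosign.pub", IMAGE], check=True)
```

In a real pipeline the verify call would typically run in CI or in an admission check on the cluster, so unsigned or tampered images never reach production.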
And I think that the, the patterns that we use for developing are very consistent. And I think that consistency that came with Kubernetes, just, you have so many people who are familiar with it and so they can follow the same patterns, implement things similarly, and it's just a good fit for the way that we want to get our software out there and have, and have things operate. >>Keep it simple, stupid almost is that acronym, but the consistency and the de facto alignment Yes. Behind it just created a community. So, so then the question is, are the developers now setting the standards? That seems like that's the new way, right? I mean, >>I'd like to think so. >>So I mean hybrid, you, you're touching everything at scale and you also have mini shift as well, right? Which is taking a super macro micro shift. You ma micro shift. Oh yeah, yeah, exactly. It is a micro shift. That is, that is fantastic. There isn't a base you don't cover. You've spoken a lot about community and both of you have, and serving the community as well as your engagement with them from a, I mean, it's given that you're both leaders stepping back, how, how Community First is Red Hat and OpenShift as an organization when it comes to building the next products and, and developing. >>I'll take and, and I'm sure Andy is actually the community, so I'm sure he'll want to a lot of it. But I mean, right from the start, we have roots in open source. I'll keep it, you know, and, and, and certainly with es we were one of the original contributors to Kubernetes other than Google. So in some ways we think about as co-creators of es, they love that. And then, yeah, then we have added a lot of things in conjunction with the, I I talk about like SCC for Secure, which has become part security right now, which the community, we added things like our back and other what we thought were enterprise features needed because we actually wanted to build a product out of it and sell it to customers where our customers are enterprises. So we have worked with the community. Sometimes we have been ahead of the community and we have convinced the community. Sometimes the community has been ahead of us for other reasons. So it's been a great collaboration, which is I think the right thing to do. But Andy, as I said, >>Is the community well set too? Are well said. >>Yes, I agree with all of that. I spend most of my days thinking about how to interact with the community and engage with them. So the work that we're doing on kcp, we want it to be a community project and we want to involve as many people as we can. So it is a heavy focus for me and my team. And yeah, we we do >>It all the time. How's it going? How's the project going? You feel good >>About it? I do. It is, it started as an experiment or set of prototypes and has grown leaps and bounds from it's roots and it's, it's fantastic. Yeah. >>Controlled planes are hot data planes control planes. >>I >>Know, I love it. Making things work together horizontally scalable. Yeah. Sounds like cloud cloud native. >>Yeah. I mean, just to add to it, there are a couple of talks that on KCP at Con that our colleagues s Stephan Schemanski has, and I, I, I would urge people who have listening, if they have, just Google it, if you will, and you'll get them. And those are really awesome talks to get more about >>It. Oh yeah, no, and you can tell on GitHub that KCP really is a community project and how many people are participating. It's always fun to watch the action live to. Sure. 
Andy, thank you so much for being here with us, John. Wonderful questions this afternoon. And thank all of you for tuning in and listening to us here on the Cube Live from Detroit. I'm Savannah Peterson. Look forward to seeing you again very soon.
SUMMARY :
Savannah Peterson and John Furrier talk with Red Hat's Tushar Katarki and Andy Goldstein at KubeCon + CloudNativeCon NA 2022. They cover OpenShift as Red Hat's Kubernetes distribution and its customer growth, the dimensions of scale from users and workloads to clusters and the edge, the kcp project for partitioning one Kubernetes into many control-plane slices, no-code provisioning with Crossplane, where enterprise customers sit on their cloud native journeys, edge trends from retail to space, software supply chain security with SCC, RBAC, Sigstore, Cosign, Argo CD and ACS, and why Kubernetes succeeded: right timing for hybrid cloud, extensibility, and consistent patterns.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Furrier | PERSON | 0.99+ |
Stefan Schimanski | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Savannah Peterson | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Andy Goldstein | PERSON | 0.99+ |
San Diego | LOCATION | 0.99+ |
five minutes | QUANTITY | 0.99+ |
Tushar Katarki | PERSON | 0.99+ |
Tuesday | DATE | 0.99+ |
thousands | QUANTITY | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
seven | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Edge | ORGANIZATION | 0.99+ |
Detroit | LOCATION | 0.99+ |
Motor City, Michigan | LOCATION | 0.99+ |
third level | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
Crossplane | ORGANIZATION | 0.99+ |
Sigstore | ORGANIZATION | 0.99+ |
Cube | ORGANIZATION | 0.99+ |
one-liner | QUANTITY | 0.99+ |
One option | QUANTITY | 0.99+ |
Google | ORGANIZATION | 0.98+ |
OpenShift | TITLE | 0.98+ |
Covid | EVENT | 0.98+ |
one | QUANTITY | 0.98+ |
an hour | QUANTITY | 0.98+ |
Red Hat | ORGANIZATION | 0.98+ |
Telco Edge | ORGANIZATION | 0.98+ |
KubeCon | EVENT | 0.98+ |
first one | QUANTITY | 0.98+ |
CloudNativeCon | EVENT | 0.98+ |
Austin | LOCATION | 0.98+ |
OpenShift | ORGANIZATION | 0.97+ |
sixth KubeCon | QUANTITY | 0.97+ |
each step | QUANTITY | 0.97+ |
several years ago | DATE | 0.97+ |
today | DATE | 0.97+ |
Kubernetes | TITLE | 0.96+ |
first co con | QUANTITY | 0.96+ |
KCP | ORGANIZATION | 0.95+ |
One | QUANTITY | 0.95+ |
both leaders | QUANTITY | 0.94+ |
cosign | ORGANIZATION | 0.94+ |
two public clouds | QUANTITY | 0.94+ |
Community First | ORGANIZATION | 0.93+ |
one dimension | QUANTITY | 0.91+ |
Red Hat OpenShift | ORGANIZATION | 0.91+ |
first day | QUANTITY | 0.91+ |
Industrial Edge | ORGANIZATION | 0.9+ |
SCC | ORGANIZATION | 0.89+ |
each | QUANTITY | 0.89+ |
one thing | QUANTITY | 0.88+ |
customers | QUANTITY | 0.86+ |
NA 2022 | EVENT | 0.86+ |
GitHub | ORGANIZATION | 0.85+ |
single day | QUANTITY | 0.85+ |
a minute | QUANTITY | 0.83+ |
Red Hat Summit | EVENT | 0.79+ |
Cube Live | TITLE | 0.77+ |
Daniel Rethmeier & Samir Kadoo | Accelerating Business Transformation
(upbeat music) >> Hi everyone. Welcome to theCUBE special presentation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got two great guests, one videoing in from Germany, one from Maryland. We've got VMware and AWS. This is the customer successes with VMware Cloud on AWS Showcase: Accelerating Business Transformation. Here in the Showcase with Samir Kadoo, worldwide VMware strategic alliance solution architect leader with AWS. Samir, great to have you. And Daniel Rethmeier, principal architect global AWS synergy at VMware. Guys, you're working together, you're the key players in this relationship as it rolls out and continues to grow. So welcome to theCUBE. >> Thank you, greatly appreciate it. >> Great to have you guys both on. As you know, we've been covering this since 2016, when Pat Gelsinger, then CEO of VMware, and then-CEO of AWS Andy Jassy did this. It kind of caught people by surprise, but it really kind of cleaned out the positioning in the enterprise for the success of VM workloads in the cloud. VMware's had great success with it since, and you guys have the great partnerships. So this has been like a really strategic, successful partnership. Where are we right now? You know, years later, we got this whole inflection point coming, you're starting to see this idea of higher level services, more performance coming in at the infrastructure side, more automation, more serverless, I mean, and AI. I mean, it's just getting better and better every year in the cloud. Kind of a whole 'nother level. Where are we? Samir, let's start with you on the relationship. >> Yeah, totally. So I mean, there's several things to keep in mind, right? So in 2016, right, that's when the partnership between AWS and VMware was announced. And then less than a year later, that's when we officially launched VMware Cloud on AWS. Years later, we've been driving innovation, working with our customers, jointly engineering this between AWS and VMware, together, day in, day out, as far as advancing VMware Cloud on AWS. You know, even if you look at the innovation that takes place with the solution, things have modernized, things have changed, there's been advancements. You know, whether it's security focus, whether it's platform focus, whether it's networking focus, there's been modifications along the way, even storage, right, more recently. One of the things to keep in mind is we're looking to deliver value to our customers together. These are our joint customers. So there's hundreds of VMware and AWS engineers working together on this solution. And then factor in even our sales teams, right? We have VMware and AWS sales teams interacting with each other on a constant daily basis. We're working together with our customers at the end of the day too. Then we're looking to even offer and develop jointly engineered solutions specific to VMware Cloud on AWS, and even with VMware to other platforms as well. Then the other thing comes down to is where we have dedicated teams around this at both AWS and VMware. So even from solutions architects, even to our sales specialists, even to our account teams, even to specific engineering teams within the organizations, they all come together to drive this innovation forward with VMware Cloud on AWS and the jointly engineered solution partnership as well.
And then I think one of the key things to keep in mind comes down to we have nearly 600 channel partners that have achieved VMware Cloud on AWS service competency. So think about it from the standpoint: there are 300 certified or validated technology solutions that are now available to our customers. So that's even innovation right off the top as well. >> Great stuff. Daniel, I want to get to you in a second on this principal architect position you have. In your title, you're the global AWS synergy person. Synergy means bringing things together, making it work. Take us through the architecture, because we heard a lot of folks at VMware Explore this year, formerly VMworld, talking about how the workloads on IT have been completely transforming into cloud and hybrid, right? This is where the action is. Where are you? Are your customers taking advantage of that new shift? You got AIOps, you got ITOps changing a lot, you got a lot more automation, edge is right around the corner. This is like a complete transformation from where we were just five years ago. What's your thoughts on the relationship? >> So at first, I would like to emphasize that our collaboration is not just that we have dedicated teams to help our customers get the most and the best benefits out of VMware Cloud on AWS, we are also enabling each other mutually. So AWS learns from us about the VMware technology, where VMware people learn about the AWS technology. We are also enabling our channel partners and we are working together on customer projects. So we have regular assemblies globally and also virtually, on Slack and the usual suspect tools, working together and listening to customers. That's very important. Asking our customers, where are their needs? And we are driving the solution into the direction that our customers get the best benefits out of VMware Cloud on AWS. And over the time, we really have evolved the solution. As Samir mentioned, we just added additional storage solutions to VMware Cloud on AWS. We now have three different instance types that cover a broad range of workloads. So for example, we just added the i4i host, which is ideal for workloads that require a lot of CPU power, such as, you mentioned it, AI workloads. >> Yeah, so I want to get into just specifically the customer journey and their transformation. You know, we've been reporting on SiliconANGLE and theCUBE in the past couple weeks in a big way that the ops teams are now the new devs, right? I mean that sounds a little bit weird, but IT operations is now part of a lot more DataOps, security, writing code, composing. You know, with open source, a lot of great things are changing. Can you share specifically what customers are looking for when you say, as you guys come in and assess their needs, what are they doing, what are some of the things that they're doing with VMware on AWS specifically that's a little bit different? Can you share some of the highlights there? >> That's a great point, because originally, VMware and AWS came from very different directions when it comes to speaking people and customers. So for example, AWS, very developer focused, whereas VMware has a very great footprint in the ITOps area. And usually these are very different teams, groups, different cultures, but it's getting together. However, we always try to address the customer needs, right? There are customers that want to build up a new application from scratch and build resiliency, availability, recoverability, scalability into the application.
But there are still a lot of customers that say, "Well, we don't have all of the skills to redevelop everything, to refactor an application to make it highly available. So we want to have all of that as a service. Recoverability as a service, scalability as a service. We want to have this from the infrastructure." That was one of the unique selling points for VMware on-premise, and now we are bringing this into the cloud. >> Samir, talk about your perspective. I want to get your thoughts, and not to take a tangent, but we had covered the AWS re:MARS, actually it was Amazon re:MARS (machine learning, automation, robotics and space), which was really kind of the confluence of industrial IoT, software, physical. And so when you look at like the IT operations piece becoming more software, you're seeing things about automation, but the skill gap is huge. So you're seeing low code, no code, automation, you know, "Hey Alexa, deploy a Kubernetes cluster." Yeah, I mean that's coming, right? So we're seeing this kind of operating automation meets higher level services, meets workloads. Can you unpack that and share your opinion on what you see there from an Amazon perspective and how it relates to this? >> Yeah. Yeah, totally, right? And you know, look at it from the point of view where we said this is a jointly engineered solution, but it's not migrating to one option or the other option, right? It's more or less together. So even with VMware Cloud on AWS, yes it is utilizing AWS infrastructure, but your environment is connected to that AWS VPC in your AWS account. So if you want to leverage any of the native AWS services, so any of the 200 plus AWS services, you have that option to do so. So that's going to give you that power to do certain things, such as, for example, like how you mentioned with IoT, even with utilizing Alexa, or if there's any other service that you want to utilize, that's the joining point between both of the offerings right off the top. Though with digital transformation, right, you have to think about where it's not just about the technology, right? There's also where you want to drive growth in the underlying technology, even in your business. Leaders are looking to reinvent their business, they're looking to take different steps as far as pursuing a new strategy, maybe it's a process, maybe it's with the people, the culture, like how you said before, where people are coming in from a different background, right? They may not be used to the cloud, they may not be used to AWS services, but now you have that capability to mesh them together. >> Okay. >> Then also- >> Oh, go ahead, finish your thought. >> No, no, no, I was going to say what it also comes down to is you need to think about the operating model too, where it is a shift, right? Especially for that vSphere admin that's used to their on-premises environment. Now with VMware Cloud on AWS, you have that ability to leverage a cloud, but the investment that you made in certain things as far as automation, even with monitoring, even with logging, you still have that methodology where you can utilize that in VMware Cloud on AWS too. >> Daniel, I want to get your thoughts on this, because at Explore and after the event, as we prep for KubeCon and re:Invent coming up, the big AWS show, I had a couple conversations with a lot of the VMware customers and operators, and it's like hundreds of thousands of users and millions of people talking about VMware, interested in VMware.
The common thread was, one person said, "I'm trying to figure out where I'm going to put my career in the next 10 to 15 years." And they've been very comfortable with VMware in the past, very loyal, and they're kind of talking about, I'm going to be in the next cloud wave, but there's no like role yet. Architects: is it solution architect, SRE? So you're starting to see the psychology of the operators, who now are going to try to make these career decisions. Like, what am I going to work on? And then it's kind of fuzzy, but I want to get your thoughts: how would you talk to that persona about the future of VMware on, say, cloud for instance? What should they be thinking about? What's the opportunity? And what's going to happen? >> So digital transformation definitely is a huge change for many organizations, and leaders are perfectly aware of what that means. And that also means, to some extent, concerns with your existing employees. Concerns about, do I have to relearn everything? Do I have to acquire new skills and trainings? Is everything worthless I learned over the last 15 years of my career? And the answer is, to make digital transformation a success, we need not just to talk about technology, but also about process, people, and culture. And this is where VMware really can help, because if you are applying VMware Cloud on AWS to your infrastructure, to your existing on-premise infrastructure, you do not need to change many things. You can use the same tools and skills, you can manage your virtual machines as you did in your on-premise environment, you can use the same managing and monitoring tools. If you have written, and many customers did this, if you have developed hundreds of scripts that automate tasks, and if you know how to troubleshoot things, then you can use all of that in VMware Cloud on AWS. And that gives not just leaders, but also the architects at customers, the operators at customers, the confidence in such a complex project. >> The consistency, very key point, gives them the confidence to go. And then now that once they're confident, they can start committing themselves to new things. Samir, you're reacting to this because on your side, you've got higher level services, you've got more performance at the hardware level. I mean, a lot of improvements. So, okay, nothing's changed, I can still run my job, now I got goodness on the other side. What's the upside? What's in it for the customer there? >> Yeah, so I think what it comes down to is they've already been so used to, or entrenched with, that VMware admin mentality, right? But now extending that to the cloud, that's where now you have that bridge between VMware Cloud on AWS to bridge that VMware knowledge with that AWS knowledge. So I will look at it from the point of view where now one has that capability and that ability to just learn about the cloud. But if they're comfortable with certain aspects, no one's saying you have to change anything. You can still leverage that, right? But now if you want to utilize any other AWS service in conjunction with that VM that resides maybe on-premises or even in VMware Cloud on AWS, you have that option to do so. So think about it where you have that ability to be someone who's curious and wants to learn. And then if you want to expand on the skills, you certainly have that capability to do so. >> Great stuff, I love that. Now that we're peeking behind the curtain here, I'd love to have you guys explain, 'cause people want to know what goes on behind the scenes. How does innovation happen?
How does it happen with the relationships? Can you take us through a day in the life of kind of what goes on to make innovation happen with the joint partnership? Do you guys just have a Zoom meeting, do you guys fly out, you write code, do you ship things? I mean, I'm making it up, but you get the idea. How does it work? What's going on behind the scenes? >> So we hope to get together more frequently in-person, but of course we had some difficulties over the last two to three years. So we are very used to Zoom conferences and Slack meetings. You always have to have the time difference in mind if we are working globally together. But what we try, for example: we have regular assemblies now, also in-person, geo-based, so for EMEA, for the Americas, for APJ. And we are bringing up interesting customer situations, architectural bits and pieces together. We are discussing it always, to share and to contribute to our community. >> What's interesting, you know, as events are coming back, Samir, before you weigh in on this, I'll comment: as theCUBE's been going back out to events, we're hearing comments like, "What pandemic? We were more productive in the pandemic." I mean, developers know how to work remotely and they've been on all the tools there, but then they get in-person, they're happy to see people, but no one's really missed a beat. I mean, it seems to be very productive, you know, workflow, not a lot of disruption. More, if anything, productivity gains. >> Agreed, right? I think one of the key things to keep in mind is, even if you look at AWS's, and even Amazon's, leadership principles, right? Customer obsession, that's key. VMware is carrying that forward as well. Where we are working with our customers, like Daniel mentioned earlier, right? We might have meetings at different time zones, maybe it's in-person, maybe it's virtual, but together we're working to listen to our customers. You know, we're taking and capturing that feedback to drive innovation in VMware Cloud on AWS as well. But one of the key things to keep in mind is, yes, there has been the pandemic, we might have been disconnected to a certain extent, but together through technology we've been able to still communicate, work with our customers, even with VMware in between, with AWS and whatnot, we had that flexibility to innovate and continue that innovation. So even if you look at it from the point of view, right? VMware Cloud on AWS Outposts, that was something that customers have been asking for. We've been able to leverage the feedback and then continue to drive innovation even around VMware Cloud on AWS Outposts. So even with the on-premises environment, if you're looking to handle maybe data sovereignty or compliance needs, maybe you have low latency requirements, that's where certain advancements come into play, right? So the key thing is always to maintain that communication track. >> In our last segment we did here on this Showcase, we listed the accomplishments and they were pretty significant. I mean geo, you got the global rollouts of the relationship. It's just really been interesting, and people can reference that, we won't get into it here. But I will ask you guys to comment on, as you guys continue to evolve the relationship, what's in it for the customer? What can they expect next? Because again, I think right now, we're at an inflection point more than ever. What can people expect from the relationship, and what's coming up with re:Invent?
Can you share a little bit of kind of what's coming down the pike? >> So one of the most important things we have announced this year, and we will continue to evolve into that direction, is independent scaling of storage. That absolutely was one of the most important items customers asked for over the last years. Whenever you require additional storage to host your virtual machines, in VMware Cloud on AWS you usually have to add additional nodes. Now we have three different node types with different ratios of compute, storage, and memory. But if you only require additional storage, you always have to get additional compute and memory too, and you have to pay for it. And now with two solutions which offer choice for the customers, like FSx for NetApp ONTAP and VMware Cloud Flex Storage, you now have two cost effective opportunities to add storage to your virtual machines. And that offers opportunities for other instance types, maybe, that don't have local storage. We are also very, very keen looking forward to announcements, exciting announcements, at the upcoming events. >> Samir, what's your reaction to what's coming down on your side? >> Yeah, I think one of the key things to keep in mind is we're looking to help our customers be agile and even scale with their needs, right? So with VMware Cloud on AWS, that's one of the key things that comes to mind, right? There are going to be announcements, innovations, and whatnot with upcoming events. But together, we're able to leverage that to advance VMware Cloud on AWS. To Daniel's point, storage, for example, even with host offerings. And then even with decoupling storage from compute and memory, right? Now you have the flexibility where you can do all of that. So look at it from the standpoint where now, with 21 regions where we have VMware Cloud on AWS available as well, customers can utilize that as needed, when needed, right? So it comes down to, you know, transformation will be there. Yes, there's going to be maybe where workloads have to be adapted, where they're utilizing certain AWS services, but you have that flexibility and option to do so. And I think with the continuing events, that's going to give us the options to even advance our own services together. >> Well you guys are in the middle of it, you're in the trenches, you're making things happen, you've got a team of people working together. My final question is really more of a kind of current situation, kind of future evolutionary thing, that we haven't seen before. I want to get both of your reactions to it. And we've been bringing this up in the open conversations on theCUBE: in the old days, let's go back this generation, you had ecosystems. VMware had an ecosystem, AWS had an ecosystem. You know, we have a product, you have a product, biz dev deals happen, people sign relationships, and they do business together and they sell each other's products or do some stuff. Now it's more about architecture, 'cause we're now in a distributed large scale environment where the roles of ecosystems are intertwining, and you guys are in the middle of two big ecosystems. You mentioned channel partners, you both have a lot of partners on both sides, they come together. So you have this now almost a three dimensional or multidimensional ecosystem interplay. What's your thoughts on this? Because it's about the architecture, integration is a value, not so much innovation only.
You got to do innovation, but when you do innovation, you got to integrate it, you got to connect it. So how do you guys see this as an architectural thing, and do you start to see more technical business deals? >> So we are removing dependencies from individual ecosystems and from individual vendors. So a customer no longer has to decide for one vendor and then face a very expensive and high effort project to move away from that vendor, which ties customers even closer to specific vendors. We are removing these obstacles. So with VMware Cloud on AWS, moving to the cloud, firstly, is not a dead end. If you decide at one point in time, because of latency requirements or maybe some compliance requirements, you need to move back into on-premise, you can do this. If you decide you want to stay with some of your services on-premise and just run a couple of dedicated services in the cloud, you can do this, and you can manage it through a single pane of glass. That's quite important. So cloud is no longer a dead end, it's no longer a binary decision, whether it's on-premise or the cloud: it is the cloud. And the second thing is you can choose the best of both worlds, right? If you are migrating virtual machines that have been running in your on-premise environment to VMware Cloud on AWS, either way, in a very, very fast, cost effective and safe way, then you can later on enrich these virtual machines with services that are offered by AWS, more than 200 different services, ranging from object-based storage, load balancing, and so on. So it's an endless, endless possibility. >> We call that super cloud, in the way that we generically define it, where everyone's innovating, but yet there's some common services. But the differentiation comes from innovation, where the lock-in is the value, not some spec, right? Samir, this is kind of where cloud is right now. You guys are not commodity, Amazon's completely differentiating, but there's some commodity things happening. You got storage, you got compute, but then you got now advances in all areas. But partners innovate with you on their terms. >> Absolutely. >> And everybody wins. >> Yeah, I 100% agree with you. I think one of the key things, you know, as Daniel mentioned before, is where it's a cross education, where there might be someone who's more proficient on the cloud side with AWS, maybe more proficient with VMware's technology. But then for partners, right? They bridge that gap as well, where they come in and they might have a specific niche or expertise, where their background can help our customers go through that transformation. So then that comes down to, hey, maybe I don't know how to connect to the cloud, maybe I don't know what the networking constructs are, maybe I can leverage that partner. That's one aspect to go about it. Now maybe you migrated that workload to VMware Cloud on AWS. Maybe you want to leverage any of the native AWS services, or even, just off the top, 200 plus AWS services, right? But it comes down to that skillset, right? So again, with solutions architecture, at the end of the day, what it comes down to is being able to utilize the best of both worlds. That's what we're giving our customers at the end of the day. >> I mean, I just think it's a refactoring and innovation opportunity at all levels. I think now more than ever, you can take advantage of each other's ecosystems and partners and technologies and change how things get done, while keeping the consistency. I mean, Daniel, you nailed that, right?
I mean, you don't have to do anything. You still run it. Just keep the way you're working on it, and now do new things. This is kind of a cultural shift. >> Yeah, absolutely. And if you look, not every customer, not every organization has the resources to refactor and re-platform everything. And we give them a very simple and easy way to move workloads to the cloud. Simply run them, and at the same time, they can free up resources to develop new innovations and grow their business. >> Awesome. Samir, thank you for coming on. Daniel, thank you for joining from Germany. >> Thank you. >> Oktoberfest, I know it's evening over there, weekend's here. And thank you for spending the time. Samir, I'll give you the final word. AWS re:Invent's coming up. We're preparing, we're going to have an exclusive with Adam, with Furrier, we'll do a curtain raise, and do a little preview. What's coming down on your side with the relationship, and what can we expect to hear about what you got going on at re:Invent this year? The big show? >> Yeah, so I think Daniel hit upon some of the key points, but what I will say is we do have, for example, specific sessions, both that VMware's driving and then also that AWS is driving. We do have even what are called chalk talks. So I would say, and then even with workshops, right? So even with the customers, the attendees who are there, whatnot, if they're looking to sit and listen to a session, yes, that's there. But if they want to be hands-on, that is also there too. So personally for me, as an IT background, been in the sysadmin world and whatnot, being hands-on, that's one of the key things that I personally am looking forward to. But I think that's one of the key ways just to learn and get familiar with the technology. >> Yeah, and re:Invent's an amazing show for the in-person. You guys nail it every year. We'll have three sets this year at theCUBE, and it's becoming popular. We have more and more content. You guys got live streams going on, a lot of content, a lot of media. So thanks for sharing that. Samir, Daniel, thank you for coming on on this part of the Showcase episode of really the customer successes with VMware Cloud on AWS, really accelerating business transformation with AWS and VMware. I'm John Furrier with theCUBE, thanks for watching. (upbeat music)
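One way to make Samir's "native services" point concrete, as a hedged sketch rather than anything from the interview itself: an SDDC in VMware Cloud on AWS sits next to a connected VPC in the customer's AWS account, so a workload migrated there can call AWS services directly. The bucket, region, and paths below are invented placeholders.

```python
# A minimal sketch, assuming AWS credentials are configured and the VM's
# connected VPC allows access to S3. All names here are hypothetical.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# e.g. an app running on a migrated VM archiving a report straight to S3,
# one of the 200+ native services reachable over the connected VPC.
s3.upload_file("/var/reports/daily.csv", "example-reports-bucket",
               "vmc/daily.csv")
```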
SUMMARY :
John Furrier talks with Samir Kadoo of AWS and Daniel Rethmeier of VMware about accelerating business transformation with VMware Cloud on AWS: the jointly engineered solution and dedicated teams on both sides, keeping existing VMware tools, scripts and skills while adding native AWS services over the connected VPC, the operating model shift for admins, independent scaling of storage with FSx for NetApp ONTAP and VMware Cloud Flex Storage, the partner ecosystem bridging both worlds, and what to expect at AWS re:Invent.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon | ORGANIZATION | 0.99+ |
Daniel Rethmeier | PERSON | 0.99+ |
Daniel | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Samir | PERSON | 0.99+ |
Maryland | LOCATION | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Germany | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
100% | QUANTITY | 0.99+ |
Adam | PERSON | 0.99+ |
Samir Kadoo | PERSON | 0.99+ |
more than 200 different services | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
two solutions | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
CubeCon | EVENT | 0.99+ |
Parminder Khosa & Martin Schirmer | IFS Unleashed 2022
(upbeat music) >> Hey everyone, welcome back to theCUBE, live in Miami on the floor of IFS Unleashed. I'm your host, Lisa Martin. Had some great conversations, have more great conversations coming your way. I have two guests joining me. Please welcome Martin Schirmer, the President of Enterprise Service Management, IFS Assyst, and Parminder Khosa, the Senior IT Manager at Parexel. Guys, it's great to have you on the program. >> Lovely to be here. >> It's good to be here. >> Martin, talk to me a little bit... tell the audience a little bit about Assyst so that they get that context before we start asking questions. >> Yeah, absolutely. So IFS Assyst is a recent acquisition. It's an acquisition we made about a year ago. And fundamentally, it's a platform that takes care of IT service management, enterprise service management, and IT operations management. So think of it as managing sort of the ERP for IT, and then broadening that out into the enterprise, where you're driving enterprise use cases for all lines of business, like HR, finance, facilities, so on and so forth. >> Got it. And then Parminder, give the audience just a little bit of a flavor of Parexel, who you guys are, what you do. >> Sure. >> Maybe the impact that you make. >> Yeah, so Parexel is a clinical research organization. And what that means is that we manage drug trials for big pharmaceutical companies. So we're a big company. We're 25,000 people. We have offices in 150 locations, all the way from Japan in the east through to the West Coast of the USA. >> Big company. >> Yeah, we are. We are a lot of people. >> And let's start chatting now, Martin, with some of the questions that you have, so we get the understanding of how IFS and Parexel are working together. >> Yeah, absolutely. I suppose... I mean, the first thing is, and thank you for traveling here all the way from the UK. (Lisa chuckles) Appreciate it. And great energy and vibe. So the first question I had really was: you've been a customer of ours for the last 15 years plus. Maybe just give the audience a bit of context into your journey and how you've evolved from the sort of early years to where you're going into the future. >> Sure. So our history: I was part of a company that Parexel acquired that was already using Assyst. And as Parexel acquired us, they were in the process of also buying Assyst. So it became a kind of natural fit, where I carried on with Assyst. And we started relatively small, sort of just the service desk. And over the ensuing 15 years or so, we've just grown and expanded into Assyst being a critical tool for Parexel right now. >> Okay, that's fantastic. I mean, part of that journey, I know you started in sort of what they call the ticketing space, or IT service management space. Expand a little bit on how you've grown out of that and really moved into the enterprise. >> Sure. So yeah, when we first rolled Assyst out, it was, as I say, purely IT. And eventually we reached out to other business units to ask questions like: Are you managing your workload through email? Are you managing your workload through Excel spreadsheets? In which case, if you are, we've got a solution for you that will make it a much better experience for your customers, who are all internal. It'll make it much easier for you, because you will have official tracking going on through our system. It'll make it better for your management, because we can drive metrics from all of the data that we're getting.
So if you imagine finance, we're getting kind of 200 emails a day because of the size of our company. And they were just working through them one by one, responding, and it becomes just a mess. So we developed forms for them to say, "Okay, Larry, raise all your requests here. We will pick it up. We will manage it. We will communicate with you. And once the piece of work that you've asked for is done, we will let you know." And as we go through that process, we'll make it better for us because, as I say, we're getting those metrics. And we'll make it better for you because we can spot where our gaps are. If a request is taking three days, and of that three days, two days is waiting for someone on our end to respond to you, or waiting for a customer to respond to us, we can iron those out and make it a much better experience for everyone. >> That's fantastic. It's really music to my ears, because we're always pushing the industry to say move away from just the IT side and really get into the enterprise. And it sounds like you've really gotten a lot of sort of productivity and efficiency gains out of that. >> Definitely, definitely. And it becomes kind of a happy circle. So the finance guys will work with the procurement guys, and they'll say, "Look, we're doing all of our work through Assyst now." So procurement will turn around and say, "Well, we're using this big spreadsheet to manage all of ours. Can we do the same?" And they'll reach out to us and we'll say, "Of course we can. What is your process?" For example, they will say, okay, if someone asks for a new laptop, we need to get the approval from their line manager, from the supplier. We need to do our own internal work and then we will send it out. So imagine if you're doing that in an email chain. It just becomes chaos. >> Yeah. >> So we will build all of that out for them. And then procurement will talk to HR, and it just becomes a snowball. And before you know it, we are doing about 4,000 tickets per day in our Assyst system. And of those, 50%, perhaps maybe more than 50% now, will be non-IT related. >> Oh, that's fantastic. Really music to my ears. And it's really breaking down the boundaries or silos within an organization. It's really good. Let the teams work together. Right? >> Definitely. And that's one of the key things that we've learned: we have to engage completely with our business partners. And our business partners are becoming more and more IT literate as well. So for example, we had a recent big HR solution provided to us. And as part of that, we know there are going to be questions, and queries, and perhaps even issues to do with our HR system. So they have to work with us guys on the Assyst front end, the IT HR guys who look after the databases and all of the technology in the background. Then there'll be IT HR who are Workday experts. And then, kind of not necessarily at the bottom of the chain, will be the HR people themselves, who are in their own way experts in their area, experts in IT in a certain way. So all of those people have to work together. We become the front end, but we have to work with all of those parts of the business. >> That's really great. It's basically what you just said: taking business and IT processes and underpinning solutions. Effectively digital transformation, right? >> Exactly. Yeah. So HR is a great example. They used to have paper flying around with leave requests, with sickness requests, with all of those kinds of issues.
And we said, well, if you have an issue with your HR system, you can't raise a leave request, or you can't raise a sickness request, tell us. We will take care of it. We will fix it for you. We will give you the instructions. And we will get rid of all of that paper. >> That's brilliant. Just sort of turning the attention. And all of that, how do you drive the sort of, we'll talk about the autonomous enterprise. How do you drive automation in that process? >> Yeah. Of course, we have to map all of those processes out. Because we're not the experts in HR or procurement or whatever the business area may be. We have to really dig into their work methods, their working areas. What is necessary for them? What is a must-have? What is a like-to-have? What do we not really need? So we really drive into those processes. Once we've got those, we will automate them. We will build them out in Assyst with the process designer. It's very intuitive now. The latest version is really good to work with. We will do some pretty clever stuff in there. We'll say, okay, the manager approval. If the manager is not there, then escalate it to the next person. Then we go to HR and say, okay, HR have taken two days to do this. We're not particularly okay with that. So we will escalate it to the next person. And all of that process is completely automated, completely in Assyst. >> Brilliant. I mean, obviously, we have a codeless workflow engine with a designer. And if you look at one of the trends post-Covid, it's a war on talent, in particular developers. IDC says there's going to be a shortage of around 4 million developers. What is your view on, how easy... Do I need developers? Is it easy, is it difficult to do these workflow extensions and automations? >> Definitely not, no. So for the two key areas that you mentioned: with the customizer, to develop the forms and make them available to our end users, it's drag and drop. Really easy to do. You can put some nice filters in there. You can put some nice variables in there. You can intelligently drive the forms from there as well. So if option A is correct, then don't show me option B, show me option C. And all of that is codeless, entirely codeless. I don't need to type any code. And when we move on to the process designer, that hooks in nicely with the form customizer, because we can say, "Okay, if option B on that form is selected, then run this process." And all of that process is entirely codeless as well. Drag and drop. Create some tasks. Create some decisions. >> Fantastic. >> Brilliant.
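To make the rules Parminder describes concrete, here is a minimal sketch of that kind of escalation and conditional-form logic as plain code. This is purely illustrative: the role names, the two-day SLA, and the service-desk fallback are assumptions, and Assyst's process designer expresses all of this codelessly rather than in Python.

```python
from datetime import timedelta

# Hypothetical approval chain and SLA, illustrating the escalation rules above:
# skip an absent approver, and escalate any step waiting longer than two days.
APPROVAL_CHAIN = ["line_manager", "supplier", "hr_team"]
STEP_SLA = timedelta(days=2)

def route_request(waiting, directory):
    """Return the first approver who is present and within SLA."""
    for role in APPROVAL_CHAIN:
        approver = directory.get(role)
        if approver is None or approver.get("absent"):
            continue  # approver not there: escalate to the next person
        if waiting.get(role, timedelta(0)) > STEP_SLA:
            continue  # step has breached its SLA: escalate past it
        return approver
    return {"name": "service_desk"}  # nobody available: fall back to the desk

# Conditional form logic: if option A is selected, hide option B, show option C.
def visible_fields(form):
    return {"option_a", "option_c"} if form.get("option_a") else {"option_a", "option_b"}
```

A call like `route_request({"hr_team": timedelta(days=3)}, {"line_manager": {"absent": True}, "hr_team": {"name": "hr"}})` would skip the absent manager and the overdue HR step, which is exactly the behavior described above.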
>> Sounds really good. Switching gears a little bit. You spoke about experience, and that's also obviously very topical post-Covid, with us becoming a remote workforce. Clearly, we need to be digitally connected to our business and organization, because the hybrid workforce, as we all know, is here to stay. And that employee experience is fundamental, because it is their sort of channel to the engagement of the organization. Of course, that has retention and productivity impacts. So just from your perspective, how was Covid, and how easy or difficult was it to get your employees engaged and productive and working? >> Yeah. And for us, Covid was a double-edged sword. Because of the nature of our business, we do Covid stuff. We do drug stuff. So we may have issues with some trials that are related to that. So we need to escalate those. We need to be aware of them and move them to the top of the chain as soon as possible. And then Assyst becomes a source of truth. Everybody knows that if I've got an issue with the current environment that we're living in, I can raise it in Assyst. And everybody knows that's where that information is. There's no need to have huge conference calls or huge email chains to try and follow those around. So with our Assyst platform, with our employees as well, everybody knew that this is where the source of truth was. We didn't have any dropouts. We didn't have any concerns with our system or performance. We knew it was there. We had to do some work, as I say, around Covid issues, just to make sure they got pushed up to the top of the chain. But otherwise, we were fine. And great credit to our IT operations team as well, who managed that pretty much seamlessly. >> That's brilliant. That's good news. >> Yeah. >> It really is. Just taking it a little bit further and talking a little bit about what's next. My team has been, I know, talking to your team about the whole area of asset management. Maybe talk to us a little bit about that journey. >> Sure, sure. So we're an ITOM customer as well. So all of our hardware data is stored within the ITOM platform. So we've pushed out the agents to all of our end user machines, so 25,000 agents. And we're in the process of integrating that into our Assyst platform to make that the single source of truth. And as part of that, we're working on the software asset management side as well. So we've got a really good idea of where our software assets are. When it comes to license auditing, we know exactly how much we've got there. And the more complex side of it is, of course, servers, so server software asset management as well. So we're in the process of getting all of that data as well. And once we've done all that, there are other steps after that. The next step will be to perhaps do monitoring, or pushing out software using the ITOM platform, and getting rid of some of the disparate systems that we have right now. >> Well, that's good news. And I think I saw a study. I think every single person as an employee carries around 15 or 20 assets at any one time. Be it a PC, phone, physical software licenses, so on and so forth. In that context, I can imagine the business case around it. >> Definitely. Yeah. And again, we map every user to their assets and (indistinct) their assets. And again, Assyst is the source of truth for that. So if you want to look at my record: all right, Pam's got a laptop. He's got a mobile phone. We're thinking about giving him a tablet, and we'll find out that he's in the process of getting a tablet as well. So I can have a look at my user record and know exactly what I've got, with all of the asset tags and the various links that it has to the software pieces, so it becomes a big tree of my assets.
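The "big tree of my assets" Parminder describes is easy to picture as nested records. A rough, hypothetical sketch of that shape; the tags and license IDs are invented for illustration, not taken from Parexel's actual CMDB:

```python
# Hypothetical user-to-asset mapping: each person links to tagged assets,
# and each asset links on to its software licenses, forming a small tree.
assets = {
    "pam": [
        {"tag": "LT-0042", "type": "laptop",
         "software": ["LIC-OFFICE-9911", "LIC-VPN-3307"]},
        {"tag": "PH-0817", "type": "mobile_phone", "software": []},
    ],
}

def all_licenses(user: str) -> list[str]:
    """Flatten a user's asset tree into the license IDs they hold."""
    return [lic for asset in assets.get(user, []) for lic in asset["software"]]

print(all_licenses("pam"))  # ['LIC-OFFICE-9911', 'LIC-VPN-3307']
```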
>> That's wonderful. Just the question I had was, we spoke about breaking down silos and the enterprise use cases and the effect that has. Do you envisage that Assyst can really get to being enterprisewide? When I say enterprisewide: everybody in the organization effectively using this tool as their sort of source of experience and level of automation of process? >> Definitely, definitely. As I say, we're getting... We're really pushing to get to that. As I say, 4,000 tickets a day with a user base of 25,000 kind of means that everybody will interact with the system perhaps every two weeks or so. So we're getting to that point, and with the new functionality that's coming out with the Assyst product, with the Teams integration, and the bot, and everything that will bring to us, because we are a big company. We use Teams. We use bots. We use that kind of technology. It will just fit in seamlessly. And trying to break down the silos, as I say: finance, procurement, all of the big beasts within our company already are using the Assyst tool. And we want to bring in more and more of those processes as we mature. >> Brilliant. I think omnichannel's critical. We want to connect from any device, from anywhere. It's just the way we work. So I think that's critical. Teams is of course a tool that most of us have become too familiar with. >> Yup. (chuckles) >> To be fair. (chuckles) It's better to be here in person finally, right? >> Yeah. >> So I think that's all exciting news. And it's really fantastic. >> Great. >> So I suppose, maybe in the time that we have left, what's next? >> What's next for us is that we're in the process of migrating our solution to the cloud, to the IFS cloud. That will open up a huge new user base for us. If we think about it, all of our customers, all of our people who work on studies, will have the ability to connect to Assyst and ask questions. A lot of it is just asking a question, or raising an issue, or asking for something. So we're talking, it could be expanded by hundreds of thousands of new users, and that will mean more people on the backend to manage those requests as well. So yeah. It's just going to get bigger and bigger. And as you say, with the CMDB work that we're doing as well, that's another big ongoing stream for us. >> It's great because, as you know, with Assyst we have a disruptive licensing model. >> Yeah. >> We have t-shirt-size pricing. All you can eat, based on the number of employees. So there's no barrier to entry for you. >> There really isn't. And that really helps us, because as I said initially, particularly when finance came on board and now they're expanding, there is no cost implication for it. The more that we use it, the better it is for us. The more bang for the buck we get. >> Yep. That's our mantra. Enterprise users, right? For the price of a cup of coffee, for the price of a user. That's our mantra. >> I love it. You guys have done such a great job of articulating the synergies in the relationship that IFS Assyst has with Parexel. You talked about the great outcomes that you're achieving. And it's all about, Martin, I know, from the IFS Assyst perspective, it's all about helping customers achieve those outcomes and those moments of service that are so critical to your customers on the other end staying with you, doing more business. Whether it's the end user customer, whether it's the actual employee. You talked a lot about the customer experience, the employee experience, and what you guys are doing together to enable that. And I always think that the employee experience and the customer experience are like this. They're inextricably linked. You can't separate them, you shouldn't. Otherwise you're going to have problems. >> Yeah, no, absolutely. And there's actually a study on that saying that 70% of customers generally don't feel they get what they want from organizations. >> 70. Wow! >> And if you take that one step further, to what you said, the interconnectivity between customer and employee: the employee shops on Amazon, right? They're on those websites. So you can't be rolling out and digitally connecting to the employee with something that is clunky and has the wrong experience.
Like I said, it really affects that level of engagement the employee has with the company, which these days happens to be largely remote. >> It does. Last question, Martin, is for you. Talk to us about what's next for IFS Assyst. Obviously, we're back in person. There's a lot of momentum about the company. I was talking with Darren; the growth in the first half was great. He kind of gave us some teaser about the second half, but what's next from your perspective? >> Yeah. So what's next for us is achieving our goal. We are here to disrupt the industry. It's an industry that's dominated by one player and a fair amount of legacy players. We've disrupted the business model, as I've told you. We're here to do more, because it's a simple thing. And that's the word: simple. We want to keep things simple. We're going to keep engineering and driving our product forward, right? We've made sure that our platform is up there with the best. Yeah. We've just been certified by Pink. Pink is a verification body for ITIL 4, as they call it. And the top level is you can get 20 out of 20. We got 17 out of 20. There's only one other vendor that has more than us, and it's only by a little. And after that it's a big white space; the next one is 14. So we're on the right track. We are of course going to drive and capture the market. So watch this space. We're here to grow. >> We will watch this space. Congratulations on being that disrupter. >> Thank you. >> Parminder, great work with what you guys are doing. You did a great job of articulating, as I said, the customer story here. We appreciate your insights, your time. >> Thank you very much. >> Pleasure. >> All right, my pleasure. >> Thank you. For my guests, I'm Lisa Martin. You're watching theCUBE live from Miami on the show floor of IFS Unleashed. We'll be back after a short break.
Ashish Palekar & Cami Tavares, AWS | AWS Storage Day 2022
(upbeat music) >> Okay, we're back covering AWS Storage Day 2022 with Ashish Palekar, who's the general manager of AWS EBS Snapshot and Edge, and Cami Tavares, who's the head of product at Amazon EBS. Thanks for coming back on theCUBE, guys. Great to see you again. >> Great to see you as well, Dave. >> Great to see you, Dave. >> Ashish, we've been hearing a lot today about companies moving all kinds of applications to the cloud and AWS and using their data in new ways. Resiliency is always top of mind for companies when they think about their workloads generally, and specifically in the cloud. How should customers think about data resiliency? >> Yeah, when we think about data resiliency, it's all about making sure that your application data, the data that your application needs, is available when it needs it. It's really the ability for your workload to mitigate disruptions or recover from them. And to build that resilient architecture, you really need to understand what kinds of disruptions your applications can experience, how broad the impact of those disruptions is, and then how quickly you need to recover. And a lot of this is a function of what the application does, how critical it is. And the thing that we constantly tell customers is, this works differently in the cloud than it does in a traditional on-premises environment. >> What's different about the cloud versus on-prem? Can you explain how it's different? >> Yeah, let me start with the on-premises one. In the on-premises one, building resilient architectures is really the customer's responsibility, and it's very challenging. You'll start thinking about what your single points of failure are. To avoid those, you have to build in redundancy; you might build in replication, as an example, for storage. And doing this now means you have to provision more hardware. And depending on what your availability requirements are, you may even have to start looking for multiple data centers, some in the same regions, some in different geographical locations. And you have to ensure that you're fully automated, so that your recovery processes can take place. And as you can see, that's a lot of onus being placed on the customer. One other thing that we hear about is really elasticity, and how elasticity plays into the resiliency for applications. As an example, if you experience a sudden spike in workloads in an on-premises environment, that can lead to resource saturation. And so really you have two choices. One is to sort of throttle the workload and experience resiliency challenges, or your second option becomes buying additional hardware, securing more capacity, and keeping it fallow in case you experience such a spike. And so your two propositions are either experiencing resiliency challenges, or really paying to have infrastructure that's lying around. And both of those are different really when you start thinking about the cloud. >> Yeah, there's a third option too, which is lose data, which is not an option. Go ahead- >> Which is not, yeah. Pretty much, as a storage person, that is not an option, and not a risk we think is reasonable for customers to take. The big contrast in the cloud really comes with how we think about capacity. And fundamentally, the cloud gives you that access to capacity, so you are not managing that capacity. The infrastructure complexity and the cost associated with that are also just a function of how infrastructure is built really in the cloud.
But all of that really starts with the bedrock of how we design for avoiding single points of failure. The best way to explain this is really to start thinking about our availability zones. Typically these availability zones consist of multiple data centers, located in the same regional area to enable high throughput and low latency for applications. But the availability zones themselves are physically independent. They have independent connections to utility power, standalone backup power resources, independent mechanical services and independent network connectivity. We take availability zone independence extremely seriously, so that when customers are building the availability of their workload, they can architect using these multiple zones. And that is something that, when I'm talking to customers or Cami is talking to customers, we highly encourage customers to keep in mind as they're building resiliency for their applications. >> Right, so within an availability zone you can have, you know, instantaneous replication when you're doing it right. You've captured that data, and you can asynchronously move it outside of that in case there's, the very low probability, but it does happen, some disaster. You're minimizing that RPO. And I don't have to worry about that as a customer, or about figuring out how to do three-site data centers. >> That's right. Take that even further: now imagine you're expanding globally. All those things that we described, like creating new footprint and creating a new region and finding new data centers, as a customer in an on-premises environment, you take that on yourself. Whereas with AWS, because of our global presence, you can expand to a region and bring those same operational characteristics to those environments. And so again, bringing resiliency as you're thinking about expanding your workload, that's another benefit that you get from using the availability zone and region architecture that AWS has. >> And as Charles Phillips, former CEO of Infor, said, "Friends don't let friends build data centers," so I don't have to worry about building the data center. Let's bring Cami into the discussion here. Cami, think about Elastic Block Store: it gives, you know, customers persistent block storage for EC2 instances. So it's foundational for any mission critical or business critical application that you're building on AWS. How do you think about data resiliency in EBS specifically? I always ask the question, what happens if something goes wrong? So how should we think about data resiliency in EBS specifically? >> Yeah, you're right, Dave, block storage is a really foundational piece. When we talk to customers about building in the cloud or moving an application to the cloud, data resiliency is something that comes up all the time. And with EBS, you know, EBS is a very large distributed system with many components. And we put a lot of thought and effort into building resiliency into EBS. So we design those components to operate and fail independently. So when customers create an EBS volume, for example, we'll automatically choose the best storage nodes to address the failure domain and the data protection strategy for each of our different volume types. And part of our resiliency strategy also includes separating what we call the volume lifecycle control plane, which handles things like creating a volume, or attaching a volume to an EC2 instance.
So we separate that control plane from the storage data plane, which includes all the components that are responsible for serving IO to your instance and then persisting it to durable media. So what that means is, once a volume is created and attached to the instance, the operations on that volume are independent from the control plane functions. So even in the case of an infrastructure event, like a power issue, for example, you can recreate an EBS volume from a snapshot. And speaking of snapshots, that's the other core pillar of resiliency in EBS. Snapshots are point-in-time copies of EBS volumes that we store in S3. And snapshots are actually a regional service. And that means internally we use multiple of the availability zones that Ashish was talking about to replicate your data, so that the snapshots can withstand the failure of an availability zone. And so thanks to that availability zone independence, and then this built-in component independence, customers can use that snapshot and recreate an EBS volume in another AZ, or even in another region if they need to. >> Great, so, okay, so you touched on some of the things EBS does to build resiliency into the service. Now thinking about, over your right shoulders, you know, joie de vivre, what can organizations do to build more resilience into their applications on EBS, so they can enjoy life without anxiety? >> (laughs) That is a great question. Also something that we love to talk to customers about. And the core thing to think about here is that we don't believe in a one-size-fits-all approach. And so what we are doing in EBS is we give customers different tools, so that they can design a resiliency strategy that is custom tailored for their data. And so to do this resiliency assessment, you have to think about the context of this specific workload and ask questions like: what other critical services depend on this data, what will break if this data's not available, and how long can those systems withstand that, for example. And so the most important step, I'll mention it again: snapshots. That is a very important step in a recovery plan. Make sure you have a backup of your data. And so we actually recommend that customers take the snapshots at least daily. And we have features that make that easier for you. For example, Data Lifecycle Manager, which is a feature that is entirely free. It allows you to create backup policies, and then you can automate the process of creating the snapshot, so it's very low effort. And then when you want to use that backup to recreate a volume, we have a feature called Fast Snapshot Restore that can expedite the creation of the volume. So if you have a, you know, a shorter recovery time objective, you can use that feature to expedite the recovery process. So that's backup. And then the other pillar we talk to customers about is data replication. Just another very important step when you're thinking about your resiliency and your recovery plans. So with EBS, you can use replication tools that work at the level of the operating system. So that's something like DRBD, for example. Or you can use AWS Elastic Disaster Recovery, and that will replicate your data across availability zones or nearby regions too. So we talked about backup and replication, and then the last topic that we recommend customers think about is having a workload monitoring solution in place. And you can do that in EBS using CloudWatch metrics. So you can monitor the health of your EBS volume using those metrics. We have a lot of tips in our documentation on how to measure that performance. And then you can use those performance metrics as triggers for automated recovery workflows that you can build using tools like Auto Scaling groups, for example.
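As a concrete companion to Cami's daily-snapshot guidance, here is a sketch of creating a Data Lifecycle Manager policy with boto3. The role ARN, tag, and schedule values are placeholders to adapt, and the exact parameters should be checked against current AWS documentation.

```python
import boto3

dlm = boto3.client("dlm")

# Daily snapshots of every volume tagged Backup=daily, keeping the last seven.
# The role ARN and tag values below are placeholders.
policy = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "daily"}],
        "Schedules": [{
            "Name": "daily-03h",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
            "CopyTags": True,
        }],
    },
)
print(policy["PolicyId"])
```

Fast Snapshot Restore, mentioned above, is enabled separately per snapshot and availability zone (the EC2 API exposes this as EnableFastSnapshotRestores) and is billed in addition to snapshot storage.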
>> Great, thank you for that advice. Just a quick follow-up. So you mentioned your recommendation, at least daily. What kind of granularity? If I want to compress my RPO, can I go at a more granular level? >> Yes, you can go more granular, and you can use, again, the Data Lifecycle Manager to define those policies. >> Great, thank you. Before we go, I want to just quickly cover what's new with EBS. Ashish, maybe you could talk about, I understand you've got something new today. You've got an announcement, take us through that. >> Yeah, thanks for checking in, and I'm so glad you asked. We talked about how snapshots help resilience and are a critical part of building resilient architectures. So customers like the simplicity of backing up their EC2 instances using multi-volume snapshots. And what they're looking for is the ability to exclude specific volumes from the backup, especially those that don't need backup. So think of applications that have cache data, or applications that have temporary data that really doesn't need backup. So today we are adding a new parameter to the create snapshots API, which creates a crash-consistent set of snapshots for volumes attached to an EC2 instance, where customers can now exclude specific volumes from an instance backup. So customers using Data Lifecycle Manager, that Cami touched on, can automate their backups, and again, they also get to exclude these specific volumes. So really the feature is not just about convenience, but it's also to help customers save on cost, as many of these customers are managing tens of thousands of snapshots. And so we want to make sure they can take it at the granularity that they need. So super happy to bring that into the hands of customers as well. >> Yeah, that's a nice option. Okay, Ashish, Cami, thank you so much for coming back on theCUBE, helping us learn about what's new and what's cool in EBS. Appreciate your time. >> Thank you for having us, Dave. >> Thank you for having us, Dave. >> You're very welcome. Now, if you want to learn more about EBS resilience, stay right here, because coming up we've got a session which is a deep dive on protecting mission critical workloads with Amazon EBS. Stay right there, you're watching theCUBE's coverage of AWS Storage Day 2022. (calm music)
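For reference, the exclude-volumes capability Ashish announces above surfaced in the EC2 CreateSnapshots API. A hedged boto3 sketch follows; the instance and volume IDs are placeholders, and the parameter names should be verified against the current EC2 API reference.

```python
import boto3

ec2 = boto3.client("ec2")

# Crash-consistent multi-volume snapshot of an instance, skipping the boot
# volume and one data volume that only holds cache data. IDs are placeholders.
response = ec2.create_snapshots(
    InstanceSpecification={
        "InstanceId": "i-0123456789abcdef0",
        "ExcludeBootVolume": True,
        "ExcludeDataVolumeIds": ["vol-0a1b2c3d4e5f67890"],
    },
    Description="Instance backup without the cache volume",
    CopyTagsFromSource="volume",
)
for snap in response["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"])
```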
Kam Amir, Cribl | HPE Discover 2022
>> TheCUBE presents HPE Discover 2022, brought to you by HPE. >> Welcome back to theCUBE's coverage of HPE Discover 2022. We're here at the Venetian convention center in Las Vegas, Dave Vellante for John Furrier. Kam Amir is here, the director of technical alliances at Cribl. Kam, good to see you. >> Good to see you too. >> Cribl. Cool name. Tell us about it. >> So let's see. Cribl has been around now for about five years, selling products for the last two years. Fantastic company, lots of growth. I started there in 2020, and we're roughly 400 employees now. >> And what do you do? Tell us more. >> Yeah, sure. So I run the technical alliances team, and what we do is we basically look to build integrations into platforms such as HPE GreenLake and Ezmeral. And we also work with a lot of other companies to help get data from various sources into their destinations, or, you know, other enrichments of data in that data pipeline. >> You know, you guys have been on theCUBE. Clint's been on many times, Ed Bailey was on our startup showcase. You guys are successful in this overfunded observability space. So you guys have a unique approach. Tell us about why you guys are successful in the product and some of the things you've been doing there. >> Yeah, absolutely. So our product is very complementary to a lot of the technologies that already exist. And I used to joke around that everyone has these, like, pretty dashboards and reports, but they completely glaze over the fact that it's not easy to get the data from those sources to their destinations. So for us, it's this capability with Cribl Stream to get that data easily and repeatably into these destinations. >> Yeah. You know, Kam, you and I were both at the Snowflake Summit; to John's point, there were like a dozen observability companies there. >> Oh yeah. >> And it's really beginning to be a crowded space. So explain what value you bring to that ecosystem. >> Yeah, sure. So in the ecosystem that we see there, there are a lot of people that are kind of sticking to, like, effectively getting data and showing you dashboards and reports about monitoring and things of that sort. For us, the value is how we can help customers kind of accelerate their adoption of these platforms: how to go from, like, your legacy SIM or your legacy monitoring solution to, like, the next-gen observability platform or next-gen security platform. >> And what you do really well is the integration, and bringing those other toolings in to do that? >> Correct, correct. And we make it repeatable. >> How'd you end up here? >> HP? So we actually had a customer that deployed our software on the HPE platform. And it was kind of a light bulb moment that, okay, this is actually a different approach than going to your traditional, you know, AWS, Google, et cetera. So we decided to kind of hunt this down and figure out how we could be a bigger player in this space. >> You saw the data fabric announcement? I'm not crazy about the term; data fabric is an old NetApp term, and then Gartner kind of twisted it. I like data mesh, but anyway, it doesn't matter. We kind of know what it is. But when you see an announcement like that, how do you look at it? You know, what does it mean to Cribl and your customers? >> Yeah,
So what we've seen is that, so we work with the data fabric team, and we're able to kind of route our data to theirs as a data lake. So we can actually route the data from all these various sources into this data lake, and then have it available for whatever customers want to do with it. So one of the big things that I know Clint talks about is we give customers this, we sell choice. So we give them the ability to choose where they want to send their data, whether that's, you know, HP's data lake and data fabric, or some other object store, or some other destination. They have that choice to do so. >> So you're saying that you can stream to any destination the customer wants? What are some examples? What are the popular destinations? >> Yeah, so a lot of the popular destinations are your typical object stores. So any of your cloud object stores, whether it be AWS S3, Google Cloud Storage, or Azure Blob Storage. >> Okay. And so, and you can pull data from any source? >> (laughs) I'd be very careful, but absolutely. What we've seen is that a lot of people like to kind of look at traditional data sources like Syslog, and they want to get it to, say, a next-gen SIM, but to do so it needs to be converted to, like, a webhook or some sort of API call. And so, or vice versa, they have this brand new Zscaler, for example, and they want to get that data into their SIM, but there's no way to do it, 'cause the SIM only accepts it as a Syslog event. So what we can do is we actually transform the data and make it so that it lands into that SIM in the format that it needs to be, and easily make that a repeatable process. >> So, okay. So wait, so not as a Syslog event, but in whatever format the destination requires? >> Correct, correct. >> Okay. What are the limits on that? I mean, is this- >> Yeah. So what we've seen is that customers, for example, will take this Syslog event, it's unstructured data, but they need to put it into, say, the common information model for Splunk, or Elastic Common Schema for Elasticsearch, or just JSON format for Elastic. And so what we can do is we can actually convert those events so that they land in that transformed state, but we can also route a copy of that event, in unharmed fashion, to, like, an S3 bucket or object store for that long-term compliance use. >> You can route it to any, basically any object store. Is that right? Is that always the sort of target? >> Correct, correct. >> So on the message here at HPE, first of all, I'll get to the marketplace point in a second, but cloud to edge is kind of their theme. So data streaming sounds expensive. I mean, you know, so how do you guys deal with the streaming egress issue? What does that mean to customers? You guys claim that you can save money on that piece. It's a hotly contested discussion point. >> (laughs) So one of the things that we actually just announced in our 3.5.0 release yesterday is the capability of getting data from Windows events, or from Windows hosts, I'm sorry. So a product that we also have is called Cribl Edge. So it's our capability of being able to collect data from the edge and then transmit it out to, whether it be an on-prem or self-hosted deployment of Cribl, or maybe some sort of other destination object store. What we do is we actually take the data in transit and reduce the volume of events. So we can do things like remove white space, or remove events that are not really needed, and compress or optimize that data so that the egress costs, to your point, are actually lowered.
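Conceptually, the shaping Kam describes, turning an unstructured Syslog line into a destination-friendly JSON record, trimming what isn't needed, and keeping an untouched copy for compliance, looks something like the following. Cribl Stream expresses these steps with its own pipeline functions, so this standalone Python is only an analogy; the regex and field names are invented for illustration.

```python
import json
import re

# A simplified syslog-style pattern; real-world parsing is more involved.
SYSLOG = re.compile(r"<(?P<pri>\d+)>(?P<ts>\S+ +\S+ \S+) (?P<host>\S+) (?P<msg>.*)")

def transform(raw: str) -> tuple[str, str]:
    """Return (structured JSON for the SIM, untouched copy for the archive)."""
    match = SYSLOG.match(raw)
    fields = match.groupdict() if match else {"msg": raw}
    # Reduction step: strip whitespace and drop empty fields before egress.
    slim = {k: v.strip() for k, v in fields.items() if v and v.strip()}
    return json.dumps(slim), raw

structured, archived = transform("<13>Oct 11 22:14:15 host1 app: user login ok")
print(structured)  # routed to the SIM in the format it requires
# `archived` would be routed unharmed to an S3-style bucket for compliance.
```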
>> And your data reduction approach is, is compression? It's a compression algorithm? >> So it is a combination, yeah, so it's a combination. So for some people, what they'll do is they'll aggregate the events. So sometimes, for example, VPC flow logs are very chatty and you don't need to have all those events. So instead you convert those to metrics. So suddenly you've reduced those events from, you know, high-volume events to metrics that are so small, and you still get the same value, 'cause you still see the trends and everything. And if later on down the road you need to reinvestigate those events, you can rehydrate that data with Cribl Replay. >> And you'll do the streaming in real time, is that right? >> Yeah. >> So Kafka, is that what you would use? Or other tooling? >> (laughs) So we are complementary to a Kafka deployment. If a customer's already deployed and they've invested in Kafka, we can read off of Kafka and feed back into Kafka. >> If not, you can use your tooling? >> If not, we can be replacing that. >> Okay, talk about your observations in the multi-cloud hybrid world, because hybrid, obviously, everyone knows is a steady state now. Public cloud, on-premises, edge, all one thing, cloud operations, DevOps, data as code, all the things we talk about. What's the customer view? You guys have a unique position. What's going on in the customer base? How are they looking at hybrid, and specifically multi-cloud? Is it stitching together multiple hybrids? Or how do you guys work across those landscapes? >> So what we've seen is a lot of customers are in multiple clouds. That's, you know, that's going to happen. But what we've seen is that if they want to egress data from, say, one cloud to another, the way that we've architected our solution is that we have these worker nodes that reside within these other clouds, I should say, so that when transmitting data, first, egress costs are lowered, but you also have this kind of easy way to collect the data and stitch it back together, join it back together, into a single place or single location. That's one option that we offer customers. Another solution that we've kind of announced recently is Search. So, not having to move the data from all these disparate data sources and data lakes, and actually just searching the data in place. That's another capability that we think is kind of popular in this hybrid approach. >> And talk about now your relationship with HPE. You guys obviously had customers that drove you to GreenLake. Obviously, what's your experience with them? And also talk about the marketplace presence. Is that new? How long has that been going on? Have you seen any results? >> Yeah, so we've actually just started our journey into this HPE world. So the first thing was obviously the customers bringing us into this ecosystem, and now our capabilities of, I guess, getting ready to be on the marketplace. So having a presence on the marketplace has been huge, giving us kind of access to just people that don't even know who we are, being that we're, you know, a five-year-old company. So it's really good to have that exposure. >> So you're going to get customers out of this? >> That's the idea. (laughs) >> Bringing in new markets, that's the idea of their GreenLake, is that partners fill in. What's your impression so far of GreenLake? Because there seems to be great momentum around HP and opening up their channel, their sales force, their customer base. >> Yeah.
So it's been very beneficial for us, again, being a smaller company. And we are a channel-first company, so that obviously helps, you know, spread the word with other channel partners. But HP has been very, you know, open-armed, kind of getting us into the ecosystem and obviously giving the good word about Cribl to their customers. >> So you'll be monetizing on GreenLake, right? That's the goal. >> That's the goal. >> What do you have to do to get into a position? Obviously, you've got a relationship, you're in the marketplace. Do you have to, you know, write to their APIs? Or do you just have to, is that a checkbox? Describe what you have to do to monetize. >> Sure. So we have to first get validated on the platform. So the validation process validates that we can work on the Ezmeral GreenLake platform. Once that's been completed, then the idea is to have our logo show up on the marketplace. So customers say, "Hey, look, I need to have a way to transit data or do stuff with data, specifically around logs, metrics, and traces, into my logging solution or my SIM." And then what we do with them on the backend is we'll see this transaction occur through their API, to basically say who this customer is. 'Cause again, the idea is to have almost a zero-touch kind of involvement, but we will actually have that information given to us. And then we can actually monetize on top of it. >> And the visualization component will come from the observability vendor. Is that right? Or is that somewhat, do you guys do some of that? >> So for the visualization, right now we're basically just the glue that gets the data to the visualization engine. As we kind of grow and progress our Search product, that's where we'll probably have more of a visualization component. >> Do you think your customers are going to predominantly use an observability platform for that visualization? I mean, obviously you're going to get there. Are they going to use Grafana? Or some other tool? >> Yeah, I think a lot of customers, obviously depending on what data they have and what they're trying to accomplish, will have that choice now to choose, you know, Grafana for their metrics, logs, et cetera, or some sort of security product for their security events. But same data, two different kinds of use cases. And we can help enable that. >> Kam, I want to ask you a question. You mentioned you were at Splunk, and Clint, the CEO and co-founder, was at Splunk too. That brings up the question I want to get your perspective on. We're seeing a modern network here with HPE, with Aruba. Obviously clouds kind of going next level, you've got on-premises, edge, all one thing, distributed computing basically, cybersecurity, a data problem that's solved a lot by you guys and people in this business, making sure data is available, machine learning growing and powering AI like you read about. What's changed in this business? Because, you know, Splunking logs is kind of old hat, you know, and now you've got observability. Unification is a big topic. What's changed now? What's different about the market today around data and these platforms and tools? What's your perspective on that? >> I think one of the biggest things is people have seen the amount of volume of data that's coming in. When I was at Splunk, when we hit like a one-terabyte deal, that was a big deal. Now it's kind of standard: you're going to do a terabyte of data per day.
So one of the big things I've seen is just the explosion of data growth, but getting value out of that data is very difficult. And that's kind of why we exist, because getting all that volume of data is one thing, but being able to actually extract value from it, that's- >> And that's the streaming core product? That's the whole? >> Correct. >> Get data to where it needs to be for whatever application needs it, whether it's cyber or something else. >> Correct, correct. >> What's the customer uptake? What's the customer base like for you guys now? How many customers do you guys have? What are they doing with the data? What are some of the common things you're seeing? >> Yeah, I mean, it's the basic blocking and tackling. We've significantly grown our customer base, and they all have the same problem. They come to us and say, look, I just need to get data from here to there. And literally the routing use case is our biggest use case, because it's simple. And instead of having someone that's an expensive engineer, an operations engineer, going and doing the plumbing of data, of just getting logs from one source to another, we come in and actually make that a repeatable process and make that easy. And so that's kind of just our very basic value-add right from the get-go. >> You can automate that, make it repeatable. So what's in the name? Where'd the name come from? >> So Cribl, if you look it up, it's actually kind of an old sieve used to sift dirt from gold, right? So basically, that's kind of what we do. We filter out all the dirt and leave you the gold bits, so you can get value. >> It's kind of what we do on theCUBE. >> It's kind of the gold nuggets. Get all these highlights, hitting Twitter, the gold nuggets. Great to have you on. >> Kam, thanks for coming on, explaining that. Sort of, you guys are filling that gap between, hey, all the observability claims, which are all wonderful, but then you've got to get there. They've got to have a route to get there. That's what you've got to do. Cribl rhymes with tribble. Dave Vellante for John Furrier, covering HPE Discover 2022. You're watching theCUBE. We'll be right back.
Brian Schwarz, Google Cloud | VeeamON 2022
(soft intro music) >> Welcome back to theCUBE's coverage of VeeamON 2022. Dave Vellante with David Nicholson. Brian Schwarz is here. We're going to stay on cloud. He's the director of product management at Google Cloud. The world's biggest cloud, I contend. Brian, thanks for coming on theCUBE. >> Thanks for having me. Super excited to be here. >> Long-time infrastructure-as-a-service background, worked at Pure, worked at Cisco, Silicon Valley guy, techie. So we're going to get into it here. >> I love it. >> I was saying before, off camera, we used to go to Google Cloud Next every year. It was an awesome show. Guys built a big set for us. You joined right as the pandemic hit. So we've been out of touch a little bit. It's hard to... You know, you got one eye on the virtual event, but give us the update on Google Cloud. What's happening generally and specifically within storage? >> Yeah. So obviously the cloud got a big boost during the pandemic because a lot of work went online. You know, more things kind of being digitally transformed as people keep trying to innovate. So obviously the growth of Google Cloud has got a big tailwind to it. So business has been really good, lots of R&D investment. We obviously have an incredible set of technology already, but still huge investments in new technologies that we've been bringing out over the past couple of years. It's great to get back out to events to talk to people about 'em. Been a little hard the last couple of years to give people some of the insights. When I think about storage, huge investments. One of the things that some people know, but I think is probably underappreciated, is we use the same infrastructure for Google Cloud that is used for Google consumer products. So Search and Photos and all the public kind of things that most people are familiar with, Maps, et cetera. The same infrastructure at the same time is also used for Google Cloud. So we just have this tremendous capability of infrastructure. Google's got nine products that have a billion users, most of which many people know. So we're pretty good at storage, pretty good at compute, pretty good at networking. Obviously a lot of that kind of shines through on Google Cloud for enterprises to bring their applications, lift and shift and/or modernize, build new stuff in the cloud with containers and things like that. >> Yeah, hence my contention that Google has the biggest cloud in the world, like I said before. Doesn't have the most IaaS revenue, 'cause that's a different business. You can't comment, but I've got Google Cloud running at a $12 billion a year run rate. So a lot of times people go, "Oh yeah, Google, they're in third place, going for the bronze." But that is a huge business. There aren't a lot of 10, $12 billion infrastructure companies. >> In a rapidly growing market. >> And if you do some back-of-napkin math, whatever, give me 10, 15, let's call it 15% of that, to storage. You've got a big storage business. I know you can't tell us how big, but it's big. And if you add in all the stuff that's not in GCP, you do a lot of storage. So you know storage, you understand the technology. So what is the state of technology? You have a background in Cisco, clearly a networking company; they used to do some storage stuff sort of on the side. We used to say they were going to buy NetApp; of course that never happened. That would've made no sense. Pure Storage obviously knows storage, but they were a disk array company essentially. Cloud storage, what's different about it?
What's different in the technology? How does Google think about it? >> You know, I always like to tell people there's some things that are the same and familiar to you, and there's some things that are different. If I start with some of the differences, object storage in the Cloud, like just fundamentally different. Object storage on-prem, it's been around for a while, often used as kind of like a third tier of storage, maybe a backup target, compliance, something like that. In the cloud, object storage is Tier one storage. Public reference for us, Spotify, okay, use object storage for all the songs out there. And increasingly we see a lot of growth in-- >> Well, how are you defining Tier one storage in that regard? Again, are you thinking streaming service? Okay. Fine. Transactional? >> Spotify goes down and I'm pissed. >> Yeah. This is true. (Dave laughing) >> Not just you, maybe a few million other people too. One is importance, business importance. Tier one applications like critical to the business, like business down type stuff. But even if you look at it for performance, for capabilities, object storage in the cloud, it's a different thing than it was. >> Because of the architecture that you're deploying? >> Yeah. And the applications that we see running on it. Obviously, a huge growth in our business in AI and analytics. Obviously, Google's pretty well known in both spaces, BigQuery, obviously on the analytics side, big massive data warehouses and obviously-- >> Gets very high marks from customers. >> Yeah, very well regarded, super successful, super popular with our customers in Google Cloud. And then obviously AI as well. A lot of AI is about getting structure from unstructured data. Autonomous vehicles getting pictures and videos around the world. Speech recognition, audio is a fundamentally analog signal. You're trying to train computers to basically deal with analog things and it's all stored in object storage, machine learning on top of it, creating all the insights, and frankly things that computers can deal with. Getting structure out of the unstructured data. So you just see performance capabilities, importance as it's really a Tier one storage, much like file and block is where have kind of always been. >> Depending on, right, the importance. Because I mean, it's a fair question, right? Because we're used to thinking, "Oh, you're running your Oracle transaction database on block storage." That's Tier one. But Spotify's pretty important business. And again, on BigQuery, it is a cloud-native born in the cloud database, a lot of the cloud databases aren't, right? And that's one of the reasons why BigQuery is-- >> Google's really had a lot of success taking technologies that were built for some of the consumer services that we build and turning them into cloud-native Google Cloud. Like HDFS, who we were talking about, open source technologies came originally from the Google file system. Now we have a new version of it that we run internally called Colossus, incredible technologies that are cloud scale technologies that you can use to build things like Google Cloud storage. >> I remember one of the early Hadoop worlds, I was talking to a Google engineer and saying, "Well, wow, that's so cool that Hadoop came. You guys were the main spring of that." He goes, "Oh, we're way past Hadoop now." So this is early days of Hadoop (laughs) >> It's funny whenever Google says consumer services, usually consumer indicates just for me. 
But no, a consumer service for Google is at a scale that almost no business needs at a point in time. So you're not taking something and scaling it up-- >> Yeah. They're Tier one services-- for sure. >> Exactly. You're more often paring it down so that a Fortune 10 company can (laughs) leverage it. >> So let's dig into data protection in the Cloud, disaster recovery in the Cloud, ransomware protection, and then let's get into why Google. Maybe you could give us the trends that you're seeing, how you guys approach it, and why Google. >> Yeah. One of the things I always tell people, there's certain best practices and principles from on-prem that are just still applicable in the Cloud. And one of 'em is just fundamentals around recovery point objective and recovery time objective. You should know, for your apps, what you need, you should tier your apps, get best practice around them and think about those in the Cloud as well. The concepts of RPO and RTO don't just magically go away just 'cause you're running in the Cloud. You should think about these things. And it's one of the reasons we're here at the VeeamON event. It's important, obviously, they have a tremendous skill in technology, but helping customers implement the right RPO and RTO for their different applications. And they also help do that in Google Cloud. So we have a great partnership with them, two main offerings that they offer in Google. One is integration for their on-prem things to use, basically, Google as a backup target or DR target, and then cloud-native backups, where they have some technologies, Veeam Backup for Google. And obviously they also bought Kasten a while ago, 'cause they also got excited about the container trend, and obviously great technologies for those customers to use those in Google Cloud as well. >> So RPO and RTO are kind of IT terms, right? But we think of them as sort of the business requirement. Here's the business language. How much data are you willing to lose? And the business person says, "What? I don't want to lose any data." Oh, how big's your budget, right? Oh, okay. That's RPO. RTO is how fast you want to get it back. "How fast do you want to get it back if there's an outage?" "Instantly." "How much money do you want to spend on that?" "Oh." Okay. And then your application value will determine that. Okay. So that's what RPO and RTO are, for those of you who may not know. Sometimes we get into the acronyms too much. Okay. Why Google Cloud? >> Yeah. When I think about some of the infrastructure Google has, and like why does it matter to a customer of Google Cloud? The first couple things I usually talk about are networking and storage. Compute's awesome, we can talk about containers and Kubernetes in a little bit, but if you just think about core infrastructure, networking, Google's got one of the biggest networks in the world, obviously to service all these consumer applications. Two things that I often tell people about the Google network: one, just tremendous backbone bandwidth across the regions. One of the things to think about with data protection, it's a large data set. When you're going to do recoveries, you're pushing lots of terabytes often, and big pipes matter. Like, it helps you hit the right recovery time objective 'cause you, "I want to do a restore across the country." You need good networks. And obviously Google has a tremendous network. I think we have like 20 subsea cables that we've built underneath the world's oceans to connect the world on the internet. >> Awesome.
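To make Brian's RPO and RTO framing concrete, here is a minimal sketch of how a team might tier applications and sanity-check a backup schedule against a stated recovery point objective. This is purely illustrative: the tier names, numbers, and the one-interval worst-case rule are assumptions for the example, not Veeam's or Google's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class AppTier:
    name: str
    rpo_minutes: int              # max tolerable data loss
    rto_minutes: int              # max tolerable downtime
    backup_interval_minutes: int  # how often copies are actually taken

def meets_rpo(tier: AppTier) -> bool:
    # Worst-case data loss is one full backup interval: a failure just
    # before the next copy loses everything since the last one.
    return tier.backup_interval_minutes <= tier.rpo_minutes

# Invented example tiers, per the "tier your apps" advice above.
tiers = [
    AppTier("payments-db", rpo_minutes=5, rto_minutes=15, backup_interval_minutes=5),
    AppTier("internal-wiki", rpo_minutes=1440, rto_minutes=480, backup_interval_minutes=1440),
    AppTier("music-catalog", rpo_minutes=60, rto_minutes=60, backup_interval_minutes=240),
]

for t in tiers:
    status = "OK" if meets_rpo(t) else "VIOLATION: back up more often"
    print(f"{t.name}: RPO {t.rpo_minutes}m, interval {t.backup_interval_minutes}m -> {status}")
```

The same tiering exercise drives RTO planning: the faster the required recovery, the more the design (and budget) shifts toward replicas and failover rather than periodic copies.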
>> The other thing that I think is really underappreciated about the Google network is how quickly you get into it. One of the reasons all the consumer apps have such good response time is there's a local access point to get into the Google network somewhere close to you almost anywhere in the world. I'm sure you can find some obscure place where we don't have an access point, but look Search and Photos and Maps and Workspace, they all work so well because you get in the Google network fast, local access points and then we can control the quality of service. And that underlying substrate is the same substrate we have in Google Cloud. So the network is number one. Second one in storage, we have some really incredible capabilities in cloud storage, particularly around our dual region and multi-region buckets. The multi-region bucket, the way I describe it to people, it's a continent sized bucket. Single bucket name, strongly consistent that basically spans a continent. It's in some senses a little bit of the Nirvana of storage. No more DR failover, right? In a lot of places, traditionally on-prem but even other clouds, two buckets, failover, right? Orchestration, set up. Whenever you do orchestration, the DR is a lot more complicated. You got to do more fire drills, make sure it works. We have this capability to have a single name space that spans regions and it has strong read after write consistency, everything you drop into it you can read back immediately. >> Say I'm on the west coast and I have a little bit of an on-premises data center still and I'm using Veeam to back something up and I'm using storage within GCP. Trace out exactly what you mean by that in terms of a continent sized bucket. Updates going to the recovery volume, for lack of a better term, in GCP. Where is that physically? If I'm on the west coast, what does that look like? >> Two main options. It depends again on what your business goals are. First option is you pick a regional bucket, multiple zones in a Google Cloud region are going to store your data. It's resilient 'cause there's three zones in the region but it's all in one region. And then your second option is this multi-region bucket, where we're basically taking a set of the Google Cloud regions from around North America and storing your data basically in the continent, multiple copies of your data. And that's great because if you want to protect yourself from a regional outage, right? Earthquake, natural disaster of some sort, this multi-region, it basically gives you this DR protection for free and it's... Well, it's not free 'cause you have to pay for it of course, but it's a free from a failover perspective. Single name space, your app doesn't need to know. You restart the app on the east coast, same bucket name. >> Right. That's good. >> Read and write instantly out of the bucket. >> Cool. What are you doing with Veeam? >> So we have this great partnership, obviously for data protection and DR. And I really often segment the conversation into two pieces. One is for traditional on-prem customers who essentially want to use the Cloud as either a backup or a DR target. Traditional Veeam backup and replication supports Google Cloud targets. You can write to cloud storage. Some of these advantages I mentioned. Our archive storage, really cheap. We just actually lowered the price for archive storage quite significantly, roughly a third of what you find in some of the other competitive clouds if you look at the capabilities. 
Our archive class storage, fast recovery time, right? Low latency, no hours to kind of rehydrate. >> Good. Storage in the cloud is overpriced. >> Yeah. >> It is. It is historically overpriced despite all the rhetoric. Good. I didn't know that. I'm glad to hear it. >> Yeah. So the archive class storage, so you essentially read and write into this bucket and restore. So it's often one of the things I joke with people about. I live in Silicon Valley, I still see the tape truck driving around. I really think people can really modernize these environments and use the cloud as a backup target. You get a copy of your data off-prem. >> Don't you guys use tape? >> Well, we don't talk a lot about-- >> No comment. Just checking. >> And just to be clear, when he says cloud storage is overpriced, he thinks that a postage stamp is overpriced, right? >> No. >> If I give you 50 cents, are you going to deliver a letter cross country? No. Cloud storage, it's not overpriced. >> Okay. (David laughing) We're going to have that conversation. I think it's historically overpriced. I think it could be more attractive, relative to the cost of the underlying technology. So good for you guys pushing prices. >> Yeah. So this archive class storage is one great area. The second area we really work with Veeam is protecting cloud-native workloads. So increasingly customers are running workloads in the Cloud, they run VMware in the Cloud, they run normal VMs, they run containers. Veeam has two offerings in Google that essentially help customers protect that data, hit their RPO, RTO objectives. Another thing that is not different in the Cloud is the need to meet your compliance regulations, right? So having a product like Veeam that is easy to show back to your auditor, to your regulator, to make sure that you have copies of your data, that you can hit an appropriate recovery time objective if you're in finance or healthcare, energy. So there's some really good Veeam technologies that work in Google Cloud to protect applications that actually run in Google Cloud, all in. >> To your point about the tape truck, I was kind of tongue in cheek, but I know you guys use tape. But the point is you shouldn't have to call the tape truck, right, you should go to Google and say, "Okay. I need my data back." Now having said that, sometimes the highest bandwidth in the world is putting all this stuff on the truck. Is there an option for that? >> Again, it gets back to this networking capability that I mentioned. Yes. People do like to joke about, okay, trucks and trains and things can have a lot of bandwidth, big networks can push a lot of data around, obviously. >> And you got a big network. >> We got a huge network. So if you want to push... I've seen statistics. You can do terabits a second to a single Google Cloud storage bucket, supercomputing type performance inside Google Cloud, which from a scale perspective, whether it be network or compute, these things scale. If there's one thing that Google's really, really good at, it's really high scale. >> If your company can't afford to. >> Yeah, if you're that sensitive, avoid moving the data altogether. If you're that sensitive, have your recovery capability be in GCP. >> Yeah. Well, and again-- >> So that when you're recovering you're not having to move data. >> It's proximate to it, yeah. That's the point. >> Recover into GCVE, fail over your VMware cluster. >> Exactly. >> And use the cloud as a DR target.
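For readers who want to see what the regional versus multi-region bucket choice, and the archive storage class, look like in practice, below is a sketch using the public google-cloud-storage Python client. The bucket and object names are invented for the example, "US" is one of Google's documented multi-region locations, and real code would need globally unique bucket names and error handling; treat this as an outline, not production code.

```python
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()  # uses your default GCP credentials and project

# Option 1: a single-region bucket (data stays in one region, across zones).
regional = client.create_bucket("example-backups-regional", location="us-west1")

# Option 2: a multi-region bucket: one name, one namespace, data stored
# across regions in the continent (the "US" multi-region).
multi = client.create_bucket("example-backups-multi", location="US")

# An archive-class bucket as a cheap long-term backup target.
archive = storage.Bucket(client, name="example-backups-archive")
archive.storage_class = "ARCHIVE"
client.create_bucket(archive, location="US")

# Writing and reading back: the bucket is strongly consistent, so an
# object is readable immediately after the write completes.
blob = multi.blob("veeam/restore-point-0001.vbk")  # hypothetical object name
blob.upload_from_filename("restore-point-0001.vbk")  # assumes a local file
print(blob.download_as_bytes()[:16])
```

The "DR for free" point in the conversation falls out of option 2: an application restarted in another region reads the same bucket name, with no failover orchestration of the storage itself.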
>> We got very little time, but can you just give us a rundown of your portfolio in storage? >> Yeah. So storage, Cloud Storage for object storage, got a bunch of regional options and classes of storage, like I mentioned, archive storage. Our first party offerings in the file area, our Filestore, Basic, Enterprise and High Scale, which is really for highly concurrent, parallelized applications. Persistent Disk is our block storage offering. We also have a very high performance cache block storage offering and local SSDs. So that's the main kind of food groups of storage: block, file, object. Increasingly doing a lot of work in data protection and in transfer and distributed cloud environments, where the edge of the cloud is pushing outside the cloud regions themselves. But those are our products. Also, we spend a lot of time with our partners 'cause Google's really good at building and open sourcing and partnering at the same time, hence with Veeam, obviously. With file, we partner with NetApp and Dell and a bunch of folks. So there's a lot of partnerships we have that are important to us as well. >> Yeah. You know, we didn't get into Kubernetes, a great example of open source, Istio, Anthos, we didn't talk about the on-prem stuff. So Brian, we'll have to have you back and chat about those things. >> I look forward to it. >> To quote my friend Matt Baker, it's not a zero-sum game out there, and it's great to see Google pushing the technology. Thanks so much for coming on. All right. And thank you for watching. Keep it right there. Our next guest will be up shortly. This is Dave Vellante for Dave Nicholson. We're live at VeeamON 2022 and we'll be right back. (soft beats music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
David Nicholson | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Brian Schwarz | PERSON | 0.99+ |
David | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Brian | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
50 cents | QUANTITY | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
two pieces | QUANTITY | 0.99+ |
10 | QUANTITY | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
second option | QUANTITY | 0.99+ |
two offerings | QUANTITY | 0.99+ |
15% | QUANTITY | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
First option | QUANTITY | 0.99+ |
three zones | QUANTITY | 0.99+ |
Spotify | ORGANIZATION | 0.99+ |
15 | QUANTITY | 0.99+ |
one region | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
BigQuery | TITLE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Two main options | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
Matt baker | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
second area | QUANTITY | 0.98+ |
Second one | QUANTITY | 0.98+ |
20 subsea cables | QUANTITY | 0.98+ |
10, $12 billion | QUANTITY | 0.98+ |
two main offerings | QUANTITY | 0.97+ |
North America | LOCATION | 0.97+ |
nine products | QUANTITY | 0.97+ |
two buckets | QUANTITY | 0.96+ |
one thing | QUANTITY | 0.96+ |
Single | QUANTITY | 0.96+ |
Hadoop | TITLE | 0.95+ |
Google Cloud | TITLE | 0.95+ |
one eye | QUANTITY | 0.95+ |
Anthos | ORGANIZATION | 0.95+ |
Two things | QUANTITY | 0.94+ |
Pure | ORGANIZATION | 0.94+ |
first party | QUANTITY | 0.92+ |
VeeamON 2022 | EVENT | 0.91+ |
pandemic | EVENT | 0.91+ |
Dave Cope, Spectro Cloud | KubeCon + CloudNativeCon Europe 2022
>> theCUBE presents KubeCon + CloudNativeCon Europe 22, brought to you by the Cloud Native Computing Foundation. >> Live from Valencia, Spain, it's KubeCon + CloudNativeCon Europe 2022. I'm Keith Townsend, along with Paul Gillon, senior editor, enterprise architecture for SiliconANGLE. Welcome, Paul. >> Thank you, Keith, pleasure to work with you. >> You know, we're gonna have some amazing people this week. I think I saw a stat this morning: 65% of the attendees, 7,500 folks, first-time KubeCon attendees. This is your first conference. >> It is my first KubeCon, and it is amazing to see how many people are here and to think of, you know, just a couple of years ago, three years ago, we were still talking about what the cloud was and what the cloud was gonna do and how we were gonna integrate multiple clouds. And now we have this whole new framework for computing that has just rifled out of, out of nowhere. And as we can see by the number of people who are here, this has become a, a, this is the dominant trend in enterprise architecture right now, how to adopt Kubernetes and containers, build microservices based applications, and really get to that, that transparent cloud that has been so elusive. >> It has been elusive. And we are seeing vendors from startups with just a, a few dozen people to some of the traditional players we see in the enterprise space with thousands of employees looking to capture kind of lightning in a bottle, so to speak, this elusive concept of multi-cloud. >> And what we're seeing here is very typical of an early stage conference. I've seen many times over the years where the, the floor is really dominated by companies, frankly, I've never heard of. Many of them are only two or three years old, and you don't see the big, the big dominant computing players with, with the presence here that these smaller companies have. That's very typical. We saw that in the PC age, we saw it in the early days of Unix and, and it's happening again. And what will happen over time is that a lot of these companies will be acquired. There'll be some consolidation. And the nature of this show will change, I think, dramatically over the next couple or three years, but there is an excitement and an energy in this auditorium today that is, is really a lot of fun and very reminiscent of other new technologies just as they crest. >> Well, speaking of new technologies, we have Dave Cope, CRO, chief revenue officer, that's right, chief marketing officer, that's right, of Spectro Cloud. Welcome to the show. >> Thank you. It's great to be here. >> So let's talk about this big ecosystem. Okay. Kubernetes. Yes. Solved problem? >> Well, you know, the, the dream is, well, first of all, applications are really the lifeblood of a company, whether it's our phone or whether it's a big company trying to connect with its customer, it's about applications. And so the whole idea today is how do I build these applications to build that tight relationship with my customers? And how do I reinvent these applications rapidly? In, along comes containerization, which helps you innovate more quickly. And certainly a dominant technology there is Kubernetes. And the, the question is how do you get Kubernetes to help you build applications that can be born anywhere and live anywhere and take advantage of the places that it's running, cuz everywhere has pluses and minuses. >> So you know what the promise of Kubernetes, from when I first read about it years ago, is: runs on my laptop. Yep. I can push it to any cloud, any platform. That's, that's right.
Where's the gap? Where are we in that, in that phase? Like, talk to me about scale. Is that, is that, is it that simple? >> Well, that actually is the problem, is that today, while the technology is the dominant containerization and orchestration technology, it really still takes a power user. It really hasn't been very approachable to the masses. And so it was these very expensive, highly skilled resources that sit in a dark corner that have focused on Kubernetes, but that, that now is trying to evolve to make it more accessible to the masses. It's not about sort of hand-wiring together what is a typical 20-layer stack to really manage Kubernetes and then having your engineers manually reconfigure it and make sure everything works together. Now it's about how do I create these stacks, make it easy to deploy and manage at scale. So we've gone from sort of DIY, developer centric, to all right, now, how do I manage this at scale? >> Now this is a point that is important, I think, is often overlooked. This is not just about Kubernetes. This is about a whole stack of cloud native technologies. Yes. And who is going to, who is going to integrate that, all that stuff, piece that stuff together, right? Obviously you have a, a role in that. Yes. But in the enterprise, what is the awareness level of how complex this stack is and how difficult it is to assemble? >> We, we see a recognition of that, that we've had developers working on Kubernetes and applications, but now when we say, how do we weave it into our production environments? How do we ensure things like scalability and governance? How do we have this sort of interesting mix of innovation, flexibility, but with control? And that's sort of an interesting combination where you want developers to be able to run fast and use the latest tools, but you need to create these guardrails to deploy it at scale.
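Dave's "flexibility with guardrails" point is easier to see with a toy example. The sketch below is not Spectro Cloud's implementation, whose product and policy schema are not described in this conversation; it is a generic illustration, with invented rules, of how a platform team might let developers pick their own tools while a policy layer vetoes unsafe choices.

```python
# Toy guardrail check: developers submit a desired cluster add-on list,
# and a platform policy decides what is allowed. Purely illustrative;
# every registry, add-on, and limit below is made up.

ALLOWED_REGISTRIES = {"registry.internal.example.com", "ghcr.io"}
BANNED_ADDONS = {"kube-dashboard-anonymous"}  # invented example
MAX_NODE_COUNT = 50

def validate_cluster_request(request: dict) -> list[str]:
    violations = []
    if request["node_count"] > MAX_NODE_COUNT:
        violations.append(f"node_count {request['node_count']} exceeds {MAX_NODE_COUNT}")
    for addon in request["addons"]:
        if addon["name"] in BANNED_ADDONS:
            violations.append(f"add-on {addon['name']} is banned by policy")
        registry = addon["image"].split("/")[0]
        if registry not in ALLOWED_REGISTRIES:
            violations.append(f"image registry {registry} is not approved")
    return violations

request = {
    "node_count": 12,
    "addons": [
        {"name": "prometheus", "image": "ghcr.io/prometheus/prometheus:v2.45.0"},
        {"name": "cool-new-tool", "image": "docker.io/somebody/cool:latest"},
    ],
}

for v in validate_cluster_request(request) or ["request approved"]:
    print(v)
```

The developer still chooses the tools; the platform only intervenes where a declared rule is broken, which is the balance between flexibility and control described above.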
So right now, what's, what's happening is that it's not about the distribution. It's now, how do I, again, sorry to repeat myself, but move this into, into scale? How do I move it into deploy at scale, to be able to manage ongoing at scale, to be able to innovate at scale, to allow engineers, as I said, to use the coolest tools, but still have technical guardrails that the, the enterprise knows they'll be in control of? >> What does at scale mean to the enterprise customers you're talking to now? What do they mean when they say that? >> Well, I think it's interesting, cuz we think scale's different, cuz we've all been in the industry and it's frankly sort of a boring old word, but today it means different things. Like, how do I automate the deployment at scale? How do I make it really easy to provision resources for applications on any environment, from either a virtualized or bare metal data center, cloud, or, today, edge is really big, where people are trying to push applications out to be closer to the source of the data. And so you want to be able to deploy at scale, you wanna manage at scale, you wanna make it easy to, as I said earlier, allow application developers to build their applications, but IT ops wants the ability to ensure security and governance and all of that. And then finally, innovate at scale. If you look at this show, it's interesting: three years ago, when we started Spectro Cloud, there were about 1400 businesses or technologies in the Kubernetes ecosystem. Today there's over 1800, and all of these technologies are made up of open source and commercial, all versioning at different rates. It becomes an insurmountable problem unless you can set those guardrails, sort of that balance between flexibility and control, let developers access the technologies, but again, manage it as a part of your normal processes of a, of a scale operation. >> So, so Dave, I'm a little challenged here, cuz I'm hearing two what I typically consider conflicting terms. Okay. Flexibility, control. Yes. In order to achieve control, I need complexity; in order to choose flexibility, I need, t-shirt, one t-shirt fits all, right? And I, and I get simplicity. How can I get both? That just doesn't, you know, compute. >> Well, thus the opportunity and the challenge at the same time. So you're right. So developers want choice, good developers want the ability to choose the latest technology so they can innovate rapidly. And yet IT ops wants to be able to make sure that there are guardrails. And so with some of today's technologies like Spectro Cloud, it is, you have the ability to get both. We actually worked with Dimensional Research, and we sponsor an annual State of Kubernetes survey. We found this last summer that two out of three IT executives said you could not have both flexibility and control together, but in fact they want it. And so it is this interesting balance: how do I give engineers the ability to get anything they want, but IT ops the ability to establish control? And that's why Kubernetes is really at its next inflection point. Where, as I mentioned, it's not debates about the distro or DIY projects. It's not big incumbents creating siloed Kubernetes solutions. But in fact, it's about allowing all these technologies to work together and be able to establish these controls. And that's, that's really where the industry is today. >> Enterprise CIOs do not typically like to take chances. Now, we were talking about the growth in the market that you described, from 1400 to 1800 vendors.
Most of these companies are very small startups. Are enterprises, are you seeing them willing to take a leap with these unproven companies, or are they holding back and waiting for the IBMs, the HPs, the Microsofts, the VMwares, to come in with whatever solution they have? >> I, I think so. I mean, we sell to the Global 2000. We had yesterday, as a part of edge day here at the event, we had GE Healthcare as one of our customers telling their story. And they're a market share leader in medical imaging equipment, X-rays, MRIs, CAT scans, and they're, they're starting to treat those as edge devices. And so here is a very large, established company, a leader in their industry, working with people like Spectro Cloud, realizing that Kubernetes is interesting technology, the edge is an interesting thought, but how do I marry the two together? So we are seeing large corporations seeing so much of an opportunity that they're working with the smaller companies, the latest technology. >> So let's talk about the edge a little. You kind of opened it up there. Yeah. How should customers think about the edge versus the cloud data center, or even bare metal? >> Actually, it's, well, bare metal is fairly easy, in that many people are looking to reduce some of the overhead or inefficiencies of the virtualized environment. And, but we've had really sort of parallel little white tornadoes. We've had bare metal as infrastructure that's been developing, and then we've had orchestration technologies developing, but they haven't really come together very well. Lately, we're finally starting to see that come together. Spectro Cloud contributed to open source a metal-as-a-service technology that finally brings these two worlds together, making bare metal much more approachable to the enterprise. Edge is interesting because it seems pretty obvious: you wanna push your application out closer to your source of data, whether it's AI inferencing or OT or anything like that. You don't wanna worry about intermittent connectivity or latency or anything like that. But people have wanted to be able to treat the edge as if it's almost like a cloud, where all I worry about is the app. So really the edge to us is just the next extension in a multi-cloud sort of motif, where I want these edge devices to require low IT resources, to automate the provisioning, automate the ongoing version management, patch management, really act like a cloud. And we're seeing this as very, very popular now. And I just used the GE Healthcare example of that. Imagine a CAT scan machine, I'm making this part up, in China, and that's just an edge device. And it's, it's doing medical imagery, which is very intense in terms of data. You want to be able to process it quickly and accurately, as close to the endpoint, the healthcare provider, as possible. >> So let's talk about that in some level of detail. As we think about kind of edge and, you know, these fixed devices such as an imaging device, are we putting agents on there? Are we looking at something talking back to the cloud? Where does Spectro Cloud inject and help make that simple, that problem of just having dispersed endpoints all over the world, simpler? >> Sure. Well, we announced our Kubernetes edge solution at a big medical conference called, called HIMSS, months ago. And what we allow you to do is we allow the application engineers to develop their application.
And then you can, you can design this declarative model, this Cluster API based cluster profile, which determines which additional application services you need. And at the edge device, all the person has to do at the endpoint is plug in the power, plug in the communications. It registers the edge device, it automates the deployment of the full stack, and then it does the ongoing versioning and patch management, sort of a self-driving edge device running Kubernetes. And we make it just very, very easy. No IT resources required at the endpoint, no expensive field engineering resources to go to these endpoints twice a year to apply new patches and things like that, all... >> Automated. But there's so many different types of edge devices with different capabilities, different operating systems, some have no operating system. Yeah. I mean, what, that seems like a much more complex environment. Just calling it the edge is simple, but what you're really talking about is thousands of different devices, right, that you have to run your applications on. How, how are you dealing with that? >> So one of the ways is that we're really unbiased. In other words, we're OS and distro agnostic. So we don't want to debate about which distribution you like. We don't want to debate about, you know, which OS you want to use. The truth is, you're right. There's different environments and different choices that you'll wanna make. And so the key is, is how do you incorporate those and also recognize everything beyond those, you know, OS and Kubernetes and all of that, and manage that full stack. So that's what we do: we allow you to choose which tools you want to use and let it be deployed and managed on any environment. >> And who's respon-, I'm sorry, Keith. Who's responsible for making Kubernetes run on the edge device? >> We do. We provision the entire stack. I mean, of course the company does, using our product, but we provision the entire Kubernetes infrastructure stack, all the application services, and the application itself on that device. >> So I would love to dig into, like, where pods happen and all that, but provisioning is getting to the point that it's a solved problem. Day two? Yes. Like we, you know, you just mentioned HIMSS, highly regulated environments. How is Spectro Cloud helping with configuration management, change control, audit, compliance, et cetera, the hard stuff? >> Yep. And one of the things we do, you bring up a good point, is we manage the full life cycle from day zero, which is sort of create, deploy, all the way to day two, which is about, you know, access control, security. It's about ongoing versioning and patch management. It's all of that built into the platform. And, but you're right, like the medical industry has a lot of regulations. And so you need to be able to make sure that everything works, it's always up to the latest level, has the highest level of security. And so all that's built into the platform. It's not just a fire and forget. It really is about that full life cycle of deploying, managing on an ongoing basis. >> Well, Dave, I'd love to go into a great deal of detail with you about kind of this day two option. I think we'll be covering a lot more of that topic, Paul, throughout the week, as we talk about just, you know, as we've gotten past, you know, how do I deploy Kubernetes pods, to how do I actually operate it? >> Absolutely, absolutely. The devil is in the details, as they say. >> Well, and also too, you have to recognize that the edge has some very unique requirements.
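To ground the declarative, cluster-profile idea Dave describes, here is a hypothetical sketch of what such a declaration and a trivial reconcile step could look like. Every field and layer name below is invented for illustration; Spectro Cloud's real Cluster API based schema is not shown in this interview.

```python
# A made-up declarative profile for an edge site: the operator declares
# the desired stack once; an agent at the endpoint converges toward it.
desired_profile = {
    "os": {"name": "ubuntu", "version": "22.04"},
    "kubernetes": {"distro": "k3s", "version": "v1.27.4+k3s1"},
    "addons": [
        {"name": "cni", "chart": "cilium", "version": "1.14.0"},
        {"name": "monitoring", "chart": "prometheus", "version": "25.1.0"},
    ],
    "app": {"name": "imaging-inference", "version": "3.2.1"},
}

def reconcile(observed: dict, desired: dict, path: str = "") -> list[str]:
    """Return the list of actions needed to converge observed -> desired."""
    actions = []
    for key, want in desired.items():
        have = observed.get(key)
        if isinstance(want, dict):
            actions += reconcile(have or {}, want, f"{path}{key}.")
        elif have != want:
            actions.append(f"set {path}{key}: {have!r} -> {want!r}")
    return actions

# Invented state reported back by an edge device after registration.
observed_state = {
    "os": {"name": "ubuntu", "version": "22.04"},
    "kubernetes": {"distro": "k3s", "version": "v1.26.8+k3s1"},
    "app": {"name": "imaging-inference", "version": "3.1.0"},
}

for action in reconcile(observed_state, desired_profile):
    print(action)
```

The point of the declarative style is exactly what the conversation emphasizes: the endpoint needs only power and connectivity, and version drift is detected and corrected by the reconcile loop rather than by field engineers.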
You want very small form factors, typically you want low IT resources. It has to be sort of zero touch or low touch, because if you're a large food provider with 20,000 store locations, you don't wanna send out field engineers two or three times a year to update them. So it really is an interesting beast, and we have some exciting technology, and people like GE are using that. >> Well, Dave, thanks a lot for coming on theCUBE. You're now a CUBE alum. You've not been on before? >> I have actually. Yes. Oh. But I always enjoy it. >> It's great conversation. From Valencia, Spain, I'm Keith Townsend, along with Paul Gillon, and you're watching theCUBE, the leader in high tech coverage.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Paul Gillon | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Cope | PERSON | 0.99+ |
Dave Cole | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
Randy | PERSON | 0.99+ |
two | QUANTITY | 0.99+ |
Paul | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
20 layer | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
65% | QUANTITY | 0.99+ |
Spectro Cloud | ORGANIZATION | 0.99+ |
GE | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
20 elements | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
7,500 folks | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
first conference | QUANTITY | 0.99+ |
three years ago | DATE | 0.99+ |
Microsofts | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
last summer | DATE | 0.98+ |
one element | QUANTITY | 0.98+ |
IBMs | ORGANIZATION | 0.98+ |
First time | QUANTITY | 0.98+ |
Cloudnativecon | ORGANIZATION | 0.97+ |
Kubernetes | TITLE | 0.97+ |
Kubecon | ORGANIZATION | 0.97+ |
over 1800 | QUANTITY | 0.97+ |
first | QUANTITY | 0.97+ |
1400 | QUANTITY | 0.96+ |
20,000 store | QUANTITY | 0.96+ |
about 1400 businesses | QUANTITY | 0.96+ |
this week | DATE | 0.95+ |
twice a year | QUANTITY | 0.95+ |
two worlds | QUANTITY | 0.95+ |
first cubic con | QUANTITY | 0.94+ |
couple of years ago | DATE | 0.94+ |
Cub Alon | PERSON | 0.93+ |
Day two | QUANTITY | 0.93+ |
this morning | DATE | 0.92+ |
Unix | TITLE | 0.91+ |
zero | QUANTITY | 0.91+ |
months ago | DATE | 0.91+ |
years | DATE | 0.9+ |
day two | QUANTITY | 0.89+ |
Kubernetes | ORGANIZATION | 0.88+ |
day zero | QUANTITY | 0.86+ |
Lisia Spain | PERSON | 0.85+ |
three times a year | QUANTITY | 0.82+ |
Keith | LOCATION | 0.82+ |
2022 | EVENT | 0.82+ |
thousands of employees | QUANTITY | 0.81+ |
up to 20 different layers | QUANTITY | 0.81+ |
Foria | LOCATION | 0.8+ |
1800 vendors | QUANTITY | 0.8+ |
two option | QUANTITY | 0.78+ |
2022 | DATE | 0.77+ |
Kevin L. Jackson, GC GlobalNet | CUBE Conversation, September 2021
(upbeat music) >> Hello and welcome to this special CUBE conversation. I'm John Furrier, host of theCUBE here, remote in Washington, DC, not in Palo Alto, but we're all around the world with theCUBE as we are virtual. We're here recapping the Citrix Launchpad: Cloud (accelerating IT modernization) announcements with CUBE alumnus Kevin Jackson, Kevin L. Jackson, CEO of GC GlobalNet. Kevin, great to see you. Thanks for coming on. >> No, thank you very much, John. It's always a pleasure to be on theCUBE. >> It's great to have you. You always have great insights. But here, we're recapping the event, Citrix Launchpad: Cloud (accelerating IT modernization). And again, we're seeing this theme constantly now, IT modernization, application modernization. People are now seeing clearly what the pandemic has shown us all, that there's a lot of projects that need to be up-leveled or killed. There's a lot of things happening and going on. What's your take on what you heard? >> Well, you know, from a general point of view, organizations can no longer put off this digitalization and the modernization of their IT. Many of these projects have been on a shelf waiting for the right time or, you know, the budget to get right. But when the pandemic hit, everyone found themselves in the virtual world. And one of the most difficult things was how do you make decisions in the virtual world when you can't physically be with someone? How do you have a meeting when you can't shake someone's hand? And they all sort of, you know, stared at each other virtually, of course, to try to figure this out. And they dusted off all of the technologies they had on the shelf that they were, you know, they were told to use years ago, but just didn't feel that it was right. And now it became necessary. It became the way of life. And the thing that really jumped at me yesterday, well, jumped at me with Launchpad, the Launchpad of the cloud, is that Citrix honed in on the key issues with this virtual world. I mean, delivering applications, knowing what the internet state is so that you could select the right sources for information and data. And making security holistic. So you didn't have to, it was no longer sort of this bolted on thing. So, I mean, we are in the virtual world to stay. >> You know, good call out there. Honing in was a good way to put it. One quote I heard from Tim (Minahan) was, you know, he said one thing that's become painfully evident is a lot of companies are going through the pandemic and they're experiencing the criticality of the application experience. And he says, "Application experience is the new currency." Okay, so the pandemic, we all kind of know what's going on there. It's highlighting all the needs. But this idea of an application experience is the new currency is a very interesting comment because, I mean, you nailed it. Everyone's working from home. The whole work is shifting. And the applications, they kind of weren't designed to be this way 100%. >> Right, right. You know, the thing about the old IT was that you would build something and you would deploy it and you would use it for a period of time. You know, a year, two years, three years, and then there would be an upgrade. You would upgrade your hardware, you would upgrade your applications, and then you go through the process again, you know? What was it referred to as, it wasn't modernization, but it was refresh. You know, you would refresh everything. Well today, refresh occurs every day. Sometimes two or three times a day.
And you don't even know it's occurring. Especially in the application world, right? I think I was looking at something about Chrome, and I think we're at like Chrome 95. It's like Chrome is updated constantly as a regular course of business. So you have to deploy this, understand when it's going to be deployed, and the customers and users, you can't stop their work. So this whole application delivery and security aspect is completely different than before. That's why this, you know, this intent driven solution that Citrix has come up with is so revolutionary. I mean, by being able to know the real business needs and requirements, and then translating them to real policies that can be enforced, you can really, I guess, project the needs, requirement of the organization anywhere in the world immediately with the applications and with this security platform. >> I want to get your reactions to something because that's right on point there, because when we look at the security piece and the applications you see, okay, your mind goes okay, old IT, new IT. Now with cloud, with the pandemic showing that cloud scale matters, a couple themes have come from that used to be inside the ropes concepts. Virtualization, virtual, and automation. Those two concepts are going mainstream because now automation with data and virtual, virtual work, virtual CUBE, I mean, we're doing virtual interviews. Virtualization is coming here. So building on those things. New things are happening around those two concepts. Automation is becoming much more programmable, much more real time, not just repetitive tasks. Virtual is not just doing virtual work from home. It's integrating that virtual experience into other applications. This requires a whole new organizational structure mindset. What's your thoughts on that? >> Well, one of the things is the whole concept of automation. It used to be a nice to have. Something that you could do maybe to improve your particular process, not all of the processes. And then it became the only way of reacting to reality. Humans, it was no longer possible for humans to recognize a need to change and then execute on that change within the allotted time. So that's why automation became a critical element of every business process. And then it expanded that this automated process needed to be connect and interact with that automated process and the age of the API. And then the organization grew from only relying on itself to relying on its ecosystem. Now an organization had to automate their communications, their integration, the transfer of data and information. So automation is key to business and globalization creates that requirement, or magnifies that requirement. >> One of the things we heard in the event was, obviously Citrix has the experience with virtual apps, virtual desktop, all that stuff, we know that. But as the cloud grows in, they're making a direct statement around Citrix is going to add value on top of the cloud services. Because that's the reality of the hybrid, and now soon to be multi-cloud workflows or architectures. How do you see that evolve? Is that something that's being driven by the cloud or the app experience or both? What's your take on that focus of Citrix taking their concepts and leadership to add value on top of the cloud? >> To be honest, I don't like referring to the cloud. It gives an impression that there's only a single cloud and it's the same no matter what. That couldn't be further from the truth. 
A typical organization will consume services from three to five cloud service providers. And these providers aren't working with each other. Their services are unique, independent. And it's up to the enterprise to determine which applications and how those applications are presented to their employees. So it's the enterprise that's responsible for the employee experience. Integrating data from one cloud service provider to another cloud service provider within this automated business process, or multiple business processes. So I see Citrix as really helping the enterprise to continually monitor performance from these independent cloud service providers and to optimize that experience. You know, the things like, where is the application being consumed from? What is the latency today on the internet? What type of throughput do I need from cloud service provider A versus cloud service provider B? All of this is continually changing. So it's the enterprise that needs to constantly monitor the performance degradation and look at outages and all of that. So I think, you know, Citrix is on point by understanding that there's no single cloud. Hybrid and multi-cloud is the cloud. It's the real world. >> You know, that's a great call. And I think it's naive for enterprises to think that, you know, Microsoft is sitting there saying hmm, let's figure out a way to really work well with AWS. And vice versa, right? I mean, and you got Google, right? They all have their own specialties. I mean, Amazon Web Services has got great compliance action going on there, much stronger than Microsoft. Microsoft's got much deeper legacy and integration to their base, and Google's doing great with developers. So they're all kind of picking their lanes, but they all exist. So the question in the enterprise is what? Do I, how do I deal with that? And again, this is an opportunity for Citrix, right? So this kind of comes down to the single pane of glass (indistinct) always talks about, or how do I manage this new environment that I need to operate in? Because I will want to take advantage of some of the Google goodness and the Azure and the AWS. But now I got my own on premises. Bare metal's growing. You're seeing more bare metal deals going down now because cloud operations have come on premises. >> Yeah, and in fact, that's hybrid IT, right? I always say that when an enterprise thinks about modernizing or digitally transforming a business process, you have three options, right? You could put it in your own data center. In fact, building a data center and optimizing a data center for a particular process is the cheapest and most efficient way of executing a business process. But it's only cheaper and more efficient if that process is also stable and consistent, and, I'll say, some are like that. But you can also do a managed service provider. But that is a distinctly different approach. And the third option is a cloud service provider. So this is a hybrid IT environment. It's not just cloud. It's sort of, you know, it's not smart to think everything's going to go into the cloud. >> It's distributed computing. We see (indistinct). >> Yeah, yeah, absolutely. I mean, in today's paperless world, don't you still use a pen and paper and pencil? Yes. The right tool for the right job. So it's hybrid IT. Cloud is not always a perfect thing. And that's something that I believe Citrix has looked at.
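Kevin's three-option framing (own data center, managed service provider, cloud service provider) can be read as a simple decision rule. The sketch below is a toy heuristic distilled from his comments, with invented thresholds and workload names; it is not a real placement engine.

```python
def suggest_placement(stable: bool, demand_variability: str, needs_edge_latency: bool) -> str:
    """Toy heuristic echoing the 'right tool for the right job' point."""
    if needs_edge_latency:
        return "edge / distributed cloud"
    if stable and demand_variability == "low":
        # A stable, consistent process can be cheapest in an optimized data center.
        return "own data center"
    if demand_variability == "high":
        return "public cloud (elastic capacity)"
    return "managed service provider"

# Invented workloads: (name, stable, variability, needs edge latency)
workloads = [
    ("nightly-batch-settlement", True, "low", False),
    ("seasonal-retail-frontend", False, "high", False),
    ("factory-floor-inference", True, "low", True),
]
for name, stable, variability, edge in workloads:
    print(f"{name}: {suggest_placement(stable, variability, edge)}")
```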
That interface between the enterprise and all of these choices when it comes to delivering applications, delivering the data, integrating that data, and making it secure. >> And I think that's a winning positioning to have this app experience, the currency narrative, because that ultimately is an outcome that you need to win on. And with the cloud and the cloud scale that goes on with all the multiple services now available, the company's business model is app driven, right? That's their application. So I love that, and I love that narrative. Also like this idea of app delivery and security. It's kind of in the weeds a little bit, but it highlights this hybrid IT concept you were saying. So I got to ask you as the expert in the industry in this area, you know, as you have intent, what do they call it? Intent driven solution for app delivery and security. Self healing, continuous optimization, et cetera, et cetera. The KPIs are changing, right? So I want to get your thoughts on that. Because now, as IT shifts to be much faster, whether it's security teams or IT teams to service that DevOps speed, shifting left everyone talks about, what are the KPIs that are changing? What are the new KPIs that the managers and people can work through as a north star or just tactically? What's your thoughts? >> Well, actually, every KPI has to relate to either the customer experience or the employee experience, and sometimes even more important, your business partner experience. That's the integration of these business processes. And one of the most important aspects that people really don't think about is the API, the application programming interface. You know, you think about software applications and you think about hardware, but how is this hardware deployed? How do you deploy and expand the number of servers based upon more usage from your customer? It's via the API. You manage the customer experience via APIs. You manage your ability to interact with your business partners through the API, their experience. You manage how efficient and effective your employees are through their experience with the IT and the applications through the API. So it's all about that, you know, that experience. Everybody yells customer experience, but it's also your employee experience and your partner experience. So that depends upon this integrated holistic approach to applications and the API security. The web app, the management of bots, and the protection of your APIs. >> Yeah, that really nailed it. I think the position is good. You know, if you can get faster app delivery, keep the security in line, and not bolt it on after the fact and reduce costs, that's a winning formula. And obviously, stitching together the service layer of app and software for all the cloud services is really key. I got to ask you though, Kevin, since you and I have riffed on theCUBE about this before, more importantly now than ever with the pandemic, look at the work edge. People working at home and what's causing the office spaces changing. The entire network architecture. I mean, I was talking to a big enterprise that said, oh yeah, we had, you know, the network for the commercial and the network for dial up now 100% provisioned for everyone at home. The radical change to the structural interface has completely changed the game. What is your view on this? I mean, give us your, where does it go? What happens next? >> So it's not what's next, it's where we are right now.
And you need to be able to be, work from anywhere at any time across multiple devices. And on top of that, you have to be able to adapt to constant change in both the devices, the applications, the environment, and a business model. I did a interview with Citrix, actually, from an RV in the middle of a park, right? And it's like, we did video, we did it live. I think it was through LinkedIn live. But I mean, you need to be able to do anything from anywhere. And the enterprise needs to support that business imperative. So I think that's key. It's it's not the future, it's the today. >> I mean, the final question I have for you is, okay, is the frog in the boiling water? At what point does the CIO and the IT leaders, I mean, their minds are probably blown. I can only imagine. The conversations I've been having, it's been, you know, be agile, do it in the cloud, do it at speed, fix the security, programmable infrastructure. What? How fast can I run? This is the management challenge. How are people dealing with this when you talk to them? >> First of all, the IT professional needs to focus on the business needs, the business requirements, the business key performance indicators, not technology, and a business ROI. The CIO has to be right there in the C sweep of understanding what's needed by the business. And there also has to be an expert in being able to translate these business KPIs into IT requirements, all right? And understanding that all of this is going to be within a realm of constant change. So the CIO, the CTO, and the IT professional needs to realize their key deliverable is business performance. >> Kevin, great insight. Loved having you on theCUBE. Thanks for coming on. I really appreciate your time highlighting and recapping the Citrix Launchpad: Cloud announcements. Accelerating IT modernization can't go fast enough. People, they want to go faster. >> Faster, faster, yes. >> So great stuff. Thanks for coming, I appreciate it. >> Thank you, John. I really enjoyed it. >> Okay, it's theCUBE conversation. I'm John Furrier, host of theCUBE. Thanks for watching. (upbeat music)
Unpacking IBM's Summer 2021 Announcement | CUBEconversation
(upbeat music) >> There are many constants in the storage business: relentlessly declining costs per bit, innovations that perpetually battle the laws of physics, a seemingly endless flow of venture capital, very intense competition. And there's one other constant in the storage industry, Eric Herzog. And he joins us today in this CUBE video exclusive to talk about IBM's recent storage announcements. Eric, welcome back to theCUBE. Great to see you, my friend. >> Great Dave, thank you very much. Of course, IBM always loves to participate with theCUBE and everything you guys do. Thank you very much for inviting us to come today. >> Really our pleasure. So we're going to cover a lot of ground. IBM Storage made a number of announcements this month around data resilience. You've got a new as a service model. You've got performance enhancements. Eric, can you give us, give us the top line summary of the hard news? >> Yeah. Top line. IBM is enhancing data and cyber resiliency across all non mainframe platforms. We already have it on the mainframe of course, and we're changing CapEx to OpEx with our storage as a service. Those are the key takeaways and the hot ticket items from an end user perspective. >> So maybe we could start with sort of the cyber piece. I mean, wow. I mean the last 18 months have been incredible and you're just seeing, you know, new levels of threats. The work from home pivot has created greater exposure. Organizations are kind of rethinking hybrid. You're seeing the ascendancy of some of the sort of hot cyber startups, but, but you're also seeing that not only have the attack vectors widened, but the, the techniques are different. You know, threat hunting has become much more important. Your responses to threats. You have to be really careful with the whole ransomware thing. So what are some of the big trends that you guys are seeing that are kind of informing how you approach the market? >> Well, first of all, it's gotten a lot worse. In fact, Fortune magazine just released the Fortune 500 a couple of weeks ago, and they had a survey that's public of CEOs, and they said, "What's the number one threat to your business? With no list, just what's the number one threat?" Cyber security was number one for 66% of the Fortune 500 Chief Executive Officers. Not CIOs, not CTOs, but literally the CEOs of the biggest companies in the world. However, it's not just big companies. It hits the mid size, the small companies, everyone is open now to cyber threats and cyber attacks. >> Yeah. So for sure. And it's (chuckles) across the board. Let's talk about your solution, the announcement that you made here. Safeguarded Copy, I think is what the branding is. >> Yeah. So what we've done is we've got a number of different technologies within our storage portfolio. For example, with our Spectrum Protect product, we can see anomalous pattern detection in backup data sets. Why would that matter? If I am going to hold theCUBE for ransom, if I don't get control of your secondary storage, snaps, replicas, and backups, you can just essentially say, I'm not paying you. You could just do a recovery, right? So we have anomalous protection there. We see encryption, we encrypt at rest with no performance penalty with our FlashSystem family. We do air gapping. And in the case of safeguarded copy, it's a form of air gapping. So we see physical air gapping with tape,
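As an aside, the three air-gap tiers Eric walks through here map to a simple tradeoff between separation and recovery time. A minimal sketch follows; the recovery-time figures are illustrative assumptions drawn from the conversation, not IBM specifications.

```python
# Sketch of the three air-gap tiers described above. Recovery times
# are illustrative assumptions, not IBM specifications.
from dataclasses import dataclass

@dataclass
class AirGapTier:
    name: str
    copy_location: str
    gap_type: str          # "physical" or "logical"
    typical_recovery: str  # rough order of magnitude only

TIERS = [
    AirGapTier("Physical (tape)", "offline tape, often offsite",
               "physical", "hours to days"),
    AirGapTier("Remote logical", "snaps/replicas at a remote site or cloud",
               "logical", "hours"),
    AirGapTier("Local logical (Safeguarded Copy)", "immutable copies on the same array",
               "logical", "a couple of hours or less"),
]

for t in TIERS:
    print(f"{t.name:34s} gap={t.gap_type:8s} recovery ~ {t.typical_recovery}")
```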
logical air gapping to a remote location with snaps or replicas to your Cloud provider, and then local logical on-prem, which is what safeguarded copy does. We've had this technology for many years now on the mainframe platform. And we brought it down to the non mainframe environments, Linux, UNIX, and the Windows Server world, by putting safeguarded copy on our FlashSystem portfolio. >> So, okay. So part of the strategy is air gapping. So you're taking a copy, you're air gapping it. You probably, you probably take those snaps, you know, at different intervals, you mix that up, et cetera. How do you manage the copies? How do you ensure if I have to do a recovery that you've got kind of a consistent data set? >> Yeah. So a couple things, first of all, we can create on a single FlashSystem array, the full array, up to 15,000 immutable copies; essentially they're WORM, you can't delete them, you can't change them. On a per volume basis, you can have 255. This is all managed with our storage copy manager, which can automate the entire process. Creation, deletion, frequency, and even recovery mode. So for example, I could have volume one, and on volume one perhaps I need to make immutable copies every four hours, and at 255 divided by four a day, I can go for many months and still be making those immutable copies. But with our Copy Services Manager, you can set it up to be only 30 days, 60 days, you can set the frequency, and once you set it up, it's all automated. And you can even integrate with IBM's QRadar, which is threat detection and breach software from the security division of IBM. And when certain threats hit, it can actually automatically kick off a safeguarded copy. So what we do is make sure you've got that incredibly rapid recovery. And in fact, you can get air gapping remotely. We have this on the mainframe, and a number of large global Fortune 500s actually do double air gapping: local logical, right? So they can do recovery in just a couple hours if they have an attack. And then they take that local logical and either go remote logical, okay, which gives them a second level of protection, or they'll go out to tape. So you can use this in a myriad of ways. You can have multiple protection. We even, by the way Dave, have three separate admin levels. So you can have three different types of admins. One admin can't delete, one admin can. So that way you're also safe from what I'll call industrial espionage. You never know if someone's going to be stealing stuff from inside, and with multiple administrative capabilities, it makes it more difficult for someone to steal your data and then sell it to somebody. >> So, okay. Yeah, right. Because immutable is sort of, well, you're saying that you can set it up so that only one admin has control over that, is that right? If you want it... >> There's three, there's three admins with different levels of control. >> Right. >> And the whole point of having three admins with different levels of control is you have that extra security from an internal IT perspective versus one person. Again, think of the old war movies, you know, nuclear war movies. Thank God it's never happened. Where two guys turn the key. So you've got some protection, we've got multiple admin levels to do that as well. So it's a great solution with the air gapping. It's rapid recovery because it's local, but it is fully logically air gapped, separated from the host. It's immutable, it's WORM, Write Once, Read Many: can't delete, can't change, can't do anything.
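To make the copy-count arithmetic above concrete, here is a minimal sketch of the retention math, assuming a fixed snapshot interval and the 255-copies-per-volume limit Eric mentions. The function names are illustrative, not IBM's API.

```python
# Sketch: how long a rolling window of immutable safeguarded copies
# can cover a volume, given the per-volume limit of 255 copies quoted
# in the conversation. Illustrative only, not IBM's API.

PER_VOLUME_LIMIT = 255  # immutable copies allowed per volume

def copies_per_day(interval_hours: float) -> float:
    """Copies created per day at a fixed snapshot interval."""
    return 24.0 / interval_hours

def retention_days(interval_hours: float, limit: int = PER_VOLUME_LIMIT) -> float:
    """Days of history a full set of `limit` copies spans before the
    oldest copy must be aged out."""
    return limit / copies_per_day(interval_hours)

if __name__ == "__main__":
    for hours in (4, 8, 24):
        print(f"every {hours:>2}h -> {retention_days(hours):5.1f} days of immutable history")
    # every  4h ->  42.5 days
    # every  8h ->  85.0 days
    # every 24h -> 255.0 days
```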
And you can automate all the management with our Copy Services Manager software that will work with safeguarded copy. >> You, you talked about earlier, you could detect anomalous behavior. So, so presumably this can help with, with detecting threats, is that right? >> Well, that's what our Spectrum Protect product does. My key point was we have all levels of data resiliency across the whole portfolio, whether it be encrypting data at rest; with our VTLs, we can encrypt in-flight. We have safeguarded copy on the mainframe, safeguarded copy on FlashSystems, any type of storage, including our competitors' storage. You could air gap it to tape, right? With our Spectrum Virtualize software in our SAN Volume Controller, you could actually air gap out to a Cloud for 500 arrays that aren't even ours. So what we've done is put in a huge set of data and cyber resiliency across the portfolio. One thing that I've noticed, Dave, that's really strange. Storage is intrinsic to every data center, whether you're big, medium, or small. And when most people think about a cybersecurity strategy from a corporate perspective, they usually don't even think about storage. I've been shocked; I've been in meetings with CEOs and VPs and they said, "oh, you're right, storage is, is a risk." I don't know why they don't think of it. And clearly many of the security channel partners, right? You have channel partners that are very focused on security, and security consultants, they often don't think about the storage gaps. So we're trying to make sure, A, we've got broad coverage, primary storage, secondary storage, backup, you know, all kinds of things that we can do. And we make sure that we're talking to the end users, as well as the channel, to realize that if you don't have data resilience in storage, you do not have a corporate cybersecurity strategy, because you just left out the storage part. >> Right on. Eric, are you seeing any use case patterns emerge in the customer base? >> Well, the main use case is prioritizing workloads. Obviously, as you do the immutable copies, you chew up capacity. Right now there's a good reason to do that. So you've got these immutable copies, but what they're doing is prioritizing workloads. What are the workloads I absolutely have to have up and going rapidly? What are other workloads that are super important, but I could do maybe remote logical air gapping? What ones can I put out to tape, where I have a logical, where I have a true physical air gap? But of course tape can take a long recovery time. So they're prioritizing their applications, workloads and use cases to figure out what they need to have a safeguarded copy with, what they could do. And by the way, they're trying to do that as well. You know, with our FlashSystem products, we could encrypt data at rest with no performance penalty. So if you were getting, you know, 30,000 database records and they were taking, you know, 10 seconds for sake of argument, when you encrypt, normally you slow that down. Well, guess what, when you encrypt with our FlashSystem product, you don't. So in fact, you know, it's interesting Dave, we have a comprehensive and free cyber resiliency assessment, no charge to the end-user, no charge to a business partner if they want to engage with us. And we will look at, based on the NIST framework, any gaps. So for example, if theCUBE said, these five databases are our most critical databases, then part of our cyber resilience assessment would say, "ah, well, we noticed that you're not encrypting those. Why are you not encrypting those?"
And by the way, that cyber resilience assessment works not only for IBM storage, but any storage estate they've got. So if they're homogeneous, we can evaluate that; if they're heterogeneous in their storage estate, we would evaluate that. And it is vendor agnostic and conforms to the NIST framework, which of course is adopted all over the world. And it's a great thing for people to get free, no obligation. You don't have to buy a single thing from IBM. It's just a free assessment of their storage and what cyber security exposure they have in their storage estate. And that's a free thing that we offer that includes safeguarded copy, encryption, air gapping, all the various functionality. And we'll say, "why are you not encrypting? Why are you not air gapping?" And if it's that important, "what, why are you leaving these things exposed?" So that's what our free cyber resilience assessment does. >> Got to love those freebies, take advantage of those for sure. A lot of, a lot of organizations will charge big bucks for those. You know, maybe not ridiculously huge bucks, but you're talking tens of thousands. Sometimes you'll get up to hundreds of thousands of dollars for that type of assessment. So that's, you've got to take advantage of that if you're a customer out there. You know, I, I wanted to ask you about, just kind of shift topics here and get into the, as a service piece of it. So you guys announced your, your as a service for storage; a lot of people have also done that. What do we need to know about the IBM solution? And what's different from the others? Maybe a two-part question, but what's the first part: what do we need to know? >> A couple of things. From an overall strategy perspective, you don't buy storage. It's a full OpEx model. IBM retains legal title. We own it. We'll do the software upgrades as needed. We may even go ahead and swap the physical system out. You buy an SLA, a tier if you will. You buy capacity, performance, we own it. So let's take an easy one. Our tier two, we give you our worst case performance at 2,250 IOPS per terabyte. Our competitors by the way, when you look at their contracts and look what they're putting out there, they will give you their best case number. So if their tier two is 2,250, that's the best case. With us it's our worst case, which means if your applications or workloads get 4,000 IOPS per terabyte, it's free. We don't charge you for that. We give you the worst case scenario, and our numbers are higher than our competition's. So we make sure that we're differentiated. True OpEx model, it's not a modified lease model. So it truly converts CapEx into operational expense. We have a base as everybody does, but we have a variable. And guess what? The base price and the variable price are the same. So if you don't use the variable, we don't charge you. We bill you for a quarter in arrears. Every feature function that's on our FlashSystem technology, such as safeguarded copy, which we just talked about, AI based tiering, data at rest encryption with no performance penalty, data compression with no performance penalty, all those features you get, all of them. All we're doing is giving you an option. We still let you buy CapEx. We will let you lease with IBM Global Financial Services. And guess what? You could do a full OpEx model. The technology though, our flash core modules, our Spectrum Virtualize software, is all the same. So it's all the same feature function. It's not some sort of stripped down model. We even offer, Dave, a 100% availability option.
We give Six Nines of availability as a default, which is only five minutes and 26 seconds of downtime. Several of our competitors, guess what they give? Four Nines. If you want five or six, you got to pay for it. We just give you six as a default differentiator, but then we're the only vendor to offer a 100% availability guarantee. Now that is an option. It's the one option. But since we're already at Six Nines, when our competitors are at Four or Five Nines, we already have better availability with our storage as a service than the competition does. >> So let me just make this, make sure I'm clear on this. So you got Six Nines as part of the service. That's >> Absolutely >> Fundamental. And I get, I can pay up for 100% availability option. And, >> Yes you can. >> So what does that, what does that mean? Practically? You're putting in redundancies and, >> Right, right. So we have a technology known as HyperSwap. We have several public references by the way, at ibm.com. We've been shipping HyperSwap on both the mainframe, probably eight or nine years now. We brought it to our FlashSystem product probably five years ago. As I mentioned, we've got public references. You don't pay for the software by the way; you do have to have a dual node cluster, and HyperSwap allows you to do that. But you can do that as a service. You can buy it. You can do it as CapEx, right? When you need the additional FlashSystem to go with it, again, the software is free. So you don't pay for the software. You just have to pay for the additional system level componentry, but you can do that as a service and have it completely be an OpEx model as well. We even assign a technical account manager to every account. Every account gets a technical account manager. If you will, a concierge service comes with every OpEx version of our storage as a service. >> So what does that mean? What does that concierge do? Just paying attention to (indistinct) >> Concierge service will do a quarterly, a quarterly review with you. So let's say theCUBE bought 10,000 other analyst firms in the industry. You're now the behemoth. And you at theCUBE are using IBM storage as a service. You call up your technical account manager to say, "Guess what? We just bought these companies. We're going to convert them all to storage as a service. A, we need a higher tier, you could upgrade the tier; B, we have a one-year contract, but you know what, we'd like to extend it to two; C, we think we need more capacity." You tell your technical account manager, they'll take care of all of that for you, as well as giving you best practices. For example, if you decide you want to do safeguarded copy, which you can do, because it's built into our Spectrum Virtualize software, which is part of our storage as a service, we can give you best practices on that; he or she would tell you about our integration with our security division's QRadar. So those are various best practices. So the technical account manager makes sure the software is always up to date, right? All the little things that you would have to do yourself if you own it, we take care of, because we legally own it, which is what allows you to buy it as a service. So it is a true OpEx model from a financial perspective. >> And the terms of the contracts are what? One, two and three years? >> One to five. >> Yeah. Okay. >> If you don't renew and you don't cancel, we'll automatically re-up you at the exact tier you're at, at the exact same price.
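As a quick sanity check on the availability figures, here is a small sketch converting "nines" into annual downtime, plus the worst-case IOPS floor for a 100-terabyte tier two deployment. The arithmetic suggests the "five minutes and 26 seconds" quoted in the conversation corresponds to five nines; six nines works out to roughly 31.5 seconds a year. The code is illustrative only.

```python
# Sketch: annual downtime implied by an availability of N nines, and
# the worst-case IOPS floor Eric quotes for tier two.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds_per_year(nines: int) -> float:
    """Annual downtime for availability 1 - 10^(-nines)."""
    return SECONDS_PER_YEAR * 10 ** (-nines)

for n in (4, 5, 6):
    s = downtime_seconds_per_year(n)
    print(f"{n} nines -> {s / 60:6.2f} min/year ({s:7.1f} s)")
# 4 nines ->  52.60 min/year
# 5 nines ->   5.26 min/year
# 6 nines ->   0.53 min/year (about 31.6 s)

# Worst-case IOPS floor for a 100 TB tier-two deployment:
print(f"100 TB x 2,250 IOPS/TB -> at least {2250 * 100:,} IOPS")  # 225,000
```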
Several of our competitors, by the way, if you do that, they actually charge you a premium until you sign a contract. We do not. So if you have a contract based on tier two, right? We go by SLA: tier one, tier two, tier three. So if I have a tier two contract at theCUBE, and you forgot to get the contract done at the end of two years, but you still want it, you can go for the next two quarters. I mean, we, or our business partner I should say, would ask, "Dave, don't you want to sign a contract? You said you like it." Obviously you would, but we will let you stay. You just say, now I want to keep it without a contract. And we don't charge you a premium. Our competitors, if you don't have a contract, they charge you a premium if you keep it installed without putting a contract in place. So little things like that clearly differentiate what we do. We don't charge a premium if you go above the base. One of the competitors, in fact, when you go into the variable space, okay? And by the way, we provide 50% extra capacity. We over-provision. The other competitors usually do 25%. We do 50%. No charge, it's just part of the service. So the other vendors, if you go into the variable space, they raise the price. So if it's $5, you know, for X capacity, which is your base, and then you go above that, they charge you $7 and 50 cents. We don't. It's $5 at the base and $5 at the variable. Now obviously your variable can be very big or very small, but whatever the variable is, we charge you. But we do not charge you a bigger price. A couple of competitors, when you go into the variable world, they charge you more. Guess what it gets you to do: raise your base capacity. (Eric laughs) >> Yeah. I mean, that's, that should, the math should be the opposite of that, in my view. If you make a commitment to a vendor, say, okay, I'm going to commit to X. You have a nice chart on this, actually in your, in your deck. If I'm going to commit to X, and then I'm going to add on, I would think the add on price per bit should be at the same or lower. It shouldn't be higher. Right? And I get, I get what you're saying there. They're forcing you to jack up the base, but then you're taking all the risk. That's not a shared risk model. I get... >> And that's why we made sure that we don't do that. In fact, Dave, you can, you know, the fact that we don't charge you a premium if you go beyond your contract period and say, "I still wanted to do it, but I haven't done the contract yet." The other guys charge you a premium if you go beyond your contract period. We don't do that either. So we try to be end-user friendly, customer friendly, and we've also factored in that our business partners can participate in this program. At least one of our competitors came out with a program and guess what? Partners could not participate. It was all direct. And that company happens to have about 80% of their business through the channel, and their partners were basically cut out of the model, which by the way, is what a lot of Cloud providers had done in the past as well. So it was not a channel friendly model. We're channel friendly, we're end user-friendly, it's all about ease of use. In fact, when you need more capacity, it takes about 10 minutes to get the new capacity up and going. That's it. >> How long does it take to set up? How long does it take to set up initially? And how long does it take to get new capacity?
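The $5 versus $7.50 example above is easy to put into code. Here is a minimal sketch comparing a flat variable rate with a premium variable rate; the dollar figures come from the conversation, and the metering choice (charging on capacity above the base) is an illustrative assumption.

```python
# Sketch: flat vs. premium pricing for variable capacity, using the
# $5 base / $7.50 premium figures quoted above. Metering on TB used
# above the base is an illustrative assumption.

BASE_TB = 100
FLAT_VARIABLE = 5.00      # same rate above the base (the model described)
PREMIUM_VARIABLE = 7.50   # competitor-style: higher rate above the base

def variable_charge(used_tb: float, rate: float) -> float:
    """Charge only for usage above the contracted base."""
    return max(0.0, used_tb - BASE_TB) * rate

used = 140  # TB actually consumed this period
print("flat   :", variable_charge(used, FLAT_VARIABLE))     # 200.0
print("premium:", variable_charge(used, PREMIUM_VARIABLE))  # 300.0
# A premium variable rate pushes customers to over-buy base capacity;
# a flat rate keeps the commitment decision risk-free.
```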
>> So, first of all, we deploy either in a Colo facility that you've contracted with, including Equinix; Equinix is part of our press release, or we install on your site. So the technical account manager is assigned, he would call up theCUBE and say, "When is it okay for us to come install the storage?" We install it. You don't install anything. You just say, here's your space. Go ahead and install. We do the installation. You then of course do the normal rationing of the capacity: this goes to Oracle, this goes to SAP, this goes to Mongo or Cassandra, right? You do that part, but we install it. We get it up and going. We get it turned on. We hook it up to your switching infrastructure. If you've got switching infrastructure, we do all of that. And then when you need more capacity, we use our Storage Insights Pro, which automatically monitors capacity, performance, and potential tech support problems. So we give you 50% extra, right? If you drop that to 25%, so you now don't have 50% extra anymore, you only have 25% extra, the technical account manager would call you and say, "Dave, do you know that we'd like to come install extra capacity at no charge to get you back up to that 50% margin?" So we always call, because it's on your site or in your Colo facility, right? We own the asset, but we set it up, and you know, it takes a week or two, whatever it takes to ship to whatever location. Now by the way, our storage as a service for 2021 will be in North America and Europe only; we are really expanding our storage as a service outside into Asia and into Latin America, et cetera, but not until 2022. So we'll start out with North America and Europe first. >> So I presume part of that is figuring out just the compensation models, right? And so how, how did you solve that? I mean, you can't, you know, you don't seem to be struggling with that. Like some do. I think there's some people dipping their toes in the water. Was that because, you know, IBM's got experience with, like, SAS pricing, or how were you thinking about that, and how did you deal with kind of the internal (indistinct) >> Sure. So, first of all, we've had for several years, our storage utility model. >> Right? >> Our storage utility model has been sort of a hybrid, part CapEx and part OpEx. So first of all, we were already halfway there to an OpEx model with our storage utility model, that's item number one. It also gave us the experience of the billing. So for example, we bill you for a full quarter. We don't send you a monthly bill. We send you a quarterly bill. And guess what, we always bill you in arrears. So for example, since theCUBE is going to be a customer this quarter, we will send you a bill for this quarter in October, for the October quarter we'll send you a bill in January. Okay. And if it goes up, it goes up. If it goes down, it goes down. And if you don't use any variable, there's no bill. Because what we do is, the base you pay for once a year, the variable you pay for on a quarterly basis. So if you, if you are within the base, we don't send you a bill at all, because there's no bill. You didn't go into the variable capacity area at all. >> I love that. >> When you have a variable, it can go up and down. >> Is that unique, or do some competitors try to charge you up front? Like if it's a one-year term. (Dave laughs) >> Everybody charges, everybody bills yearly on the base capacity. Pretty much everyone does that. >> Okay, so upfront you pay for the base? Okay. >> Right.
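The 50%/25% headroom rule Eric describes maps to a simple monitoring loop. Here is a minimal sketch under the stated thresholds; the function names and top-up policy details are illustrative assumptions, not Storage Insights Pro behavior.

```python
# Sketch of the provisioning-buffer rule described above: 50% headroom
# is installed over current use, and when growth erodes it to 25% the
# technical account manager schedules a no-charge top-up. Thresholds
# come from the interview; the logic itself is illustrative.

TARGET_HEADROOM = 0.50   # extra capacity installed beyond current use
REFILL_AT = 0.25         # headroom level that triggers a top-up

def headroom(installed_tb: float, used_tb: float) -> float:
    return (installed_tb - used_tb) / used_tb

def needs_topup(installed_tb: float, used_tb: float) -> bool:
    return headroom(installed_tb, used_tb) <= REFILL_AT

def topup_target(used_tb: float) -> float:
    """Capacity to install so the customer is back at 50% headroom."""
    return used_tb * (1 + TARGET_HEADROOM)

used, installed = 100.0, 150.0          # start: 50% headroom
for grown_usage in (110, 120, 125):     # usage grows over time
    if needs_topup(installed, grown_usage):
        installed = topup_target(grown_usage)
        print(f"usage {grown_usage} TB -> top up to {installed:.0f} TB installed")
# prints: usage 120 TB -> top up to 180 TB installed
```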
And the variable can be zero. If you really only use the base, then there is no variable. We only bill for what you use; it's a pay-for-what-you-use model. So if you don't use any of the variable, we never charge you for variable. Now, you know, because you guys have written about it, storage grows exponentially. So the odds of them ending up needing some of the variable is moderately high. The other thing we've done is we didn't just look at what we've done with our storage utility model, but we actually looked at Cloud providers. And in fact, not only IBM storage, but almost every one of our competitors does a comparison to Cloud pricing. And when you do apples to apples, Cloud vendors are more expensive than storage as a service, not just from us, but pretty much from everyone. So let's take an example. We're Six Nines by default. Okay. So as you know, most Cloud providers provide three or Four Nines as the default. They'll let you get five or Six Nines, but guess what? They charge you extra. So that's item number one. Second thing, performance: as you know, the performance of Cloud storage is usually very weak, but you can make it faster if you want to. They charge extra for that. We're sitting at 2,250 IOPS per terabyte. That's incredible performance if you've got 100 terabytes, okay. And that's the worst case for your applications and workloads, by the way, which differentiates us from our competitors, who usually quote the best case. We quote you the worst case, and our worst case, by the way, is almost always higher than their best cases in each of the tiers. So at their middle tier, our worst case is usually better than their best case. But the point is, if you get 4,000 IOPS per terabyte and you're on a tier two contract, it's a tier two contract. And in fact, let's say that theCUBE has a five-year deal. And we base this on our FlashSystem technology. And so let's say for tier two, for sake of argument, FlashSystem 7200. We come out two years after theCUBE has it installed with the FlashSystem 7400. And let's say the FlashSystem 7400 delivers not 2,250 IOPS per terabyte, but 5,000. If we choose to replace it, 'cause remember it's our physical property, we own it, if we choose to replace that 7200 with a 7400, and now you get 5,000 IOPS per terabyte, it's free. You signed a tier two contract for five years. So two years later, if we decide to put a different physical system there and it's faster, or has four more software features, we don't charge you for any of that. You signed an SLA for tier two.
So you've seen the numbers. >> For sure. Absolutely. >> On average it drops 15 to 25% every year. >> So, what's driving this then? If it's, it's not necessarily, is it the shift from, from CapEx to OPEX? Is it just a more convenient model than on a Cloud like model? How do you see that? >> So what's happened in IT overall is of course it started with people like salesforce.com. Well, over 10 years ago, and of course it's swept the software industry software as a service. So once that happened, then you now see infrastructure as a service, servers, switches, storage, and an IBM with our storage as a service, we're providing that storage capability. So that as a service model, getting off of the traditional licensing in the software world, which still is out there, but it's mostly now is mostly software as a service has now moved into the infrastructure space. From our perspective, we are giving our business partners and our customers, the choice. You still want to buy it. No problem. You want to lease it? No problem. You want a full OpEx model. No problem. So for us, we're able to offer any of the three options. The, as a service model that started in software has moved now into the systems world. So people want to change often that CapEx into OpEx, we can even see Global Fortune 500s where one division is doing something and a different division might do something else, or they might do it different by geography. In a certain geography, they buy our FlashSystem products and other geographies they lease them. And in other geographies it's, as a service. We are delivering the same feature, function, benefit from a performance availability software function. We just give them a different way to procure. Do you want CapEx you want leasing or OpEx you pick what you want, we'll deliver the right solution for you. >> So, you got the optionality. And that's great. You've thought that out, but, but the reason I'm asking Eric, is I'm trying to figure out this is not just for you for everybody. Is this a check-off item or is this going to be the prevailing way in which storage is consumed? So if you had, if you had a guess, let's go far out. So we're not making any near-term forecast, but end of the decade, is this going to be the dominant model or is it going to be, you know, one of the few. >> It will be one of a few, but it'll be a big few. It'll be the big, one of the biggest. So for sake of argument, there we'll still be CapEx, they'll still be OpEx they'll still be, or there will be OpEx and they're still be leasing, but I will bet you, you know, at the end of this decade, it'll be 40 to 50% will be on the OpEx model. And the other two will have the other 50%. I don't think it's going to move to everything 'cause remember, it's a little easier during the software world. In the system world, you've got to put the storage, the servers, or the networking on the prem, right? Otherwise you're not truly, you know, you got to make it a true OpEx model. There's legal restrictions. You have to make it OpEx, if not, then, you know, based on the a country's practice, depending on the country, you're in, they could say, "Well, no, you really bought that. It's not really a service model." So there's legal constraints that the software worldwise easier to get through and easier to get to bypass. Right? 
So, and remember, now everything is software as a service, but go back to when salesforce.com was started: everyone in the enterprise was doing ELAs, and all the small companies were buying some sort of contract, right, or buying by the (indistinct) basis. It took a while for that to change. Now, obviously the predominant model is software as a service, but I would argue, given when salesforce.com started, which was, you know, 2007 or so, it took a good 10 years for software as a service to become the dominant model. So I think, A, it won't take 10 full years, because the software world has blazed a trail now for the systems world. But I do think you'll see, right, we're sitting here now halfway through 2021, that you're going to have a huge percentage. Like I said, the dominant percentage will be OpEx, but the other two will still be there as well. >> Right. >> By the way, you know in software, almost no one's doing ELAs these days, right? A few people still do, but it's very rare, right? It's all software as a service. So we see that over time doing the same thing in the, in the infrastructure side, but we do think it will be slower. And we'll, we'll offer all three as, as long as customers want it. >> I think you're right. I think it's going to be mixed. Like, do I care more about my income statement or my balance sheet? And different companies or individual divisions are going to have different requirements. Eric, we got to leave it there. Thanks much for your time and taking us through this announcement. Always great to see you. >> Great. Thank you very much. We really appreciate our time with theCUBE. >> All right. Thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)
Samme Allen, Event Expert | CUBEconversation
>>Overnight 2020 forced us to get digital video, right? For the first 90 days, it was pretty awkward to say the least, but as people became more comfortable with home setups and lighting and just the weirdness of being locked down and shut in, the frequency, the quality, and I think the watchability of virtual conversations improved quite dramatically. Welcome to theCUBE. My name is Dave Vellante. And with me to talk about what we learned and can take away from producing video content during the isolation economy is event expert, conference facilitator and MC extraordinaire Samme Allen. Samme, come inside theCUBE. Welcome. >>Thank you so much for having me, really lovely to be here with you. >>Pleasure. So I gotta ask you, you know, am I right? Do we actually have more watchable video online now, or are we all sort of Zoom fatigued out? >>I think if people watch theCUBE, I think you've got some incredible content online. You guys are the pros. I think we are still in this change format right now. Uh, we've got people who are doing it well, who started really early, tried, failed, picked themselves back up, tried again, and are producing some really good pieces of content, looking outside of perhaps the norm to create some great visuals, some great conferences and events. I think on the whole, sadly, I think we still have a way to go, which is great for the likes of us in terms of helping those professionals become more professional and just trying to differentiate between what's just a Zoom meeting and actually what's an experience for communications for our audiences. >>I want to get into some of the best practice and maybe some of the do's and don'ts, but, but let's roll back a little bit. Tell us about yourself and how you got into this business.
>>I think one of the things when, you know, when we are in, and I'm sure many of our viewers today have said in the wonderful conference theaters and we'll be back in those rooms soon, uh, everything is done with experienced a V and technical and event producers and venue people. Whereas in the online world, I'm here, uh, in sunny London on my own, making sure that I have the right sound, the right connectivity, the rights, uh, visuals, all of these things are things that we just didn't have to do. And we have to do that for every single content contributors. So studying an online event design course back at the very beginning of lockdown really helped me understand the checklist that we need to have for our clients, the things that we need to assume. And most importantly, the things that can go wrong so that we can pick up on those as quickly as we can and try and create these seamless and engaging experiences for our audiences. So I would say to anyone, who's sort of looking into this and really don't know where to start. It's probably good to go and have a look at an online event design course. >>Thank you for that. So, so tell me what, what were some of the things as you look back on 2020, and you think about the work that you did with your clients and maybe even observing some, some of your non-clients, what were the, some of the, some of the mistakes that people made and we can get into some of the best practice. >>Well, as all good people who are being interviewed say, well, you're going to have to wait for my book to be published later on in the year with all the things that have gone wrong and all the ways we've rectified it. But I think one of the major things that we've we've had is obviously this world of distraction, we've all seen it with the cat lawyer. We've seen it with the kids coming in and we've humanized. I think the world of events, which I think is a really positive experience for us all, we are all humans and events are about bringing humans together, human connection. So I think there's a positive side to that, but equally by the same token, we we've seen people, maybe not really getting under the skin of, you know, what's the difference between a zoom meeting and an event experience in terms of what people have been wearing. >>Um, I've had an awkward conversation when we've taken a zoom background away from a speaker and you don't want to know what was hanging on the door. We also had a situation where we lost, um, we've, we've lost speakers and we've had to jump in due to connectivity issues that, you know, we've tested them, but then they've ended up broadcasting from somewhere else. So I think some of that seamless technology, and I would say to anyone, uh, to try and not suffer those challenges, I would say, test, test, rehearse, test, and rehearse again, and make sure you've got that team of people around you. I think a lot of people think that it's very easy to do this, Dave, as I know you and your team will know it is not a, you wouldn't just because I happen to like flying. You wouldn't want me to fly your aircraft. And I think there's the same analogy in terms of running your online event, um, and digital communication experiences. >>Oh, you think, I mean, I w I think we found it that running virtual events is, is harder because there's, first of all, there's so much unknown. You can't really call a late, late stage audible. I mean, things are locked in when you're doing a simulive. 
I presume you found the same thing and your clients have, have learned a lot in that regard. >>I think it's, um, a lot more work. I think there's a lot more work pre-event, pre-conference, pre-meeting that, um, people are still trying to get their, their minds around. When we host an event in person, we'll get there the day or two before, during set-up. We then have a very, very long two, three, four days, depending on how long that event is, where we've got our speakers in the same room; they've all flown in. We know that they've arrived. We know they've checked into the hotel. What we don't have are any of those variables in this world. So we need to make sure that we're working with all of those content providers. And if, like me, you work in the association world, where you can have up to 90 or a hundred different speakers over the course of a Congress, we've got to fit in the time to make sure that we've tech checked. We've worked with panels so that we can make sure that they're dynamic and we've got people looking as well as sounding good. So I think one of those things is that there is an exceptionally, uh, huge amount of pre-planning that people need to factor in. I think the second thing is people need to not underestimate how exhausting it is when you don't have the vibe of a live audience. Uh, and be considerate of your keynote speakers, especially if they're not professionals; they haven't been doing this, they're not comfortable with a green light. It is tiring, um, trying to visualize 1,000, 4,500, or 25 people; one person in the same room as you would be quite nice. And we haven't had that for the past 12 months. So I think we've learned a lot from that. And we've got some good tips and tricks now that we can, we can use, but, um, I'm pretty sure a lot of our content providers and speakers are looking forward to seeing people back in a real room. >>Yeah, fantastic. Well, that brings me to my next question. Let's make this the last one. Just as we begin to get a little bit more comfortable with, with virtual, now we're getting vaccinated. People are, there's huge pent-up demand for face to face. So now we have this new thing of hybrid, uh, which is going to be really interesting to see how that plays out. What are you seeing? What's your expectation for that sort of new abnormal? >>That's an incredibly good question. And we have to start with, the new C word is the H word, which is hybrid. I think we have a lot of people getting worried about what hybrid looks like, but I think if, if you think with a design thinking mind, when you're looking at event planning, the virtual or the in-person audience is just another stakeholder. So if you're spending that time to plan out your meeting or event, the way you should be, then you can factor those people in. I am excited about this world. I think it becomes so much more inclusive for organizations moving forward. And D&I is something that has often been forgotten in the world of conferences and events. And I think the hybrid world gives us all the opportunity to, to have that choice. I think people, especially event organizers, because it's their job, believe that everybody wants to be in a room, and not everybody does, and not everybody can. And now this is a really, really exciting opportunity to do things differently, to do things, to become more inclusive. And of course, to be more sustainable. >>Sam, you're really an inspiration. I mean, a lot of people out there have to reinvent themselves. You've, you've done it.
You retrained, you started a new type of business that drew on your existing passion, and it's really fantastic to have you on. Thanks for sharing your expertise, best of luck in the future. It's great having you. >>Thanks, Dave. >>All right. Thanks for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time.
George Lumpkin & Neil Mendelson, Oracle | CUBE Conversation, April 2021
(bright upbeat music) >> Hi well, this is Dave Vellante. We're digging deeper into the world of database. You know, there are a lot of ways to skin a cat, and different vendors take different approaches, and we're reaching out to the technologists to get their perspective on the major trends that they're seeing in the market, 'cause we want to understand the different ways in which you can solve problems. So look, if you have thoughts and the technical chops on this topic, I'd love to interview you. Just ping me at @DVellante on Twitter, a lot of ways to get ahold of me. Anyway, we recently spoke with Andrew Mendelsohn, who is Oracle's EVP and he's responsible for database server technologies. And we talked a lot about Oracle's ADW, Autonomous Data Warehouse. And we looked at the cloud database strategy that Oracle is taking and the company's plans and how they're different maybe from other solutions in the marketplace, but I wanted to dig deeper. And so today we have two members of Mendelsohn's team on The Cube, and we're going to probe a little bit. George Lumpkin is the Vice President of Autonomous Data Warehouse. And Neil Mendelson is the VP of the Modern Data Warehouse business for Oracle. They're both 20-year veterans of Oracle. When I reached out to Steve Savannah, who's a colleague of mine for many years, he's always telling me how great Oracle is relative to the competition. So I said, okay, come on The Cube and talk about this, give me your best people. And he said, whatever these two don't know about cloud data warehouse, it isn't worth knowing anyway. So with that said gentlemen, welcome to The Cube. Thanks so much for coming on. >> Thank you. >> Hey, glad to be here. >> So George, let's start with you. And maybe we could recap for some of the viewers who might not be familiar with the interview that I did with Andy. In your words, what exactly is an Autonomous Data Warehouse? Is this cloud native? Is it an Oracle buzzword? What is it? >> Well, I mean, Autonomous Data Warehouse is Oracle's cloud data warehouse. It's a service that's built to allow business users to get more value from their data. That's what the cloud data warehouse market is. Autonomous Data Warehouse is absolutely cloud native. This is a huge misconception that people might have when they first sort of hear about this service, because they think this is an Oracle database, right? Oracle makes databases. This is the same old database I knew from 10 years ago. And that's absolutely not true. We built a cloud native service for data warehousing, built it with cloud features. You know, if your understanding of the cloud data warehouse market is based upon how you thought things looked 10 years ago, like, Snowflake wouldn't have even existed, right? You can't base your understanding of Oracle upon that. We have a modern service that's highly elastic, provides cloud capabilities like online patching, and it's fully autonomous. It's really built for business users, so they don't need to worry about administering their database. >> So I want to come back and actually ask you some questions about that, but let me follow up and talk about some of the evolution of the ADW. And where did you start? I think it was 2018, maybe, where you came from, where you are today. Maybe you can take us through the technological progression and maybe the path you took to get here. >> And so 2018 was when we released the service and made it generally available, but of course, you know, we started much earlier than that.
And this was started within my product management team and other organizations. So we really sat down with a blank sheet of paper and we said, what should the data warehouse in the cloud look like? You know, let's put aside everything that Oracle does for its on-prem customers and think about how the cloud should be different. And the first thing that we said was, well, you know, if Oracle writes the database software, and Oracle builds its own hardware, and Oracle has created its own cloud, why do we need customers to manage a database? And that's where the idea of autonomous database came from. That Oracle is managing the entire ecosystem. And therefore we built a database that we believe is far and away the simplest-to-use data warehouse in the market. And that's been our focus since we started in 2018. And that continues to be our focus, looking at more ways that we can make Autonomous Data Warehouse simpler and easier for business users to get more value out of their data. >> Awesome, one more question. And actually Neil, you might want to chime in on this as well. So just from a technical perspective, you know, forget the marketing claims and all the BS. How do you compare ADW to the so-called born in the cloud data warehouses? You mentioned Snowflake; you know, Redshift, is Redshift born in the cloud? Well, it was ParAccel, but Amazon's done some good work around Redshift. I think BigQuery is maybe probably a better example, 'cause it was, you know, like Snowflake, started in the cloud. But how do you compare ADW to some of these other so-called born in the cloud data warehouses? >> I think part of this, you mentioned Redshift wasn't born in the cloud. It was, you know, a code base taken from a prior company that was an on-premise company. So they adapted it to the cloud, right? And you know, we have done, as George said, much of the same, which is, you know, our starting point was not, you know, another company's code base, but our starting point was our own code base. But as George said, it's less about the starting point and it's more about where you envision the end point, right? Which is that, you know, whatever your starting point is, I think we have a fundamentally different view of the endpoint. Amazon talks about how they're literally built for, you know, a cloud built for developers, right? You know, builders, right? And, you know, Oracle wasn't first in the infrastructure business; we entered through the applications business. And all of a sudden, you know, we began taking on 100s of 1000s and 100s of even more customers that were SAS customers. Underneath was the database and all the infrastructure. One of the things that we took away from that was that we couldn't possibly hire enough DBAs to manage all the infrastructure below our applications customers. So one of the things that influenced this is that, you know, customers expect SAS applications to just take care of themselves, right? So we had to essentially modify the infrastructure to allow it to do so as well, right? And we're bringing that capability to those people who, you know, may or may not have an application, but their interest is, you know, more of this self-service agility type of aspect. >> So it seems to me, and George was sort of alluding to this before, I mean, when you mentioned Snowflake a couple of times, and then Neil, something you just said, I'm going to pick up on, is you've been around for a long time.
And you know, when I talk to the Snowflake people, they know Oracle; a lot of them came from Oracle. They understand, I think, how you can't just build Oracle overnight and build in the capabilities that Oracle has, and the recovery. And you talk to customers, and you are the gold standard for, you know, especially mission-critical databases, so I get that. But now you just sort of hit on it: it takes a lot of people and skill to run the database. So that's the problem you're saying you were attacking. Am I getting that right? >> Right, right. So the people you talked about who originally built Snowflake came from Oracle, but they came from Oracle more than a decade ago, so their context is over a decade old. In the meantime, we've been busy building autonomous capabilities and many others, right? Their view of Oracle is the view from more than 10 years back, and meanwhile they're still adding capability. A really good illustration of this: Oracle, as you said, is the most capable system out there, and has been for many years. We've been focusing on how we simplify that, and how we use machine learning embedded within the system itself, because core to the concept of autonomous is that inside is a machine learning system that's continually improving. That's the whole notion. In Snowflake's case, they're still adding functionality. Last year they added masking, which is functionality they didn't have. But when they added the capability, they added it without the ability for a business user to actually take advantage of it. There's no capability for a business user to actually find the information that needs to be masked, and after the information is found, you require a technical person to actually implement the mask. In Oracle's case, we've had masking and those capabilities for a long time; our focus was to provide a simple tool that a business user without technical or security experience can use: find the data that needs to be masked, the PII data, then hit a button and have it masked for you. So without this notion of a strategy to move toward a system that heals itself and manages itself, they're just going to continue: as they add more capability, they will in turn add more complexity. We're trying to take complexity out while others are adding it in. It's an ironic twist. >> It is an ironic twist. It is interesting to look at. And I don't want to make this about Snowflake. I mean, hey, I like what they're doing, I like them, I know the management, they're growing like crazy, and the customers tell me, hey, this is really simple. And it's simple by design. Though to your point, over time it's going to get more and more complex. I was talking to Andy, I think it was Andy, and he was saying, you know, they've got the different sizes; they call them t-shirt sizes. And I was like, okay, I've got a small, I've got a medium and a large, maybe that's okay. But you guys would say, we give more granular scaling, I guess is the point there, right? I mean, George, I don't know if you can comment on that. It's just a different strategy. You've got a company that was founded, well, I guess, 2015, versus one that was founded in 1977.
So you would think the latter has way more function than the former. But George, anything you'd add to this conversation? >> Yeah, I'm always amazed that there are these database systems that are perceived as cloud native, and they do things like sell you database sizes by t-shirt size, as you described. I mean, if you look at Snowflake, it's small, medium, large, extra large, two extra large, but they're all powers of two. You're getting a database size of two, four, eight, sixteen, 32, et cetera. Or if you look at AWS Redshift, you're buying your database by the node. You say, how many nodes do you want? And in both of those cases, this is supposedly cloud native, yet it's saying: we have some hardware underneath our database, and we need you, Mr. Customer, to tell us how many servers you want. That's not the way the cloud should work, right? And I think this is one of the things we did with Autonomous Data Warehouse. We said, no, that's not how it should work. We still run our database on hardware; we still have nodes and servers. But we should just ask the customer: how many CPUs would you like for your data warehouse? You want 16? Sounds good. You want 18? Yeah, we can give you 18. We're not selling these to you in bundles of eight or bundles of six or powers of two. We'll sell you what you need. That's what cloud elasticity should be. Not this idea that, oh, we are a database that should be managed by IT, and IT already knows about servers and nodes, therefore it's okay if we tell people your cloud data warehouse runs on nodes. Within Oracle, as Neil said, we believe the data warehouse should be usable by the people who want to actually analyze their data; it should be usable by the business users. >> Well, and the other piece of cloud native that has become popular is this idea of separating compute from storage, and being able to scale those two independently of each other, which is pretty important, right? Because you don't want to have to pay for a chunk of compute if you don't need the storage, and vice versa. Maybe you could talk about how you solve that problem, to the extent that you solve that problem. >> Absolutely, we do separate compute and storage with Autonomous Data Warehouse. When you come in and say, I need 10 CPUs for my data warehouse and I need two terabytes of storage, those are two independent decisions that you make; they're not tied together in any way. And you are exactly right, Dave, this is how things should work in the cloud. You should pay for what you need, pay for what you use, not be constrained to a big set of storage for a given amount of CPU, or vice versa. >> Okay, go ahead, Neil, please. >> Oh, just to add on to that: the other aspect that comes into play is that your starting point is X, whatever that happens to be, but over time that changes. We all know that workloads vary throughout the day, throughout the month, throughout the year, with various events that occur: maybe the close of business at the end of the quarter, maybe the holiday season for retailers, and so forth. So it's not only the starting point, but how you actually manage the growth, scaling up and scaling down. In our case, as George said, we abstracted that completely for the customer: you basically check a box for auto scaling, and if the system requires more resources, we apply more resources.
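As a quick aside, a bit of arithmetic makes the sizing gap George describes concrete. In the sketch below, the tier list and the per-CPU hourly rate are invented purely for illustration; they are not any vendor's actual prices.

```python
# Back-of-the-envelope: power-of-two tiers vs. per-CPU provisioning.
# The tiers and the hourly rate are made up for illustration only.
RATE_PER_CPU_HOUR = 1.0                  # hypothetical $/CPU-hour
POWER_OF_TWO_TIERS = [2, 4, 8, 16, 32, 64]

def tiered_cost(cpus_needed: int) -> float:
    """Pay for the smallest power-of-two tier that fits the workload."""
    tier = next(size for size in POWER_OF_TWO_TIERS if size >= cpus_needed)
    return tier * RATE_PER_CPU_HOUR

def granular_cost(cpus_needed: int) -> float:
    """Pay for exactly the CPUs requested."""
    return cpus_needed * RATE_PER_CPU_HOUR

for needed in (10, 18, 33):
    print(f"{needed} CPUs needed: tiered pays {tiered_cost(needed):.0f}, "
          f"granular pays {granular_cost(needed):.0f}")
# 10 -> 16 vs 10; 18 -> 32 vs 18; 33 -> 64 vs 33
```

The gap grows with the workload: a customer who needs 33 CPUs on a power-of-two menu pays for 64, nearly double what granular provisioning would charge under the same rate.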
And we do so instantaneously, without any downtime whatsoever. Because, again, these systems have now become business critical, and if it's business critical, you can't just shut down to expand. Imagine it's the holiday season and your business is ramping up, and all of a sudden you have to scale, and your system either shuts down and reboots itself, or it slows down to the point that it's a crawl and all your customers get frustrated. We don't do that. You check a box, auto scale, and we take care of it for you, smoothing out those lumps, without any technical assistance. And again, if you look at Redshift, if you look at all these various systems, they require technical assistance to figure out not only your initial sizing but how you scale out over time. >> Interesting, okay. So with all that said, a lot of companies are using Azure, AWS, Google for infrastructure. Why would those customers not just use their databases? Why would they switch to Oracle ADW? >> Well, I think Neil will probably add something, but I want to start by saying a huge number of our existing Autonomous Data Warehouse customers today are customers of AWS and Azure. They are pulling data from AWS and Azure and bringing it into an Oracle Autonomous Data Warehouse; I'm focused on product management, and we've built features for exactly that. So it's perfectly viable, and it's almost commonplace for the very largest enterprises to be doing that. But then coming to the question of why they would want to do it... I don't know, Neil, do you want to take that? >> Yeah, yeah. So one of the things we've really seen emerge here is that a data warehouse doesn't generate transactions by itself, right? The data has to come from somewhere, and you ask yourself, where does the data come from? In a lot of cases, that data is coming from applications, increasingly SaaS applications, that the company has deployed: HR applications, CRM applications, ERP applications, and many vertical applications. In Oracle's case, what we've done is we've said, okay, we have the application, this transactional thing, and we have the infrastructure underneath the Autonomous Data Warehouse, so why don't we just make it really, really easy? If you're an Oracle applications customer that's already running on the Oracle cloud, we will essentially provide you the ability to create a data warehouse from that information, with a click, largely either as a product and service or as a quick-start kit. You don't start from scratch; you start from where you are. And in many cases, where you are has data. Very much as George mentioned before: telcos, banks, insurance companies, governments. All of the data they want to analyze, guess where a lot of it is coming from? It's coming from Oracle applications. So it makes sense to have both the data that's generated and the data that's being analyzed close to the same place. Because at the end of the day, the payoff pitch for any form of analysis is not coming up with an insight, oh, I realized X, Y, Z; it's putting the insight directly into production. And when you have this stuff spread all over God's green earth, trying to go from insight into action can take months, if not years.
The reason a lot of customers are now turning to us is that they need to be much more agile, and they need to be able to turn that insight into action immediately, without it being a science project. >> Okay, thank you for that. So let's tick them off: what are the top things that customers can get from Oracle Autonomous Data Warehouse that they couldn't get from, say, a Snowflake or Redshift or BigQuery or SQL Server or what have you? And I appreciate you guys' willingness to talk about the competition. Let's tick them off. What are the most important things we should know about that they can't get elsewhere? >> So first, we already talked about a couple of what we think are really the major themes of Autonomous Data Warehouse. The service is autonomous: you don't need to worry about managing it, and anyone can manage the data warehouse. The service is elastic: you buy and pay for what you use. Those are what we think of as the general characteristics of Autonomous Data Warehouse. But when you come to your question of, hey, what do we give that other vendors don't provide? One angle where Autonomous Data Warehouse does a really good job, and Neil was just discussing this, is that it focuses on the business problems. We have years and years of experience with not just database security, but data security. Every cloud vendor can say, oh, we encrypt all your data, we have these compliance certifications, all of these things. What they're saying is: we are securing your database, we are securing your database infrastructure. Oracle, of course, has to do all that as well. But where we go further is we say, hey, we know what business users want. They want to secure their data. What kind of data am I storing? Do I have PII data? Could you detect whether there's PII data and tell me about it, in case some user loaded something I wasn't aware of? What kind of privileges did I give my users? Can you make sure those privileges are right? And can you tell me if users were given privileges they're not using, so maybe I should take them away? These are the problems Oracle has tackled in security over the last 20 years. It's really more about the business problem. And there are some others... oh, go ahead. >> Oh, I'm sorry, I've got so many questions for you guys. We'll get back to that, 'cause it sounds like there's a long list. (laughs) >> We have nowhere to go. (laughs) >> I want to pick up with George on something you said about elasticity. Is it true pay-by-the-drink? Do you have consumption pricing? I mean, can I dial it up and dial it down whenever I want? How does that work? >> Yes. Not to get into too many technical details, but you say, I want 14 CPUs, and that's what your database runs at. You can change that default number anytime you want, online. You can say, okay, I'm coming up on my quarter end, I'm going to raise my database to 20 CPUs, and we just do it on the fly. We just adjust the size--- >> What about the other way? What about coming down? Can I go down to one? >> You can go down to one--- >> And you're not going to charge me for 14 if I go down to one? >> No, if you set it down to one, you get charged for one. >> Okay, that's good, that's good. >> And in the background we also allow levels of auto scaling. You can say, hey, I want to be charged for 14, and Oracle, can you take care of all that scaling for me?
So if a bunch of people jump on at 5:00 PM to run some queries, 'cause the executive said, hey, I need a report by tomorrow morning, we'll take care of that for you. We'll let you go beyond 14 and only charge you for exactly what you use for those extra CPUs beyond 14. >> Okay, thank you. Go ahead, Neil. >> And maybe to add: Andy talked about this when he was on the show with you last week, this concept of a converged database. Let me talk about it the way we see it from a business point of view. Business users are looking to ask a variety of questions, and those questions need to be able to relate to the customer themselves, to the relationships the customer might have with others (today we talk about the social network and who the influencers are within it), and to where they actually conduct business, which in every case is, increasingly, on a mobile device. So you want to be able to ask questions like: not only who should I focus on, but who are the key influencers within this community that could influence others? And does that happen in a particular place and time? Meaning, say, pre-COVID, it might happen at a coffee shop or somewhere else. We can answer all of those questions and more inside the autonomous system, without having to replicate the data out to one system that does graph, another system that does spatial, and a third system that does something else. A business user says, wait a minute, come on, you're trying to tell me I need a separate system, and to replicate the data, just to be able to understand location? With others, the answer in many cases is yes, you have to have separate systems, to which a business person says, well, that's absurd. Can't I just do this all in one system? You can with Oracle. >> So look, I'm not trying to be the snarky journalist or analyst here, but I want to keep pushing on this issue. So here we are, it's 2021, it's April, we're a third of the way through the year, and so far nobody has come out and said, okay, we're going to deliver an Autonomous Data Warehouse just like Oracle's. So I ask myself, why is Oracle doing this? You guys answered: to reduce the labor cost. But then I ask myself, is this how they're solving the problem of keeping relevant a database that spans five decades? And you guys said, no, no, this is cloud native, born in the cloud, started essentially with a new mindset. But is this a trend that others are going to follow? If so, why haven't we seen this idea of self-driving databases elsewhere? Why is it right now unique to Oracle? What's really going on here? >> So I think there's a really interesting thing happening that's not visible outside of Oracle, but is very visible for those of us who work inside the development organization. If you look at Oracle, I think it's safe to presume Oracle has the largest database development organization on the planet; it's been the largest, most used database for the past two decades. And what's happened is we've pivoted to building a cloud platform. We're not just building a database; we're taking all of these resources we have, with all this expertise in building database software,
and we're saying: we now have to build the platform to run and manage the database software in the cloud. And it's a little bit like, to make people relate to it a little better, a really good quote from Elon Musk a couple of years ago, talking about Tesla. Everyone looks at the car, right? Tesla, the car, is really great. But the hard part is building the factory, and that analogy holds for Oracle. What we're building is the cloud factory. What we've transitioned to is that our database development organization is now building as robust a cloud as possible, so that when we increase the number of databases by 10x, we don't add 10x more cloud ops people to manage it. We are ramping up developers building features to automate the management of our cloud infrastructure, and with that automation we get better availability, fewer errors, more security, and we pass those benefits on to our cloud data warehouse customers. And I think this is really important to realize: we build database software; we build an engineered system built for databases, called Exadata; and we build a cloud platform. These are really equal tiers in what we are building and developing today, in 2021, in the Oracle database development organization. >> Well, you mentioned Exadata. I want to shift gears here a little bit and talk about hybrid cloud. On-premises clouds are finally gaining some traction, and I've got to give props: Oracle's Cloud at Customer was really early to that game. I think it was the first, in my view anyway, true same-same vision; it took you guys a little while to get there, but it was the right vision. And the thing I always say about Oracle that people don't understand is that Oracle invests in R&D; your chairman is also the CTO. You guys are serious about technical investment, and that's where innovation comes from. We heard some positive comments on this during your recent earnings call. So what's your take on delivering Autonomous Data Warehouse on-prem, and how do you compare with, say, Snowflake and AWS in that area? Snowflake's Frank Slootman, I've had him on record saying, we're not going to do that halfway house; forget it, we are always going to be in the cloud. AWS, we'll see; to date, I don't think you can get Redshift, for instance, on Outposts, but maybe that'll come. How do you see that emerging? What's your difference there? Maybe, Neil, you could talk about that. >> Yeah, so customers in a lot of regulated industries still have concerns about the public cloud. And when you hear statements like, we're never going to do on-prem... well, Cloud at Customer is not a classic on-prem solution. What it is, is a piece of our cloud delivered in your data center. It's still the cloud software: Oracle manages it, the system itself manages itself, and we take care of that responsibility so you don't have to. The difference is that we can make that available in the public cloud as well as in a private cloud, and there are so many use cases you can imagine, from a regulatory point of view or just from a comfort point of view, where customers are choosing; they want the ability to decide for themselves where to place this stuff, as compared to having only one option.
And you look at a lot of what's happening in the emerging world, where there are a lot of places that may not have really high-speed internet connections that make a public cloud feasible. In that case, whether you're talking about an oil rig or something else, we can put that capability where it needs to be, close to the operation you're talking about, irrespective of the deployment option. >> Well, let me just follow up on that, because it's interesting. When Frank Slootman said that to me... I often say about AWS, never say never, because they'll surprise you, right? I've learned that with Andy Jassy. But one of the things that seems difficult for on-prem would be to separate compute from storage, because you have to actually physically move in resources. I think about Vertica Eon Mode; it's not quite the same-same. So in that regard, maybe you're not the same-same either, and maybe that dogma makes sense for some companies. For Oracle, obviously, you've got a huge on-prem estate. Thoughts on that? >> So typically what we'll do is provide additional hardware beyond what the customer might expect, and that allows them to use the capabilities of expansion. We also have the ability to let the customer expand from their Cloud at Customer into the public cloud as well, and we have a lot of those situations. So we can provide a level of elasticity even on-premises, by over-provisioning the systems while not charging the customer until they use it, based only on what they consume, combined with the ability for us to augment their usage in the public cloud as well. Where others, again, are constrained, because they only have a single option. >> Right, and you've got the capital resources to do that as well, which is not to be overlooked. Okay, I've blown our time here, but you guys are so awesome. (laughs) I appreciate the candor. So last question, and George, if you want to throw in a couple of those other tick boxes, the differentiators, please feel free. For both of you: if you can leave customers with the one key point, or the top key points, on how Oracle Autonomous Data Warehouse can really help them improve their business in the near term, what would they be? Maybe, George, you could start, and then, Neil, you bring us home. >> Yeah, I think, as I said before, our starting point with Autonomous Data Warehouse is: how can we build a better customer experience in the cloud? And this continues throughout 2021. The big theme here is that business users should be able to get value directly from their data warehouses. We've talked a few times about how a line-of-business user should be able to manage their own data, should be able to load their own data warehouse, should be able to start to work with their own data, should be able to build and run machine learning models against that data, with all of that built in and delivered in Autonomous Data Warehouse. And we see our customer organizations, large and small, with the light bulbs starting to go on about how easy the service is to use and how complete it is for helping business users get value from their data.
And just adding on to what George said: the development organization has done a tremendous job of really simplifying the operation, and we've tried to do the same on the business side. When a customer has an on-prem situation and they're looking at moving to the cloud, whether lift-and-shift or modernize, they're looking at cost, they're looking at risk, and they're looking at time. So one of the things we looked at is how we mitigate that: how do we mitigate the cost, the risk, and the time? Well, this week we announced our new Cloud Lift program. With Cloud Lift, Oracle will provide its cloud engineering resources around the world to take the cost, the risk, and the time out of the equation. Oracle will work directly with the customer, or the customer's partner of choice, maybe an Accenture or a Deloitte, and we will move them, at little or no cost; in most cases there's no cost whatsoever. We mitigate the risk because we're taking the risk on, and we've built a lot of automated tools to make that go very quickly and securely. And then, finally, we do it in a very, very short amount of time compared to what you would otherwise need, because there is no Redshift on-premises, there is no Snowflake on-premises; you have to convert from what you already have to that. So beyond the technological barriers George talked about, we're also trying to smooth the operation, so that a business can make the decision knowing that not only do they not need the technical people to operate it, they won't need an entire consulting contract worth millions of dollars in order to actually make the move to the cloud. >> Well, guys, I really appreciate you coming on the program, and again, your candor in speaking openly about your approach and the competitors. It's been great having you; really, thank you for your time. >> Appreciate it. >> And thank you for watching, everybody. Look, if you want to come back and go toe to toe with these guys, say the word; you're always welcome to come on theCUBE. One thing's for sure: Oracle is serious when it comes to database. Thank you for watching. This is Dave Vellante. We'll see you next time. (bright music)
SUMMARY :
Dave Vellante interviews George Lumpkin, VP of Autonomous Data Warehouse, and Neil Mendelson, VP of Modern Data Warehouse, at Oracle. They discuss what makes Autonomous Data Warehouse cloud native and autonomous, its granular elasticity and separation of compute and storage, business-user-focused security such as PII discovery and masking, the converged database approach versus replicating data across purpose-built systems, comparisons with Snowflake, Redshift, and BigQuery, on-premises delivery through Cloud at Customer, and the new Cloud Lift migration program.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Andy | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Andrew Mendelsohn | PERSON | 0.99+ |
Neil | PERSON | 0.99+ |
Neil Mendelson | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
George Lumpkin | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Deloitte | ORGANIZATION | 0.99+ |
Steve Savannah | PERSON | 0.99+ |
1977 | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2015 | DATE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
2018 | DATE | 0.99+ |
April | DATE | 0.99+ |
100s | QUANTITY | 0.99+ |
5:00 PM | DATE | 0.99+ |
April 2021 | DATE | 0.99+ |
tomorrow morning | DATE | 0.99+ |
Tesla | ORGANIZATION | 0.99+ |
10 CPU | QUANTITY | 0.99+ |
Last year | DATE | 0.99+ |
Oracle Autonomous Data Warehouse | ORGANIZATION | 0.99+ |
Rahul Pathak, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. >> Welcome back to theCUBE's ongoing coverage of AWS re:Invent. Like most events these days, re:Invent has gone virtual, and theCUBE has gone virtual along with it, continuing our digital coverage of the show. With me is Rahul Pathak, who is the Vice President of Analytics at AWS. Rahul, it's great to see you again. Welcome, and thanks for joining the program. >> Great to see you too, Dave, and always a pleasure. Thanks for having me on. >> You're very welcome. Before we get into your leadership discussion, I want to talk about some of the things that AWS has announced in the early parts of re:Invent. I want to start with Glue Elastic Views, a very notable announcement allowing people to essentially share data across different data stores. Maybe tell us a little bit more about Glue Elastic Views: where the name came from, and what the implications are. >> Sure. We're really excited about Glue Elastic Views, and as you mentioned, the idea is to make it easy for customers to combine and use data from a variety of different sources and pull it together into one or many targets. The reason for it is that we're really seeing customers adopt what we're calling a Lake House architecture, which is, at its core, a data lake for making sense of data and integrating it across different silos, typically integrated with a data warehouse, and not just that, but also a range of other purpose-built stores, like Aurora for relational workloads or DynamoDB for non-relational ones. And while customers typically get a lot of benefit from using purpose-built stores, because you get the best possible functionality, performance, and scale for a given use case, you often want to combine data across them to get a holistic view of what's happening in your business or with your customers. Before Glue Elastic Views, customers would have to either use ETL or data integration software, or write custom code that could be complex to manage, error-prone, and tough to change. With Elastic Views, you can now use SQL to define a view across multiple data sources and pick one or many targets, and then the system will actually monitor the sources for changes and propagate them into the targets in near real time. It manages the end-to-end pipeline and can notify operators if anything changes. And so the components of the name are pretty straightforward: Glue is our serverless ETL and data integration service, and Glue Elastic Views is about data integration. Views, because you can define these virtual tables using SQL, and elastic, because it's serverless and will scale up and down to deal with the propagation of changes. So we're really excited about it, and customers are as well. >> Okay, great. So my understanding is I'm going to be able to take what's called, in the parlance, a materialized view, which in my layperson's terms means I'm going to run a query on the database and take that subset, and then I'm going to be able to copy that and move it to another data store. And then you're going to automatically keep track of the changes and keep everything up to date. Is that right? >> Yes, that's exactly right. So you can imagine you had a product catalog, for example, that's being updated in DynamoDB, and you can create a view that will move that to Amazon Elasticsearch Service.
You could search through a current version of your catalog, and we will monitor your DynamoDB tables for any changes and make sure those are all propagated in near real time. All of that is taken care of for our customers as soon as they've defined the view, and the data is kept in sync as long as the view is in effect. >> I can see this being really valuable for a person who's building what I like to think of as data services or data products that are going to help me monetize my business. Maybe it's as simple as a dashboard, but maybe it's actually a product, you know, some content that I want to develop. And I've got transaction systems, I've got unstructured data, maybe in a NoSQL database, and I want to combine those, build new products, and do that quickly. So take me through what I would have to do. You sort of alluded to it with, you know, a lot of ETL, but take me through in a little more detail how I would do that before this innovation. And maybe you could give us a sense of what the possibilities are with Glue Elastic Views. >> Sure. Before we announced Elastic Views, a customer would typically have to think about using ETL software, so they'd have to write an ETL pipeline that would extract data periodically from a range of sources. They'd then have to write transformation code that would do things like match up types and make sure there were no invalid values, and then they would combine it and periodically write it into a target. Once you've got that pipeline set up, you've got to monitor it. If you see an unusual spike in data volume, you might have to add more resources to the pipeline to make it complete on time. And then, if anything changed in either the source or the destination that prevented that data from flowing the way you'd expect, you'd have to manually figure that out, and have data quality checks and all of that in place to make sure everything kept working. With Elastic Views, it just gets much simpler. Instead of having to write custom transformation code, you write a view using SQL, and SQL is widely popular with data analysts and folks that work with data, as you well know. So you define that view in SQL, the view can look across multiple sources, and then you pick your destination. Glue Elastic Views essentially monitors both the source for changes and the source and the destination for any issues: for example, did the schema change, did the shape of the data change, was something briefly unavailable. It can monitor all of that and handle any errors it can recover from automatically. Or if it can't, say someone dropped an important table in the source that was part of your view, you can actually get alerted and notified to take some action, to prevent bad data from getting through your system or to prevent your pipeline from breaking without your knowledge. And then the final piece is the elasticity of it. It will automatically deal with adding more resources if, for example, you had a spiky day; maybe you're building a financial services application and you needed to add more resources to process those changes into your targets more quickly. The system will handle that for you. And then, if you're monetizing data services on the back end, you've got a range of options for folks subscribing to those targets.
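To make the "before" picture concrete, here is a minimal sketch of the kind of hand-rolled change-propagation code being described: a Lambda function wired to a DynamoDB Stream that pushes product-catalog changes into an Elasticsearch index. The endpoint, index, and key names are hypothetical, and real code would also need authentication, retries, batching, and schema checks; that undifferentiated work is exactly what the managed view is meant to absorb.

```python
# A hand-rolled DynamoDB -> Elasticsearch propagation sketch (the "before" world).
# The endpoint, index, and key names are hypothetical; auth and retries are omitted.
import json
import urllib3

http = urllib3.PoolManager()
ES_ENDPOINT = "https://search-catalog-abc123.us-east-1.es.amazonaws.com"  # hypothetical

def handler(event, context):
    """Lambda handler triggered by a DynamoDB Stream on a product-catalog table."""
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            image = record["dynamodb"]["NewImage"]
            doc_id = image["sku"]["S"]  # assumes a string partition key named "sku"
            # Naive flattening of DynamoDB's typed attribute values.
            doc = {k: list(v.values())[0] for k, v in image.items()}
            http.request(
                "PUT",
                f"{ES_ENDPOINT}/catalog/_doc/{doc_id}",
                body=json.dumps(doc),
                headers={"Content-Type": "application/json"},
            )
        elif record["eventName"] == "REMOVE":
            doc_id = record["dynamodb"]["Keys"]["sku"]["S"]
            http.request("DELETE", f"{ES_ENDPOINT}/catalog/_doc/{doc_id}")
```

Everything around this handler (monitoring for schema drift, alerting, scaling for spiky change volumes) would still be left for the operator to build, which is the gap the managed service closes.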
So we've got capabilities like our Amazon Data Exchange, where people can exchange and monetize data sets. It allows this end-to-end flow in a much more straightforward way than was possible before. >> Awesome. So a lot of automation, especially if something goes wrong. Something goes wrong, you can automatically recover, and if for whatever reason you can't, what happens? Do you let the operator know: hey, there's an issue, you've got to go fix it? How does that work? >> Yes, exactly right. So if we can recover, say, for example, when for a short period of time you can't read the target database, the system will keep trying until it can get through. But say someone dropped a column from your source that was a key part of your ultimate view and destination. You just can't proceed at that point, so the pipeline stops, and then we notify, using an SNS or SMS alert, so that programmatic action can be taken. This effectively provides a really great way to enforce the integrity of data that's moving between the sources and the targets. >> All right, make it kindergarten-proof. So let's talk about another innovation. You guys announced QuickSight Q, kind of speaking to the machine in my natural language. Give us some more detail there. What is QuickSight Q, and how do I interact with it? What kind of questions can I ask it? >> So QuickSight Q is essentially a deep-learning-based semantic model of your data that allows you to ask natural language questions in your dashboard. You get a search bar in your QuickSight dashboard; QuickSight is our serverless BI service that makes it really easy to provide rich dashboards to whoever needs them in the organization. What Q does is automatically develop relationships between the entities in your data, and it's able to actually reason about the questions you ask. Unlike earlier natural language systems, where you have to pre-define your models and pre-define all the calculations you might ask the system to do on your behalf, Q can actually figure it out. You can say, show me the top five categories for sales in California, and it'll look in your data and figure out what that is. It will present you with how it parsed the question and, inline, in seconds, pop up a dashboard of what you asked, and it will automatically try to pick a chart or visualization for that data that makes sense. You can then start to refine it further and say, how does this compare to what happened in New York? And it'll figure out that you're trying to overlay those two data sets, and it'll add them. Unlike other systems, it doesn't need to have all of those things pre-defined; it's able to reason about it because it's building a model of what your data means on the fly, and we've pre-trained it across a variety of different domains, so you can ask a question about sales or HR or any of that. Another great part of Q is that when it presents what it's parsed, you're able to correct it if needed and provide feedback to the system. So, for example, if it got something slightly off, you can select from a drop-down, and it will remember your selection for the next time, and it will get better as you use it. >> I saw a demo in Swami's keynote on December 8.
That was basically: you were able to ask QuickSight Q the same question but in different ways, you know, like "compare California and New York," or "give me the top five," and then the same exact data comes up. So is that how I can check and see if the answer I'm getting back is correct: ask different questions? I don't have to know the schema, is what you're saying; I can triangulate from different angles and then look and see if that's correct. Is that how you verify, or are there other ways? >> So that's one way to verify. You could definitely ask the same question a couple of different ways and ensure you're seeing the same results. Another option would be to click and drill and filter down into that data through the dashboard. And then the other step would be at data ingestion time: typically, data pipelines will have some quality controls. But when you're interacting with Q, I think the ability to ask the question multiple ways and make sure you're getting the same result is a perfectly reasonable way to validate. >> You know what I like about that answer? And I wonder if I could get your opinion on this, because you've been in this business for a while and you work with a lot of customers. If you think about our operational systems, things like sales or ERP systems, we've contextualized them. In other words, the business lines have injected context into the system. I mean, they kind of own it, if you will. They own the data, and I put that in quotes, but they do; they feel like they're responsible for it. There's not this constant argument, because it's their data. It seems to me that if you look back over the last 10 years, a lot of the data architecture has been sort of genericized. In other words, the experts, whether it's the data engineer or the quality engineer, don't really have the business context. But the example you just gave, drilling down to verify that the answer is correct... it seems to me, just listening again to Swami's keynote the other day, that you're really trying to put data in the hands of business users, who have the context and the domain knowledge. That seems to me to be a change in mindset that we're going to see evolve over the next decade. I wonder if you could give me your thoughts on that change in data architecture, the data mindset. >> David, I think you're absolutely right. I mean, we see this across all the customers we speak with: there's an increasing desire to get data broadly distributed into the hands of the organization, in a well-governed and controlled way. Customers want to give data to the folks that know what it means and know how they can take action on it, to do something for the business, whether that's finding a new opportunity or looking for efficiencies. And with the unpredictability we've all gone through in 2020, customers are realizing that they need to get a lot more agile, and they need a lot more data about their business and their customers, because you've got to find ways to adapt quickly. And that's not going to change anytime in the future. >> And I've said many times on theCUBE that our industry,
the technology industry, used to be all about the products, and in the last decade it was really platforms, whether SaaS platforms or AWS cloud platforms. And it seems like innovation in the coming years, in many respects, is going to come from the ecosystem and the ability to share data; we've had some examples today. But you hit on one of the key challenges, of course: security and governance. Can you automate that, if you will, and protect the users from doing things they shouldn't, whether it's data access or corporate edicts for governance and compliance? How are you handling that challenge? >> That's a great question, and it's something I really emphasized in my leadership session. The notion of what customers are doing, and what we're seeing, is the Lake House architecture concept: you've got a data lake and purpose-built stores, and customers are looking for easy data movement across those, so we have things like Glue Elastic Views and some of the other Glue features we announced. But they're also looking for unified governance, and that's why we built AWS Lake Formation. The idea is that it can quickly discover and catalog customers' data assets and then allow customers to define granular access policies centrally around that data. And once you've defined that, it sets customers free to give broader access to the data, because they've put the guardrails in place, they've put the protections in place. So you can tag columns as being private so nobody can see them, and we announced a couple of new capabilities where you can provide row-based controls, so only a certain set of users can see certain rows in the data, whereas a different set of users might only be able to see a different set. And so, by creating this fine-grained but unified governance model, this actually sets customers free to give broader access to the data, because they know their policies and compliance requirements are being met, and it gets out of the way of the analyst, the person who can actually use the data to drive some value for the business. >> Right, they can really focus on driving value. And I always talk about monetization; however, monetization can be a generic term, for it could be saving lives, the mission of the business or the organization. I meant to ask you about Q: customers can embed it, it looks like, into their own apps? >> Yes, absolutely. One of QuickSight's key strengths is its embeddability, and it's also serverless, so you can embed it at a really massive scale. And so we see customers, for example, like Blackboard, embedding QuickSight dashboards into information it's providing to thousands of educators, with data on the effectiveness of online learning, for example. And you can embed Q into that capability, so it's a really cool way to give a broad set of people the ability to ask questions of data, without requiring them to be fluent in things like SQL. >> Let me ask you a question; we've talked a little bit about data movement. I think at last year's re:Invent you guys announced RA3; I think it hit general availability this year. And I remember Andy speaking about it, talking about the importance of having big enough pipes when you're moving data around; and of course you're doing tiering.
You also announced AQUA, the Advanced Query Accelerator, which brings the compute to the data, I guess, is how I would think about it, reducing that movement. But then we're talking about Glue Elastic Views, where you're copying and moving data. How are you ensuring, you know, maintaining that maximum performance for your customers? I mean, I know it's an architectural question, but as an analytics professional, you have to be comfortable that the infrastructure is there. So what's AWS's general philosophy in that regard? >> So there are a few ways that we think about this, and you're absolutely right. Data volumes are going up, and we're seeing customers going from terabytes to petabytes, and even people heading into the exabyte range, so there's really a need to deliver performance at scale. And the reality of customer architectures is that customers will use purpose-built systems for different best-in-class use cases, and if you're trying to do a one-size-fits-all thing, you're inevitably going to end up compromising somewhere. So the reality is that customers will have more data, they're going to want to get it to more people, and they're going to want their analytics to be fast and cost-effective. We look at strategies to enable all of this. So, for example, Glue Elastic Views: it's about moving data, but it's about moving data efficiently. What we do is allow customers to define a view that represents the subset of their data they care about, and then we only look to move changes as efficiently as possible, so you're reducing the amount of data that needs to get moved and making sure it's focused on the essential. Similarly, with AQUA, what we've done, as you mentioned, is take the compute down to the storage layer, and we're using our Nitro chips to help with things like compression and encryption, and then we have FPGAs inline to allow filtering and aggregation operations. So again, you're trying to quickly and effectively get through as much data as you can, so that you're only sending back what's relevant to the query being processed, and that again leads to more performance. If you can avoid reading a byte, you're going to speed up your queries, and that's what AQUA is trying to do: push those operations down, so that you're really reducing data as close to its origin as possible and focusing on what's essential. That's what we're applying across our analytics portfolio. The other piece we're focused on with performance is really about innovating across the stack. You mentioned network performance: we've got 100-gigabit-per-second throughput now with the latest instances, and then with things like Graviton2 we're able to drive better price performance for customers for general-purpose workloads. So it's really innovating at all layers. >> It's amazing to watch. I mean, you guys... it's an incredible engineering challenge as you build this hyper-distributed system that's now, of course, going to the edge. I want to come back to something you mentioned, and I want to hit on your leadership session as well. But you mentioned the one-size-fits-all system, and I've asked Andy Jassy about this, I've had discussions with many folks, because of course you mentioned the challenge: you're going to have to make trade-offs if it's one-size-fits-all. The flip side of that is, okay,
it's simple, you know, like the Swiss Army knife of databases, for example. But your philosophy at Amazon is that you want to have fine-grained access to the primitives, in case the market changes and you want to be able to move quickly. So that puts more pressure on you then to simplify. You're not going to build this big hairball abstraction layer; that's not what you're going to do. You know, I think about layers and layers of paint; I live in a very old house. That's not your approach. So it puts greater pressure on you to constantly listen to your customers, and they're always saying, hey, simplify, simplify, simplify. We certainly heard that again in Swami's presentation the other day, all about minimizing complexity. So that really is your trade-off: it puts pressure on Amazon engineering to continue to raise the bar on simplification. Is that a fair statement? >> Yeah, I think so. I mean, any time we can do work so our customers don't have to, I think that's a win for both of us, because we're delivering more value, and it makes it easier for our customers to get value from their data. We absolutely believe in using the right tool for the right job. And you talked about an old house: you're not going to build or renovate a house with a Swiss Army knife. It's just the wrong tool. It might work for small projects, but you're going to need something more specialized to handle the things that matter. And that is really what we see with that set of capabilities. We want to provide customers with the best of both worlds: we want to give them purpose-built tools, so they don't have to compromise on performance, scale, or functionality, and then we want to make it easy to use these together, whether it's about data movement, or things like federated queries where you can reach into each of them through a single query, and through a unified governance model. So it's all about stitching those together. >> Yeah, and so far you've been on the right side of history; I think it serves you and your customers well. I want to come back to your leadership discussion, your leadership session. What else can you tell us about what you covered there? >> So we've actually had a bunch of innovations in the analytics stack. Some of the highlights are in EMR, which is our managed Spark and Hadoop service: we've been able to achieve 1.7x better performance than open source with our Spark runtime, so we've invested heavily in performance. EMR is also now available for customers running in containerized environments, so we announced EMR on EKS, and then an integrated development environment, a studio for EMR called EMR Studio. So we're making it easier both for people at the infrastructure layer to run EMR in their EKS environments and make it available within their organizations, and also simplifying life for data analysts and folks working with data, so they can operate in that studio and not have to mess with the details of the clusters underneath. And then a bunch of innovation in Redshift. We talked about AQUA already, but we also announced data sharing for Redshift. This makes it easy for Redshift clusters to share data with other clusters without putting any load on the central producer cluster.
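To make the producer and consumer flow concrete, here is a minimal sketch of setting up a Redshift datashare through the Redshift Data API. The cluster, database, user, and namespace identifiers are hypothetical, and because data sharing was announced in preview, the exact SQL surface may differ from this sketch.

```python
# A minimal sketch of Redshift data sharing via the Data API (boto3).
# Cluster names, the database, the user, and namespace GUIDs are hypothetical.
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

def run(sql: str) -> None:
    """Submit a statement to the producer cluster (fire-and-forget for brevity)."""
    client.execute_statement(
        ClusterIdentifier="producer-cluster",   # hypothetical
        Database="sales",                       # hypothetical
        DbUser="admin",                         # hypothetical
        Sql=sql,
    )

# On the producer: create a share, add objects, grant it to a consumer namespace.
run("CREATE DATASHARE salesshare")
run("ALTER DATASHARE salesshare ADD SCHEMA public")
run("ALTER DATASHARE salesshare ADD TABLE public.daily_orders")
run("GRANT USAGE ON DATASHARE salesshare TO NAMESPACE 'consumer-namespace-guid'")

# On the consumer, a database is then created from the share and queried live,
# with no copy made and no load placed on the producer:
#   CREATE DATABASE sales_from_producer FROM DATASHARE salesshare
#       OF NAMESPACE 'producer-namespace-guid';
```

The design point worth noting is that the consumer reads the producer's data in place, which is what allows the "source of truth" publishing pattern described next without the producer paying for the consumers' queries.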
And this also speaks to the theme of simplifying getting data from point A to point B. You can have central producer environments publishing data that represents the source of truth into other departments within the organization, and those departments can query the data and use it. It's always up to date, but it doesn't put any load on the producers, and that enables these really powerful data sharing and downstream data monetization capabilities, like you mentioned. In addition, as Swami mentioned in his keynote, there's Redshift ML, so you can now essentially train and run models that were built in SageMaker, and optimized, from within your Redshift clusters. And then we've also automated all of the performance tuning that's possible in Redshift. We've really invested heavily in price performance, and now we've automated all of the things that make Redshift the best-in-class data warehouse service from a price-performance perspective, up to three times better than others. Customers can just set Redshift to auto, and it'll handle workload management, data compression, and data distribution. And then the other big one was in Lake Formation, where we announced three new capabilities. One is transactions, enabling consistent ACID transactions on data lakes, so you can do things like inserts and updates and deletes. We announced row-based filtering for fine-grained access control in that unified governance model. And then automated storage optimization for data lakes: customers are dealing with unoptimized small files coming off streaming systems, for example, and Lake Formation can auto-compact those under the covers, and you can get a 78x performance boost. It's been a busy year for analytics. >> I'll say! Really great job. Thanks so much for coming back on theCUBE and sharing the innovations, and great to see you again. Good luck in the coming year. >> Well, thank you very much. Great to be here, great to see you, and I hope we get to see each other in person again soon. >> I hope so. All right. And thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll be right back after this short break.
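As a footnote to the Lake Formation discussion above, here is a minimal sketch of what centrally defining one of those guardrails looks like through the Lake Formation API: granting an analyst access to a table while excluding sensitive columns. The role ARN, database, table, and column names are all hypothetical.

```python
# A minimal sketch of column-level access control with AWS Lake Formation.
# The principal ARN, database, table, and column names are all hypothetical.
import boto3

lf = boto3.client("lakeformation", region_name="us-east-1")

# Grant an analyst SELECT on a table while excluding the private columns,
# so the guardrail lives centrally rather than in every query engine.
lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales",
            "Name": "customers",
            "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "email"]},
        }
    },
    Permissions=["SELECT"],
)
```

Once a grant like this is in place, engines that integrate with Lake Formation see only the permitted columns, which is what lets broader access coexist with the compliance requirements described in the interview.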
SUMMARY :
Dave Vellante interviews Rahul Pathak, Vice President of Analytics at AWS, during re:Invent 2020. They cover the Lake House architecture and the show's analytics announcements: Glue Elastic Views for serverless, SQL-defined data movement between purpose-built stores; QuickSight Q for natural-language questions over dashboards; Lake Formation's unified governance, including row-level controls, ACID transactions, and automatic storage optimization; performance work spanning AQUA, RA3, Nitro, FPGAs, and Graviton2; and updates to EMR, including EMR on EKS and EMR Studio, plus Redshift data sharing and Redshift ML.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Rahul Pathak | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
California | LOCATION | 0.99+ |
New York | LOCATION | 0.99+ |
Andy | PERSON | 0.99+ |
Swiss Army | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
December 8 | DATE | 0.99+ |
Dave Volonte | PERSON | 0.99+ |
last year | DATE | 0.99+ |
2020 | DATE | 0.99+ |
third option | QUANTITY | 0.99+ |
Swami | PERSON | 0.99+ |
each | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
A. W | PERSON | 0.99+ |
this year | DATE | 0.99+ |
10 instances | QUANTITY | 0.98+ |
A three | COMMERCIAL_ITEM | 0.98+ |
78 x | QUANTITY | 0.98+ |
two petabytes | QUANTITY | 0.98+ |
five | QUANTITY | 0.97+ |
Amazon Engineering | ORGANIZATION | 0.97+ |
Red Shift ML | TITLE | 0.97+ |
Formacion | ORGANIZATION | 0.97+ |
11 | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
one way | QUANTITY | 0.96+ |
Intel | ORGANIZATION | 0.96+ |
One | QUANTITY | 0.96+ |
five categories | QUANTITY | 0.94+ |
Aqua | ORGANIZATION | 0.93+ |
Elasticsearch | TITLE | 0.93+ |
terabytes | QUANTITY | 0.93+ |
both worlds | QUANTITY | 0.93+ |
next decade | DATE | 0.92+ |
two data sets | QUANTITY | 0.91+ |
Lake Formacion | ORGANIZATION | 0.9+ |
single query | QUANTITY | 0.9+ |
Data Lake | ORGANIZATION | 0.89+ |
thousands of educators | QUANTITY | 0.89+ |
Both stores | QUANTITY | 0.88+ |
Thio | PERSON | 0.88+ |
agile | TITLE | 0.88+ |
Cuba | LOCATION | 0.87+ |
dynamodb | ORGANIZATION | 0.86+ |
1.7 x | QUANTITY | 0.86+ |
Swamis | PERSON | 0.84+ |
EMR | TITLE | 0.82+ |
one size | QUANTITY | 0.82+ |
Red Shift | TITLE | 0.82+ |
up to three X | QUANTITY | 0.82+ |
100 gigabits per second | QUANTITY | 0.82+ |
Marnie | PERSON | 0.79+ |
last decade | DATE | 0.79+ |
reinvent 2020 | EVENT | 0.74+ |
Invent | EVENT | 0.74+ |
last 10 years | DATE | 0.74+ |
Cube | COMMERCIAL_ITEM | 0.74+ |
today | DATE | 0.74+ |
A Ro | EVENT | 0.71+ |
three new capabilities | QUANTITY | 0.71+ |
two | QUANTITY | 0.7+ |
E T Elling | PERSON | 0.69+ |
Eso | ORGANIZATION | 0.66+ |
Aqua | TITLE | 0.64+ |
Cube | ORGANIZATION | 0.63+ |
Query | COMMERCIAL_ITEM | 0.63+ |
SAS | ORGANIZATION | 0.62+ |
Aurora | ORGANIZATION | 0.61+ |
Lake House | ORGANIZATION | 0.6+ |
Sequel | TITLE | 0.58+ |
P. | PERSON | 0.56+ |
Updatable Encryption
>> Hi, everyone. My name is Dan Boneh, and I want to thank the organizers for inviting me to speak. Since I only have 15 minutes, I decided to talk about something relatively simple that will hopefully be useful in practice. This is joint work with my students Saba Eskandarian and Sam Kim, and with Maurice Shih. This work will appear at the upcoming Asiacrypt and is available on ePrint if anyone wants to learn more about what I'm going to talk about. So I want to tell you the story of storing encrypted data in the cloud. So all of us have lots of data, and typically we'd rather not store the data on our local machines, but rather we'd like to move the data to the cloud, so that the cloud can handle backup, and the cloud can handle access control on this data and allow us to share it with others. However, for some types of data, we'd rather not have the data available in the cloud in the clear. And so what we do is we encrypt the data before we send it to the cloud, and the customer is the one that's holding the key. So the cloud has ciphertext, and the customer is the only one that has the key that can decrypt that data. Now, whenever dealing with encrypted data, there is a very common requirement called key rotation. So key rotation refers to the act of taking a ciphertext and basically re-encrypting it under a different key, without changing the underlying data. Okay. And the reason we do that is so that an old key basically stops working, right? So we re-encrypt the data under a new key, and as a result, the old key can no longer decrypt the data. So it's a way for us to expire keys, so that only the new key can decrypt the current data stored in the cloud. Of course, when we do this, we have to assume that the cloud actually doesn't store the old ciphertext. So we're just going to assume that the cloud deletes the old ciphertext, and the only thing the cloud has is the latest version of the ciphertext, which can only be decrypted using the latest version of the key. So why do we do key rotation? Well, it turns out it's actually quite a good idea, for one reason: like we said, it limits the lifetime of a key. If I give you a key today, you can decrypt the data today. But after I do key rotation on my data, the key that I gave you no longer works. Okay, so it's a way to limit the lifetime of a key. And it's a good idea, for example, in an organization that might have temporary employees. Basically, you might give those temporary employees a key, but once they leave, effectively the keys will stop working after the key rotation has been done. Not only is it a good idea, it's actually a requirement in many standards. So, for example, NIST requires key rotation, and the payment industry requires periodic key rotation. So it's a fairly common requirement out there. The problem is: how do we do key rotation when the data is stored in the cloud? So there are two options that immediately come to mind, but both are problematic. The first option is we can download the entire data set onto our client machines. This could be terabytes or petabytes of data, so it's a huge amount of data that we might need to download onto the client machine, decrypt under the old key, re-encrypt under the new key, and then upload all that data back to the cloud. So that works, and it's fine. The only problem is it's very expensive: you have to move the data back and forth, in and out of the cloud.
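To make that first, naive option concrete, here is a minimal sketch of client-side rotation using the `cryptography` package's Fernet recipe; the in-memory dict stands in for the cloud object store, and the names are illustrative, not from the talk.

```python
# Naive key rotation: download everything, re-encrypt locally, upload it back.
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

def rotate_naively(old_key: bytes, new_key: bytes, blobs: dict) -> dict:
    """Re-encrypt every stored blob under new_key.

    `blobs` stands in for the cloud object store (name -> ciphertext). In
    reality each value is downloaded and re-uploaded over the network, which
    is exactly why this approach is so expensive at terabyte scale.
    """
    old_f, new_f = Fernet(old_key), Fernet(new_key)
    return {name: new_f.encrypt(old_f.decrypt(ct)) for name, ct in blobs.items()}

old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
store = {"doc1": Fernet(old_key).encrypt(b"customer record")}
store = rotate_naively(old_key, new_key, store)
assert Fernet(new_key).decrypt(store["doc1"]) == b"customer record"
```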
The other option, of course, is to send the actual old key and the new key to the cloud, and then have the cloud decrypt using the old key and re-encrypt using the new key. And of course, that also works, but it's insecure, because now the cloud will get to see your data in the clear. So the question is what to do. And it turns out there is a better option, which is called updatable encryption. So updatable encryption works as follows. What we do is we take our old key and our new key, and we combine them together using some sort of a re-key generation algorithm. What this algorithm will do is it will generate a short key that's a combination of the old and new key. We can then send this re-encryption key over to the cloud. The cloud can then use this key to re-encrypt the entire data in the cloud. So in doing so, basically, the cloud is able to do the rotation for us, but the hope is that the cloud learns nothing about the data in doing that. Okay, so the re-encryption key that we send to the cloud should reveal nothing to the cloud about the actual data that's being held in the cloud. So updatable encryption is a relatively old concept; I guess it was first studied in one of our papers back from 2013. There were stronger definitions given in the work of Everspaugh et al. in 2017, and there have been a number of papers studying this concept since. So before we talk about the constructions for updatable encryption, let me just quickly make sure the syntax is clear, just so we see how this works. So basically there's a key generation algorithm that generates a key from a security parameter. Then, when we encrypt a message using a particular key, we're going to break the ciphertext into a short header and the actual ciphertext; the header and the ciphertext get sent to the cloud. And like I said, this header is going to be short and independent of the message length. Then when we want to do rotation, what we'll do is basically we'll use the old key and the new key, along with the ciphertext header, to produce what we call a re-encryption key; we'll denote that by Delta. Okay, so the way this works is we will download the short header from the cloud, compute the re-encryption key, and send the re-encryption key to the cloud, and then the cloud will use the re-encrypt algorithm, which takes the re-encryption key and the old ciphertext, to produce the new ciphertext. And then this new ciphertext will be stored in the cloud. And again, I repeat, the assumption is that the cloud is going to erase the old ciphertext, and it is going to erase the re-encryption key that we send to it. And finally, at the end of the day, when we want to decrypt the actual ciphertext in the cloud, we download the ciphertext from the cloud, we decrypt it using the key K, and recover the actual message m.
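Written out, the five algorithms just described have the following shape; this is only a transcription of the syntax above, with $\lambda$ denoting the security parameter:

```latex
\begin{align*}
k &\leftarrow \mathsf{KeyGen}(1^{\lambda}) \\
(\mathsf{hdr},\ \mathsf{ct}) &\leftarrow \mathsf{Encrypt}(k,\ m) \\
\Delta &\leftarrow \mathsf{ReKeyGen}(k_{\mathrm{old}},\ k_{\mathrm{new}},\ \mathsf{hdr}) \\
(\mathsf{hdr}',\ \mathsf{ct}') &\leftarrow \mathsf{ReEncrypt}(\Delta,\ (\mathsf{hdr},\ \mathsf{ct})) \\
m &\leftarrow \mathsf{Decrypt}(k_{\mathrm{new}},\ (\mathsf{hdr}',\ \mathsf{ct}'))
\end{align*}
```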
Okay, so in this new work with my students, we set out to look at more efficient constructions for updatable encryption. So the first thing we did is we realized there are some issues with the current security definitions, and so we strengthened the security definitions. In particular, we strengthened them in a couple of ways, but in particular, we'd like to make sure that the actual ciphertext as stored in the cloud doesn't actually reveal the number of key rotations. So a rotated ciphertext should look indistinguishable from a fresh ciphertext. But not only that: it should also guarantee that the number of key rotations is not leaked just from looking at the ciphertext. So generally, we'd like to hide the number of key rotations, so that it doesn't reveal private information about what's encrypted inside the ciphertext. But our main goal was to look at more efficient constructions. So we looked at two constructions, one based on a lattice-based key-homomorphic PRF. So actually, the main point of this work was to study the performance of a lattice-based key-homomorphic PRF relative to the existing updatable encryption systems. And then the other construction we give is what's called a nested construction, which just uses plain old symmetric encryption. And interestingly, what we show is that, in fact, the nested construction is actually the best construction we have as long as the number of key rotations is not too high. Yes, so if we do under 50 re-encryptions, just go ahead and use the nested construction, basically from symmetric encryption. However, if we do more than 50 key rotations, all of a sudden the lattice-based construction becomes the best one that we have. I want to emphasize here that our goal for using lattices was not to get quantum resistance. We wanted to use lattices just because lattices are fast. Yeah, and so we wanted to gain from the performance of lattices, not from the security that they provide. So I guess before I talk about the constructions, I have to quickly just remind you of what the security model is, what it is we're trying to achieve. And I have to say, the security model for updatable encryption is not that easy to explain here. You know, the adversary gets to see lots of keys, he gets to see lots of re-encryption keys, he gets to see lots of ciphertexts. So instead of giving you the full definition, I'm just going to give you kind of the intuition for what this definition is trying to achieve, and I'm going to point you to the paper for the details. So really, what the definition is trying to say is the following. So imagine we have a ciphertext that's encrypted under a certain key K. At some point later on in the future, the ciphertext gets re-encrypted using a re-encryption key Delta. Okay, so now the new ciphertext is encrypted under the key K-prime. And what we're basically trying to achieve in the definition is to say that, well, if the adversary gets to see the old ciphertext, the new ciphertext, and the re-encryption key, then they learn nothing about the message, and they can't harm the integrity of the ciphertext. Similarly, if they just see the old key and the new ciphertext, they learn nothing about the message, and they can't harm the integrity of the ciphertext. And similarly, if they see an old ciphertext and a new key, same thing. Yeah, this is again overly simplified, because in reality the adversary gets to see lots of ciphertexts and lots of keys and lots of re-encryption keys, and there are all these correctness conditions for when he's supposed to learn something and whatnot. And so I'm going to defer this to the paper. But this gives you at least the intuition for what the definition is trying to achieve. So now let's turn to constructions. So the first construction we'll look at is kind of the classic way to construct updatable encryption, using what's called a key-homomorphic PRF.
So key-homomorphic PRFs were used by Naor, Pinkas, and Reingold back in '99; they were defined in our paper, BLMR, back in 2013, and the point of the BLMR paper was mainly to construct key-homomorphic PRFs without random oracles. So first, let me explain what a key-homomorphic PRF is. So it's basically a PRF where we have homomorphism relative to the key. So you can see here, if I give you the PRF under two different keys at the point x, I can add those values and get the PRF under the sum of the keys at the same point x. Okay, so that's what the key-homomorphic property lets us do. And so key-homomorphic PRFs were used to construct updatable encryption schemes. The first thing we show is that, in fact, using key-homomorphic PRFs, we can build an updatable encryption scheme that satisfies our stronger security definitions. So again, I'm not going to go through this construction, but just to give you intuition for why key-homomorphic PRFs are useful for updatable encryption, let me just say that the re-encryption key is going to be the sum of the old key and the new key. And to see why that's useful, let's imagine we're encrypting a message using counter mode, so you can see here a message is being encrypted using a PRF applied to a counter i. Well, if I give the cloud K1 plus K2, the cloud can evaluate F at K1 plus K2 at the point i, and if we subtract that from the ciphertext, then by the key-homomorphic property, you'll see that F at K1 cancels out, and basically we're left with an encryption of m under the key minus K2. So we were able to transform the ciphertext from an encryption under K1 to an encryption under minus K2. Yeah, and that's kind of the reason why they're useful.
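Here is a toy numeric illustration of that cancellation. The "PRF" below, F(k, x) = k · H(x) mod p, is additively key-homomorphic but deliberately insecure — a single input/output pair reveals the key — so it only demonstrates the algebra; the modulus and names are arbitrary choices, not parameters from the paper.

```python
# Toy illustration of counter-mode re-encryption with an (insecure!) additive
# key-homomorphic "PRF": F(k, x) = k * H(x) mod P, so F(k1,x) + F(k2,x) = F(k1+k2,x).
import hashlib, secrets

P = 2**127 - 1  # an arbitrary public prime modulus

def H(x: int) -> int:
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % P

def F(k: int, x: int) -> int:  # key-homomorphic in k (mod P)
    return (k * H(x)) % P

k1, k2 = secrets.randbelow(P), secrets.randbelow(P)
msg = [b % P for b in b"hello"]                            # message blocks mod P

ct = [(m + F(k1, i)) % P for i, m in enumerate(msg)]       # counter mode under k1
delta = (k1 + k2) % P                                      # re-encryption key, as in the talk
ct2 = [(c - F(delta, i)) % P for i, c in enumerate(ct)]    # cloud re-encrypts

# F(k1, i) cancels, leaving m - F(k2, i): an encryption under "minus K2".
assert msg == [(c + F(k2, i)) % P for i, c in enumerate(ct2)]
```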
So what gets sent to the cloud is this K body prime and header prime and the cloud is able to do its operation and re encrypt the old cipher text. The new client key becomes K prime. And of course, we can continue this over and over in kind of an onion like encryption where we keep encrypting the old cipher text under a new message. He The benefit of the scheme, of course, is that it only uses >>symmetric encryption, so it's actually quite fast, so that's pretty good. >>Unfortunately, this is not quite secure. And the reason this is not secure is because the cipher >>text effectively grows with a number of key rotations. So the cipher text actually leaks the number of key rotations, and so it doesn't actually satisfy our definitions. Nevertheless, we're able to give a nest of construction that does satisfy our definitions. So it does hide the number of key rotations. And again, there are lots of details in this constructions. I'm going to point you to the paper for how the nested encryption works. So >>now we get to the main point that I wanted to make, which is >>comparing the different constructions. So let's compare the lattice based construction with a D. D H but its construction and the symmetric nested construction for the DTH based construction. We're going to use the GPRS system just for a comparison point, >>so you can see that for four kilobyte message >>blocks, the lattice based system is about 300 times faster than the D. D H P A system. And the reason we're able to get such a high throughput is, of course, lattices air more efficient but also were able to use the A V X instructions for speed up. And we've also optimized the ring that we're using quite a bit specifically for this purpose. Nevertheless, when we compared to the symmetric system, we see that the symmetric system is still in order of magnitude faster than even a lot of system. And so for encryption and re encryption purposes that the symmetric based system is the fastest that we have. When we go to a larger message blocks 32 kilobyte message blocks, you see that the benefit of the latter system is even greater over the D d H system. But the symmetric system performs even better Now if you think back to how the symmetric system works. It creates many layers of encryption and >>as a result, during decryption, we have to decrypt all these >>layers. So decryption in the symmetric system takes linear time in the number of re encryptions. So you can see this in this graph where the time to decrypt increases linearly with the number of re encryptions, whereas the key home or FIC methods take constant amount of time to decrypt, no matter how many re encryptions there are, the crossover point is about 50 re encryptions, Which is why we said that if in the lifetime of the cipher text we expect fewer than 50 re encryptions, you might as well use the symmetric nested system. But if you're doing frequently encryptions, let's say weekly re encryptions, you might end up with many more than 50 re encryptions, in which case the lattice based key home or fix scheme is the best up datable system we have today. >>So I'm going to stop here. But let me leave you with one open problem if you're interested in questions in this area. So let me say that in our latest based construction, because of the noise that's involved in latest constructions. It turns out we had toe slightly weaken >>our definitions of security to get the security proof to go through. 
I think it's an interesting problem to see if we can build a lattice based system that's as efficient as the one that we have, but one that satisfies our full security definition. Okay, so I'll stop here, and I'm happy to take any questions. Thank you very much.
SUMMARY :
My name is Dan Bonnie and I want to thank the organizers for inviting me to speak. minutes, I decided to talk about something relatively simple that will hopefully be useful to entity. So the cloud has cipher text, And the reason we do that is so that an old key basically so that Onley the new key can decrypt the current data stored in the cloud. So we're just going to assume that the cloud deletes the old cipher text, and the only thing the cloud But after I do key rotation on my data, the key that I gave you no longer the payment industry and requires periodic he rotation. The first option is we can download the entire data it's a huge amount of data that we might need to download on to the client that data back to the cloud. other option, of course, is to send the actual old key in the new key to the cloud and But it's insecure because now the cloud will get to see your data in the clear. So in doing so, basically, the cloud is able to do the rotation for us. Okay, so the re encryption key that we send to the cloud should reveal hitter and the cipher text gets into the And like I said, this header is going to be short and independent of the message length. Okay, so the way this works is we will download the header from And again, I repeat, the assumption is that the cloud is gonna erase the old cipher text. So the first thing we did is we realize there's some issues cipher text has stored in the cloud doesn't actually revealed a number of key rotations. that the number of key rotations is not leaked by from just looking at the cipher So we looked at two constructions, one based Prof. So actually, the main point of this work was actually the nested construction is actually the best construction we have as long as the number of key rotations I want to emphasize here that are our goal for using lattices. from the security that they provide encryption is not that easy to explain here, You know, the adversary gets to see lots of keys. So instead of giving you the full definition, I'm just gonna give you kind of the intuition for what this definition is trying to achieve. is the following settings. if the adversary gets to see the old cipher text integrity of the cipher text. And so I'm going to defer this to the paper. So now let's turn to constructions, so the first construction we'll look at it is kind of the classic way to construct available encryption using what's called the key home or fake. So you can see here if I give you the prof under two different keys at the point X, Let me just say that the re encryption key is gonna be the some of the old key and the new key. Yeah, and that's kind of the reason why they're useful. Okay, so And that's just the standards PF from D d H. It's not difficult to see that this And so the answer is yes, we can. And then we encrypt the message key using the actual client key. K, and the header would be the message encryption key. We'll send that over to the cloud that the He The benefit of the scheme, of course, is that it only uses And the reason this is not secure is because the cipher So the cipher text actually leaks So let's compare the lattice based construction with a D. And so for encryption and re encryption purposes that the So decryption in the symmetric system takes linear time in the number of re encryptions. So let me say that in our latest based construction, because of the noise that's involved in latest constructions. our definitions of security to get the security proof to go through.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
2013 | DATE | 0.99+ |
Dan Bonnie | PERSON | 0.99+ |
Sam Kim | PERSON | 0.99+ |
2017 | DATE | 0.99+ |
first option | QUANTITY | 0.99+ |
Morrissey | PERSON | 0.99+ |
two constructions | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
second construction | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
two options | QUANTITY | 0.99+ |
Pierre Efs | PERSON | 0.99+ |
one reason | QUANTITY | 0.99+ |
32 kilobyte | QUANTITY | 0.99+ |
first | QUANTITY | 0.98+ |
Akiyama Murphy | PERSON | 0.98+ |
Delta | ORGANIZATION | 0.98+ |
under 50 re encryptions | QUANTITY | 0.97+ |
K body prime | COMMERCIAL_ITEM | 0.97+ |
more than 50 key rotations | QUANTITY | 0.97+ |
99 | DATE | 0.97+ |
Sochi | PERSON | 0.96+ |
first construction | QUANTITY | 0.96+ |
first thing | QUANTITY | 0.95+ |
K two | OTHER | 0.95+ |
first key | QUANTITY | 0.94+ |
one | QUANTITY | 0.93+ |
more than 50 re encryptions | QUANTITY | 0.92+ |
two different keys | QUANTITY | 0.92+ |
Thio | PERSON | 0.92+ |
15 >>minutes | QUANTITY | 0.9+ |
petabytes | QUANTITY | 0.88+ |
K prime | COMMERCIAL_ITEM | 0.88+ |
about 50 re encryptions | QUANTITY | 0.87+ |
K one | OTHER | 0.86+ |
four kilobyte | QUANTITY | 0.86+ |
Norfolk | LOCATION | 0.85+ |
Pincus and Rain | ORGANIZATION | 0.85+ |
Prof. | PERSON | 0.83+ |
one of our papers | QUANTITY | 0.82+ |
about 300 times | QUANTITY | 0.81+ |
lots of cipher | QUANTITY | 0.77+ |
lots of keys | QUANTITY | 0.76+ |
terabytes | QUANTITY | 0.76+ |
50 re encryptions | QUANTITY | 0.73+ |
one open | QUANTITY | 0.71+ |
F K one | OTHER | 0.69+ |
Cape Prime | COMMERCIAL_ITEM | 0.69+ |
Trivial Update | OTHER | 0.63+ |
K two | OTHER | 0.61+ |
fewer than | QUANTITY | 0.59+ |
Sabah Eskandarian | PERSON | 0.57+ |
Trivial | OTHER | 0.56+ |
Abel | ORGANIZATION | 0.55+ |
K body | COMMERCIAL_ITEM | 0.54+ |
Onley | ORGANIZATION | 0.53+ |
lots | QUANTITY | 0.52+ |
Qihoo | ORGANIZATION | 0.52+ |
Lee | ORGANIZATION | 0.48+ |
prime | OTHER | 0.42+ |
Asia | LOCATION | 0.33+ |
Everest | TITLE | 0.29+ |
Abel | TITLE | 0.29+ |
Ido Safruti, PerimeterX | Cloud Native Insights
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders around the globe, these are Cloud Native Insights. >> Hi, I'm Stu Miniman, the host of Cloud Native Insights, where we're talking to companies and practitioners about how they take advantage of the innovation and agility of the cloud. Happy to welcome to the program a first-time guest, Ido Safruti. He is the co-founder and CTO of PerimeterX. We're going to talk to him in a dual role: both as a practitioner, about their adoption of Cloud Native technologies, serverless specifically, and as a Cloud Native supplier in the security realm. Ido, thanks so much for joining us. Nice to have you on the program. >> Yeah, good to be here. Thanks. >> All right. So Ido, if you could — you're co-founder of PerimeterX — give us just, if you would, a little bit of your background, and, you know, what PerimeterX does, and we'll go into it from there. >> Sure. So as CTO, I'm in charge of the research, engineering, and product teams at PerimeterX. We are a Cloud Native vendor of web application security, protecting against all kinds of different business logic abuses for our customers, mostly large websites that demand web scale. So not only doing the protection for the application, but also integrating into multiple infrastructures and running at scale. We're solving problems like account takeover, carding, Magecart data skimming, and so on. >> One of the conversations we've been having the last couple of years in security is, you know, there's no shortage of new threats; the surface area of attack keeps getting bigger. Here in 2020, everybody's working from home more, and the people that are doing attacks didn't stop working. So if you could just, you know — how long has PerimeterX been around? And, to lead up to the discussion of serverless, you know, what were the architecture considerations before, and what started leading you towards making a change architecturally? >> Yeah, so PerimeterX was founded almost six years ago, a little less than six years ago. And we were a Cloud Native solution to begin with. We identified where the gaps in security for cloud native applications are. In many cases, security solutions are not leveraging the breadth and the new architecture of where applications are built, and are more trying to slap standard enterprise security onto cloud infrastructure. When we started, we wanted to integrate with and adopt the cloud, and adopt the flexibility and specificity of the edge, to help enhance our customers' infrastructure by adding security onto it, versus forcing them to rearchitect when they integrate security into it. >> Well, it's interesting — you say six years ago; I can't remember hearing the term Cloud Native that long ago. Obviously cloud has been around for a while, but when I started, one of the discussions around Cloud Native was, oh, people were talking about adopting containers and Kubernetes. And I said, they're great tools to help from, you know, the infrastructure standpoint, but we're talking about, right, living in the cloud, taking advantage of cloud services, you know — that's where we really see the opportunity in Cloud Native. So, you know, when you say you were built for the cloud — but, you know, things like containers and serverless probably weren't doing those things six years ago; maybe, or were you? >> Actually, yeah, so we started with early versions of — obviously everything was dockerized; Kubernetes was not that great back then.
So we were orchestrating some things on our own and gradually adopting other orchestration and mesh tools for our own service, which is obviously running on multiple cloud vendors. But for us, from our point of view, the key for cloud was: how can we enable our customers, and how can we integrate better with them, in a way that enhances their infrastructure versus adding friction? Because the challenge usually with security is that security, in most cases, or traditionally, was adding friction and delays and complexity to the developer process. And we designed our solution from the beginning around how we can leverage these new technologies. How can we leverage the fact that CDNs and edges are becoming smarter, and that you can start deploying your own payloads and logic there, to make our logic integrate with them, and to partner with these cloud players, in order to enable our customers to add these additional tiers? And I think this is, from my point of view, one of the key capabilities of having compute at the edge and serverless: making a lightweight integration, and making your existing infrastructure smarter, by making it easy to incorporate third-party vendors or other solutions or more logic without forcing a wholesale rearchitecting of the solution. >> Yeah, no, no, you bring up some great points. I remember back in the early days of Docker, it was, can we get the atomic unit to be closer to what the application is. But, you know, my background is in infrastructure, and it was okay: it went from the server, to the VM, to the container — yeah, there's an application that sits on top of it, but I don't think about it. As opposed to serverless, which starts with the developer first — you know, how I build my application — and then there are certain things where I rely on the platform. So, help us understand: doing containers, looking at serverless, was it, okay, we're going to completely overhaul and throw out what we had because there's something new and better? Are you still doing some containers and some serverless? Help us understand, you know, what drove that transition and what the outcomes were. >> Yeah, so our infrastructure — our machine learning algorithms, the data processing, the heavy lifting — we're running on our own infrastructure, which is, again, Cloud Native infrastructure, but something that we're managing, in many cases using containers, using other environments, because we are running heavy payloads. We're not fully relying on some other platform to run them for us; we're leveraging a lot of these technologies to run it in a more efficient way. Where we're adopting serverless is both in some of the front-end decisions — so making smarter load-balancing decisions, integrating with some other cloud vendors to help make sure that requests are coming in the right way, and things like this — but where it is even more important is: how can we make ourselves relevant for customers who adopt serverless, and how can we help introduce security into these environments? Because, if you're looking at traditional security, it's more about: how can I enable our customers to adopt serverless? How can I enable our customers to adopt new technologies in the cloud? Because it could be a limitation if your security policy or your architecture is such that it requires everything to go through a specific security proxy or some firewall; it may force you to utilize very limited architectures.
If you want to deploy a payload now on Lambda, or on your CDN, it typically will be way in front of your traditional enterprise security solutions. How can you make that application smarter? How can you make that application sort of self-sufficient, by connecting modules, by making sure that you're including modules that integrate security, and bring the security with you everywhere? So this is the motion that we're trying to define here. >> Well, and I'm sure you've got a really interesting viewpoint that I'd love to hear on this, Ido. So if you look at, you know, most new technologies, especially in the cloud space, serverless specifically: cost — it should be less expensive; flexibility — I should be able to, you know, make changes; and speed — I should be able to do more, faster. But always, when you look at those, you say, well, but what about security? Can I do all of those things — you know, be faster, better, cheaper, more agile — and not be less secure? So I'd love to hear any thoughts you have on, kind of, you know, the typical things, but also your security angle on them. >> Yeah. So one of the benefits of using serverless — and I think there are two types, initially, when thinking of serverless. One is running your code in some backend application that may access different things, but you don't need to manage for scale, because there is some platform that manages that, which is one great option. What you're seeing more and more — and we're working in collaboration with Fastly, and you can see that on other edge platforms — is having this notion of serverless: how can you deploy code to the edge? And the benefit there is that you can mitigate a lot of the risks outside your data center, outside of your cloud. And this is where security plays so well with that, because you want to mitigate the risks and the attack as far away from your application as possible. So if you can deploy the logic that is doing that, or making decisions, at the edge, it helps you improve your infrastructure cost, and it helps you improve some of the applications that are still in the backend, so you can gradually forward-deploy some of the logic that is relevant at the edge — and get the scalability, this ability to scale without limit, because a CDN or edge vendor has a lot of capacity and can withstand a denial-of-service attack, or any other type of attack, with this logic in hand. Or even, sometimes, it's just scale: maybe you had a very good marketing campaign and you're getting a lot of traffic. If you can deploy this at scale, somewhere that can handle that in a distributed, efficient way, you're doing even better. >> Well, and it sounds like that fits into what PerimeterX does. You know, when I think about edge, you know, scale concerns and security concerns are, you know, some of those top of mind, as are just, you know, how can automation — things like machine learning or AI — help me? 'Cause usually that scale, or the distributed nature of it, means that it's not necessarily something that people alone could take care of themselves. Am I getting right, a little bit, where PerimeterX is helping its customers? >> Yeah, yeah, yeah. And the idea is to connect, to help offload some of the logic or some of the capabilities that you don't want your business to be an expert in.
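As a purely illustrative sketch of that "bring the security module with you" idea — the helper here is an invented stand-in, not PerimeterX's actual SDK — a serverless function might embed the check inline, along these lines:

```python
# Illustrative only: a Lambda-style handler with a security check embedded in
# the function itself, so protection travels with the workload instead of
# depending on a fixed proxy in front of it. `bot_score` is a hypothetical
# stand-in for whatever a vendor SDK would actually provide.
import json

def bot_score(headers: dict) -> float:
    """Hypothetical risk score in [0, 1]; a real SDK would call a vendor service."""
    ua = headers.get("user-agent", "").lower()
    return 0.9 if ("python-requests" in ua or not ua) else 0.1

def handler(event, context):
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    if bot_score(headers) > 0.8:
        return {"statusCode": 403, "body": json.dumps({"error": "blocked"})}
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```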
So if you're a retailer, you want to be able to sell, to optimize the experience for your customers, and to handle that; you don't want to be an expert in detecting bots, or in identifying malicious code, or things of that sort. And if you can offload that, with a lightweight, easy integration that does not limit your ability to innovate and adopt new technologies — this is what we're trying to help with. Let us focus on this, by integrating at the edge, by integrating with partners like Fastly, so that we can help enhance the infrastructure and add more capabilities, where you can focus on doing your own business and we can help allow and enable additional technologies. >> Along your serverless journey, what partners, what other vendors, were helpful along the way? As I've looked at it, it's a relatively young ecosystem, but it's robust. So, you know, I'm curious who some of the companies are that have helped along the way. >> Yep. I think Fastly is definitely one; from their early infrastructure, they always had the component of exposing their edge and making it more programmable via configuration and setting logic, and now they're rolling out compute at the edge, which is giving even more flexibility. Other CDNs are opening up their edge as well with all kinds of tools, and again, there's Lambda from AWS and other services. So this is one component: how do you manage that, how do you operate that? There are issues of how much state you can manage, and access to data, and there are different services that allow that. Then there are other platforms, which are more platform as a service, that are not traditionally considered serverless — you can think of eCommerce platforms that help you deploy your logic, and sometimes your whole application, into their ecosystem, and help you focus, again, on managing your application. So think of Magento, think of Salesforce Commerce Cloud, these kinds of commerce applications where you can deploy your logic. They all fit into that ecosystem of helping you: you want to write your code, plug it in, and let someone else manage the scale, let someone else manage some of the things that are common tools. >> Well, yeah, that's definitely one — you see a diversity of solutions at the edge, you know, very different from if you were thinking of a kind of traditional enterprise data center. You know, as a CTO, when you look at edge, you know, where are we in the maturation of this whole space? Or are there areas specifically that you expect, in the next, you know, six, 12, 18 months, where we will see some things solidify and mature down the line? >> Yeah. Yeah. So I think that the state where edge compute is at now is more about deploying logic that is remote from the data center. So there is a limit, if you look across different vendors, to the IO or data access capabilities of these workloads. So if you can write the code and make it self-sufficient, it's easier, and it's more common to find platforms that will run it. What you're starting to see is how you add the data layer into that tier and make it more accessible. And that opens the gate for many more rich and interesting applications, because once you can have a key-value store, and once you can manage state and modify configuration, you can then start deploying more complex applications and making more decisions. Do I see the billing system running entirely on the edge? Probably not. There are things where you want to store it in a database.
There are things that make sense to have in some backend infrastructure, but a lot of payloads, and more and more environments, are going there. And I think these additional services — queuing services, data services, database-like services; so, can I run a transaction on the edge? — these kinds of technologies are currently emerging, and you can see them at different levels from different vendors. And they will definitely open the gate even further for more and more patterns to be adopted at the edge. >> All right. Well, Ido, last question I have for you: what advice would you give for your peers out there? As you said, you know, you were early in Docker adoption, you've done serverless adoption, and, you know, edge is something that is gaining a lot of attention. What advice would you give to people here in 2020 as they look at, you know, the variety of Cloud Native options out there? >> I think the easy one is, for anything new that you build, look around and figure out what is the best technology that can help you get there faster, and how you can build in a more strategic way. For C-suite executives, whether it's the CTO, CIO, or CISO: think about how you can enable your team to move faster. How can you enable your team, through the solutions and technologies that you select, to have the flexibility of moving faster? How can you enable them to adopt new technologies and make that available? And, to do this, you need some practices, because you need to make sure that you are getting the right metrics. So whenever you're using vendors, pick ones that will help you collect and monitor the services and get the insights, because suddenly, if anyone can deploy anything anywhere, then there is some concern about loss of control. So finding the right vendors that can help you, or adopting the right processes that help you gain this visibility while still enabling teams to go anywhere — this is key. At least for us, it was key. And, wearing my product hat, when we're building our services, this is what we're trying to enable our customers to do with their security. >> Well, Ido Safruti, thank you so much for sharing your journey; really appreciate having you on the program. >> Sure, thanks. >> And if you have people we should talk to, I would love to hear the stories of Cloud Native, how those adjustments are going, and to share your information with your peers. I'm Stu Miniman, and I look forward to hearing more of your Cloud Native insights. (Calming music)
SUMMARY :
leaders around the globe. Nice to have you on the program. Yeah, good to be here. So Ido, if you could, So as CTO, I'm in charge of the of years from security is, you know, and the new architecture of but you know, things like you can start deploying your and you know, how I build my application How can you make that application smarter? So if you look at, you know, And the benefit there is that you as are just, you know, how. and to handle that you don't want to be an So, you know, I'm curious applications that you can that you expect in the next, and once you can manage a as they look at, you know, the variety of How can you enable your team by the thank you so much for And if you have
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ido Safruti | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Perimeter X | ORGANIZATION | 0.99+ |
six | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
Ido | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
two types | QUANTITY | 0.99+ |
Cloud Native Insights | ORGANIZATION | 0.99+ |
12 | QUANTITY | 0.99+ |
Lambda | TITLE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
six years ago | DATE | 0.99+ |
both | QUANTITY | 0.98+ |
one component | QUANTITY | 0.97+ |
PerimeterX | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
Fastly | ORGANIZATION | 0.96+ |
The Cube Studios | ORGANIZATION | 0.96+ |
Magento | TITLE | 0.96+ |
Cloud Native | TITLE | 0.95+ |
Cloud Native Technologies | ORGANIZATION | 0.95+ |
18 months | QUANTITY | 0.95+ |
first time | QUANTITY | 0.95+ |
Cloud Native Insights | ORGANIZATION | 0.94+ |
Cloud Native | ORGANIZATION | 0.94+ |
One | QUANTITY | 0.92+ |
less than | DATE | 0.91+ |
one great option | QUANTITY | 0.9+ |
CTO | PERSON | 0.89+ |
first | QUANTITY | 0.81+ |
Docker | TITLE | 0.81+ |
Cloud | TITLE | 0.79+ |
dual | QUANTITY | 0.77+ |
last couple of years | DATE | 0.7+ |
Salesforce | TITLE | 0.63+ |
Dr. Tim Wagner & Shruthi Rao | Cloud Native Insights
(upbeat electronic music) >> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation! >> Hi, I'm Stu Miniman, your host for Cloud Native Insights. When we launched this series, one of the things we wanted to talk about was that we're not just using cloud as a destination, but really enabling new ways of thinking, being able to use the innovations underneath the cloud, and that if you use services in the cloud, you're not necessarily locked into a solution or unable to move forward. And that's why I'm really excited to welcome to the program the co-founders of Vendia. First we have Dr. Tim Wagner. He is the co-founder and CEO of the company, as well as generally known in the industry as the father of Serverless from his AWS Lambda days. And his co-founder, Shruthi Rao — she is the chief business officer at Vendia, and also came from AWS, where she worked on blockchain solutions. Tim, Shruthi, thanks so much for joining us. >> Thanks for having us in here, Stu. Great to join the show. >> All right, so Shruthi, actually, if we could start with you, because before we get into Vendia, coming out of stealth, you know, a really interesting technology space — you and Tim both learned a lot from working with customers in your previous jobs, so why don't we start with you. Blockchain, of course, had a lot of learnings, a lot of things that people don't understand about what it is and what it isn't, so give us a little bit about what you've learned and how that led towards what you and Tim and the team are doing with Vendia. >> Yeah, absolutely, Stu! One, the most important thing that we all heard of was this great gravitational pull towards blockchain in 2018 and 2019. Well, I was one of the founders and early adopters of blockchain, from the Bitcoin and Ethereum space, all the way back from 2011 onwards. And at AWS I started Amazon Managed Blockchain and launched Quantum Ledger Database, two services in the blockchain category. What I learned there was, no surprise, there was a gold rush to blockchain from many customers. I personally talked to over 1,092 customers when I ran Amazon Managed Blockchain over the last two years. And I found that customers were looking at solving this dispersed-data problem. Most of my customers had invested in IoT and edge devices, and these devices were gathering massive amounts of data, and on the flip side they had also invested quite a bit of effort in AI and ML and analytics to crunch this data and give them intelligence. But guess what — this data existed with multiple parties, in multiple clouds, in multiple technology stacks, and they needed a mechanism to get this data from wherever it was into one place so they could use that AI, ML, and analytics investment, and they wanted all of this to be done in real time. And so they gravitated towards blockchain. But blockchain had quite a few limitations: it was not scalable, and it didn't work with the existing stack that you had. It forced enterprises to adopt this new technology and an entirely new type of infrastructure. It didn't work cross-cloud unless you hired expensive consultants or did it yourself, and it required these specialized developers. For all of these reasons, we've seen many POCs — the majority of POCs — just dying on the vine and never reaching their production potential. So, that is when I realized that the problem to be solved was not a trust problem; the problem was a dispersed-data-in-multiple-clouds-and-multiple-stacks problem.
Sometimes even a multiple-parties problem. And that's when Tim and I started talking about how we can bring all of the nascent qualities of Lambda and Serverless, and use all of the features of blockchain, and build something together. And he has an interesting story on his own, right. >> Yeah. Yeah, Shruthi, if I could, I'd like to get a little bit of that. So, first of all, for our audience, if you're watching this on the minute, you probably want to hit pause, you know, go search Tim, go watch a video, read his Medium post about the past, present, and future of Serverless. But Tim, I'm excited. You and I have talked in the past, but finally getting you on theCUBE program. >> Yeah! >> You know, I've looked through my career, and my background is infrastructure, and the role of infrastructure, we know, is always just to support the applications and the data that run the business — that's what is important! Even when you talk about cloud, it is the applications, you know, the code, and the data that are important. So, it's not that, you know, okay, I've got near-infinite compute capacity; it's the new things that I can do with it. That's a comment I heard in one of your sessions: you talked about how one of the most fascinating things about Serverless was just the new creativity that it inspired people to do, and I loved that it wasn't just unlocking developers to say, okay, I have new ways to write things, but even people that weren't traditional coders, like lots of people in marketing, that were like, "I can start with this and build something new." So, I guess the question I have for you is, you know, we had this idea of Platform as a Service, or even when things like containers launched, it was, we were trying to get close to that atomic unit of the application, and often it was talked about, well, do I want it for portability? Is it for ease of use? So, you've been wrangling and looking at this (Tim laughing) from a lot of different angles. So, with that as a starting point, you know, what did you see the last few years with Lambda, and, you know, help connect this up to where Shruthi just left off her bit of the story. >> Absolutely. You know, the great story, the great success of the cloud is this elimination of undifferentiated heavy lifting, you know, from getting rid of having to build out a data center, to all the complexity of managing hardware. And that first wave of cloud adoption was just phenomenally successful at that. But as you say, the real thing businesses wrestle with is applications, right? It's ultimately about the business solution, not the hardware and software on which it runs. So, the very first time I sat down with Andy Jassy to talk about what would eventually become Lambda, you know, one of the things I said was, look, if we want to get 10x the number of people to come and, you know, be in the cloud and be successful, it has to be 10 times simpler than it is today. You know, if step one is hire an amazing team of distributed engineers to turn a server into a fault-tolerant, scalable, reliable business solution, that's going to be fundamentally limiting. We have to find a way to put that in a box, give that capability, you know, to people, without having them go hire that and build that out in the first place. And so that kind of started this journey; for compute, we're trying to solve the problem of making compute as easy to use as possible.
You know, take some code — as you said, even if you're not a diehard programmer or backend engineer, maybe you're just a full-stack engineer who loves working on the front end, and the backend isn't your focus — and turn that into something that is as scalable, as robust, as secure as somebody who has spent their entire career working on that. And that was the promise of Serverless, you know, outside of the specifics of any one cloud. Now, the challenge, of course, when you talk to customers, you know, is that you always heard the same two considerations. One is, I love the idea of Lambda, but it's AWS — maybe I have multiple departments or business partners, or need to kind of work on multiple clouds. The other challenge is: fantastic for compute, but what about data? You know, you've kind of left me with — you're giving me sort of half the solution; you've made my compute super easy to use, can you make my data equally easy to use? And so, you know, obviously part of the genesis of Vendia is going and tackling those pieces of this: giving all that promise and ease of use of Serverless, now with a model for replicated state and data, and one that can cross accounts, machines, departments, clouds, companies, as easily as it scales on a single cloud today.
So, reimagine the blockchain as a cloud data implementation built entirely out of Serverless components that have all of the scale, the cost efficiencies, the high utilization, like all of the ease of deployment that something like Lambda has today, and at the same time, you know, bring state to Serverless. Give things like Lambda and the equivalent of other clouds a simple, easy, built-in model so that applications can have multicloud, multi-account state at all times, rather than turning that into a complicated DIY project. So, that was our insight here, you know and frankly where a lot of the interesting technology for us is in turning those centralized services, a centralized version of Serverless Compute or Serverless Database into a multi-account, multicloud experience. And so that's where we spent a lot of time and energy trying to build something that gives customers a great experience. >> Yeah, so I've got plenty of background in customers that, you know, have the "information silos", if you will, so we know, when the unstructured data, you know so much of it is not searchable, I can't leverage it. Shruthi, but maybe it might make sense, you know, what is, would you say some of the top things some of your early customers are saying? You know, I have this pain point, that's pointing me in your direction, what was leading them to you? And how does the solution help them solve that problem? >> Yeah, absolutely! One of our design partners, our lead design partners is this automotive company, they're a premier automotive company, they want, their end goal is to track car parts for warranty recall issues. So, they want to track every single part that goes into a particular car, so they're about 30 to 35,000 parts in each of these cars, and then all the way from manufacturing floor to when the car is sold, and when that particular part is replaced eventually, towards the end of the lifecycle of that part. So for this, they have put together a small test group of their partners, a couple of the parts manufacturers, they're second care partners, National Highway Safety Administration is part of this group, also a couple of dealers and service centers. Now, if you just look at this group of partners, you will see some of these parties have high technology, technology backgrounds, just like the auto manufacturers themselves or the part manufacturers. Low modality or low IT-competency partners such as the service centers, for them desktop PCs are literally the IT competency, and so does the service centers. Now, most of, majority of these are on multiple clouds. This particular auto customer is on AWS and manufactures on Azure, another one is on GCP. Now, they all have to share these large files between each other, making sure that there are some transparency and business rules applicable. For example, two partners who make the same parts or similar parts cannot see each other's data. Most of the participants cannot see the PII data that are not applicable, only the service center can see that. National Highway Safety Administration has read access, not write access. A lot of that needed to be done, and their alternatives before they started using Vendia was either use point-to-point APIs, which was very expensive, very cumbersome, it works for a finite small set of parties, it does not scale, as in when you add more participants into this particular network. 
And the second option for them was blockchain, which they did use: they used Hyperledger Fabric, and they used private Ethereum, to see how this works. But the scalability: with private Ethereum, it's about 14 to 15 transactions per second; with Hyperledger Fabric it taps out at 100, or 150 on a good day, transactions per second of throughput. So it's just not useful. All of these are always-on systems, they're not Serverless, so just provisioning capacity, our customers said, took them two to three weeks per participant. So, it's just not a scalable solution. With Vendia, what we delivered to them was this virtual data lake, where the sources of this data are on multiple clouds, on multiple accounts owned by multiple parties, but all of that data is shared on a virtual data lake with all of the permissions, with all of the logging, with all of the security, PII, and compliance. Now, this particular auto manufacturer and the National Highway Safety Administration can run their ML algorithms to gain intelligence off of it, and start to understand patterns, like when certain parts go bad, or what's the propensity of a certain manufacturing unit producing faulty parts, and so on, and so forth. This really shows you this concept of unstructured data being shared between parties that are not, you know, connected with each other, when there are data silos. But I'd love to follow this up with another example of, you know, democratization; democratization is very important to Vendia. When Tim launched Lambda and founded the AWS Serverless movement as a whole at AWS, one very important thing happened: it lowered the barrier to entry for a new wave of businesses that could just experiment, try out new things; if it failed, they scrap it, if it worked, they could scale it out. And that was possible because of the entry point, because of the pay-per-use model, and the architecture itself, and our vision and mission for Vendia is that Vendia fuels the next generation of multi-party connected distributed applications. My second design partner is actually a non-profit in the animal welfare industry. Their mission is to maintain a no-kill United States for dogs and cats. And the number one reason for overpopulation of dogs and cats in the shelters is dogs and cats lost during natural disasters, like the hurricane season. And when that happens, and when, let's say, your dog gets lost and you want to find your dog, the ID or the chip-reading is not reliable; they want to search this through pictures. But we also know that if you look at a picture of a dog, four people can come up with four different breed names, and this particular non-profit has 2,500-plus partners across the U.S., and they're all low to no IT-modality, some of them have higher IT competency, and there's a huge turnover because of volunteer employees. So, what we did for them was come up with a mechanism where they could connect with all 2,500 of these participants very easily, in a very cost-effective way, and get all of the pictures of all of the dogs in all these repositories into one data lake, so they can run some kind of a dog facial recognition algorithm on it and identify where my lost dog is in minutes, as opposed to the days it used to take before. So, you see a very large customer with very sophisticated IT competency use this, and also a non-profit being able to use this. And they were both able to get to this outcome in days, not the months or years it would take with blockchain, but just under a few days, so we're very excited about that.
>> Thank you so much for the examples. All right, Tim, before we get to the end, I wonder if you could take us under the hood a little bit here. My understanding, the solution that you talk about, it's universal apps, or what you call "unis" -- >> Tim: Unis? (laughs) >> I believe so, if I saw that right. Give me a little bit of compare and contrast, if you will. Obviously there's been a lot of interest in what Kubernetes has been doing. We've been watching closely, you know, there's connections between what Kubernetes is doing and Serverless with the Knative project. When I saw the first video talking about Vendia, you said, "We're serverless, and we're containerless underneath." So, help us understand, because at, you know, a super high level, some of the multicloud and making things very flexible sound very similar. So you know, how is Vendia different, and why do you feel your architecture helps solve this really challenging problem? >> Sure, sure, awesome! You know, look, one of the tenets that we had here was that things have to be as easy as possible for customers, and if you think about the way somebody walks up today to an existing database system, right? They say, "Look, I've got a schema, I know the shape of my data." And a few minutes later they can get a production database. Now, it's single user, single cloud, single consumer there, but it's a very fast, simple process that doesn't require writing code, hiring a team, et cetera, and we wanted Vendia to work the same way. Somebody can walk up with a JSON schema, hand it to us, and five minutes later they have a database, only now it's a multiparty database that's decentralized, so it runs across multiple platforms, multiple clouds, you know, multiple technology stacks, instead of being single user. So, that's kind of goal one, to make that as easy to use as possible. The other key tenet, though, is we don't want to be the least common denominator of the cloud. One of the challenges with saying everyone's going to deploy their own servers, they're going to run all their own software, they're all going to co-deploy a Kubernetes cluster, one of the challenges with that is that, as Shruthi was saying, first, for anyone for whom that's a challenge, if you don't have a whole IT department wrapped around you, that's a difficult proposition to get started on, no matter how amazing that technology might be. The other challenge with it, though, is that it locks you out; sort of the inverse of a lock-in process, right, is the lock-out process. It locks you out of some of the best and brightest things the public cloud providers have come up with, and we wanted to empower customers, you know, to pick best of breed. Maybe they want to go use IBM Watson, maybe they want to use a database on Google, and at the same time they want to ingest IoT on AWS, and they want it all to work together, and want all of that to be seamless, not something where they have to recreate an experience over, and over, and over again on three different clouds. So, that was our goal here in producing this. What we designed as an architecture was decentralized data storage at the core of it. So, think about all the precepts you hear with blockchain: they're all there, they all just look different. So, we use a NoSQL database to store data so that we can scale that easily. We still have a consensus algorithm, only now it's a high-speed, serverless, cloud-function-based mechanism.
You know, instead of smart contracts, you write things in a cloud function like Lambda instead, so no more learning Solidity, now you can use any language you want. So, we changed how we think about that architecture, but many of those ideas that got people really excited about blockchain, its capabilities and the vision for the future, are still alive and well; they've just been implemented in a way that's far more practical and effective for the enterprise. >> All right, so what environments can I use today for your solution? Shruthi talked about customers spanning across some of the clouds, so what's available kind of today, and what's on the roadmap in the future? Will this include, beyond, you know, maybe the top five or six hyperscalers, does it just require Serverless underneath? So, will things that are in a customer's own data center eventually be supported? >> Absolutely. So, what we're doing right now is having people sign up for our preview release, so in the next few weeks, we're going to start turning that on for early access to developers. That early access program will be multi-account, focused on AWS, and then end of summer, we'll be doing our GA release, which will be multicloud, so we'll actually be able to operate across multiple clouds, multiple cloud services, on different platforms. But even from day one, we'll have API support in there. So, if you've got a service, it could even be running on a mainframe, it could be on-prem, if it's API-based you can still interact with the data, and still get the benefits of the system. So, developers, please start signing up, you can go find more information on vendia.net, and we're really looking forward to getting some of that early feedback and hearing more from the people that we're the most excited to have start building these projects. >> Excellent, what a great call to action to get the developers and users in there. Shruthi, if you could just give us the last bit. You know, the thing that's been fascinating, Tim, when I look at the Serverless movement, you know, I've talked to some amazing companies that were two or three people (Tim laughing) working out of their basement, and they created a business, and they're like, "Oh my gosh, I got VC funding," and it's usually sub-$10,000,000. So, I look at your team; I'd heard, Tim, you're the primary coder on the team. (Tim laughing) And when it comes to the seed funding, it's, you know, compared to many startups, a small number. So, Shruthi, give us a little bit, if you could, the speeds and feeds of the company, your funding, and any places that you're hiring. >> Yeah, we are definitely hiring, let me start from there! (Tim laughing) We're hiring for developers, and we are also hiring for solution architects, so please go to vendia.net, we have all the roles listed there, we would love to hear from you! And the second one, funding, yes. Tim is our main developer and solutions architect here, and look, the Serverless movement really helped quite a few companies, including us, to build this and bring this to market at record speed, and we're very thankful that Tim and AWS took that stance, you know, back in 2013, 2014, to bring this to market and democratize it. I think when we brought this new concept to our investors, they saw what this could be. It's not an easy concept to understand in the first wave, but when you understand the problem space, you see that the opportunity is pretty endless.
And I'll say this for our investors, on behalf of our investors: they saw a real founder-market fit between us. We're literally the two people who have launched and run businesses for both Serverless and blockchain at scale, so that's what was very attractive to them. And then look, it's Tim and I, and we're looking to hire 8 to 10 folks, and I think we have gotten to a space where we're making a meaningful difference to the world, and we would love for more people to join us, join this movement, and democratize this big dispersed-data problem and solve it, and help us create more meaning from the data that our customers and companies worldwide are creating. We're very excited, and we're very thankful that all of our investors are deeply committed to us and have conviction in us. >> Well, Shruthi and Tim, first of all, congratulations -- >> Thank you, thank you. >> Absolutely looking forward to, you know, watching the progress going forward. Thanks so much for joining us. >> Thank you, Stu, thank you. >> Thanks, Stu! >> All right, and definitely tune in to our regular conversations on Cloud Native Insights. I'm your host Stu Miniman, and looking forward to hearing more about your Cloud Native Insights! (upbeat electronic music)
Mike Ferris, Red Hat | IBM Think 2020
>> From theCUBE Studios in Palo Alto and Boston, it's theCUBE, covering IBM Think, brought to you by IBM.
When we looked at the IBM strategy pre acquisition of red hat and red hat, they line up pretty well, you know, red hat. Yeah, very much. At summit it was open hybrid cloud. Uh, when I look at IBM, maybe a little bit more talk of multicloud than hybrid. Well, but hybrid is long bend a piece of it. >>So yeah. Okay. Give us a little bit of the inside, you know, with your strategy hat on it. How much had it been okay. Strong alignment, obviously IBM and red hat decades. Um, but you know, there are some places where, uh, you need to make sure that people understand that, you know, red sat still please markers with all the clouds. And of course IBM has services that span many places, but they also have, you know, products and services that are, uh, it was particular to IBM thing. Absolutely. And I think, you know, it's important to note, and this is well established that, you know, one of the core, uh, justifications and reasons for the acquisition was really around red hats. A physician, not just an open source, but in the hybrid cloud. Um, we've been talking about that for sure many years in fact, before most of the vendor's name has predicted up. >>Um, uh, but just as importantly, I think if you look back at Marvin Krishna's announcements on frankly the day that he was named CEO, uh, you know, he starts talking about things like IBM's focus being hybrid. Yeah. AI. And how did those things come together and who were the participants in that value being delivered? Certainly from red hat's perspective is, as we've said, we've been talking about hybrid and delivering on hybrid for many years now. Now that's being, being pushed as part of the IBM overall message. Um, and so certainly being able to leverage that value and extend it throughout the ecosystem that IBM brings throughout the software that IBM has and their services. You know, certainly we think we've got a, a good opportunity to really take that message broader in the market. Um, you know, with again, with, with both Paul and Jim, president and CEO of red hat working together and we'll be able to take that and leverage that capability throughout all of IBM generally. >>Yeah. I'm glad you brought up the AI piece because one of the things that really struck me, thumb it often we're talking about plot worms and we're talking about infrastructure. And while that is my background, we understand that the reason infrastructure exists is because my Apple, that application and one of the most important piece of applications or data. So, you know, red hat of course has a strong history with hi guys, uh, to applications and data. You, you've got an operating system as you know, one of the core pieces of what you're doing. And when I think about IBM and its strengths, well the first thing I probably think of is services. But the second thing I think of was all of the businesses productivity, uh, the databases, you know, all these applications that IBM has. I read it over the years, uh, wondering if we can just click down one notch and you talk about, uh, you know, hybrid cloud and AI and everything. >>How are IBM and red hat helping customers build all of those new applications go through those transformations, uh, to really be modern enterprises? 
Yeah, so certainly if you look at red hat's history where we focused very much on building the platforms and again, whether that was red hat, enterprise, Linux open shift or J boss, you know, our focus has been how can we make a standardized platform, it will work across the industry regardless of use case or industry verdict. IBM, you know, has both platforms as well as a lot of investment in capabilities in the higher level value services as well as the specializations. And use of these applications and platforms for specific vertical industries. And a lot of what they've been able to bring to the table with your investments in Watson and AI as well as a lot of their data services has certainly start to come to fruition. >>And when we start taking these two in combination and applying, for example, a focus on developers, developer tools, being able to bring a value to not just uh, the operations folks, but also the developer side and really put a lot of the AI capabilities cross that we're starting to see, you know, accelerated value, accelerated use. And then if you layer that on top of a hybrid approach, you know, we've got a very strong message that crosses everything from, you know, existing applications to net new applications before developing from their DevOps cycle all the way through their operation cycle at the bottom end where they're, they're actually trying to do boy cross multiple platforms, multiple infrastructures, and keep everything consistently managed, secured and operated. And that's, that's really the overall message that we're seeing as we talk about this together with IBM. All right. So, Mike, you touched on some of the products that that red hat, uh, offers in the portfolio. >>Uh, it was, it was a real focus at summit, not really to talk about the announcements, you know, a week before a summit two came out. Yeah. Uh, OpenShift bar dog four wasn't a big w blob. Uh, you know, give us the update on really the red hat portfolio and you know, where are those points? You know, IBM is helping red hat scale. Yeah. So certainly you've touched on some of the big ones, right? Well, OpenShift itself with the four dot. Four release brings a lot of new capabilities, uh, that are being brought forward to those customers. I have a better management, better capabilities and what they can do from monitoring service, et cetera. Um, but certainly also things like what we're doing with OpenShift virtualization, which was another announcement. There were, we're actually doing, you know, bringing a game, changing capability to the market, uh, and enabling customers that have both existing, uh, virtual virtualized environments and also new or, or migrated or transformed a container, native environments and running those on the same platform. >>With the same management infrastructure, we see that as huge to be able to simplify the management capabilities, understand cost and be able to control those environments in a much more consistent way. Uh, secondly, uh, you know, one of the big things that's been happening is really around advanced container management. What we're calling an ACM. Uh, this is, this is a good example of how red hat and IBM have worked together, uh, to bring existing IBM capabilities and what they had called a multi cluster management or MCM and bring those not just into red hat yes. Part of our platforms, but also have red hat take the step of open sourcing that and making it part of the industry standard through open source community. 
So being able to take that type of value that IBM had matured, take it through red hat into the open source community, but simultaneously deliver it to our customers. >>Yeah. Open shift and make it part of the platform. It's something we really see as, as a huge value add. Mmm. We're also doing a lot more with hyperscalers, especially in the space of OpenShift managed services. Uh, you saw some of those last week and I would encourage everyone to go out and, and look at the Paul Cormier and Scott Guthrie announcements that we did. There was a keynote, a video that you can go review. Uh, but, but certainly, uh, certainly the focus on how do we work with these hyperscalers inclusive of IBM, uh, to make open shift and much more fluid deployment option, have it more, more service oriented, a both on premise and off premise so the customers can actually, uh, work together better in it. Yeah. A red hat I think has always done a really good job of highlighting those partnerships. It's way easy on the outside to talk about the competitive nature of the industry. >>And I remember a few years ago, a red hat made, you know, a strong partnership with AWS. You mentioned, you know, Scott Guthrie from Microsoft. Well, okay. Not Satya Nadella. Okay. Love it last year, but Microsoft long partner. Oh, okay. Of course, with IBM back to the earliest days, uh, and with red hat or, uh, you know, in the much more recent days, uh, there was those partnerships. So critically important. ACM definitely an area, uh, we want to watch it. It was really question we had had, if you look at last year, Microsoft announced Azure, uh, there are lots of solutions announced as to how am I going to manage in this multicloud world. Um, because it's not, my piece is everywhere. It's now I need to manage a lot of things that are out of my control from different vendors and hopefully we learned a lot of the lessons from the multi-vendor era that will be fixed in the multi cloud era. >>Oh, absolutely. And you know, arc was part of our discussion with Scott Guthrie last week or Paul's discussion and you'll see a demo of that. But I would also expect that you'll see more things coming from us markers as well. Right. You know, this is about building a platform, a hybrid platform that works in a multicloud world and being able to describe that in a very consistent way. Manage it. You were at entitled it in a very consistent way of across all the vendors, inclusive of both self and managed services, only one option. And so we're very focused on doing that. Um, IBM, certainly AXA assisting in that, helping grow it. But overall this focus is really about red has perspective about making that hybrid, right? the leading hybrid platform, the leading Coobernetti's. Okay. uh, in the industry. And that's, that's really where starting from with OpenShift. >>All right. So, so Mike, we started out the discussion talking about some of the changes and you know, where red hat stays, red hat and where the company is working together. Obviously the leadership changes. Oh, we're a big piece. Uh, congratulations you, you got, you know, a new role. I've seen quite a few people, uh, with some new titles. Uh, you know, w which is always nice to see. Uh, the, the people that have been working for a long time. The other area where seems from the outside there coordinated effort is around the covert response. So, you know, I've seen the, the public letters from, from Arvin Krishna of course. red hat and Paul Cormier's letter. Well, he is there. 
Uh, IBM was one of the first companies that we had heard from, uh, that said, Hey, you know, we're not going to RSA conference this year. >>We're moving digital, uh, with the events. So no real focus on them boys. And then of course boarding customers. Yeah. How does that covert response happen? And am I right from the outside that it looks like there, there is a bit of United right attack, this global pandemic response. It is a, you know, I think there's two levels to this. Certainly between red hat and IBM were well coordinated. Um, within, within red hat we have, uh, we have teams that are specifically dedicated to making sure, yeah, our associates and more importantly, uh, our customers and the overall communities are well-served through this. As you said earlier in the interview, uh, certainly we hold back on any significant product announcements at summit, including with some of our partners merely because we wanted to maintain this focus on how can we help everyone through this very unfortunate experience. >>Um, and so, you know, as obviously a lot of us, all of us are sitting at home now globally. Uh, the focus is very much how do we stay connected or we keep the business flowing as much as possible through this and, and, and keep people safe and secure in their environments and make sure that we serve both the customers and the associates. Yes. Awesome away. So there's a lot of sensitivity and we want to make sure that, you know, the industry and the overall world knows, uh, that we're very focused on keeping people healthy and moving forward as we, as we work through this together as a world. Yeah. Well, Mike Ferris, thank you so much for the update. It's been been a pleasure catching up. Great. Thanks dude. Appreciate it. All right. Stay tuned for lots more coverage from IBM. Think 20, 20. The global digital experience. Okay. To a minimum. And thank you. We're watching. Thank you.
Sizing and Configuring Vertica in Eon Mode for Different Use Cases
>> Jeff: Hello everybody, and thank you for joining us today, in the virtual Vertica BDC 2020. Today's breakout session is entitled "Sizing and Configuring Vertica in Eon Mode for Different Use Cases". I'm Jeff Healey, and I lead Vertica Marketing. I'll be your host for this breakout session. Joining me are Sumeet Keswani and Shirang Kamat, Vertica Product Technology Engineers and key leads on the Vertica customer success team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait; just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we will answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer off-line. Alternatively, visit Vertica Forums, at forum.vertica.com, and post your question there after the session; our engineering team is planning to join the forums to keep the conversation going. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower-right corner of the slides, and yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started! Over to you, Shirang. >> Shirang: Thanks, Jeff. So, for today's presentation, we have picked Eon Mode concepts: we are going to go over sizing guidelines for Eon Mode, some of the use cases that you can benefit from using Eon Mode, and at last, we are going to talk about some tips and tricks that can help you configure and manage your cluster. Okay. So, as you know, Vertica has two modes of operation, Eon Mode and Enterprise Mode. So the question that you may have is, which mode should I implement? Let's look at what's there in Enterprise Mode. In Enterprise Mode, you have a cluster with general-purpose compute nodes that have locally attached storage. Because of this tight integration of compute and storage, you get fast and reliable performance all the time. Now, the amount of data that you can store in an Enterprise Mode cluster depends on the total disk capacity of the cluster. Again, Enterprise Mode is more suitable for on-premise and cloud deployments. Now, let's look at Eon Mode. To take advantage of cloud economics, Vertica implemented Eon Mode, which is getting very popular among our customers. In Eon Mode, we have compute and storage that are separated by introducing an S3 bucket, or S3-compliant storage. Now, because of this separation of compute and storage, you can take advantage of things like dynamic scale-out and scale-in, isolation of your workloads, and loading data into your cluster without having to worry about the total disk capacity of your local nodes. Obviously, as is clear from what I just said, Eon Mode is suitable for cloud deployments. Some of our customers who want to take advantage of the features of Eon Mode are also deploying it on premise, by introducing S3-compliant object storage. Okay? So, let's look at some of the terminology used in Eon Mode. The four things that I want to talk about are: communal storage, which is a shared storage, or S3-compliant shared storage, a bucket that is accessible from all the nodes in your cluster; shard, which is a segment of data stored on the communal storage; subscription, which is the binding between nodes and shards; and last, the depot.
The depot is a local copy, or a local cache, that can help improve query performance. So, a shard is a segment of data stored in communal storage. When you create an Eon Mode cluster, you have to specify the shard count. The shard count decides the maximum number of nodes that will participate in your query. Vertica will also introduce a shard, called the replica shard, that will hold the data for replicated projections. Subscription, as I said before, is the binding between nodes and shards. Each node subscribes to one or more shards, and a shard has at least two nodes that subscribe to it, for K-safety. Subscribing nodes are responsible for writing to and reading from the shard data. A subscriber node also holds up-to-date metadata for the catalog of files that are present in the shard. So, when you connect to a Vertica node, Vertica will automatically assign you a set of nodes and subscriptions that will process your query. There are two important system tables, NODE_SUBSCRIPTIONS and SESSION_SUBSCRIPTIONS, that can help you understand this a little bit more. So let's look at what's on the local disk of your Eon Mode cluster. On local disk, you have the depot. The depot is a local file system cache that can hold a subset of the data, or a copy of the data, in communal storage. The other things that are there are temp storage, which is used for storing data belonging to temporary tables and the data that spills to disk when you are processing queries, and, last, the catalog. The catalog is a persistent copy of the Vertica catalog that is written to disk. The writes happen at every commit; you only need the persistent copy at node startup. There is also a copy of the Vertica catalog stored in communal storage, for durability. The local copy is synced to the copy in communal storage via a service, at an interval of five minutes. So, let's look at the depot. Now, as I said before, the depot is your file system cache. It helps to reduce network traffic and improve the performance of your queries. We make the assumption that when you load data into Vertica, that's the data you may most frequently query. So, all data that is loaded into Vertica first enters the depot, and then, as part of the same transaction, is also synced to communal storage for durability. When you run a query against Vertica, your queries are also going to look for the files in the depot first, and if the files are not found, the queries will access the files from communal storage. Now, the behavior of whether new files should first enter the depot or skip the depot can be changed by configuration parameters that let you skip the depot when writing. When the files are not found in the depot, we make the assumption that you may need those files for future runs of your query, which means we will fetch them asynchronously into the depot, so that you have those files for future runs. If that's not the behavior that you intend, you can change the configuration to tell Vertica not to fetch them when you run your query, and this configuration parameter can be set at the database level, session level, or query level; we are also introducing a user-level parameter where you can change this behavior. Because the depot is going to be limited in size, compared to the amount of data that you may store in your Eon cluster, at some point in time your depot will be full, or hit capacity. To make space for new data that is coming in, Vertica will evict some of the files that are least frequently used.
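As a rough sketch of what those knobs look like in practice, the statements below use parameter and system table names from recent Vertica releases; treat them as assumptions to verify against your version's documentation:

```sql
-- Sketch: inspect which shards each node, and this session, are bound to
-- (the two system tables named in the talk).
SELECT * FROM node_subscriptions;
SELECT * FROM session_subscriptions;

-- Skip the depot when writing, for this session only; the same parameter
-- can also be set at the database level with ALTER DATABASE.
ALTER SESSION SET UseDepotForWrites = 0;

-- Read straight from communal storage, without fetching missed files
-- into the depot.
ALTER SESSION SET UseDepotForReads = 0;
```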
Hence, the depot is going to be your query performance enhancer, and you want to shape the contents of your depot; so what you want to do is decide what shall be in your depot. Now, Vertica provides policies, called pinning policies, that can help you pin a specific table, or a partition of a table, into the depot, at the subcluster level or at the database level, and Sumeet will talk about this a bit more in his later slides. Now, let's look at some of the system tables that can help you understand the size of the depot, what's in your depot, what files were evicted, and what files were recently fetched into the depot. One of the important system tables that I have listed here is DC_FILE_READS. DC_FILE_READS can be used to figure out whether your transaction or query fetched data from the depot, from communal storage, or both. One of the important features of Eon Mode is the subcluster. Vertica lets you divide your cluster into smaller execution groups. Each of the execution groups has a set of nodes, together subscribed to all the shards, that can process your query independently. So when you connect to one node in a subcluster, that node, along with the other nodes in the subcluster, will be the only ones to process your query, and because of that, we can achieve isolation, as well as, you know, fast scale-out and scale-in, without impacting what's happening on the rest of the cluster. The good thing about subclusters is that all the subclusters have access to the communal storage, and because of this, if you load data in one subcluster, it's accessible to the queries that are running in other subclusters. When we introduced subclusters, we knew that our customers would really love these features, and some of the things that we were considering were these: we knew that our customers would dynamically scale out and in, that they would add and remove lots of subclusters on demand, and we had to provide the ability to add and remove subclusters in a fast and reliable way. We knew that during off-peak hours, our customers would shut down many of their subclusters; that means more than half of the nodes could be down, and we had to make adjustments to our quorum policy, which requires at least half of the nodes to be up for the database to stay up. We also were aware that customers would add hundreds of nodes to the cluster, which means we had to make adjustments to the catalog and commit policy. To take care of all three of these requirements, we introduced two types of subclusters: primary subclusters and secondary subclusters. The primary subcluster is the one that you get by default when you create your first Eon cluster. The nodes in the primary subcluster are always up; that means they stay up and participate in the quorum. The nodes in the primary subcluster are responsible for processing commits, and they also maintain a persistent copy of the catalog on disk. This is the subcluster that you would use to process all your ETL jobs, because the Tuple Mover also runs on the nodes in the primary subcluster. If at this point you want another subcluster where you would like to run queries, and also scale this subcluster up and down depending on the demand or the workload, you would create a new subcluster, and this subcluster will be, of course, secondary in nature. Now, secondary subclusters have nodes that don't participate in the quorum, so if these nodes are down, there is no impact on Vertica.
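To make the pinning idea concrete, here is a minimal sketch; the table name is hypothetical, and the pin-policy functions and DC-table columns follow recent Eon releases, so confirm they exist in yours:

```sql
-- Sketch: pin a hot table so depot eviction spares it; a partition-level
-- variant also exists in newer releases (the range bounds are illustrative).
SELECT SET_DEPOT_PIN_POLICY_TABLE('public.fact_sales');
SELECT SET_DEPOT_PIN_POLICY_PARTITION('public.fact_sales', '2020-01', '2020-03');

-- Did recent statements read from the depot or go out to communal storage?
SELECT * FROM dc_file_reads ORDER BY time DESC LIMIT 20;
```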
These nodes are also not responsible for processing commits, and though they maintain up-to-date copies of the catalog in memory, they don't store the catalog on disk. And these are subclusters that you can add and remove very quickly, without impacting what is running on the other subclusters. We have customers running subclusters with hundreds of nodes, and subclusters of sizes like 64 nodes, and they can bring a subcluster up and down, or add and remove one, within a few minutes. So before I go into the sizing of Eon Mode, I just want to say one more thing here. We are working very closely with some of our customers who are running Eon Mode, getting feedback from them on a regular basis, and based on that feedback, we are making lots of improvements and fixes in every hot-fix that we put out. So if you are running Eon Mode and want to be part of this group, I suggest that you keep your cluster current with the latest hot-fixes and work with us to give us feedback and get the improvements that you need to be successful. So let's look at what we need to size Eon clusters. Sizing Eon clusters is very different from sizing Enterprise Mode clusters. When you are sizing a Vertica cluster running Enterprise Mode, you need to take into account the amount of data that you want to store and the configuration of your nodes; based on that, you decide how many nodes you will need, and then start the cluster. In Eon Mode, to size a cluster, you need a few things, like: what should your shard count be? The shard count decides the maximum number of nodes that will participate in your query, and we'll talk about this a little bit more on the next slide. You will decide on the number of nodes that you will need within a subcluster, the instance type you will pick for running a specific subcluster, how many subclusters you will need, how many of them should be running all the time, and how many should be running in a dynamic mode. When it comes to shard count, you have to pick the shard count up front, and you can't change it once your database is up and running. So, you need to pick the shard count depending on the number of nodes that you will need to process a query. Now, one thing that we want to remember here is that this is not the amount of data that you have in the database, but the amount of data your queries will process. So, you may have data for six years, but if your queries process the last month of data on most occasions, or if your dashboards are processing up to six weeks, or ten minutes, based on whatever your needs are, you will pick the number of shards, the shard count, and the nodes based on how much data your queries process. Looking across our customers, we think that 12 is a good number that should work for most of them, and that means the maximum number of nodes in a subcluster that will process queries is going to be 12. If you feel that you need more than 12 nodes to process your query, you can pick other numbers, like 24 or 48. If you pick a higher number, like 48, and you go with three nodes in your subcluster, that means each node subscribes to 16 primary and 16 secondary shard subscriptions, which totals 32 subscriptions per node. That will leave your catalog in a broken state.
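A quick way to sanity-check these sizing decisions on a running cluster is to query the shard and subscription metadata; the system table names here are an assumption based on recent Eon releases:

```sql
-- Sketch: confirm the shard count the database was created with.
-- 'Segment' shards carry segmented data; the extra 'Replica' shard
-- holds replicated projections.
SELECT shard_type, COUNT(*) AS shard_count
FROM shards
GROUP BY shard_type;

-- Verify that shard subscriptions are evenly balanced across nodes.
SELECT node_name, COUNT(*) AS subscriptions
FROM node_subscriptions
GROUP BY node_name
ORDER BY node_name;
```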
So, pick your shard count appropriately, and don't pick prime numbers. We suggest 12 should work for most of our customers; if you think your queries process more than the regular amount, say terabytes of data, then pick a number like 24. Don't pick a prime number. Okay? We are also coming up with features in Vertica, like crunch scaling, that will help you run queries on more nodes than the number of shards that you pick, and that feature will be coming out soon. So if you have picked a smaller shard count, it's not the end of the story. Now, the next thing is, you need to pick how many nodes you need within your subclusters to process your query. The ideal number would be a node count equal to the shard count, or, if you want to pick a number that is less, pick a node count such that each of the nodes has a balanced distribution of subscriptions. So over here, you have the option of 12 nodes and 12 shards, or two subclusters with 6 nodes and 12 shards. Depending on your workload, you can pick either of the two options. The first option, where you have 12 nodes and 12 shards, is more suitable for batch applications, whereas two subclusters with six nodes each are more suitable for dashboard-type applications. Picking subclusters depends on your workload; you can add and remove subclusters for workload isolation, or for elastic throughput scaling. Different subclusters can have nodes of different sizes, but you need to make sure that the nodes within a subcluster are homogeneous. So this is my last slide before I hand over to Sumeet, and this, I think, is a very important slide that I want you to pay attention to. When you pick an instance type, you are going to pick it based on workload and query budget. I want to make it clear here that we want you to pay attention to the local disk, because you have the depot on your local disk, which is going to be your query performance enhancer for all kinds of deployments, in the cloud as well as on premise. So despite what you may have read or heard, depots still play a very important role in every Eon deployment, and they act like performance enhancers. Most of our customers choose Vertica because they love the performance we offer, and we don't want you to compromise on that performance. So pick nodes with some amount of local disk; at least two terabytes is what we suggest. i3 instances in Amazon, you know, come with good local disks that are very helpful, and some of our customers are benefiting from them. With that said, I want to pass it over to Sumeet. >> Sumeet: So, hi everyone, my name is Sumeet Keswani, and I'm a Product Technology Engineer at Vertica. I will be discussing the various use cases that customers deploy in Eon Mode. After that, I will go into some technical details of SQL, and then I'll blend that into the best practices in Eon Mode, and finally, we'll go through some tips and tricks. So let's get started with the use cases. A very basic use case that users will encounter when they start with Eon Mode for the first time is to have two subclusters. The first subcluster will be the primary subcluster, used for ETL, like Shirang mentioned, and this subcluster will be mostly on, or always on. And there will be another subcluster used purely for queries. This subcluster is the secondary subcluster, and it will be on some of the time, depending on the use case.
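For that sometimes-on query subcluster, here is a sketch of how you might stop it gracefully at the end of the day. SHUTDOWN_SUBCLUSTER is available in newer Eon releases (older versions do this through admintools), and the subcluster name is hypothetical:

```sql
-- Sketch: stop the secondary query subcluster outside business hours.
-- Bringing it back up later is a provisioning/admintools operation,
-- since the nodes themselves are down.
SELECT SHUTDOWN_SUBCLUSTER('query_subcluster');
```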
Maybe from nine to five, or Monday to Friday, depending on what application is running on it, or what users are doing on it. So this is the most basic use case, something users get started with to get their feet wet. Now, as the use of Eon Mode with subclusters increases, users will graduate into the second use case, and this is the next level of deployment. In this situation, they still have the primary subcluster, which is used for ETL, typically a larger subcluster where there is heavier ETL running, pretty much non-stop. Then they have the usual query subcluster, which they use for queries, but they may add another secondary subcluster for ad-hoc workloads. The motivation for this subcluster is to isolate the unpredictable workload from the predictable workload, so that one does not impact the other. So you may have ad-hoc queries, users running larger queries, or bad workloads that occur once in a while, run on a different secondary subcluster, so as to not impact the more predictable workload running on the first subcluster. Now, there is no reason why these two subclusters need to have the same instances: they can have different numbers of nodes, different instance types, different depot configurations, and everything can be different. Another benefit is that they can be metered differently, they can be costed differently, so that the appropriate user or tenant can be billed the cost of compute. Now, as the use increases even further, this is what we see as the final state of a very advanced Eon Mode deployment. As you see, there is the primary subcluster, of course, used for ETL, very heavy ETL, and that's always on. There are numerous secondary subclusters: some for predictable applications that have a very fine-tuned workload that needs definite performance, others for different usages, some for ad-hoc queries, others for demanding tenants, and there could be still more subclusters for different departments, like Finance, that need one maybe at the end of the quarter. So very, very different applications, and this is the full and final promise of Eon, where there is workload isolation, there is different metering, and each app runs in its own compute space. Okay, so let's talk about a very interesting feature in Eon Mode, which we call Hibernate and Revive. So what is Hibernate? Hibernating a Vertica database is the act of dissociating all the compute from the database, and shutting it down. At this point, you shut down all compute. You still pay for storage, because your data is in the S3 bucket, but all the compute has been shut down, and you do not pay for compute anymore. If you have reserved instances, or any other instances, you can use them for different applications, and your Vertica database is shut down. So this is very similar to stopping a database; in Eon Mode, you're stopping all compute, the benefit of course being that you pay nothing anymore for compute. So what is Revive, then? Revive is the opposite of Hibernate, where you now associate compute with your S3 bucket, or your storage, and start up the database. There is one limitation here that you should be aware of, which is that whatever size the database was when you hibernated it, you must revive it at the same size. So if you had a 12-node primary subcluster when hibernating, you need to provision 12 nodes in order to revive.
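Before hibernating, the key step you can take from SQL is the clean shutdown, so the catalog and any pending data are synced to communal storage; the hibernate and revive operations themselves are performed with the admin tooling rather than SQL. A minimal sketch:

```sql
-- Sketch: cleanly shut down the (already shrunk) cluster so everything
-- is synced to S3 before you hibernate and release the compute.
SELECT SHUTDOWN();
```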
So one best practice comes down to this: you must shrink your database to the smallest size possible before you hibernate, so that you can revive it at the same size, and you don't have to spin up a ton of compute in order to revive. So basically, what this means is, when you have decided to hibernate, we ask you to remove all your secondary subclusters and shrink your primary subcluster down to the bare minimum before you hibernate. And the benefit is, when you do revive, you will be able to do so with the minimum number of nodes. And of course, before you hibernate, you must cleanly shut down the database, so that all the data can be synced to S3. Finally, let's talk about backups and replication. Backups and replication are still supported in Eon Mode. We sometimes get the question, "We're in S3, and S3 has nine nines of reliability, do we need a backup?" Yes, we highly recommend backups. You can back up by using the VBR script, you can back up your database to another bucket, and you can also copy the bucket and revive a different instance of your database. This is very useful, because many times people want staging or development databases, and they need some of the data from production, and this is a nice way to get that. And it also makes sure that if you accidentally delete something, you will be able to get back your data. Okay, so let's go into best practices now. To start, let's talk about the depot first, which is the biggest performance enhancer that we see for queries. So, I want to state very clearly that reading from S3, or a remote object store like S3, is very slow, because data has to go over the network, and it's very expensive: you will pay an access cost. This is where S3 is not very cheap; every time you access the data, there is an API access cost levied. Now, the depot is a performance-enhancing feature that will improve the performance of queries by keeping a local cache of the data that is most frequently used. It will also reduce the cost of accessing the data, because you no longer have to go to the remote object store to get the data, since it's available on a local and permanent volume. Hence, depot shaping is a very important aspect of performance tuning in an Eon database. What we ask you to do is this: if you are going to use a specific table or partition frequently, you can choose to pin it in the depot, so that if your depot is under pressure or is highly utilized, these objects that are most frequently used are kept in the depot. So therefore, depot shaping is the act of setting eviction policies, whereby you prevent the eviction of files that you believe you need to keep. For example, you may keep the most recent year's data, or the most recent partition, in the depot, and thereby all queries running on those partitions will be faster. At this time, we allow you to pin any table or partition in the depot, but it is not subcluster-based; future versions of Vertica will allow fine-tuning of the depot for each subcluster. So, let's now go and understand a little bit of the internals of how a SQL query works in Eon Mode, and once I explain this, we will blend into best practices, and it will become much clearer why we recommend certain things. So, S3 is our layer of durability, where data is persistent in an Eon database. When you run an insert query, like INSERT INTO table VALUES (1), or something similar, data is synchronously written into S3.
So, before control returns back to the client, a copy of the data is first stored in the local depot and then uploaded to S3. Only then do we hand control back to the client. This ensures that if something bad were to happen, the data will be persistent. The second type of SQL transaction is what we call DDLs, which are catalog operations. So for example, you create a table, or you add a column. These operations actually work with metadata. Now, as you may know, S3 does not offer mutable storage; the storage in S3 is immutable. You can never append to a file in S3. And the way transaction logs work is that they are append operations. So when you modify the metadata, you are actually appending to a transaction log. This poses an interesting challenge, which we resolve by appending to the transaction log locally in the catalog; then there is a service that syncs the catalog to S3 every five minutes. So this carries an interesting risk, right? If you were to destroy or delete an instance abruptly, you could lose the commits that happened in the last five minutes. I'll speak to this more in the subsequent slides. Now, finally, let's look at drops and truncates in Eon. A drop or a truncate is really a combination of the first two things that we spoke about. When you drop a table, you are making a metadata change: you are telling Vertica that this table no longer exists, so we append to the transaction log that this table has been removed. This log, of course, will be synced every five minutes to S3, as we discussed. There is also the secondary operation of deleting all the files that were associated with data in this table. Now, these files are on S3. We could go about deleting them synchronously, but that would take a lot of time, and we do not want to hold up the client for this duration. So at this point, we do not synchronously delete the files; we put the files that need to be removed in a reaper queue and return control back to the client. This has the performance benefit that drops appear to occur really fast. It also has a cost benefit: batching deletes, in big batches, is more performant and less costly. For example, on Amazon, you can delete 1,000 files at a time in a single call. So if you batch your deletes, you can delete them very quickly. The disadvantage of this is that if you were to terminate a Vertica cluster abruptly, you could leak files in S3, because the reaper queue would not have had the chance to delete these files. Okay, so let's go into best practices, now that we understand some technical details. So, as I said, reading from and writing to S3 is slow and costly. The first thing you can do is avoid as many round trips to S3 as possible: the bigger the batches of data you load, the better the performance you get per commit. The next thing is, don't read from and write to S3 if you can avoid it. A lot of our customers have intermediate data processing, where they temporarily transform the data before finally committing it. There is no reason to use regular tables for this kind of intermediate data. We recommend using local temporary tables, and local temporary tables have the benefit of not having to upload data to S3 (again, see the sketch at the end of this session). Finally, there is another optimization you can make. Vertica has the concept of active partitions and inactive partitions.
Active partitions are the ones where you have recently loaded data, and Vertica is lazy about merging these partitions into a single ROS container. Inactive partitions are historical partitions, like last year's data, or the year before that, and those partitions are aggressively merged into a single container. How do we know how many partitions are active and inactive? Well, that's based on a configuration parameter. If you load into an inactive partition, Vertica is very aggressive about merging these containers, so we download the entire partition, merge the records that you loaded into it, and upload it back again. This creates a lot of network traffic, and, as I said, accessing data from S3 is slow and costly. So we recommend you not load into inactive partitions. You should load into the most recent, or active, partitions, and if you happen to load into inactive partitions, set your active partition count correctly. Okay, let's talk about the reaper queue. Depending on the velocity of your ETL, you can pile up a lot of files that need to be deleted asynchronously. If you were to terminate a Vertica cluster without allowing enough time for these files to get deleted, you could leak files in S3. Now, of course, if you use local temporary tables this problem does not occur, because the files were never created in S3. But if you are using regular tables, you must allow Vertica enough time to delete these files, and you can change the interval at which we delete, and how much time we allow for deletion at shutdown, by editing some configuration parameters that I have mentioned here. Okay, so let's talk a little bit about the catalog at this point. The catalog is synced every five minutes onto S3 for persistence, and the catalog truncation version is the minimum viable version of the catalog to which we can revive. So, for instance, if somebody destroyed the entire Vertica cluster, the catalog truncation version is the minimum viable version that you will be able to revive to. In order to make sure that the catalog truncation version is up to date, you must always shut down your Vertica cluster cleanly; this allows the catalog to be synced to S3. Here are some SQL commands that you can use to see what the catalog truncation version is on S3. For the most part, you don't have to worry about this if you're shutting down cleanly; it matters only in cases of disaster, or some event where all nodes were terminated without the user's permission. And finally, let's talk about backups one more time: we highly recommend you take backups. S3 is designed for 99.9% availability, so there could be an occasional downtime, and making sure you have backups will help. S3 will not protect you against data that was deleted by accident, so having a backup helps you there. And why not back up, right? Storage is cheap. You can replicate the entire bucket and have that as a backup, or have a DR copy running in a different region, which also serves as a backup. So, we highly recommend that you make backups. With this I would like to end my presentation, and we're ready for any questions you have. Thank you very much.
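(For reference, a few of the best practices above translate roughly into SQL as follows. This is a hedged sketch: the table names and parameter values are placeholders, and function and parameter names such as SET_DEPOT_PIN_POLICY_TABLE and ActivePartitionCount may vary by Vertica version, so check the documentation for your release.)

-- Keep intermediate results in a local temporary table, so nothing is
-- uploaded to S3 during the transformation steps:
CREATE LOCAL TEMPORARY TABLE staging_events (
    user_id INT,
    payload VARCHAR(1000)
) ON COMMIT PRESERVE ROWS;

-- Pin a frequently queried table so it is not evicted from the depot:
SELECT SET_DEPOT_PIN_POLICY_TABLE('public.recent_sales');

-- Keep the active-partition count aligned with your load pattern
-- (placeholder database name and value):
ALTER DATABASE mydb SET ActivePartitionCount = 2;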
Tapping Vertica's Integration with TensorFlow for Advanced Machine Learning
>> Paige: Hello, everybody, and thank you for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled "Tapping Vertica's Integration with TensorFlow for Advanced Machine Learning." I'm Paige Roberts, Open Source Relations Manager at Vertica, and I'll be your host for this session. Joining me is Vertica Software Engineer, George Larionov. >> George: Hi. >> Paige: (chuckles) That's George. So, before we begin, I encourage you guys to submit questions or comments during the virtual session. You don't have to wait; just type your question or comment in the question box below the slides and click submit. As soon as a question occurs to you, go ahead and type it in, and there will be a Q and A session at the end of the presentation. We'll answer as many questions as we're able to get to during that time. Any questions we don't get to, we'll do our best to answer offline. Alternatively, you can visit the Vertica Forum to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going, so you can ask an engineer afterwards, just as if it were a regular in-person conference. Also, a reminder: you can maximize your screen by clicking the double-arrow button in the lower right corner of the slides. And, before you ask, yes, this virtual session is being recorded, and it will be available to view by the end of this week. We'll send you a notification as soon as it's ready. Now, let's get started. Over to you, George. >> George: Thank you, Paige. So, I've been introduced. I'm a Software Engineer at Vertica, and today I'm going to be talking about a new feature, Vertica's integration with TensorFlow. First, I'm going to go over what TensorFlow and neural networks are. Then, I'm going to talk about why integrating with TensorFlow is a useful feature, and, finally, I'm going to talk about the integration itself and give an example. So, as we get started here, what is TensorFlow? TensorFlow is an open-source machine learning library, developed by Google, and it's actually one of many such libraries. The whole point of libraries like TensorFlow is to simplify the process of working with neural networks, such as creating, training, and using them, so that it's available to everyone, as opposed to just a small subset of researchers. So, neural networks are computing systems that allow us to solve various tasks. Traditionally, computing algorithms were designed completely from the ground up by engineers like me, and we had to manually sift through the data and decide which parts were important for the task and which were not. Neural networks aim to solve this problem, a little bit, by sifting through the data themselves and automatically finding traits and features which correlate to the right results. So, you can think of it as neural networks learning to solve a specific task by looking through the data, without having human beings sit and sift through the data themselves. So, there are a couple of necessary parts to getting a trained neural model, which is the final goal. By the way, a neural model is the same as a neural network; those terms are synonymous. First, you need this light blue circle, an untrained neural model, which is pretty easy to get in TensorFlow, and, in addition to that, you need your training data. Now, this involves both training inputs and training labels, and I'll talk about exactly what those two things are on the next slide.
But, basically, you need to train your model with the training data, and, once it is trained, you can use your trained model to predict on just the purple circle, new inputs. It will predict the labels for you; you don't have to label the data anymore. So, training a neural network can be thought of as teaching a person how to do something. For example, if I want to learn to speak a new language, let's say French, I would probably hire some sort of tutor to help me with that task, and I would need a lot of practice constructing and saying sentences in French, along with a lot of feedback from my tutor on whether my pronunciation or grammar, et cetera, is correct. So, that would take me some time, but, finally, hopefully, I would be able to learn the language and speak it correctly without any feedback. In a very similar manner, a neural network needs to practice on example training data first, and, along with that data, it needs labeled data. In this case, the labeled data is analogous to the tutor: it is the correct answers, so that the network can learn what those look like. But, ultimately, the goal is to predict on unlabeled data, which is analogous to me knowing how to speak French. So, I went over most of the bullets. A neural network needs a lot of practice. To do that, it needs a lot of good labeled data, and, finally, since a neural network needs to iterate over the training data many, many times, it needs a powerful machine which can do that in a reasonable amount of time. So, here's a quick checklist of what you need if you have a specific task that you want to solve with a neural network. The first thing you need is a powerful machine for training; we discussed why this is important. Then, you need TensorFlow installed on the machine, of course, and you need a dataset and labels for your dataset. Now, this dataset can be hundreds of examples, thousands, sometimes even millions. I won't go into that, because the dataset size really depends on the task at hand, but, if you have these four things, you can train a good neural network that will predict whatever result you want it to predict. So, we've talked about neural networks and TensorFlow, but the question is, if we already have a lot of built-in machine-learning algorithms in Vertica, then why do we need to use TensorFlow? To answer that question, let's look at this dataset. This is a pretty simple toy dataset with 20,000 points, but it simulates a more complex dataset with two classes which are not related in a simple way. The existing machine-learning algorithms that Vertica already has mostly fail on this pretty simple dataset. Linear models can't really draw a good line separating the two types of points. Naïve Bayes also performs pretty badly, and even the Random Forest algorithm, which is a pretty powerful algorithm, gets only 80% accuracy with 300 trees. However, a neural network with only two hidden layers gets 99% accuracy in about ten minutes of training. So, I hope that's a pretty compelling reason to use neural networks, at least sometimes. As an aside, there are plenty of tasks that do fit the existing machine-learning algorithms in Vertica.
That's why they're there, and if one of the tasks you want to solve fits one of the existing algorithms, then I would recommend using that algorithm, not TensorFlow, because, while neural networks have their place and are very powerful, it's often easier to use an existing algorithm, if possible. Okay, so, now that we've talked about why neural networks are needed, let's talk about integrating them with Vertica. Neural networks are best trained using GPUs, which are Graphics Processing Units; a GPU is, basically, just a different processing unit than a CPU. GPUs are good for training neural networks because they excel at doing many, many simple operations at the same time, which is what a neural network needs in order to iterate through the training data many times. However, Vertica runs on CPUs and cannot run on GPUs at all, because that's not how it was designed. So, to train our neural networks, we have to go outside of Vertica, and exporting a small batch of training data is pretty simple, so that's not really a problem. But, given this information, why do we even need Vertica? If we train outside, then why not do everything outside of Vertica? To answer that question, here is a slide that Philips was nice enough to let us use. This is an example of a production system at Philips. It consists of two branches. On the left, we have a branch with historical device log data, and this can be thought of as a bunch of training data. All that data goes through some data integration and data analysis. Basically, this is where you train your models, whether or not they are neural networks, but, for the purpose of this talk, this is where you would train your neural network. And, on the right, we have a branch with live device log data coming in from various MRI machines, CAT scan machines, et cetera, and this is a ton of data. These machines are constantly running, they're constantly on, and there's a bunch of them, so data just keeps streaming in. We don't want this data to have to take any unnecessary detours, because that would greatly slow down the whole system. So, the data in the right branch goes through an already-trained predictive model, which needs to be pretty fast, and, finally, it allows Philips to do maintenance on these machines before they actually break, which helps Philips, obviously, and definitely the medical industry as well. So, I hope this slide helped explain the complexity of a live production system and why it might not be reasonable to train your neural networks directly in the system with the live device log data. A quick summary of just the neural networks section: neural networks are powerful, but they need a lot of processing power to train, which can't really be done well in a production pipeline. However, they are cheap and fast to predict with; prediction with a neural network does not require a GPU. And they can be very useful in production, so we do want them there. We just don't want to train them there. So, the question is, now, how do we get neural networks into production? We have, basically, two options. The first option is to take the data and export it to our machine with TensorFlow, our powerful GPU machine, or we can take our TensorFlow model and put it where the data is; in this case, let's say that that is Vertica. I'm going to go through some pros and cons of these two approaches. The first one is bringing the data to the analytics.
The pros of this approach are that TensorFlow is already installed and running on this GPU machine, and we don't have to move the model at all. The cons, however, are that we have to transfer all the data to this machine, and, if that data is big (gigabytes, terabytes, et cetera), then that becomes a huge bottleneck, because you can only transfer in small quantities, since GPU machines tend to not be that big. Furthermore, TensorFlow prediction doesn't actually need a GPU, so you would end up paying for an expensive GPU for no reason. It's not parallelized, because you just have one GPU machine. You can't put your production system on this GPU machine, as we discussed. And so you're left with good results, but not fast and not where you need them. So, now, let's look at the second option: bringing the analytics to the data. The pros of this approach are that we can integrate with our production system. It's low impact, because prediction is not processor-intensive. It's cheap, or, at least, pretty much as cheap as your system was before. It's parallelized, because Vertica was always parallelized, which we'll talk about on the next slide. There's no extra data movement. You get the benefit of model management in Vertica, meaning, if you import multiple TensorFlow models, you can keep track of their various attributes, when they were imported, et cetera. And the results are right where you need them, inside your production pipeline. The two cons are that TensorFlow is limited to just prediction inside Vertica, and, if you want to retrain your model, you need to do that outside of Vertica and then reimport. So, just as a recap of parallelization: everything in Vertica is parallelized and distributed, and TensorFlow is no exception. When you import your TensorFlow model to your Vertica cluster, it gets copied to all the nodes automatically, and TensorFlow will run in fenced mode, which means that if the TensorFlow process fails for whatever reason (even though it shouldn't, but if it does), Vertica itself will not crash, which is obviously important. And, finally, prediction happens on each node. There are multiple threads of TensorFlow processes running, processing different little bits of data, which is much faster than processing the data line by line, because it all happens in a parallelized fashion. And so the result is fast prediction. So, here's an example which I hope is a little closer to what everyone is used to than the usual machine-learning TensorFlow example. This is the Boston housing dataset, or, rather, a small subset of it. On the left, we have the input data (going back to the first slide), and, on the right, are the training labels. The input data consists of, on each line, a plot of land in Boston, along with various attributes, such as the level of crime in that area, how much industry is in that area, whether it's on the Charles River, et cetera, and, on the right, we have as the labels the median house value on that plot of land. And so the goal is to put all this data into the neural network and, finally, get a model which can predict on new incoming data and produce a good housing value for that data. Now, I'm going to go through, step by step, how to actually use TensorFlow models in Vertica.
So, the first step I won't go into much detail on, because there are countless tutorials and resources online on how to use TensorFlow to train a neural network; that's the first step. The second step is to save the model in TensorFlow's 'frozen graph' format. Again, this information is available online. The third step is to create a small, simple JSON file describing the inputs and outputs of the model, what data type they are, et cetera. This is needed for Vertica to be able to translate from TensorFlow land into Vertica SQL land, so that it can use a SQL table instead of the input set TensorFlow usually takes. So, once you have your model file and your JSON file, you want to put both of those files in a directory on a node, any node, in the Vertica cluster, and name that directory whatever you want your model to ultimately be called inside of Vertica. Once you do that, you can go ahead and import that directory into Vertica. This import-models function already exists in Vertica; all we added was a new category that it can import. So, what you need to do is specify the path to your neural network directory and specify that the model's category is TensorFlow. Once you successfully import, in order to predict, you run this brand new predict-TensorFlow function. In this case, we're predicting on everything from the input table, which is what the star means. The model name is bostonHousingNet, which is the name of your directory, and then there's a little bit of boilerplate. The two names, ID and value, after the AS are just the names of the columns of your outputs, and, finally, the Boston housing data is whatever SQL table you want to predict on that fits the input type of your network. And this will output a bunch of predictions; in this case, values of houses that the network thinks are appropriate for all the input data. So, just a quick summary. We talked about what TensorFlow and neural networks are, and then we discussed that TensorFlow works best for training on GPUs, because it needs very specific characteristics, while Vertica is designed to use CPUs, and it's really good at storing and accessing a lot of data quickly, but it's not very well designed for having neural networks trained inside of it. Then we talked about how neural models are powerful, and we want to use them in our production flow. Since prediction is fast, we can go ahead and do that; we just don't want to train there. And, finally, I presented Vertica's TensorFlow integration, which allows importing a trained TensorFlow model into Vertica and predicting on all the data that is inside Vertica with a few simple lines of SQL. So, thank you for listening. I'm going to take some questions now.
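(For reference, the import-and-predict flow George described looks roughly like the following SQL. This is a sketch: the directory path, model name, and table name come from the Boston housing example and are placeholders, and parameter names such as num_passthru_cols should be checked against the documentation for your version.)

-- Import the frozen-graph model; the directory also contains the small JSON
-- file describing the model's inputs and outputs:
SELECT IMPORT_MODELS('/path/to/bostonHousingNet'
                     USING PARAMETERS category='TENSORFLOW');

-- Predict on a SQL table whose columns match the model's declared inputs;
-- id and value are just the names chosen for the output columns:
SELECT PREDICT_TENSORFLOW(*
           USING PARAMETERS model_name='bostonHousingNet', num_passthru_cols=1)
       OVER(PARTITION BEST) AS (id, value)
FROM boston_housing_data;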
End-to-End Security
>> Paige: Hello everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "End-to-End Security in Vertica." I'm Paige Roberts, Open Source Relations Manager at Vertica, and I'll be your host for this session. Joining me are Vertica Software Engineers Fenic Fawkes and Chris Morris. Before we begin, I encourage you to submit your questions or comments during the virtual session. You don't have to wait until the end; just type your question or comment in the question box below the slide as it occurs to you and click submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't address, we'll do our best to answer offline. Also, you can visit the Vertica forums to post your questions there after the session. Our team is planning to join the forums to keep the conversation going, so it'll be just like being at a conference and talking to the engineers after the presentation. Also, a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And before you ask, yes, this whole session is being recorded, and it will be available to view on demand this week. We'll send you a notification as soon as it's ready. I think we're ready to get started. Over to you, Fen. >> Fenic: Hi, welcome everyone. My name is Fen, my pronouns are fae/faer, and Chris will be presenting the second half; his pronouns are he/him. So to get started, let's go over what the goals of this presentation are. First off, no two deployments are the same, so we can't give you an exact "here's the right way to secure Vertica," because how your deployment is set up is a factor. But the biggest factor is, what is your threat model? If you don't know what a threat model is, let's take an example. We're all working from home because of the coronavirus, and that introduces certain new risks. Our source code is on our laptops at home, that kind of thing. But really, our threat model isn't that people will read our code over our shoulders and copy it; it's more that a laptop could be lost or stolen. So we've encrypted our hard disks, and that kind of thing, to make sure that no one can get at the data. So basically, what we're going to give you are building blocks, and you can pick and choose the pieces that you need to secure your Vertica deployment. We hope that this gives you a good foundation for how to secure Vertica. And now, what we're going to talk about. We're going to start off by going over encryption, which is how to secure your data from attackers. Then authentication, which is how you log in. Identity: who are you? Authorization: now that we know who you are, what can you do? Delegation is about how Vertica talks to other systems. And then auditing and monitoring. So, how do you protect your data in transit? Vertica makes a lot of network connections; here are the important ones, basically. There are clients talking to the Vertica cluster. The Vertica cluster talks to itself. It can also talk to other Vertica clusters, and it can make connections to a bunch of external services. So first off, let's talk about client-server TLS. This is how you secure data between Vertica and clients. It prevents an attacker from sniffing network traffic and, say, picking out sensitive data. Clients have a way to configure how strict the authentication of the server cert is.
It's called the Client SSLMode, and we'll talk about this more in a bit, but authentication methods can disable non-TLS connections, which is a pretty cool feature. Okay, so Vertica also makes a lot of network connections within itself. If Vertica is running behind a strict firewall and you have really good network security, both physical and software, then it's probably not super important that you encrypt all traffic between nodes. But if you're on a public cloud, you can set up AWS' firewall to prevent connections, but if there's a vulnerability in that, then your data is totally vulnerable. So it's a good idea to set up inter-node encryption in less secure situations. Next, import/export is a good way to move data between clusters. So for instance, say you have an on-premises cluster and you're looking to move to AWS. Import/export is a great way to move your data from your on-prem cluster to AWS, but that means the data is going over the open internet. That is another case where an attacker could try to sniff network traffic and pull out credit card numbers, or whatever else you have stored in Vertica that's sensitive. So it's a good idea to secure data in that case, too. And then we also connect to a lot of external services. Kafka, Hadoop, and S3 are three of them; Voltage SecureData, which we'll talk about more in a sec, is another. And because of how each service deals with authentication, how you configure your authentication to them differs, so see our docs. And then I'd like to talk a little bit about where we're going next. Our main goal at this point is making Vertica easier to use. Our first objective was security, making sure everything could be secured, so we built relatively low-level building blocks. Now that we've done that, we can identify common use cases and automate them, and that's where our attention is going. Okay, so we've talked about how to secure your data over the network, but what about when it's on disk? There are several different encryption approaches; each depends on what your use case is. RAID controllers and disk encryption are mostly for on-prem clusters, and they protect against media theft. They're invisible to Vertica. S3 and GCP encryption are the equivalent in the cloud; they're also invisible to Vertica. And then there's field-level encryption, which we accomplish using Voltage SecureData, which is format-preserving encryption. So how does Voltage work? Well, it encrypts values to things that look like the same format. So for instance, you can see a date of birth encrypted to something that looks like a date of birth but is not in fact the same thing. You can do cool stuff like, with a credit card number, encrypting only the first 12 digits, allowing the user to, you know, validate the last four. The benefits of format-preserving encryption are that it doesn't increase database size, and you don't need to alter your schema or anything. And because of referential integrity, it means that you can do analytics without decrypting the data. So again, here's a little diagram of how you could work Voltage into your use case. You could even combine it with Vertica's row and column access policies, which Chris will talk about a bit later, for even more customized access control, depending on your use case and your Voltage integration. We are enhancing our Voltage integration in several ways in 10.0, and if you're interested in Voltage, you can go see their virtual BDC talk.
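(To make the field-level idea concrete, here is a hedged sketch using the Voltage SecureData integration functions. The column and table names are placeholders, and the exact function and parameter names, such as format, depend on your configuration and version, so treat this as illustrative only.)

-- Encrypt a value on the way in, preserving its format:
SELECT VoltageSecureProtect(ssn USING PARAMETERS format='ssn')
FROM raw_people;

-- Decrypt on read, for principals whose Voltage identity allows it:
SELECT VoltageSecureAccess(ssn_encrypted USING PARAMETERS format='ssn')
FROM people;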
And then again, talking about the roadmap a little: we're working on in-database encryption at rest. What this means is a Vertica solution to encryption at rest that doesn't depend on the platform that you're running on. Encryption at rest is hard. (laughs) Encrypting, say, 10 petabytes of data is a lot of work. And once again, the theme of this talk is that everyone has a different key management strategy and a different threat model, so we're working on designing a solution that fits everyone. If you're interested, we'd love to hear from you; contact us on the Vertica forums. All right, next up we're going to talk a little bit about access control. So first off, how do I prove who I am? How do I log in? Vertica has several authentication methods. Which one is best depends on your deployment size and use case. Again, the theme of this talk is that what you should use depends on your use case. You can order authentication methods by priority and origin. So for instance, you can only allow connections from within your internal network, or you can enforce TLS on connections from external networks but relax that for connections from your internal network, that kind of thing. So we have a bunch of built-in authentication methods. They're all password-based. User profiles allow you to set complexity requirements for passwords, and you can even reject non-TLS connections, say, or reject certain kinds of connections. These should only be used by small deployments, because if you're a larger deployment, you probably have an LDAP server where you manage users, and rather than duplicating the users and passwords that are already in LDAP, you should use LDAP auth. With LDAP auth, Vertica still has to keep track of users, but each user can then authenticate through LDAP, so Vertica doesn't store the password at all. The client gives Vertica a username and password, and Vertica then asks the LDAP server whether this is a correct username and password. The benefits of this are manifold; for example, if you delete a user from LDAP, you don't need to remember to also delete their Vertica credentials. They just won't be able to log in anymore, because they're not in LDAP anymore. If you like LDAP but you want something a little more secure, Kerberos is a good idea. Similar to LDAP, Vertica doesn't keep track of who's allowed to log in; it just keeps track of the Kerberos credentials, and Vertica never even touches the user's password. Users log in to Kerberos and then pass Vertica a ticket that says "I can log in." It is more complex to set up, so if you're just getting started with security, LDAP is probably a better option, but Kerberos is, again, a bit more secure. If you're looking for something that works well for applications, certificate auth is probably what you want. Rather than hardcoding a password, or storing a password in a script that you use to run an application, you can instead use a certificate. So, if you ever need to change it, you can just replace the certificate on disk, and the next time the application starts, it picks that up and logs in. And then, multi-factor auth is a feature request we've gotten in the past. It's not built into Vertica, but you can do it using Kerberos. Security is a whole-application concern, and fitting MFA into your workflow is all about fitting it in at the right layer, and we believe that that layer is above Vertica. If you're interested in more about how MFA works and how to set it up, we wrote a blog on how to do it.
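(A hedged sketch of what setting up authentication methods like these can look like in SQL. The hostnames, base DNs, and network ranges are placeholders, and the exact options vary by version, so check the documentation before relying on this.)

-- LDAP authentication for connections from the internal network:
CREATE AUTHENTICATION v_ldap METHOD 'ldap' HOST '10.0.0.0/8';
ALTER AUTHENTICATION v_ldap SET
    host='ldap://ldap.example.com',
    basedn='dc=example,dc=com';
GRANT AUTHENTICATION v_ldap TO PUBLIC;

-- Password (hash) authentication from anywhere, but only over TLS,
-- which effectively rejects plaintext connections for these users:
CREATE AUTHENTICATION v_hash_tls METHOD 'hash' HOST TLS '0.0.0.0/0';
GRANT AUTHENTICATION v_hash_tls TO PUBLIC;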
And now, over to Chris, for more on identity and authorization. >> Chris: Thanks, Fen. Hi everyone, I'm Chris. So, we're a Vertica user and we've connected to Vertica, but once we're in the database, who are we? What are we? In Vertica, the answer to that question is principals: users and roles, which are like groups in other systems. Since roles can be enabled and disabled at will, and multiple roles can be active, they're a flexible way to use only the privileges you need in the moment. For example, here you've got Alice, who has DBADMIN as a role, and those are some elevated privileges. She probably doesn't want them active all the time, so she can set the role and add them to her identity set. All of this information is stored in the catalog, which is basically Vertica's metadata storage. How do we manage these principals? Well, it depends on your use case, right? If you're a small organization, or maybe only some people or services need Vertica access, the solution is just to manage them with Vertica. You can see some commands here that will let you do that. But what if we're a big organization and we want Vertica to reflect what's in our centralized user management system? It's a similar motivating use case to LDAP authentication, right? We want to avoid duplication hassles; we just want to centralize our management. In that case, we can use Vertica's LDAPLink feature. With LDAPLink, principals are mirrored from LDAP; they're synced, in a configurable fashion, from LDAP into Vertica's catalog. What this does is manage creating and dropping users and roles for you, and then mapping the users to the roles. Once that's done, you can do any Vertica-specific configuration on the Vertica side. It's important to note that principals created in Vertica this way support multiple forms of authentication, not just LDAP. This is a separate feature from LDAP authentication, and if you created a user via LDAPLink, you could have them use a different form of authentication, Kerberos, for example. Up to you. Now, of course, this kind of system is pretty mission-critical, right? You want to make sure you get the right roles and the right users and the right mappings in Vertica, so you probably want to test it. And for that, we've got new and improved dry-run functionality, as of 9.3.1. What this feature offers you is new metafunctions that let you test various parameters without breaking your real LDAPLink configuration. So you can mess around with parameters and the configuration as much as you want, and you can be sure that all of that is strictly isolated from the live system. Everything's separated. And when you use this, you get some really nice output through a Data Collector table. You can see some example output here: it runs the same logic as the real LDAPLink and provides detailed information about what would happen. You can check the documentation for specifics. All right, so we've connected to the database, and we know who we are, but now, what can we do? For any given action, you want to control who can do that, right? So what's the question you have to ask? Sometimes the question is just, who are you? It's a simple yes-or-no question. For example, if I want to upgrade a user, the question I have to ask is, am I the superuser? If I'm the superuser, I can do it; if I'm not, I can't. But sometimes the actions are more complex, and the question you have to ask is more complex: does the principal have the required privileges?
If you're familiar with SQL privileges, there are things like SELECT and INSERT, and Vertica has a few of its own, but the key thing here is that an action can require specific, and maybe even multiple, privileges on multiple objects. So for example, when selecting from a table, you need USAGE on the schema and SELECT on the table. And there are some other examples here. So where do these privileges come from? Well, if the action requires a privilege, these are the only places privileges can come from. The first source is implicit privileges, which could come from owning the object or from special roles, which we'll talk about in a sec. Explicit privileges are basically the SQL-standard GRANT system: you can grant privileges to users or roles, and, optionally, those users and roles can grant them downstream. That's discretionary access control. So those are explicit, and they come from the user and the active roles, the whole identity set. And then we've got Vertica-specific inherited privileges, which come from the schema, and we'll talk about that in a sec as well. So these are the special roles in Vertica. First role: DBADMIN. This isn't the dbadmin user, it's a role, and it has specific elevated privileges. You can check the documentation for the exact privileges, but it's less than the superuser. The PSEUDOSUPERUSER can do anything the real superuser can do, and you can grant this role to whomever. The DBDUSER role can run Database Designer functions. SYSMONITOR gives you some elevated auditing permissions, and we'll talk about that later as well. And finally, PUBLIC is a role that everyone has all the time, so anything you want to be allowed for everyone, attach to PUBLIC. Now, imagine this scenario: I've got a really big schema with lots of relations, and those relations might be changing all the time. But for each principal that uses this schema, I want the privileges for all the tables and views there to be roughly the same, even though the tables and views come and go. For example, an analyst might need full access to all of them, no matter how many there are or what they are at any given time. So to manage this, the first approach I could use is to remember to run grants every time a new table or view is created, and not just me, but everyone using this schema. Not only is that a pain, it's hard to enforce. The second approach is to use schema-inherited privileges. In Vertica, schema grants can include relational privileges, for example, SELECT or INSERT, which normally don't mean anything for a schema, but do for a table. If a relation is marked as inheriting, then the schema grants to a principal, for example, salespeople, also apply to the relation. You can see on the diagram here how USAGE and SELECT apply to the schema, and, on the Sales.foo table, SELECT also applies through inheritance. So now, instead of lots of GRANT statements for multiple object owners, we only have to run one ALTER SCHEMA statement and three GRANT statements, and from then on, any time you grant or revoke privileges on the schema, to or from a principal, all your new tables and views will get them automatically. It's dynamically calculated (there's a short SQL sketch of this at the end of this session). Now, part of setting this up securely is that you want to know what's happened and what's going on. So to monitor the privileges, there are three system tables you want to look at. The first is grants, which will show you privileges that are active for you.
That is, your user and active roles, and theirs, and so on down the chain. Grants will show you the explicit privileges, and inherited_privileges will show you the inherited ones. Then there's one more, inheriting_objects, which will show all tables and views that inherit privileges; that's useful not so much for seeing privileges themselves, but for managing inherited privileges in general. And finally, how do you see all privileges from all these sources together, in one go? Well, there's a metafunction added in 9.3.1, get_privileges_description, which, given an object, will sum up all the privileges for the current user on that object. I'll refer you to the documentation for usage and supported types. Now, the problem with SELECT: SELECT lets you see everything or nothing. You can either read the table or you can't. But what if you want some principals to see a subset, or a transformed version, of the data? For example, I have a table with personnel data, and different principals, as you can see here, need different access levels to sensitive information: social security numbers. One thing I could do is make a view for each principal, but I could also use access policies, and access policies can do this without introducing any new objects or dependencies. That centralizes your restriction logic and makes it easier to manage. So what do access policies do? Well, we've got row and column access policies. Row access policies hide rows, and column access policies transform data in the column, depending on who's doing the SELECTing. So the data is transformed, as we saw on the previous slide, to look as requested (again, there's a sketch of this at the end of this session). Now, if access policies let you see the raw data, you can still modify the data. The implication is that when you're crafting access policies, you should only use them to refine access for principals that need read-only access. That is, if you want a principal to be able to modify the data, the access policies you craft should let through the raw data for that principal. So in our previous example, the loader service should be able to see every row, and it should be able to see untransformed data in every column, and as long as that's true, it can continue to load into this table. All of this is, of course, monitorable through a system table, in this case access_policy. Check the docs for more information on how to implement these. All right, that's it for access control. Now on to delegation and impersonation. So what's the question here? Well, the question is, who is Vertica? And that might seem like a silly question, but here's what I mean. When Vertica is connecting to a downstream service, for example, cloud storage, how should Vertica identify itself? Well, most of the time, we do the permissions check ourselves and then we connect as Vertica, like in this diagram here. But sometimes we can do better, and instead of connecting as Vertica, we connect with some kind of upstream user identity. When we do that, we let the service decide who can do what, so Vertica isn't the only line of defense. And in addition to the defense-in-depth benefit, there are also benefits for auditing, because the external system can see who is really doing something. It's no longer just Vertica showing up in that external service's logs; it's somebody like Alice or Bob trying to do something. One system where this comes into play is Voltage SecureData. So, let's look at a couple of use cases.
In the first one, I'm just encrypting for compliance or anti-theft reasons. In this case, I'll just use one global identity to encrypt or decrypt with Voltage. But imagine another use case: I want to control which users can decrypt which data. Now I'm using Voltage for access control, so in this case, we want to delegate. The solution here is, on the Voltage side, to give Voltage users access to appropriate identities, and these identities control encryption for sets of data. A Voltage user can access multiple identities, like groups. Then, on the Vertica side, a Vertica user can set their Voltage username and password in a session, and Vertica will talk to Voltage as that Voltage user. In the diagram here, you can see an example of how this is leveraged so that Alice can decrypt something but Bob cannot. Another place the delegation paradigm shows up is with storage. Vertica can store and interact with data on non-local file systems, for example, HDFS or S3. Sometimes Vertica is storing Vertica-managed data there; for example, in Eon Mode, you might store your projections in communal storage in S3. But sometimes Vertica is interacting with external data. This usually maps to a user storage location on the Vertica side, and on the external storage side it might be something like Parquet files on Hadoop. In that case, it's not really Vertica's data, and we don't want to give Vertica more power than it needs, so let's request the data on behalf of whoever needs it. Let's say I'm an analyst and I want to copy from, or export to, Parquet, using my own bucket. It's not Vertica's bucket; it's my data, but I want Vertica to manipulate the data in it. The first option I have is to give Vertica as a whole access to the bucket, and that's problematic, because in that case Vertica becomes kind of an AWS god: it can see any bucket that any Vertica user might want to push data to or pull data from, any time Vertica wants. So it's not good for the principles of least privilege and zero trust, and we can do better than that. In the second option, we use an ID and secret key pair for an AWS IAM principal (if you're familiar) that does have access to the bucket. So I might use my own analyst credentials, or I might use credentials for an AWS role that has even fewer privileges than I do, sort of a restricted subset of my privileges. I set that in Vertica at the session level, and Vertica will use those credentials for the copy and export commands. It gives more isolation. Something that's in the works is support for keyless delegation, using assumable IAM roles: similar benefits to option two here, but without having to manage keys at the user level. We can do basically the same thing with Hadoop and HDFS, with three different methods. The first option is Kerberos delegation. I think it's the most secure: if access control is your primary concern, this will give you the tightest access control. The downside is that it requires the most configuration outside of Vertica, with Kerberos and HDFS, but with this, you can really determine which Vertica users can talk to which HDFS locations. Then you've got secure impersonation. If you've got a highly trusted Vertica user base, or at least some subset of it is, and you're not worried about them doing things wrong, but you want auditing on the HDFS side as your primary concern, you can use this option. This diagram here gives you a visual overview of how that works, but I'll refer you to the docs for details.
And then finally, option three: bringing your own delegation token. It's similar to what we do with AWS, in that we set something at the session level, so it's very flexible. The user can do it on an ad hoc basis, but it is manual. So that's the third option. Now on to auditing and monitoring. Of course, we want to know what's happening in our database. It's important in general, and important for incident response, of course. So your first stop to answer this question should be the system tables. They're a collection of information about events, system state, performance, et cetera. They're SELECT-only tables, but they work in queries as usual; the data is just loaded differently. There are two types, generally. There are the metadata tables, which reflect persistent information stored in the catalog, for example, users or schemata. Then there are monitoring tables, which reflect more transient information, like events and system resources. Here you can see some example output from the resource pools system table; despite looking like system statistics, these are actually configurable parameters. If you're interested in resource pools, a way to handle the resource allocation of users and various principals, again, check that out in the docs. Then, of course, there's the follow-up question: who can see all of this? Well, some system information is sensitive, and we should only show it to those who need it. Principle of least privilege, right? Of course the superuser can see everything, but what about non-superusers? How do we give access to people that might need additional information about the system without giving them too much power? One option is SYSMONITOR, as I mentioned before; it's a special role. This role can always read system tables, but not change things the way a superuser could. Just reading. Another option is the RESTRICT and RELEASE metafunctions, which grant and revoke access to a certain set of system tables, to and from the PUBLIC role. But the downside of those approaches is that they're inflexible: they're all or nothing, for a specific preset of tables, and you can't really configure it per table. So if you're willing to do a little more setup, then I'd recommend using your own grants and roles. System tables support GRANT and REVOKE statements just like any regular relations, and in that case, I wouldn't even bother with SYSMONITOR or the metafunctions. To do this, just grant whatever privileges you see fit to roles that you create, then go ahead and grant those roles to the users that you want, and revoke access to the system tables of your choice from PUBLIC. If you need even finer-grained access than this, you can create views on top of system tables. For example, you can create a view on top of the users system table which only shows the current user's information, using a built-in function as part of the view definition. You can then grant this view to PUBLIC, so that each user in Vertica can see their own user information, and never give access to the users system table as a whole, just that view (there's a sketch of this at the end of this session). Now, if you're a superuser, or if you have direct access to nodes in the cluster, the filesystem, the OS, et cetera, then you have more ways to see events. Vertica supports various methods of logging.
You can see a few methods here, which are generally outside of running Vertica, so you'd interact with them in a different way, with the exception of active_events, which is a system table. We've also got the Data Collector, which sorts events by component (as it's called in the documentation) and extends the logging and system-table functionality. It logs these events and information to rotating files. For example, AnalyzeStatistics is a function that users might run, and as a database administrator, you might want to monitor that, so you can use the Data Collector component for AnalyzeStatistics. The files that these create can be exported into a monitoring database. One example of that is the Management Console's Extended Monitoring, so check out their virtual BDC talk, the one on the Management Console. And that's it for the key points of security in Vertica. Many of these slides could spawn a talk of their own, so we encourage you to check out our blog, the documentation, and the forum for further investigation and collaboration. Hopefully the information we provided today will inform your choices in securing your deployment of Vertica. Thanks for your time today. That concludes our presentation. Now, we're ready for Q&A.
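(For reference, here are hedged SQL sketches of the access-control pieces above. The schema, table, role, and view names are invented for illustration, and the exact syntax should be checked against your version's documentation.)

-- Schema-inherited privileges: one ALTER plus schema-level grants,
-- instead of per-table GRANT statements for every new relation:
ALTER SCHEMA sales DEFAULT INCLUDE SCHEMA PRIVILEGES;
GRANT USAGE, SELECT ON SCHEMA sales TO salespeople;

-- A column access policy that masks social security numbers for everyone
-- except HR; HR sees the raw data, so HR retains write access:
CREATE ACCESS POLICY ON personnel FOR COLUMN ssn
    CASE WHEN ENABLED_ROLE('hr') THEN ssn
         ELSE '***-**-' || RIGHT(ssn, 4)
    END ENABLE;

-- Finer-grained system-table access: a view showing each user only their
-- own row, granted to everyone:
CREATE VIEW my_user_info AS
    SELECT * FROM v_catalog.users WHERE user_name = CURRENT_USER;
GRANT SELECT ON my_user_info TO PUBLIC;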
A Deep Dive into the Vertica Management Console Enhancements and Roadmap
>> Jeff: Hello, everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "A Deep Dive into the Vertica Management Console Enhancements and Roadmap." I'm Jeff Healey of Vertica Marketing, and I'll be your host for this breakout session. Joining me are Bhavik Gandhi and Natalia Stavisky from Vertica engineering. Before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait: just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com and post your question there after the session. Our engineering team is planning to join the forums to keep the conversation going well after the event. Also, a reminder that you can maximize the screen by clicking the double-arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to you on demand this week; we'll send you a notification as soon as it's ready. Now let's get started. Over to you, Bhavik. >> Bhavik: All right. So hello, and welcome, everybody, to this presentation of "A Deep Dive into the Vertica Management Console Enhancements and Roadmap." Myself, Bhavik, and my team member, Natalia Stavisky, will go over a few useful announcements about the Vertica Management Console and discuss a few real scenarios. Today we will start with a brief introduction to the Management Console, then discuss the benefits of using it by going over a couple of user scenarios: a query taking too long to run, and receiving email alerts from the Management Console. Then we will go over a few MC features for what we call Eon Mode databases, like provisioning and reviving Eon Mode databases from MC, managing subclusters, and understanding the Depot. Finally, we will go over some of our future plans for MC. All right, so let's get started. So, do you want to know how to provision a new Vertica cluster from MC? How to analyze and understand a database workload by monitoring the queries on the database? How to balance resource pools and use alerts and thresholds on MC? The Management Console is basically our answer, and we'll talk about its capabilities and new announcements in this presentation. Just to give a brief overview: who uses the Management Console? It's generally used by IT administrators and DB admins, and it can be used to monitor both Eon Mode and Enterprise Mode databases. Why use the Management Console? You can use it to provision Vertica databases and clusters, manage the Vertica databases and clusters you already have, and use various tools like query execution, Database Designer, and Workload Analyzer, as well as set up alerts and thresholds to get notified about activity on the database. So let's go over a few benefits of using the Management Console. Using the Management Console, you can view and optimize resource pool usage, and it helps you identify critical conditions on your Vertica cluster.
Additionally, you can set up various thresholds in MC and get alerted if those thresholds are triggered on the database. So now let's dig into a couple of scenarios. In the first scenario, we will discuss queries taking too long to run and use the Workload Analyzer to help solve the problem. In the second scenario, we will go over an alert email received from the Management Console, analyze the problem, and take the required actions to solve it. So let's start with the scenario where queries are taking too long to run. In this example, we have one query that we are running using query execution on MC, and for some reason we notice that it's taking about 14.8 seconds to execute, which is higher than the expected runtime of the query. The query we are running happens to be a query used by MC during extended monitoring; notice the table name, ds_requests_issued, and the schema, which is the schema used for extended monitoring. Now, in 10.0 MC we have redesigned the Workload Analyzer and Recommendations feature to show the recommendations and allow you to execute them. In our example, we have taken the table name and filtered the tuning descriptions to see if there are any tuning recommendations related to this table. As we see here, there are three tuning recommendations available for that table. In 10.0 MC you can select those recommendations and then run them, so let's do that. Once the recommendations have run successfully, you can go and see all the processed recommendations you have run previously; here we see that the three recommendations we selected earlier have been processed successfully. Now we take the same query and run it again through query execution on MC and, hey, it's running much faster: it takes only 0.3 seconds, which is about a 98% decrease in the original runtime of the query. So in this example we saw that, using the Workload Analyzer tool on MC, you can triage and resolve issues with queries that take too long to execute.
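The same analysis can be scripted if you prefer SQL to the MC pages. A minimal sketch, assuming the ANALYZE_WORKLOAD meta-function and the TUNING_RECOMMENDATIONS system table available in recent versions; the scope name is a placeholder:

    -- generate tuning recommendations scoped to one table
    SELECT ANALYZE_WORKLOAD('ds_requests_issued');
    -- review what was recommended
    SELECT tuning_description, tuning_command
    FROM v_monitor.tuning_recommendations;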
All right. Now let's go over another user scenario, where a DB admin receives alert email messages from MC and would like to understand and analyze the problem. To know more about what's going on in the database and react to problems proactively, DB admins using the Management Console can create a set of thresholds, get alerted about conditions on the database when a threshold value is reached, and then respond to the problem. Now, as a DB admin, I see some email notifications from MC, and upon checking them, I see a couple of alerts. One of the messages I received was for Query Resource Rejections greater than 5 for the pool midpool7. Around the same time, I received another email from MC for Failed Queries greater than 5, and in this case I see there are 80 failed queries. So now let's go to the MC and investigate. Before going deep into the failures, let's review the threshold settings on MC. As we see, under the database settings page we have set up a threshold for failed queries in the last 10 minutes greater than 5, with MC sending an email to the individual if the threshold is triggered. We also have a threshold set up for query resource rejections in the last five minutes for midpool7, set to greater than 5. There are various other thresholds on this page that you can set if you wish. Now let's triage those email alerts about the failed queries and resource rejections. To analyze the failed queries, let's take a look at the query statistics page on the database Overview page on MC. Let's look at the Resource Pools graph, especially the failed queries for each resource pool. Over on the right, under the failed-query section, I see that in the last 24 hours there are about 6,000 failed queries for midpool7. Now I switch the view to see the statistics per user, and on this page I see that the user MaryLee, on the right-hand side, has a high number of failed queries in the last 24 hours. To learn more about the failed queries for this user, I can click on her graph and get the reasons behind them. So let's click on the graph and see what's going on. Clicking on the graph takes me to the failed-queries view on the Query Monitoring page for the database, on the Database Activities tab. Here I see a high number of failed queries for this user, MaryLee, with the reason stated as exceeding the high limit. To drill down further, I can click the plus icon on the left-hand side of each failed query to get the failure reason for each node in the database. So let's do that. Clicking the plus icon, I see, for the two nodes listed, that there are insufficient resources, like memory and file handles, for midpool7. Now let's analyze the midpool7 configuration and the activity on it. To do so, I go over to the Resource Pool Monitoring view and select midpool7. I see that the resource allocations for this resource pool are very low: for example, the max memory is just 1MB, and the max concurrency is set to 0. Hmm, that's a very odd configuration for this resource pool. Also, in the bottom-right graph, the resource rejections for midpool7 show very high values. Since we saw some odd configuration and resource allocations for midpool7, I would like to see when the settings on this resource pool were changed. To do this, I can review the audit logs available on the Management Console. So I go to the Vertica audit logs and filter the logs for midpool7. I see that on February 17th, the memory and other attributes for midpool7 were modified. So now let's analyze the resource activity for midpool7 around the time the configuration was changed. In our case we are using extended monitoring on MC for this database, so we can go back in time and see the statistics over a larger time range for midpool7. Viewing the activity for midpool7 around February 17th, around the time these configurations were changed, we see a decrease in resource pool usage. Also, on the bottom right, we see that the resource rejections for midpool7 show a linear increase after the configuration was changed. I can select a point on the graph to get more details about the resource rejections.
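As an aside, the same pool configuration can be checked straight from SQL; a minimal sketch using the RESOURCE_POOLS and RESOURCE_REJECTIONS system tables, with column lists worth verifying against your version's documentation:

    -- inspect the pool's current settings
    SELECT name, memorysize, maxmemorysize, maxconcurrency
    FROM v_catalog.resource_pools
    WHERE name = 'midpool7';

    -- and see what is being rejected, and why
    SELECT pool_name, reason, rejection_count
    FROM v_monitor.resource_rejections
    WHERE pool_name = 'midpool7';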
Now, to analyze the effects of the modifications on midpool7, let's go over to the Query Monitoring page. All right, I will adjust the time range to around the time the configuration for midpool7 was changed and look at completed queries for the user MaryLee. I see there are no completed queries for this user. Now I take a look at the Failed Queries tab, again adjusting the time range to around the time the configuration was changed; I can do so because we are using extended monitoring. Adjusting the time, I can see there is a high number of failed queries for this user: about 10,000 failed queries after the configuration was changed on this resource pool. So now let's go and modify the settings, since we know that after the configuration was changed, this user was not able to run her queries. You can change resource pool settings using the Management Console's database settings page, under the Resource Pools tab. Selecting midpool7, I see the same odd configuration for this resource pool that we saw earlier. So let's modify the settings: I will increase the max memory and adjust the other settings for midpool7 so that it has adequate resources to run the queries for the user, then hit Apply at the top right to save the settings. Now let's validate the change to the resource pool attributes. We go back to the same Query Monitoring page and see whether the MaryLee user is able to run her queries on midpool7. We see that now, after we changed the configuration for midpool7, the user can run her queries successfully, and the count of completed queries has increased after we modified the settings for this resource pool. Viewing the Resource Pool Monitoring page, we can validate that the new configuration for midpool7 has been applied and that resource pool usage has increased since the change. On the bottom-right graph, we can also see that the resource rejections for midpool7 have decreased over time since we modified the settings. And since we are using extended monitoring for this database, I can see the trend in the data for this resource pool, the before and after effects of modifying the settings: initially, when the settings were changed, there were high resource rejections, and after we modified the settings again, the resource rejections went down.
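If you'd rather script the fix, the settings change itself is one SQL statement; a sketch with assumed sizes — pick values that fit your workload:

    -- give the pool real memory and allow concurrent queries again
    ALTER RESOURCE POOL midpool7 MEMORYSIZE '2G' MAXMEMORYSIZE '4G' MAXCONCURRENCY 10;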
Right. So now let's move on to provisioning and reviving an Eon Mode Vertica database cluster using the Management Console on different platforms. The Management Console supports provisioning and reviving Eon Mode databases in various cloud environments, like AWS, the Google Cloud Platform, and Pure Storage. For provisioning the Vertica Management Console on Google Cloud Platform, you can use a launch template; in an AWS environment, you can use the CloudFormation templates available for different OSes. Once you have provisioned the Vertica Management Console, you can provision Vertica clusters and databases from MC itself. To provision a Vertica cluster, you select the Create new database button available on the home page. This opens the wizard to create a new database and cluster. In this example we are using the Google Cloud Platform, so the wizard asks me for various authentication parameters for GCP; if you're on AWS, it will ask for the authentication parameters for the AWS environment. Going forward in the wizard, it asks me to select the instance type for the new Vertica cluster, and also to provide the communal location URL for my Eon Mode database and all the other preferences related to the new cluster. Once I have selected all the preferences for my new cluster, I can preview the settings and hit Create if everything looks okay. If I hit Create, MC will create new GCP instances, because we are in the GCP environment in this example; it will create a cluster on those instances, create a Vertica Eon Mode database on that cluster, and, additionally, load test data onto it if you'd like. Now let's go over reviving an existing Eon Mode database from a communal location. You can do this using the Management Console as well, by selecting the Revive Eon Mode database button on the home page. This again opens a wizard, this time for reviving the Eon Mode database. Since we are on the GCP Platform in this example, it asks me for the Google Cloud Storage authentication attributes and, for reviving, the communal location: I can enter the Google Storage bucket and my folder, and it discovers all the Eon Mode databases located under that folder. I select the one database I would like to revive, and it asks for the other Vertica preferences for this database revival. Once I have entered and reviewed all the preferences, I hit the Revive the database button on the wizard. After I hit Revive database, it creates the GCP instances; the number of GCP instances created matches the number of hosts in the original Vertica cluster. It installs Vertica on these instances, revives the database, and then starts it. After the database starts, it is imported into MC so you can start monitoring it. So in this example we saw that you can provision and revive a Vertica database on the GCP Platform; additionally, you can use an AWS environment to provision and revive. Now, since we have an Eon Mode database in MC, Natalia will go over some Eon Mode features in MC, like managing subclusters and Depot activity monitoring. Over to you, Natalia. >> Natalia: Okay, thank you. Hello, my name is Natalia Stavisky. I am also a member of the Vertica Management Console team, and I will talk today about the work I did to allow users to manage subclusters using the Management Console, and also the work I did to help users understand what's going on in their Depot in a Vertica Eon Mode database. So let's look at the picture of the subclusters. On the Manage page of the Vertica Management Console, you can see a page with blue tabs; the active tab is Subclusters. You can see that there are two subclusters available in this database, and for each subcluster you can see its properties: whether it is the primary subcluster or a secondary one. In this case, the primary is the default subcluster, indicated by a star. You can see which nodes belong to each subcluster, the node state, and node statistics. You can also easily add a new subcluster, and we're quickly going to do that. Once you click on the button, you'll launch a wizard that takes you through the steps: you'll enter the name of the subcluster and indicate whether it is a secondary or primary subcluster. I should mention that Vertica recommends having only one primary subcluster, but both options are available here.
You will enter the number of nodes for your subcluster, and once the subcluster has been created, you can manage it. What other options for managing subclusters do we have here? You can scale up an existing subcluster, which takes a similar approach: you launch the wizard and choose the nodes you want to add to your existing subcluster. You can scale down a subcluster, and MC validates the requirement to maintain a minimal number of nodes in order to prevent a database shutdown; if you cannot remove any nodes from a subcluster, this option will not be available. You can stop a subcluster, and depending on whether it is a primary or secondary subcluster, this option may or may not be available. In this picture, we can see that for the default subcluster the option is not available, because shutting down the default subcluster would cause the database to shut down as well. You can terminate a subcluster; again, the MC warns you not to terminate the primary subcluster and validates the requirement to maintain a minimal number of nodes to prevent a database shutdown. A sketch of how to see this same subcluster layout from SQL follows.
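For reference, a minimal sketch assuming the Eon Mode SUBCLUSTERS system table; the exact column set may differ across versions, so check your documentation:

    -- list nodes per subcluster and which subcluster is primary
    SELECT subcluster_name, node_name, is_primary
    FROM v_catalog.subclusters
    ORDER BY subcluster_name, node_name;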
So now we are going to talk a little more about how the MC helps you understand what's going on in your Depot. The Depot is at the core of an Eon Mode database, and what are the frequently asked questions about it? Is the Depot size sufficient? Is a subset of users putting a high load on the database? What tables are repeatedly fetched and evicted, "re-fetched," in the Depot? Here, on the Depot Activity Monitoring page, we now have four tabs that allow you to answer those questions. We'll go through each of them in more detail, but I'll just mention what they are for now. At a Glance shows you the basic Depot configuration and also shows you query execution. Depot Efficiency, we'll talk more about that and the other tabs. Depot Content shows you what tables are currently in your Depot. And Depot Pinning allows you to see what pinning policies have been created and to create new ones. Now let's go through a scenario: monitoring the performance of workloads on one subcluster. As you know, an Eon Mode database allows you to have multiple subclusters, and we'll explore how this feature is useful and how we can use the Management Console to decide whether we would like multiple subclusters. In my setup, we have a single subcluster called default_subcluster. It has two users running queries that access tables, mostly in the schema public. The queries started executing, and we can see that after fetching tables from Communal, which is the red line, the queries spend the rest of the time executing in the Depot; the green line indicates queries running in the Depot. The Depot on all nodes is about 88% full, a steady state, and the Depot size seems sufficient for queries to execute from the Depot alone. That's the good-case scenario. Now, at around 17:15, the user Sherry got an urgent request to generate a report and started running her queries. We can see that the picture is quite different now. The tables Sherry is querying are in a different schema and are much larger. We now see multiple lines in different colors: a bunch of fetches and evictions, indicated by blue and purple bars, and a lot of queries spilling into Communal, the red and orange lines. The orange line indicates a query running partially in the Depot and partially being fetched from Communal, and the red line is data fetched from Communal storage. Let's click on one of the lines. Each data point on a line takes you to the Query Details page, where you can see more about what's going on. This page shows us what queries ran in this particular time interval, shown at the top of the page in orange: that's about a one-minute interval, and we can see the user Sherry among the users running queries. Sherry's queries involve large tables and run against a different schema; we can see the clickstream schema in part of the query request. So what is happening? There is not enough Depot space for both the schema that's already in use and the one Sherry needs; as a result, evictions and fetches have started occurring. What other questions can we ask ourselves to help understand what's going on? How about: which tables are most frequently re-fetched? For that, we go to the Depot Efficiency page and look at the middle chart; we can see a larger version of it if we expand it. Now we have the 10 tables that are most frequently re-fetched. We can see the clickstream schema and other schemas: all of those tables are being used in queries and fetched, and then, because there is not enough space in the Depot, they get evicted and re-fetched again. So what can be done to enable all queries to run in the Depot? Option one is to increase the Depot size. We can do this by running a query that specifies the node, the storage location, and the new Depot size (a sketch of this appears at the end of this scenario), and I should mention that we can run this query from the Management Console's query execution page. So that would help us increase the Depot size. What other options do we have, for example when increasing the Depot size is not an option? We can also provision a second subcluster to isolate workloads like Sherry's. So we are going to do that now: we will provision a second subcluster using the Manage page, creating a subcluster for Sherry, or for workloads like hers. So Sherry's subcluster has been created; we can see it here, added to the list of subclusters as a secondary subcluster. Sherry has been instructed to use the new SherrySubcluster for her work. Now let's see what happened. We'll go again to the Depot Activity page and look at the At a Glance tab. We can see that around 18:07, Sherry switched to running her queries on SherrySubcluster; at the top of this page, you can see the subcluster selector. We currently have two subclusters, and I'm looking at what happened on SherrySubcluster once it was provisioned. Sherry started using it, and after the initial fetching from Communal, which was the red line, all of Sherry's queries fit in the Depot, indicated by the green line. The Depot is also quite full on those nodes, about 90% full, but the queries are processed efficiently and there is no spilling into Communal. So that's a good-case scenario. Now let's go back and take a look at the original default subcluster. On the left portion of the chart, we can see multiple lines; that was the activity before Sherry switched to her own designated subcluster. At around 18:07, after Sherry switched, she is no longer using the default subcluster and no longer putting a load on it. So the lines after that turn green, which means the queries still running in the default subcluster are all running in the Depot. We can also see that the Depot fetch and eviction bars, those purple and blue bars, no longer show significant numbers. And we can check the second chart, which shows Communal Storage Access, and see that those bars have also dropped: there is no significant access to Communal Storage. So this problem has been solved: each of the subclusters is serving queries from the Depot, and that's our most efficient scenario.
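A sketch of the Depot-resize call from option one, assuming the ALTER_LOCATION_SIZE meta-function from recent Eon Mode versions; the node name and size are placeholders, and the pinning function name is likewise worth checking against your version's documentation:

    -- grow the depot on one node; run per node as needed
    SELECT ALTER_LOCATION_SIZE('depot', 'v_mydb_node0001', '100G');

    -- alternatively, when resizing isn't an option, pin a hot table so it stays resident in the depot
    SELECT SET_DEPOT_PIN_POLICY_TABLE('clickstream.sessions');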
Let's also look at the other tabs we have for Depot monitoring, starting with the Depot Efficiency tab. It has six charts, and I'll go through each one quickly. Files Reads by Location indicates where the majority of query execution took place, in the Depot or in Communal. Top 10 Re-Fetches into Depot, like the chart we used earlier in our scenario, shows tables that are most frequently fetched, evicted, and then fetched again; these are good candidates to pin if increasing the Depot size is not an option. Note that both of these charts have an option to select a time interval using a calendar widget, so you can get information about the activity that happened during that interval. Depot Pinning shows what portion of your Depot is pinned, both by byte count and by table count. And the three tables at the bottom show Depot structure: how long tables stay in the Depot (we would like tables to be fetched into the Depot and stay there for a long time), how often they are accessed (again, we would like to see the tables in the Depot accessed frequently), and the size range of tables in the Depot. Depot Content: this tab allows us to search for tables that are currently in the Depot and to see stats like table size in the Depot, how often tables are accessed, and when they were last accessed. The same information that's available for tables in the Depot is also available at the projection and partition levels for those tables. Depot Pinning: this tab allows users to see what policies currently exist; you can do this by clicking on the first little button and clicking Search, which shows all the existing policies that have been created. The second option allows you to search for a table and create a policy for it. You can also use the Action column to modify existing policies or delete them. And the third option provides details about the most frequently re-fetched tables, including fetch count, total access count, and number of re-fetched bytes. All of this information can help you make decisions about pinning specific tables. So that's about it for the Depot. I should mention that the server team also has a very good webinar on Eon Mode database Depot management and subcluster management; I strongly recommend attending it or downloading the slide presentation. Let's talk quickly about the Management Console roadmap and what we are planning to do in the future. We are going to continue focusing on subcluster management; there is still a lot we can do here: promoting and demoting subclusters, load balancing across subclusters, scheduling subcluster actions, and support for large cluster mode. We'll continue working on Workload Analyzer enhancements and recommendations, on backup and restore from the MC, on building custom thresholds, and on Eon on HDFS support. Okay, so we are now ready to take any questions you may have. Thank you.