Chat w/ Arctic Wolf exec re: budget constraints could lead to lax cloud security
>> Now we're recording. >> All right. >> Appreciate that, Hannah. >> Yeah, so I mean, I think in general we continue to do very, very well as a company. I think like everybody, there's economic headwinds today that are unavoidable, but I think we have a couple things going for us. One, we're in the cyberspace, which I think is, for the most part, recession-proof as an industry. I think the impact of a recession will impact some vendors and some categories, but in general, I think the industry is pretty resilient. It's like the power industry, no? Recession or not, you still need electricity to your house. Cybersecurity is almost becoming a utility like that as far as the needs of companies go. I think for us, we also have the ability to do the security, the security operations, for a lot of companies, and if you look at the value proposition, for the cost of maybe half to three security operations people, depending on how big you are as a customer, we can give you full security operations. And so the ROI is almost kind of brain-dead simple, and so that keeps us going pretty well. And I think the other area is, we remove all that complexity for people. So in a world where you got other problems to worry about, handling all the security complexity is something that adds to that ROI. So for us, I think what we're seeing mostly is that some of the larger deals are taking a little bit longer than they have, some of the large enterprise deals, 'cause I think they are being a little more cautious about how they spend it, but in general, business is still kind of cranking along. >> Anything you can share with me that you guys have talked about publicly in terms of any metrics, or what can you tell me other than cranking?
>> Yeah, I mean, I would just say we're still very, very high growth, so I think our financial profile would kind of still put us clearly in the cyber unicorn position, but I think other than that, we don't really share business metrics as a private- >> Okay, so how about headcount? >> Still growing. So we're not growing as fast as we've been growing, but I don't think we were anyway. I think we kind of, we're getting to the point of critical mass. We'll start to grow in a more kind of normal course and speed. I don't think we overhired like a lot of companies did in the past, even though we added, almost doubled the size of the company in the last 18 months. So we're still hiring, but very kind of targeted to certain roles going forward 'cause I do think we're kind of at critical mass in some of the other functions. >> You disclose headcount or no? >> We do not. >> You don't, okay. And never have? >> Not that I'm aware of, no. >> Okay, on the macro, I don't know if security's recession-proof, but it's less susceptible, let's say. I had Nikesh Arora on recently, we were at Palo Alto's Ignite, and he was saying, "Look," it's just like you were saying, "Larger deals are a little harder." A lot of times, he was saying, customers are breaking larger deals into smaller deals, more POCs, more approvals, more people to get through the approval, blah, blah, blah. Now they're a different animal, I understand, but are you seeing similar trends, and how are you dealing with that? >> Yeah, I think the exact same trends, and I think it's just in a world where spending a dollar matters, I think a lot more oversight comes into play, a lot more reviewers, and can you shave it down here? Can you reduce the scope of the project to save money there? And I think it just causes a lot of those things.
I think, in the large enterprise, I think most of those deals for companies like us and Palo and CrowdStrike and kind of the upper tier companies, they'll still go through. I think they're just going to take a lot longer, and, yeah, maybe they're 80% of what they would've been otherwise, but there's still a lot of business to be had out there. >> So how are you dealing with that? I mean, you're talking about you doubled the size of the company. Is it kind of more focused on go-to-market, more sort of, maybe not overlay, but sort of SE types that are going to be doing more handholding? How have you dealt with that? Or have you just sort of said, "Hey, it is what it is, and we're not going to tactically respond. We've got long-term direction"? >> Yeah, I think it's more the latter. I think for us, it's we've gone through all these things before. It just takes longer now. So a lot of the steps we're taking are the same steps. We're still involved in a lot of POCs, we're involved in a lot of demos, and I don't think that changed. It's just the time between your POC and when someone sends you the PO, there's five more people now who've got to review things and go through a budget committee and all sorts of stuff like that. I think where we're probably focused more now is adding more and more capabilities just so we continue to be on the front foot of innovation and being relevant to the market, and trying to create more differentiators between us and the competitors. That's something that's just built into our culture, and we don't want to slow that down. And so even though the business is still doing extremely, extremely well, we want to keep investing in kind of technology. >> So the deal size, is it fair to say the initial deal size for new accounts, while it may be smaller, you're adding more capabilities, and so over time, your average contract values will go up? Are you seeing that trend?
Or am I- >> Well, I would say I don't even necessarily see that our average deal size has gotten smaller. I think in total, it's probably gotten a little bigger. I think what happens is when something like this happens, the old cream-rises-to-the-top thing, I think, comes into play, and you'll see some organizations, instead of doing a deal with three or four vendors, they may want to pick one or two and really kind of put a lot of energy behind that. For them, they're maybe spending a little less money, but for those vendors who are amongst those getting chosen, I think they're doing pretty good. So our average deal size is pretty stable. For us, it's just a temporal thing. It's just the larger deals take a little bit longer. I don't think we're seeing much of a deal velocity difference in our mid-market commercial spaces, but in the large enterprise it's a little bit slower. But for us, we have ambitious plans in our strategy on how we want to execute and what we want to build, and so I think we want to just continue to make sure we go down that path technically. >> So I have some questions on sort of the target markets and the cohorts you're going after, and I have some product questions. I know we're somewhat limited on time, but the historical focus has been on SMB, and I know you guys have gone into the enterprise. I'm curious as to how that's going. Any guidance you can give me on mix? When I talk to the big guys, right, you know who they are, the big managed service providers, MSSPs, and they're like, "Poo poo on Arctic Wolf," like, "Oh, they're (groans)." I said, "Yeah, that's what they used to say about the PC. It's just a toy. Or Microsoft SQL Server." But so I kind of love that narrative for you guys, but I'm curious, in your words, what is that enterprise story? How's the historical business doing, and how's the entrance into the enterprise going? What kind of hurdles, what blockers are you having to remove?
Any color you can give me there would be super helpful. >> Yeah, so I think our commercial SMB business continues to do really well. Our mid-market is a very strong market for us. And I think while a lot of companies like to focus purely on large enterprise, there are a lot more mid-market companies, and a much larger piece of the IT puzzle collectively is in mid-market than it is large enterprise. That being said, we started to get pulled into the large enterprise not because we're a toy but because we're quite a comprehensive service. And so I think what we're trying to do from a roadmap perspective is catch up with some of the kind of capabilities that a large enterprise would want from us that a potential mid-market customer wouldn't. In some cases, it's not doing more. It's just doing it differently. Like, so we have a very kind of hands-on engagement with some of our smaller customers, something we call our concierge. Some of the large enterprises want more of a hybrid where they do some stuff and you do some stuff. And so kind of building that capability into the platform is something that's really important for us. Just how we engage with them as far as giving 'em access to their data, the certain APIs they want, things of that nature, is what we're building out for large enterprise, but the demand by large enterprise on our business is enormous. And so it's really just us kind of catching up with some of the kind of the features that they want that we lack today, but many of 'em are still signing up with us, obviously, knowing that it's coming soon. And so I think if you look at the growth of our large enterprise, it's one of our fastest growing segments, and I think it shows we're anything but a toy. I would be shocked, frankly, if there's an MSSP, and, of course, we don't see ourselves as an MSSP, but I'd be shocked if any of them operate a platform at the scale that ours operates. >> Okay, so wow. A lot I want to unpack there.
So just to follow up on that last question, you don't see yourself as an MSSP because why, because you see yourselves as a technology platform? >> Yes, I mean, the vast, vast, vast majority of what we deliver is our own technology. So we integrate with third-party solutions mostly to bring in that telemetry. So we've built our own platform from the ground up. We have our own threat intelligence, our own detection logic. We do have our own agents and network sensors. An MSSP is typically cobbling together other tools, third-party off-the-shelf tools, to run their SOC. Ours is all homegrown technology. So I have a whole group called Arctic Wolf Labs that is building, just cranking out, ML-based detections, building out infrastructure to take feeds in from a variety of different sources. We have a full integration kind of effort where we integrate into other third parties. So when we go into a customer, we can leverage whatever they have, but at the same time, we produce some tech so that if they're lacking in a certain area, we can provide that tech, particularly around things like endpoint agents and network sensors and the like. >> What about, like, identity, doing your own identity? >> So we don't do our own identity, but we take feeds in from things like Okta and Active Directory and the like, and we have detection logic built on top of that. So part of our value add is we were XDR before XDR was the cool thing to talk about, meaning we can look across multiple attack surfaces and come to a security conclusion, where most EDR vendors started with looking just at the endpoint, right? And then they called themselves XDR because now they took in a network feed, but they still looked at it as a separate network detection. We actually look at the things across multiple attack surfaces and stitch 'em together to look at that from a security perspective. In some cases we have automatic detections that will fire.
In other cases, we can surface something to a security professional who can go start pulling on that thread. >> So you don't need to purchase CrowdStrike software and integrate it. You have your own equivalent, essentially. >> Well, we'll take a feed from the CrowdStrike endpoint into our platform. We don't have to rely on their detections and their alerts and things of that nature. Now, obviously, anything they discover we pull in as well, it's just additional context, but we have all our own tech behind it. So we operate kind of at an MSSP scale. We have a similar value proposition in the sense that we'll use whatever the customer has, but once that data kind of comes into our pipeline, it's all our own homegrown tech from there. >> But I mean, what I like about the MSSP piece of your business is it's very high touch. It's very intimate. What I like about what you're saying is that it's software-like economics, so there's a software, software-like part of it. >> That's what makes us the unicorn, right? It's that our concierge is very hands-on. We continue to drive automation that makes our concierge security professionals more efficient, but we always want that customer to have that concierge person as almost an extension of their security team, or in some cases, for companies that don't even have a security team, as their security team. As we go down the path, as I mentioned, one of the things we want to be able to do is start to have a more flexible model where we can have that high touch if you want it. We can have the high touch on certain occasions, and you can do stuff. We can have low touch, like we can span the spectrum, but we never want to lose our kind of unique value proposition around the concierge, but we also want to make sure that we're providing an interface that any customer would want to use.
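As an editor's illustration of the cross-surface XDR idea described above, stitching endpoint, network, and identity telemetry together into one security conclusion, here is a minimal hypothetical sketch. The event names, sources, and two-surface threshold are invented for illustration and do not reflect Arctic Wolf's actual detection logic:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # attack surface: "endpoint", "network", "identity"
    entity: str   # host or user the event is tied to
    signal: str   # e.g. "suspicious_process", "beaconing"
    ts: float     # epoch seconds

def correlate(events, window=3600):
    """Group events by entity and flag entities where independent
    attack surfaces agree within the time window."""
    by_entity = defaultdict(list)
    for e in events:
        by_entity[e.entity].append(e)
    findings = []
    for entity, evs in by_entity.items():
        evs.sort(key=lambda e: e.ts)
        # surfaces that reported on this entity within `window` of the latest event
        surfaces = {e.source for e in evs if evs[-1].ts - e.ts <= window}
        if len(surfaces) >= 2:  # more than one surface agrees
            findings.append((entity, sorted(surfaces)))
    return findings

events = [
    Event("endpoint", "host-42", "suspicious_process", 1000.0),
    Event("network",  "host-42", "beaconing",          1800.0),
    Event("identity", "alice",   "impossible_travel",  2000.0),
]
print(correlate(events))  # only host-42 has multi-surface agreement
```

The point of the sketch is the design choice the speaker describes: a finding fires only when independent attack surfaces agree on the same entity within a time window, rather than each feed alerting separately.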
>> So given that sort of software-like economics, I mean, services companies need this too, but especially in software, things like net revenue retention and churn are super important. How are those metrics looking? What can you share with me there? >> Yeah, I mean, again, we don't share those metrics publicly, but all I can continue to repeat is, if you looked at all of our financial metrics, I think you would clearly put us in the unicorn category. I think very few companies are going to have the level of growth that we have on the amount of ARR that we have with the net revenue retention and the churn and upsell. All those aspects continue to be very, very strong for us. >> I want to go back to the sort of enterprise conversation. So large enterprises would engage with you as a complement to their existing SOC, correct? Is that a fair statement or not necessarily? >> In some cases. In some cases, they're looking to not have a SOC. So we run into a lot of cases where they want to replace their SIEM, and they want a solution like Arctic Wolf to do that. And so there's a poll, I can't remember, I think it was Forrester or IDC, one of them did it a couple years ago, and they found that 70% of large enterprises do not want to build a SOC, and it's not 'cause they don't need one, it's 'cause they can't afford it, they can't staff it, they don't have the expertise. And you think about it: if you're a tech company or a bank, or something like that, of course you can do it, but if you're an international plumbing distributor, you're not going to (chuckles), someone's not going to graduate from Stanford with a cybersecurity degree and go, "Cool, I want to go work for a plumbing distributor in their SOC," right? So they're going to have trouble kind of bringing in the right talent, and as a result, it's difficult to go make a multimillion-dollar investment into a SOC if you're not going to get the quality people to operate it, so they turn to companies like us.
>> Got it, so, okay, so you were talking earlier about capabilities that large enterprises require where there might be some gaps, you might lack some features. A couple questions there. One is, when you do some of those, I inferred some of that is integrations. Are those integrations sort of one-off snowflakes, or are you finding that you're able to scale those across the large enterprises? That's my first question. >> Yeah, so most of the integrations are pretty straightforward. I think where we run into things that are kind of enterprise-centric, they definitely want open APIs, they want access to our platform, which we don't do today but are going to be doing. They want to do more of a SIEM replacement. So we're really kind of what we call an open XDR platform, so there are things that we would need to build to kind of do raw log ingestion. I mean, we do this today. We have raw log ingestion, we have log storage, we have log searching, but there are some of the compliance scenarios that they need out of their SIEM that we don't do today. And so that's kind of holding them back from getting off their SIEM and going fully onto a solution like ours. Then the other one is kind of the level of customization, so the ability to create a whole bunch of custom rules, and that ties back to, "I want to get off my SIEM. I've built all these custom rules in my SIEM, and it's great that you guys do all this automatic AI stuff in the background, but I need these very specific things to be executed on." And so we're trying to build an interface for them to be able to do that and then also simulate it, because, no matter how big they are running their SIEM and their SOC... Like, we talked to one of the largest financial institutions in the world. As far as we were told, they have the largest individual company SOC in the world, and we operate at almost 15 times their size.
So we always have to be careful, because this is a cloud-native platform, that someone doesn't create some rule that just craters the performance of the whole platform, so we have to build kind of those guardrails around it. So those are the things primarily that the large enterprises are asking for. Most of those issues are not holding them back from coming. They want to know they're coming, and we're working on all of those. >> Cool, and just as an aside, I was talking to a CISO the other day who said, "If it weren't for my compliance and audit group, I would chuck my SIEM." I mean, everybody wants to get rid of their SIEM. >> I've never met anyone who likes their SIEM. >> Do you feel like you've achieved product-market fit in the larger enterprise, or is that still something that you're sorting out? >> So I think we know, like, we're on a path to do that. We're on a provable path to do that, so I don't think there's any surprises left. I think everything that we know we need to do for that, someone's writing code for it today. It's just a matter of getting it through the system and getting it into production. So I feel pretty good about it. I think that's why we are seeing such a high growth rate in our large enterprise business, 'cause we share that feedback with some of those key customers. We have a Customer Advisory Board that we share a lot of this information with. So yeah, I mean, I feel pretty good about what we need to do. We certainly operate at large enterprise scale, taking in the volume of data they're going to have and the types of integrations they need. We're comfortable with that. It's just more or less the interfaces that a large enterprise would want that some of the smaller companies don't ask for. >> Do you have enough tenure in the market to get a sense as to stickiness, or even indicators that will lead toward retention? Have you been at it long enough in the enterprise, or are you still, again, figuring that out?
>> Yeah, no, I think we've been at it long enough, and our retention rates are extremely high. If anything, our net retention rates are well over 100% 'cause we have opportunities to upsell into new modules and expand the coverage of what they have today. I think the areas that, if you cornered enterprises that use us, they would complain about are the things I just told you about, right? There's still some things I want to do in my Splunk, and I need an API to pull my data out and put it in my Splunk, and stuff like that, and those are the things we want to enable. >> Yeah, so I can't wait till you guys go public because you got Snowflake up here, and you got Veritas down here, and I'm very curious as to where you guys go. When's the IPO? You want to tell me that? (chuckling) >> Unfortunately, it's not up to us right now. You got to get the markets- >> Yeah, I hear you. Right, if the market were better. Well, if the market were better, you think you'd be out? >> Yeah, I mean, we'd certainly be a viable candidate to go. >> Yeah, there you go. I have a question for you because I don't have a SOC. I run a small business with my co-CEO. We're like 30, 40 people on W-2s, we got another 50 or so contractors, and I'm always, like, sleeping with one eye open 'cause of security. What is your ideal SMB customer? Think S. >> Yeah. >> Would I fit? >> Yeah, I mean, you're right in the sweet spot. I think where the company started, and where we still have a lot of value proposition, is companies like yours. Like you said it, you sleep with one eye open, but you don't necessarily have the technical acumen to be able to do that security for yourself, and that's where we fit in. We bring kind of this whole security, we call it the Security Operations Cloud, to bear, and we have some of the best professionals in the world who can basically be your SOC for less than it would cost you to hire somebody right out of college to do IT stuff.
And so the value proposition's there. You're going to get the best of the best, providing you a kind of security service that you couldn't possibly build on your own, and that way you can go to bed at night and close both eyes. >> So (chuckling) I'm sure something else would keep me up. But so in thinking about that, our Amazon bill keeps growing and growing and growing. What would it, and I presume I can engage with you on a monthly basis, right? As a consumption model, or how's the pricing work? >> Yeah, so there's two models that we have. So typically the kind of monthly billing type of models would be through one of our MSP partners, where they have monthly billing capabilities. Usually direct with us is more of a longer-term deal, could be one, two, or three years; it's up to the customer. And so we have both of those engagement models. We're doing more and more through MSPs today because of that model you just described, and they do kind of target the very S in the SMB as well. >> I mean, rough numbers, even ranges. If I wanted to go with the MSP monthly, I mean, what would a small company like mine be looking at a month? >> Honestly, I do not even know the answer to that. >> We're not talking hundreds of thousands of dollars a month? >> No. God, no. God, no. No, no, no. >> I mean, order of magnitude, we're talking thousands, tens of thousands? >> Thousands, on a monthly basis. Yeah. >> Yeah, yeah. Thousands per month. So if I were to budget between 20 and $50,000 a year, I'm definitely within the envelope. Is that fair? I mean, I'm giving a wide range >> That's fair. just to try to make- >> No, that's fair. >> And if I wanted to go direct with you, I would be signing up for a longer-term agreement, correct, like I do with Salesforce? >> Yeah, yeah, a year. A year would, I think, be the minimum for that, and, yeah, I think the budget you set aside is kind of right in the sweet spot there. >> Yeah, I'm interested, I'm going to...
Have a sales guy call me (chuckles) somehow. >> All right, will do. >> No, I'm serious. I want to start >> I will. >> investigating these things because we sell to very large organizations. I mean, name a tech company. That's our client base, except for Arctic Wolf. We should talk about that. And increasingly they're paranoid about data protection agreements, how you're protecting your data, our data. We write a lot of software and deliver it as part of our services, so it's something that's increasingly important. It's certainly a board level discussion and beyond, and most large organizations and small companies oftentimes don't think about it or try not to. They just put their head in the sand and, "We don't want to be doing that," so. >> Yeah, I will definitely have someone get in touch with you. >> Cool. Let's see. Anything else you can tell me on the product side? Are there things that you're doing that we talked about, the gaps at the high end that you're, some of the features that you're building in, which was super helpful. Anything in the SMB space that you want to share? >> Yeah, I think the biggest thing that we're doing technically now is really trying to drive more and more automation and efficiency through our operations, and that comes through really kind of a generous use of AI. So building models around more efficient detections based upon signal, but also automating the actions of our operators so we can start to learn through the interface. When they do A and B, they always do C. Well, let's just do C for them, stuff like that. Then also building more automation as far as the response back to third-party solutions as well so we can remediate more directly on third-party products without having to get into the consoles or having our customers do it. So that's really just trying to drive efficiency in the system, and that helps provide better security outcomes but also has a big impact on our margins as well. 
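The "when they do A and B, they always do C" automation described a moment ago is essentially sequence mining over operator action logs. A toy sketch of how such an automation candidate might be surfaced, assuming hypothetical action names and a simple support/confidence rule (not the actual Arctic Wolf implementation):

```python
from collections import Counter

def automation_candidates(sessions, min_support=3, min_confidence=0.95):
    """Find (A, B) -> C patterns: whenever an operator performed A then B,
    how often did C follow?  High-confidence pairs are automation candidates."""
    pair_counts = Counter()    # times (A, B) was seen with a following action
    triple_counts = Counter()  # times (A, B) was immediately followed by C
    for actions in sessions:
        for i in range(len(actions) - 2):
            a, b, c = actions[i], actions[i + 1], actions[i + 2]
            pair_counts[(a, b)] += 1
            triple_counts[(a, b, c)] += 1
    out = []
    for (a, b, c), n in triple_counts.items():
        support = pair_counts[(a, b)]
        if support >= min_support and n / support >= min_confidence:
            out.append(((a, b), c, n / support))
    return out

sessions = [
    ["open_alert", "check_ip", "block_ip", "close_alert"],
    ["open_alert", "check_ip", "block_ip"],
    ["open_alert", "check_ip", "block_ip", "escalate"],
]
print(automation_candidates(sessions))
```

A high-confidence pattern like (open_alert, check_ip) followed by block_ip becomes a candidate for doing C on the operator's behalf, which is the efficiency and margin point made above.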
>> I know you got to go, but I want to show you something real quick. I have data. I do a weekly program called "Breaking Analysis," and I have a partner called ETR, Enterprise Technology Research, and they have a platform. I don't know if you can see this. They have a survey platform, and each quarter, they do a survey of about 1,500 IT decision makers. They also have a survey they call ETS, the Emerging Technology Survey. So it's private companies. And I don't want to go into it too much, but this is a sentiment graph. This is net sentiment. >> Just so you know, all I see is a white- >> Yeah, just a white bar. >> Oh, that's weird. Oh, whiteboard. Oh, here we go. How about that? >> There you go. >> Yeah, so this is a sentiment graph. So this is net sentiment and this is mindshare. And if I go to Arctic Wolf... So it's typical security, right? The 8,000 companies. And when I go here, what impresses me about this is you got a decent mindshare, that's this axis, but you've also got an N in the survey. It's about 1,500 in the survey, and 479 Arctic Wolf customers responded to this. 57% don't know you. Oh, sorry, they're aware of you, but have no plan to evaluate; 19% plan to evaluate; 7% are evaluating; 11% have no plan to utilize even though they've evaluated you; and 1% say they've evaluated you and plan to utilize. It's a small percentage, but actually it's not bad for a random sample of the world like that. And so obviously you want to get that number up, but this is a really impressive position right here that I wanted to just share with you. I do a lot of analysis weekly, and this is a completely independent survey, and you're sort of separating from the pack, as you can see. So kind of- >> Well, it's good to see that. And I think that just is a further indicator of what I was telling you. We continue to have a strong financial performance. >> Yeah, in a good market. Okay, well, thanks, you guys.
And hey, if I can get this recording, Hannah, I may even figure out how to write it up. (chuckles) That would be super helpful. >> Yes. We'll get that up. >> And David or Hannah, if you can send me David's contact info so I can get a salesperson in touch with him. (Hannah chuckling) >> Yeah, great. >> Yeah, we'll work on that as well. Thanks so much for both your time. >> Thanks a lot. It was great talking with you. >> Thanks, you guys. Great to meet you. >> Thank you. >> Bye. >> Bye.
Veronika Durgin, Saks | The Future of Cloud & Data
(upbeat music) >> Welcome back to Supercloud 2, an open collaborative where we explore the future of cloud and data. Now, you might recall last August at the inaugural Supercloud event we validated the technical feasibility and tried to further define the essential technical characteristics, and of course the deployment models, of so-called supercloud. That is, sets of services that leverage the underlying primitives of hyperscale clouds, but are creating new value on top of those clouds for organizations at scale. So we're talking about capabilities that fundamentally weren't practical or even possible prior to the ascendancy of the public clouds. And so today at Supercloud 2, we're digging further into the topic with input from real-world practitioners. We're exploring the intersection of data and cloud and, importantly, the realities and challenges of deploying technology for a new business capability. I'm pleased to have with me in our studios, west of Boston, Veronika Durgin, who's the head of data at Saks. Veronika, welcome. Great to see you. Thanks for coming on. >> Thank you so much. Thank you for having me. So excited to be here. >> And so we have to say upfront, you're here, these are your opinions. You're not representing Saks in any way. So we appreciate you sharing your depth of knowledge with us. >> Thank you, Dave. Yeah, I've been doing data for a while. I try not to say how long anymore. It's been a while. But yeah, thank you for having me. >> Yeah, you're welcome. I mean, one of the highlights of this past year for me was hanging out at the airport with you after the Snowflake Summit. And we were just chatting about sort of data mesh, and you were saying, "Yeah, but." There was a yeah, but. You were saying there's some practical realities of actually implementing these things. So I want to get into some of that. And I guess starting from a perspective of how data has changed, you've seen a lot of the waves.
I mean, even if we go back to pre-Hadoop, you know, when we would shove everything into an Oracle database, or, you know, Hadoop was going to save our data lives. And the cloud came along and, you know, that was kind of a disruptive force. And, you know, now we see things like, whether it's Snowflake or Databricks or these other platforms on top of the clouds. How have you observed the change in data and the evolution over time? >> Yeah, so I started as a DBA in the data center, kind of like, you know, growing up trying to manage whatever, you know, physical limitations a server could give us. So we had to be very careful of what we put in our database because we were limited. We, you know, purchased that piece of hardware, and we had to use it for the next, I don't know, three to five years. So, you know, we focused only on the most important, critical things. We couldn't keep too much data. We had to be super efficient. We couldn't add additional functionality. And then Hadoop came along, which is like, great, we can dump all the data there, but then we couldn't get data out of it. So it was like, okay, great. Doesn't help either. And then the cloud came along, which was incredible. I was probably the most excited person. I'm lying, but I was super excited because I no longer had to worry about what I can actually put in my database. Now I have that, you know, scalability and flexibility with the cloud. So okay, great, that data's there, and I can also easily get it out of it, which is really incredible. >> Well, but so, I'm inferring from what you're saying with Hadoop, it was like, okay, no schema on write. And then you got to try to make sense out of it. But so what changed with the cloud? What was different? >> So I'll tell a funny story. I actually successfully avoided Hadoop. The only time- >> Congratulations. >> (laughs) I know, I'm like super proud of it. 
I don't know how that happened, but the only time I worked for a company that had Hadoop, all I remember is that they were running jobs that were taking over 24 hours to get data out of it. And they were realizing that, you know, dumping data without any structure into this massive thing that required, you know, really skilled engineers wasn't really helpful. So what changed, and I'm kind of thinking of like, kind of like how Snowflake started, right? They were marketing themselves as a data warehouse. For me, moving from SQL Server to Snowflake was a non-event. It was comfortable, I knew what it was, I knew how to get data out of it. And I think that's the important part, right? Cloud, this like, kind of like, vague, high-level thing, magical, but the reality is cloud is the same as what we had on prem. So it's comfortable there. It's not scary. You don't need super new additional skills to use it. >> But you're saying what's different is the scale. So you can throw resources at it. You don't have to worry about depreciating your hardware over three to five years. Hey, I have an asset that I have to take advantage of. Is that the big difference? >> Absolutely. Actually, from kind of like an operational perspective, which is funny. Like, I don't have to worry about it. I use what I need when I need it. And not to take this completely in the opposite direction, people stop thinking about using things in a very smart way, right? You like, scale and you walk away. And then, you know, the cool thing about cloud is it's scalable, but you also should not use it when you don't need it. >> So what about this idea of multicloud? You know, supercloud sort of tries to go beyond multicloud. It's like multicloud by accident. And now, you know, whether it's M&A or, you know, some skunkworks says, hey, I like Google's tools, so I'm going to use Google. And then people like you are called on to, hey, how do we clean up this mess? 
And you know, you and I, at the airport, we were talking about data mesh. And I love the concept. Like, doesn't matter if it's a data lake or a data warehouse or a data hub or an S3 bucket. It's just a node on the mesh. But then, of course, you've got to govern it. You've got to give people self-serve. But this multicloud is a reality. So from your perspective, from a practitioner's perspective, what are the advantages of multicloud? We talk about the disadvantages all the time. Kind of get that, but what are the advantages? >> So I think the first thing when I think multicloud, I actually think high-availability disaster recovery. And maybe it's just how I grew up in the data center, right? We were always worried that if something happened in one area, we want to make sure that we can bring business up very quickly. So to me that's kind of like where multicloud comes to mind because, you know, you put your data, your applications, let's pick on AWS for a second and, you know, US East in AWS, which is the busiest kind of like area that they have. If it goes down, for my business to continue, I would probably want to move it to, say, Azure, hypothetically speaking, again, or Google, whatever that is. So to me, and probably again based on my background, disaster recovery high availability comes to mind as multicloud first, but now the other part of it is that there are, you know, companies and tools and applications that are being built in, you know, pick your cloud. How do we talk to each other? And more importantly, how do we data share? You know, I work with data. You know, this is what I do. So if, you know, I want to get data from a company that's using, say, Google, how do we share it in a smooth way where it doesn't have to be this crazy, I don't know, SFTP file moving. So that's where I think supercloud comes to me in my mind, is like practical applications. How do we create that mesh, that network that we can easily share data with each other? 
>> So you kind of answered my next question, which is, do you see use cases going beyond HA? I mean, HA/DR was, remember, that was the original cloud use case. That and bursting, you know, for, you know, Thanksgiving or, you know, for Black Friday. So you see an opportunity to go beyond that with practical use cases. >> Absolutely. I think, you know, we're getting to a world where every company is a data company. We all collect a lot of data. We want to use it for whatever that is. It doesn't necessarily mean sell it, but use it to our competitive advantage. So how do we do it in a very smooth, easy way, which opens additional opportunities for companies? >> You mentioned data sharing. And that's obviously, you know, I met you at Snowflake Summit. That's a big thing of Snowflake's. And of course, you've got Databricks trying to do similar things with open technology. What do you see as the trade-offs there? Because Snowflake, you got to come into their party, you're in their world, and you're kind of locked into that world. Now they're trying to open up. You know, and of course, Databricks, they say, you know, "Our world is wide open." Well, we know what that means, you know. The governance. And so now you're seeing, you saw Amazon come out with data clean rooms, which was, you know, that was a good idea that Snowflake had several years before. It's good. It's good validation. So how do you think about the trade-offs between kind of openness and freedom versus control? Is the latter just far more important? >> I'll tell you it depends, right? It's kind of like- >> That could be the consultant's answer. >> Yeah, I know. It depends because I don't know the answer. It depends, I think, on the use case and application. Ultimately every company wants to make money. That's the beauty of our like, capitalistic economy, right? We're driven 'cause we want to make money. But from the use, you know, how do I sell a product to somebody who's in Google if I am in AWS, right? 
It's like, we're limiting ourselves if we just do one cloud. But again, it's difficult because at the same time, every cloud provider wants you to be locked in their cloud, which is probably why, you know, everyone now has data sharing, because they want you to stay within their ecosystem. But then again, like, companies are limited. You know, there are applications that are starting to be built on top of clouds. How do we ensure that, you know, I can use that application regardless of what cloud, you know, my company is using or I just happen to like. >> You know, and it's true they want you to stay in their ecosystem 'cause they'll make more money. But as well, you think about Apple, right? Does Apple do it 'cause they can make more money? Yes, but it's also they have more control, right? Am I correct that technically it's going to be easier to govern that data if it's all the sort of same standard, right? >> Absolutely. 100%. I didn't answer that question. You have to govern and you have to control. And honestly, it's like it's not like a nice-to-have anymore. There are compliances. There are legal compliances around data. Everybody at some point wants to ensure that, you know, and as a person, quite honestly, you know, not to be, you know, I don't like when my data's used when I don't know how. Like, it's a little creepy, right? So we have to come up with standards around that. But then I also go back in the day to EDI, right? Electronic data interchange. That was figured out. There were standards. Companies were sending data to each other. It was pretty standard. So I don't know. Like, we'll get there. >> Yeah, so I was going to ask you, do you see a day where open standards actually emerge to enable that? And then isn't that the great disruptor to sort of kind of the proprietary stack? >> I think so. I think for us to smoothly exchange data across, you know, various systems, various applications, we'll have to agree to have standards. 
>> From a developer perspective, you know, back to the sort of supercloud concept, one of the components of the essential characteristics is you've got this PaaS layer that provides consistency across clouds, and it has unique attributes specific to the purpose of that supercloud. So in the instance of Snowflake, it's data sharing. In the case of, you know, VMware, it might be, you know, infrastructure or self-serve infrastructure that's consistent. From a developer perspective, what do you hear from developers in terms of what they want? Are we close to getting that across clouds? >> I think developers always want freedom and ability to engineer. And oftentimes it's not, (laughs) you know, just as an engineer, I always want to build something, and it's not always for the, to use a specific, you know, it's something I want to do versus what is actually applicable. I think we'll land there, but not because we are, you know, out of the kindness of our own hearts. I think as a necessity we will have to agree to standards, and that'll like, move the needle. Yeah. >> What are the limitations that you see of cloud and this notion of, you know, even cross cloud, right? I mean, this one cloud can't do it all. You know, but what do you see as the limitations of clouds? >> I mean, it's funny, I always think, you know, again, kind of probably my background, I grew up in the data center. We were physically limited by space, right? There's like, you can only put, you know, so many servers in the rack and, you know, so many racks in the data center, and then you run out of space. Earth has limited space, right? And we have so many data centers, and everybody's collecting a lot of data that we actually want to use. We're not just collecting for the sake of collecting it anymore. We can truly take advantage of it because servers have enough power, right, to crank through it. But we will run out of space. So how do we balance that? 
How do we balance that data across all the various data centers? And I know I'm like, kind of maybe talking crazy, but until we figure out how to build a data center on the Moon, right, like, we will have to figure out how to take advantage of all the compute capacity that we have across the world. >> And where does latency fit in? I mean, is it as much of a problem as people sort of think it is? Maybe it depends too. It depends on the use case. But do multiple clouds help solve that problem? Because, you know, even AWS, $80 billion company, they're huge, but they're not everywhere. You know, they're doing Local Zones, they're doing Outposts, which is, you know, less functional than their full cloud. So maybe I would choose to go to another cloud. And if I could have that common experience, that's an advantage, isn't it? >> 100%, absolutely. And potentially there's some maybe pricing tiers, right? So we're talking about latency. And again, it depends on your situation. You know, if you have some sort of medical equipment that is very latency-sensitive, you want to make sure that data lives there. But versus, you know, I browse on a website. If the website takes a second versus two seconds to load, do I care? Not exactly. Like, I don't notice that. So we can reshuffle that in a smart way. And I keep thinking of Waze. If we have Waze for data, where it's kind of like, oh, you are stuck in traffic, go this way. You know, reshuffle you through that data center. You know, maybe your data will live there. So I think it's totally possible. I know, it's a little crazy. >> No, I like it, though. But remember when you first found Waze, you're like, "Oh, this is awesome." And then now it's like- >> And it's like crowdsourcing, right? Like, it's smart. Like, okay, maybe, you know, going to pick on US East for Amazon for a little bit, their oldest, but also busiest data center that, you know, periodically goes down. 
>> But then you lose your competitive advantage 'cause now it's like traffic socialism. >> Yeah, I know. >> Right? It happened the other day where everybody's going this one way. There's all the Wazers taking it. >> And also again, compliance, right? Every country is going down the path of where, you know, data needs to reside within that country. So it's not as like, socialist or democratic as we wish for it to be. >> Well, that's a great point. I mean, when you just think about the clouds, the limitation, now you go out to the edge. I mean, everybody talks about the edge and IoT. Do you actually think that there's like a whole new stovepipe that's going to get created? And does that concern you, or do you think it actually is going to be, you know, connective tissue with all these clouds? >> I honestly don't know. I live in a practical world of like, how does it help me right now? How does it, you know, help me in the next five years? And mind you, in five years, things can change a lot. Because if you think back five years ago, things weren't as they are right now. I mean, I really hope that somebody out there challenges things 'cause, you know, the whole cloud promise was crazy. It was insane. Like, who came up with it? Why would I do that, right? And now I can't imagine the world without it. >> Yeah, I mean a lot of it is same wine, new bottle. You know, but a lot of it is different, right? I mean, technology keeps moving us forward, doesn't it? >> Absolutely. >> Veronika, it was great to have you. Thank you so much for your perspectives. If there was one thing that the industry could do for your data life that would make your world better, what would it be? >> I think standards for like data sharing, data marketplace. I would love, love, love nothing more than to have some agreed-upon standards. >> I had one other question for you, actually. I forgot to ask you this. 'Cause you were saying every company's a data company. Every company's a software company. 
We're already seeing it, but how prevalent do you think it will be that companies, you've seen some of it in financial services, but companies begin to now take their own data, their own tooling, their own software, which they've developed internally, and point that to the outside world? Kind of do what AWS did. You know, working backwards from the customer and saying, "Hey, we did this for ourselves. We can now do this for the rest of the world." Do you see that as a real trend, or is that Dave's pie in the sky? >> I think it's a real trend. Every company's trying to reinvent themselves and come up with new products. And every company is a data company. Every company collects data, and they're trying to figure out what to do with it. And again, it's not necessarily to sell it. Like, you don't have to sell data to monetize it. You can use it with your partners. You can exchange data. You know, you can create products. Capital One I think created a product for Snowflake pricing. I don't recall, but it just, you know, they built it for themselves, and they decided to kind of like, monetize on it. And I'm absolutely 100% on board with that. I think it's an amazing idea. >> Yeah, Goldman is another example. Nasdaq is basically taking their exchange stack and selling it around the world. And the cloud is available to do that. You don't have to build your own data center. >> Absolutely. Or for good, right? Like, we're talking about, again, we live in a capitalist country, but use data for good. We're collecting data. We're, you know, analyzing it, we're aggregating it. How can we use it for greater good for the planet? >> Veronika, thanks so much for coming to our Marlborough studios. Always a pleasure talking to you. >> Thank you so much for having me. >> You're really welcome. All right, stay tuned for more great content. From Supercloud 2, this is Dave Vellante. We'll be right back. (upbeat music)
Tomer Shiran, Dremio | AWS re:Invent 2022
>> Hey everyone. Welcome back to Las Vegas. It's theCUBE live at AWS re:Invent 2022. This is our fourth day of coverage. Lisa Martin here with Paul Gillen. Paul, we started Monday night, we filmed and streamed for about three hours. We have had jam-packed days, Tuesday, Wednesday, Thursday. What's your takeaway? >> We've rounded the final turn as we head into the home stretch. Yeah. This is, as it has been since the beginning, a show with a lot of energy. I'm amazed, for the fourth day of a conference, how many people are still here. >> I am too. >> And how, and how active they are and how full the sessions are. Huge crowd for the keynote this morning. You don't see that at most day-four conferences. Everyone's on their way home. So, so people come here to learn and they're, and they're still- >> Learning. >> They are still learning. And we're gonna help continue that learning path. We have an alum back with us. Tomer Shiran joins us, the CPO and co-founder of Dremio. Tomer, it's great to have you back on the program. >> Yeah, thanks for, for having me here. And thanks for keeping the, the best session for the fourth day. >> Yeah, you're right. I like that. That's a good mojo to come into this interview with, Tomer. So last time I saw you was a year ago here in Vegas at re:Invent '21. We talked about the growth of data lakes and the data lakehouses. We talked about the need for open data architectures as opposed to data warehouses. And the headline of the SiliconANGLE article on the interview we did with you was, "Dremio predicts 2022 will be the year open data architectures replace the data warehouse." We're almost done with 2022. Has that prediction come true? >> Yeah, I think, I think we're seeing almost every company out there, certainly in the enterprise, adopting data lake, data lakehouse technology, embracing open source kind of file and table formats. And, and so I think that's definitely happening. Of course, nothing goes away. 
So, you know, data warehouses don't go away in, in a year and actually don't go away ever. We still have mainframes around, but certainly the trends are, are all pointing in that direction. >> Describe the data lakehouse for anybody who may not be really familiar with that and, and what it's, what it really means for organizations. >> Yeah. I think you could think of the data lakehouse as the evolution of the data lake, right? And so, you know, for, for, you know, the last decade we've had kind of these two options, data lakes and data warehouses. And, you know, warehouses, you know, having good SQL support and good performance, but you had to spend a lot of time and effort getting data into the warehouse. You got locked into them, very, very expensive. That's a big problem now. And data lakes, you know, more open, more scalable, but had all sorts of kind of limitations. And what we've done now as an industry with the lakehouse, and especially with, you know, technologies like Apache Iceberg, is we've unlocked all the capabilities of the warehouse directly on object storage like S3. So you can insert and update and delete individual records. You can do transactions, you can do all the things you could do with a, a database, directly in kind of open formats, without getting locked in, at a much lower cost. >> But you're still dealing with semi-structured data as opposed to structured data. And there's, there's work that has to be done to get that into a usable form. That's where Dremio excels. What, what has been happening in that area to, to make, I mean, is it formats like JSON that are, are enabling this to happen? How are we advancing the cause of making semi-structured data usable? >> Yeah, well, I think first of all, you know, I think that's all changed. I think that was maybe true for the original data lakes, but now with the lakehouse, you know, our bread and butter is actually structured data. It's all, it's all tables with a schema. 
And, you know, you can, you know, create tables, insert records. You know, it's, it's, it's really everything you can do with a data warehouse you can now do in the lakehouse. Now, that's not to say that there aren't like very advanced capabilities when it comes to, you know, JSON and nested data and kind of sparse data. You know, we excel in that as well. But we're really seeing kind of the lakehouse take over the, the bread and butter data warehouse use cases. >> You mentioned open a minute ago. Talk about why it's, why open is important and the value that it can deliver for customers. >> Yeah, well, I think if you look back in time and you see all the challenges that companies have had with kind of traditional data architectures, right? A lot of that comes from the problems with data warehouses. The fact that they are, you know, they're very expensive. The data is, you have to ingest it into the data warehouse in order to query it. And then it's almost impossible to get off of these systems, right? It takes an enormous effort, tremendous cost to get off of them. And so you're kinda locked in and that's a big problem, right? You also, you're dependent on that one data warehouse vendor, right? You can only do things with that data that the warehouse vendor supports. And if you contrast that to data lakehouse and open architectures where the data is stored in entirely open formats. 
>>Amazon seems to be bought into the Lakehouse concept. It has big announcements on day two about eliminating the ETL stage between RDS and Redshift. Do you see the cloud vendors as pushing this concept forward? >>Yeah, a hundred percent. I mean, I'm, I'm Amazon's a great, great partner of ours. We work with, you know, probably 10 different teams there. Everything from, you know, the S3 team, the, the glue team, the click site team, you know, everything in between. And, you know, their embracement of the, the, the lake house architecture, the fact that they adopted Iceberg as their primary table format. I think that's exciting as an industry. We're all coming together around standard, standard ways to represent data so that at the end of the day, companies have this benefit of being able to, you know, have their own data in their own S3 account in open formats and be able to use all these different engines without losing any of the functionality that they need, right? The ability to do all these interactions with data that maybe in the past you would have to move the data into a database or, or warehouse in order to do, you just don't have to do that anymore. Speaking >>Of functionality, talk about what's new this year with drio since we've seen you last. >>Yeah, there's a lot of, a lot of new things with, with Drio. So yeah, we now have full Apache iceberg support, you know, with DML commands, you can do inserts, updates, deletes, you know, copy into all, all that kind of stuff is now, you know, fully supported native part of the platform. We, we now offer kind of two flavors of dr. We have, you know, Dr. Cloud, which is our SaaS version fully hosted. You sign up with your Google or, you know, Azure account and, and, and you're up in, you're up and running in, in, in a minute. And then dral software, which you can self host usually in the cloud, but even, even even outside of the cloud. And then we're also very excited about this new idea of data as code. 
And so we've introduced a new product that's now in preview called Dr. >>Arctic. And the idea there is to bring the concepts of GI or GitHub to the world of data. So things like being able to create a branch and work in isolation. If you're a data scientist, you wanna experiment on your own without impacting other people, or you're a data engineer and you're ingesting data, you want to transform it and test it before you expose it to others. You can do that in a branch. So all these ideas that, you know, we take for granted now in the world of source code and software development, we're bringing to the world of data with Jamar. And when you think about data mesh, a lot of people talking about data mesh now and wanting to kind of take advantage of, of those concepts and ideas, you know, thinking of data as a product. Well, when you think about data as a product, we think you have to manage it like code, right? You have to, and that's why we call it data as code, right? The, all those reasons that we use things like GI have to build products, you know, if we wanna think of data as a product, we need all those capabilities also with data. You know, also the ability to go back in time. The ability to undo mistakes, to see who changed my data and when did they change that table. All of those are, are part of this, this new catalog that we've created. >>Are you talk about data as a product that's sort of intrinsic to the data mesh concept. Are you, what's your opinion of data mesh? Is the, is the world ready for that radically different approach to data ownership? >>You know, we are now in dozens of, dozens of our customers that are using drio for to implement enterprise-wide kind of data mesh solutions. And at the end of the day, I think it's just, you know, what most people would consider common sense, right? 
In a large organization, it is very hard for a centralized single team to understand every piece of data, to manage all the data themselves, to, you know, make sure the quality is correct, to make it accessible. And so what data mesh is first and foremost about is being able to kind of federate, or distribute, the ownership of data. The governance of the data still has to happen, right? And so that is, I think, at the heart of the data mesh. But thinking of data as kind of allowing different teams, different domains to own their own data, to really manage it like a product with all the best practices that we have, that's super important. So we're doing a lot with data mesh, you know, the way that Dremio Cloud has multiple projects and the way that Arctic allows you to have multiple catalogs, and different groups can kind of interact and share data among each other. You know, the fact that we can connect to all these different data sources, even outside your data lake, you know, with Redshift, Oracle, SQL Server, you know, all the different databases that are out there, and join across different databases in addition to your data lake, that, that's all stuff that companies want with their data mesh. >> What are some of your favorite customer stories where you've really helped them accelerate that data mesh and drive business value from it, so that more people in the organization get access to data and can really make those data-driven decisions that everybody wants to make? >> I mean, there's, there's so many of them. But, you know, one of the largest tech companies in the world creating a, a data mesh where you have all the different departments in the company that, you know, they, they, they were a big data warehouse user and it kinda hit the wall, right? 
The costs were so high, and people couldn't use it for experimentation, to try new things out, to collaborate, because it was so prohibitively expensive and difficult to use. And so they said, well, we need a platform where different people can collaborate, where they can experiment with the data and share data with others. At a big organization like that, it's the ability to have a centralized platform but allow different groups to manage their own data. Several of the largest banks in the world are also doing data meshes with Dremio. One of them has over a dozen different business units that are using Dremio, and that ability to have thousands of people on a platform and to collaborate and share among each other, that's super important to these guys. >> Can you contrast your approach to the market with the Snowflakes? 'Cause they have some of those same concepts. >> Snowflake's a very closed system at the end of the day, right? Closed and very expensive. I remember seeing, a quarter ago in one of their earnings reports, that the average customer spends 70% more every year, right? Well, that's not sustainable. If you think about that over a decade, your cost is going to increase roughly 200x. Most companies are not going to be able to swallow that, right? So companies need, first of all, more cost-efficient solutions that are just more approachable. And the second thing is, we talked about the open data architecture. I think most companies now realize that if you want to build a platform for the future, you need to have the data in open formats and not be locked into one vendor, right?
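The "200x in a decade" figure follows from straight compounding of the quoted 70% annual spend growth; a quick arithmetic check:

```python
# Compound the quoted 70% annual spend growth over ten years.
def compound_growth(rate: float, years: int) -> float:
    """Total multiplier after compounding `rate` per period for `years` periods."""
    return (1.0 + rate) ** years

decade_multiplier = compound_growth(0.70, 10)
# 1.7 ** 10 is about 201.6, i.e. roughly the "200x" cost increase cited above.
```

Whether any individual customer actually sustains 70% growth is a separate question; the sketch only verifies that the quoted per-year rate and the per-decade multiplier are consistent with each other.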
And so that's another important aspect, beyond the ability to connect to all your data, even outside the lake, to your different databases, NoSQL databases, relational databases. And there's Dremio's semantic layer, where we can accelerate queries. Typically, what happens with data warehouses and other data lake query engines is that because you can't get the performance that you want, you end up creating lots and lots of copies of data. For every use case, you're creating a pre-joined copy of that data, a pre-aggregated version of that data, and then you have to redirect all your queries to those copies. >> You've got a governance problem. >> Governance problems, all those individual things. It's expensive, and it's hard to secure, 'cause permissions don't travel with the data. So you have all sorts of problems with that, right? What we've done, because of our semantic layer that makes it easy to expose data in a logical way, is our query acceleration technology, which we call reflections, which transparently accelerates queries and gives you subsecond response times without data copies, and also without extracts into the BI tools. 'Cause if you start doing BI extracts or imports, again, you have lots of copies of data in the organization, all sorts of refresh problems, security problems. It's a nightmare, right? Collapsing all those copies and having a simple solution where data's stored in open formats, and where we can give you fast access to any of that data, that's very different from what you get with a Snowflake or any of these other companies. >> Right. That's a great explanation. I wanna ask you, early this year you announced that your Dremio Cloud service, the basic Dremio Cloud service, would be free forever. How has that offer gone over? What's been the uptake? >> Yeah, I mean, thousands of people have signed up, and I think it's a great service.
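The idea behind transparent acceleration can be sketched with a toy cache: repeated aggregate queries are served from a precomputed result instead of rescanning the data, and the caller never manages a copy. This is an invented illustration of the general technique, not how Dremio's reflections are actually implemented:

```python
class ReflectionCache:
    """Toy sketch of transparent query acceleration: precompute an aggregate
    once, serve repeats from the cached result, with no user-managed copies."""
    def __init__(self, rows):
        self.rows = rows
        self._cache = {}

    def sum_by(self, key, value):
        if (key, value) not in self._cache:        # first query computes...
            agg = {}
            for r in self.rows:
                agg[r[key]] = agg.get(r[key], 0) + r[value]
            self._cache[(key, value)] = agg
        return self._cache[(key, value)]           # ...repeats skip the scan

engine = ReflectionCache([{"region": "us", "amt": 5}, {"region": "us", "amt": 7}])
assert engine.sum_by("region", "amt") == {"us": 12}
assert engine.sum_by("region", "amt") == {"us": 12}  # served from cache
```

The design point the transcript makes is that the cache lives inside the engine: users query the logical table, and permissions and refresh stay the engine's problem rather than becoming a sprawl of extracts.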
It's very, very simple. People can go on the website and try it out. We now have a test drive as well, if you want to get started with some public sample data sets and a tutorial; we've made that increasingly easy too. But yeah, we continue to take that approach of making it easy, democratizing these cloud data platforms and lowering the barriers to adoption. >> How effective has it been in driving sales of the enterprise version? >> Yeah, a lot of the business that we do, when it comes to selling, is with folks that have educated themselves, right? They've started off, they've followed some tutorials. I think developers generally prefer the first interaction to be with a product, not with a salesperson. And that's basically the reason we did it. >> Before we ask you the last question, can you give us a sneak peek into the product roadmap as we enter 2023? What can you share with us that we should be paying attention to where Dremio is concerned? >> Yeah. Actually, a couple of days ago here at the conference, we had a press release with all sorts of new capabilities that we just released, and there's a lot more for the coming year. We will shortly be releasing a variety of performance enhancements, so in the next quarter or two we'll probably be twice as fast just in terms of raw query speed. That's in addition to our reflections and our query acceleration. Support for all the major clouds is coming. Just a lot of capabilities that make it easier and easier to use the platform. >> Awesome. Tomer, thank you so much for joining us.
My last question to you is, if you had a billboard in your desired location, and it was going to really just be like a mic drop about why customers should be looking at Dremio, what would that billboard say? >> Well, Dremio is the easy and open data lakehouse. And open architectures are just a lot better: a lot more future-proof, a lot easier, and just a much safer choice for the future for companies. So it's hard to argue with that; people should take a look. That wasn't the best billboard. >> Okay. I think it's a great billboard. Awesome. And thank you so much for joining Paul and me on the program, sharing with us what's new and some of the exciting things that are coming down the pipe. We're going to be keeping our eye on Dremio. >> Awesome. Always happy to be here. >> Thank you. All right. For our guest and for Paul Gillin, I'm Lisa Martin. You're watching theCUBE, the leader in live and emerging tech coverage.
Raj Gossain, Alation
(upbeat electronic music) >> Hello, and welcome to this Cube Conversation. My name is Dave Vellante, and we're here with Raj Gossain, who's the Chief Product Officer at Alation. We have some news. Hello, Raj. Thanks for coming on. >> Dave, it's great to be with you on theCUBE again. >> Yeah, good to see you. So, okay, we're going to talk about Alation Connected Sheets. You know, what is that? Talk to us about what it is, what it does, what it brings to customers. >> So we recognize, spreadsheets are really the dark matter of the data universe. And they're used by, over 78 million people use spreadsheets on a regular basis to drive critical business analysis. But there's a lot of challenges with spreadsheet usage. It brings risk to the organization. There's no visibility into where data comes from. And so we wanted to bring the power of the Alation Data Intelligence Platform to business users where they spend most of their time. And that's in a tool that they love, and that's spreadsheets. And so we're launching a brand new product next week called Alation Connected Sheets. >> So talk more about that. So yes, I get the lineage issue, like where did-- who did this, where's this data come from? I got different data. But talk more about the problems that Alation Connected Sheets solves, specifically for customers. >> Yeah, so the big challenges that we see when we talk to data organizations is how do they understand where the data came from? Is it trusted? Is it reusable? Should it be used in this format? And if you look at where most users that use spreadsheets get the data to power their spreadsheets, maybe it's a CSV download from a database, and then you have no idea where the data came from and where it's going. Or even worse, it's copying and pasting data from other spreadsheets. 
And so if you take those problems, how can we bring trusted data from governed sources like Snowflake and Redshift and put it in the hands of spreadsheet users, and give them the power and flexibility of Google Sheets or Microsoft Excel, but use trusted, reliable, well-governed data so that the data office feels great about them using spreadsheets and the end users, the business users, can take advantage of the tool that they know and love and do the work that they need to do quickly. >> So, okay. So I'm inferring from your comments there that you've got the ability to take data from you mentioned a couple, Snowflake and Redshift, other popular data warehouses. >> Yep. >> So talk about the key capabilities that you have, any specific features that we should know about. >> Sure. So, we built the leading data intelligence platform and the leading data catalog. And one of the benefits of that catalog is where you have visibility into all of the trusted, governed data sources that a data organization cares about, whether it's enterprise warehouses like Snowflake or Redshift, databases like SQL Server, Google BigQuery, what have you. So what we've done is we've brought the power of that data catalog directly into both Google Sheets as well as Excel. And the idea there is a user can log into their application, authenticate to Alation using the Alation Connected Sheets plugin into their spreadsheet tool, and browse those trusted data sets that are surfaced in the Alation catalog. They get trust signals, they get visibility into where this data came from. So lineage, insights, descriptive information. And then with one or two clicks, they can choose a data set from their warehouse, basically apply filtering conditions. So let's say I'm looking for customer data in Snowflake. I can find the right customer table. If I only want it for say, 2022, I can apply some filter conditions, I can reorder columns, push one button, authenticate to that data source. 
We want to maintain and ensure security is being applied, so only those users that have access to the warehouse can actually download that data set. But once they've authenticated, that data gets downloaded into their spreadsheet and there's a live connection that's maintained to that spreadsheet. So anytime you need to refresh the data, one push of a button and that data set gets updated. I can schedule the updates. So, you know, if I have to produce a report every Monday morning, I could have that data set refreshed at 8:00 a.m. Monday morning, or whatever schedule the user wants. And so it gives the user the data set they need, but the data organization, they can see where that data came from and they understand the lineage of the data as it is used in analysis in those spreadsheets themselves. >> So Raj, I know you're at the Super Bowl this week, a.k.a. re:Invent. >> Yes. >> And I know you got very close relationships with Snowflake, you've mentioned them a couple times with the data summit last spring. And I know you've done some integration work with those platforms and I'm sure others. So should we think of this as you're extending that sort of trust and governance out to spreadsheets, is that right? And stretching that out? >> That's exactly right. The way we talk about it is how do we bring data intelligence to business users in the tool that they know and love, which is the spreadsheet. And so, the data catalog and data intelligence platforms in general have really primarily been focused on servicing the needs of data users: data analysts, data scientists, data engineers. But you know, our vision, our aspiration at Alation is to really bring data intelligence to any business user. And so it's a big part of our strategy to make sure that the insights from the Alation catalog and platform can find their way into tools like Excel and Google Sheets. And so that's, what you highlighted, Dave, is exactly correct. 
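The workflow Raj describes, picking a governed source, applying saved filter conditions, and refreshing on demand or on a schedule, can be illustrated with a small sketch. The class and method names here are invented for the example; this is not the actual Alation Connected Sheets API:

```python
from dataclasses import dataclass, field

@dataclass
class SheetConnection:
    """Hypothetical sketch of a governed, refreshable sheet connection.

    Illustrative only: the point is that the sheet re-applies its saved
    filters against the governed source on every refresh, instead of
    holding a stale, untraceable copy of the data."""
    source: str                          # e.g. a warehouse table identifier
    filters: dict = field(default_factory=dict)

    def fetch(self, warehouse_rows):
        # Re-apply the saved filter conditions on each refresh.
        return [r for r in warehouse_rows
                if all(r.get(k) == v for k, v in self.filters.items())]

rows = [{"customer": "a", "year": 2022}, {"customer": "b", "year": 2021}]
conn = SheetConnection(source="warehouse.customers", filters={"year": 2022})
result = conn.fetch(rows)   # only the 2022 row survives the governed filter
```

A scheduled refresh, in this model, is just calling `fetch` against the live source on a timer; lineage stays intact because the sheet records `source` and `filters` rather than pasted values.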
We want to maximize the likelihood that a business user can have self-service access to trusted, governed data, do the work that they need to do, and ensure that the organization has a set of data assets in spreadsheets, frankly as opposed to liabilities, which is the way most data organizations look at spreadsheets is it's almost like a risk factor. We want to convert that risk, that liability, into an asset so that people can reuse data sets and they understand where this analysis is actually coming from. >> It's something that we've talked about for well over a decade on theCUBE. Is data an asset or is it a liability? >> Yeah, yeah. >> You obviously want to get value out of it, but if you can't share it, it's not trusted. So what people do is they lock it down and then that constricts value creation. >> Exactly. >> My understanding is this tech came out of an acquisition from a company, Kloudio. >> That's correct. >> Tell us about Kloudio. Why Kloudio? What's the fit there? >> Yeah, so Kloudio is a company, it's about five years old. We closed the acquisition of the company in March of this past year. And they had about 20 customers, 10 engineers. And we saw an opportunity with the spreadsheet tool that they'd created to really compliment our data intelligence strategy. And as you said, Dave, extend the value of data intelligence to business users. And so, we brought the Kloudio team into the fold. The thing I'm most excited about as a product guy, is within seven months of them joining Alation, we're actually shipping a brand new product that's going to drive revenue and meet the needs of tens of millions of users, ultimately. Like that's really our aspiration. And so, the tech they had was extremely modern. It reinforces the platform position that we have. 
You know, this microservices architecture that we've built Alation around, made it easy for that new team to come in and leverage existing APIs and capabilities from our platform and the tech that they brought into Alation to essentially connect the dots and deliver a brand new set of capabilities to an entirely new audience, to help our customers achieve their business objectives, which is really creating a data culture across their entire organization, inclusive of business users, not just, like I said, the data X users that are already taking advantage of solutions like Alation and cloud warehouses, et cetera. >> So I have two questions, follow up questions by me, and I think you might have answered the second one. The first one is what's the secret sauce behind Kloudio? How does the tech work? The second question is how does it fit into the Alation portfolio? How were you able to integrate it so quickly? Maybe that's the microservices architecture. But start with the secret sauce. What is it, what can you share with me? >> I think the thing that we saw with Kloudio that got us excited, and the fact that they, even though it was a small company, they had 20 customers, they were generating revenue, and they were delivering real value to business users, by really enabling business users to tap into the value of trusted, governed data, and frankly, get IT out of the way. You know, we almost refer to it as like smart self-service, which is, they could find a data asset and connect to that source, and just with a couple quick clicks, almost a low-code, no-code type of an experience, bring that sort of data into their spreadsheet so they could do the work that they needed to do. That opportunity, that tech that the Kloudio team had built out, the big gap that they had is, my goodness, what does it take to actually be aware of all the data sources that exist across an organization and connect to them? And that's what Alation does, right? 
That's why we built the platform that we built, so that we can basically understand all of a customer's data assets, whether they're on-prem or in the cloud. And so it was a little bit of that Reese's Peanut Butter Cup analogy, the chocolate and the peanut butter coming together. The Alation platform, the Alation catalog, coupled with the technology that Kloudio brought to us, really was sort of a match made in heaven. And it's allowed us to bring this new capability to market that really is value-add on top of the platform and catalog investments that our customers have already made. >> Yeah, so they had this magic pixie dust, but it was sort of isolated, and then you've integrated it into your catalog. And that's the second part of my question. How were you able to do that so quickly? >> So, we've been on this evolution, enhancing the Alation data intelligence platform. We've moved to a microservices architecture, we're fully multi-tenant in the cloud. And the fact that we'd made those investments over the past few years gave us the opportunity to make it easy for an acquired business like Kloudio, or perhaps a future acquisition, or third-party developers leveraging APIs that we expose, to integrate into the Alation platform. And so, I think it's a bit of foresight. We recognized that in starting with the catalog, the opportunity was much bigger than just providing a data catalog. We've added data governance, we've built out this platform, and we recognize that more and more users can and should be benefiting from data intelligence. And so I think those platform investments have paid significant dividends and accelerated our ability to deliver Alation Connected Sheets as quickly as we have. >> Sounds like a great acquisition, like a diamond in the rough. I mean, the big mega acquisitions are great 'cause the media companies can write about 'em, but I really love the high, high return.
You know, low denominator, high value. So, congratulations. >> Thank you. >> Where can people learn more about this? Maybe play around a little bit with it? >> Yeah, so we're going to be demoing Alation Connected Sheets at AWS re:Invent next week. And it's going to be available starting next week, so the 28th of November. And obviously you'll see it online, on social media, on our website as well. But folks that are going to be in Las Vegas next week, come to the Alation booth and you'll get a chance to see it directly. >> Awesome. Okay, Raj. Hey, thanks for spending some time with us today. Really appreciate it. >> Great, thanks so much, Dave. Great to see you. >> Hey, you're very welcome. And thank you for watching. This is Dave Vellante for theCUBE, your leader in enterprise and emerging tech coverage.
The Truth About MySQL HeatWave
>> When Oracle acquired MySQL via the Sun acquisition, nobody really thought the company would put much effort into the platform, preferring to put all the wood behind its leading Oracle database, arrow pun intended. But two years ago, Oracle surprised many folks by announcing MySQL HeatWave, a new database as a service with a massively parallel, hybrid columnar, in-memory architecture that brings together transactional and analytic data in a single platform. Welcome to our latest database power panel on theCUBE. My name is Dave Vellante, and today we're gonna discuss Oracle's MySQL HeatWave with a who's who of cloud database industry analysts. Holger Mueller is with Constellation Research, Marc Staimer is the Dragon Slayer and a Wikibon contributor, and Ron Westfall is with Futurum Research. Gentlemen, welcome back to theCUBE. Always a pleasure to have you on. >> Thanks for having us. >> Great to be here. >> So we've had a number of deep-dive interviews on theCUBE with Nipun Agarwal. You guys know him; he's a senior vice president of MySQL HeatWave Development at Oracle. I think you just saw him at Oracle CloudWorld, and he's come on to describe what I'll call shock-and-awe feature additions to HeatWave. The company's clearly putting R&D into the platform, and I think at CloudWorld we saw the fifth major release since 2020, when they first announced MySQL HeatWave. So just listing a few: they've brought in analytics, machine learning, and autopilot for machine learning, which is automation, on top of the basic OLTP functionality of the database. And it's been interesting to watch Oracle's converged database strategy. We've contrasted that amongst ourselves. Love to get your thoughts on Amazon's get-the-right-tool-for-the-right-job approach. Are they gonna have to change that? You know, Amazon's got the specialized databases; it's just that both companies are doing well.
It just shows there are a lot of ways to skin a cat, 'cause you see some traction in the market in both approaches. So today we're gonna focus on the latest HeatWave announcements, and we're gonna talk about multi-cloud: a native MySQL HeatWave implementation, which is available on AWS, and MySQL HeatWave for Azure via the Oracle-Microsoft interconnect, this kind of cool hybrid action they've got going. Sometimes we call it supercloud. And then we're gonna dive into MySQL HeatWave Lakehouse, which allows users to process and query data across MySQL databases and HeatWave databases, as well as object stores. HeatWave has been announced on AWS and Azure, and they're available now; Lakehouse I believe is in beta, and I think it's coming out the second half of next year. So again, all of our guests are fresh off of Oracle CloudWorld in Las Vegas, so they've got the latest scoop. Guys, I'm done talking. Let's get into it. Marc, maybe you could start us off. What's your opinion of MySQL HeatWave's competitive position? When you think about what AWS is doing, and Google, we heard Google Cloud Next recently, we heard about all their data innovations, you've got Azure with a big portfolio, obviously, and Snowflake's doing well in the market. What's your take? >> Well, first let's look at it from the point of view that AWS is the market leader in cloud and cloud services. They own somewhere between 30 to 50% of the market, depending on who you read. And then you have Azure as number two, and after that it falls off. There's GCP, Google Cloud Platform, which is further down the list, and then Oracle and IBM and Alibaba. So when you look at AWS and Azure and say, hey, these are the market leaders in the cloud, then you start looking at it and saying, if I am going to provide a service that competes with the service they have, and I can make it available in their cloud, it means that I can be more competitive.
And if I'm compelling, and compelling means at least twice the performance or functionality, or both, at half the price, I should be able to gain market share. >> And that's what Oracle's done. They've taken a superior product in MySQL HeatWave, which is faster and lower cost, and does more for a lot less at the end of the day, and they make it available to the users of those clouds. You avoid this little thing called egress fees, you avoid the issue of having to migrate from one cloud to another, and suddenly you have a very compelling offer. So I look at what Oracle's doing with MySQL and it feels like, I'm gonna use a term, a flanking maneuver against their competition. They're offering a better service on their platforms. >> All right, so thank you for that. Holger, we've seen this sort of cadence, I sort of referenced it up front a little bit: they sat on MySQL for a decade, then all of a sudden we see this rush of announcements. Why did it take so long? And more importantly, is Oracle developing the right features that cloud database customers are looking for, in your view? >> Yeah, great question. But first of all, in your interview he said it's the added analytics, right? Analytics is kind of like a marketing buzzword; reports can be analytics, right? The interesting thing, which they did first, is they crossed the chasm between OLTP and OLAP, right? In the same database, right? A major engineering feat, and very much what customers want. It's all about creating value for customers, which I think is why they go into the multi-cloud and why they add these capabilities. And certainly with the AI capabilities, it's kind of like getting into an autonomous, self-driving field. Now, with the lakehouse capabilities, they're meeting customers where they are; like Marc has talked about, the egress costs in the cloud.
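Marc's rule of thumb composes multiplicatively: the performance advantage divided by the relative price gives work per dollar. A one-liner makes the arithmetic concrete:

```python
def price_performance_advantage(perf_ratio: float, price_ratio: float) -> float:
    """How much more work per dollar the challenger delivers.

    perf_ratio:  challenger performance / incumbent performance
    price_ratio: challenger price / incumbent price
    """
    return perf_ratio / price_ratio

# "Twice the performance at half the price" is a 4x work-per-dollar advantage.
assert price_performance_advantage(2.0, 0.5) == 4.0
```

The same function explains why vendors lead with combined price-performance claims rather than raw speed alone: a 2x speedup at equal price is only a 2x advantage, while pairing it with a price cut multiplies the headline number.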
So that's a significant advantage: creating value for customers, and that's what matters at the end of the day. >>And I believe strongly that long term it's gonna be the ones who create better value for customers who will get more of their money. From that perspective, why did it take them so long? I think it's a great question, and I think it's largely down to who leads the product — the gentleman you mentioned, Nipun Agarwal. I used to build products too, so maybe I'm fooling myself a little here, but that made the difference in my view. Since he's been in charge, he's been building things faster than the rest of the competition in the MySQL space, which in hindsight we thought was a hot, smoking innovation space but was actually a little complacent about the traditional borders where people think things are separated — between OLTP and OLAP, or structured versus unstructured documents, for example — and all of that has been collapsed and brought together to build a more powerful database for customers. >>So certainly, when Oracle talks about the competitors — and I always say, if Oracle talks about you, you know you're doing well — they talk a lot about AWS, a little bit about Snowflake, sort of about Google, and they have partnerships with Azure. So I'm presuming the response in MySQL HeatWave was really a response to what they were seeing from those big competitors. But then you had MariaDB coming out the day Oracle acquired Sun, launching and going after the MySQL base. So I'm interested — and we'll talk about this later — in what you guys think AWS and Google and Azure and Snowflake are gonna do and how they're gonna respond.
But before I do that, Ron, I want to ask you — you can get pretty technical, and you've probably seen the benchmarks. 
>>I know you have. Oracle makes a big deal out of them, publishes its benchmarks, makes some transparent on GitHub. Larry Ellison talked about this in his keynote at CloudWorld. What do the benchmarks show in general? I mean, when you're new to the market you gotta have a story, like Mark was saying — you'd better be 2x the performance at half the cost or you're not gonna get any market share. And oftentimes companies don't publish benchmarks when they're leading; they do it when they need to gain share. So what do you make of the benchmarks? Were there any results that surprised you? Have they been challenged by the competitors? Is it just a bunch of desperate bench-marketing to make some noise in the market, or are they real? What's your view? >>Well, from my perspective, I think they have validity. And to your point, the competitor responses haven't really happened. Nobody has pulled down the information that's on GitHub and said, "Oh, here are our price-performance results," to counter Oracle's. In fact, I think part of the reason that hasn't happened is the risk: if Oracle comes out and says, hey, we can deliver 17 times better query performance using our capabilities versus, say, Snowflake when it comes to the lakehouse platform, and Snowflake turns around and says it's actually only 15 times better, that's not exactly an effective counter. So I think this is really to Oracle's credit, and I think it's refreshing, because these differentiators are significant. We're not talking 1.2% differences.
We're talking 17-fold differences, six-fold differences, depending on where the spotlight is being shined, and so forth. >>And so I think this is something that actually seems too good to believe at first blush. If I'm a cloud database decision maker, I really have to prioritize this — I really would pay a lot more attention to it. And that's why I posed the question to Oracle and others: if these differentiators are so significant, why isn't the needle moving a bit more? And it's for some of the usual reasons. One is really deep discounting coming from the other players — that's marketing 101, something you do when there's a real competitive threat, to keep a customer in your own customer base. Plus there's the usual fear and uncertainty about moving from one platform to another. But I think the traction, the momentum, is shifting in Oracle's favor. I think we saw that in the Q1 results, for example, where Oracle Cloud grew 44% and generated 4.8 billion in revenue, if I recall correctly. So all of this demonstrates that Oracle is making, I think, many of the right moves, and publishing these figures for anybody to examine from their own perspective is, I think, good for the market, and I think it's just gonna continue to pay dividends for Oracle down the road as competition intensifies. So if I were in— >>Dave, can I interject something on what Ron just said there? >>Yeah, please go ahead. >>A couple things here. One, discounting, which is a common practice when you have a real threat, as Ron pointed out, isn't going to help much in this situation, simply because you can't discount your way to better performance, and the performance is a huge differentiator.
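Mark's point — that a discount moves price but never performance — can be made concrete with a toy model. The numbers below are hypothetical placeholders, not actual vendor pricing or benchmark results:

```python
# Why discounting can't close a performance gap: a toy model with
# hypothetical numbers (not real vendor pricing or benchmark figures).

def price_performance(perf: float, price: float) -> float:
    """Relative performance delivered per dollar: higher is better."""
    return perf / price

# Integrated service: one engine, one bill; say 10x baseline performance
# at a unit price.
integrated = price_performance(perf=10.0, price=1.0)

# Disaggregated stack: baseline performance from several stitched-together
# services (so twice the combined bill), now offered with a steep 50%
# discount on that bill.
discounted_stack = price_performance(perf=1.0, price=2.0 * 0.5)

# Even after halving the price, the stack only gets back to its own
# undiscounted unit economics; the 10x performance gap is untouched.
print(integrated / discounted_stack)  # → 10.0
```

The discount acts linearly on the denominator, while an order-of-magnitude performance difference sits in the numerator, which is why, as Mark puts it, you can't discount yourself to a higher value proposition.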
You may be able to get your price down, but the problem most of them have is that they don't have an integrated product. They don't have integrated OLTP, OLAP, ML, and data lake — even if you cut out two of those, they don't have any of them integrated. They have multiple services that require separate integration, and that can't be overcome with discounting. And you have to pay for each one of those services. And, oh, by the way, as you grow, the discounts go away. So that's a minor but important detail. 
>>So that's a TCO question, Mark, right? And I know you look at this a lot. If I had that kind of price-performance advantage, I would be pounding TCO, especially if you need two separate databases to do the job that one can do — the TCO numbers are gonna be off the chart, or maybe down the chart, which is what you want. Have you looked at this, and how does it compare with the big cloud guys, for example? 
>>I've looked at it in depth. In fact, I'm working on another TCO study in this arena, but you can find one on Wikibon in which I compared TCO for MySQL HeatWave versus Aurora plus Redshift plus ML plus Glue. I've compared it against GCP's services, Azure's services, and Snowflake with other services, and there's just no comparison — the TCO differences are huge. More importantly, the TCO per unit of performance is huge. We're talking in some cases multiple orders of magnitude, but at least an order of magnitude difference. So discounting isn't gonna help you much at the end of the day. It's only going to lower your cost a little; it doesn't improve the automation, it doesn't improve the performance, it doesn't improve the time to insight, it doesn't improve all those things you want out of a database, or multiple databases, because you— 
>>Can't discount yourself to a higher value proposition. 
>>So I wonder, Holger, if you could chime in on the developer angle. You follow that market.
How do these innovations from HeatWave play there? I think you've used the term developer velocity — I've heard you use that before. I mean, look, Oracle owns Java, okay, so it's the most popular programming language in the world, blah, blah, blah. But does Oracle have the minds and hearts of developers, and where does HeatWave fit into that equation? >>I think HeatWave is quickly gaining mindshare on the developer side, right? It's not the traditional NoSQL database developers grew up with, and there's a traditional mistrust of Oracle among developers over what happens to open source when it gets acquired — as in the case of Oracle with Java and with MySQL, right? But we know it's not a good competitive strategy to bank on Oracle screwing up, because that hasn't worked — not on Java, not on MySQL, right? And for developers, once you get to know a technology product and you can do more with it, it becomes kind of a Swiss Army knife: you can build more use cases, you can build more powerful applications. That's super, super important, because you don't have to get certified in multiple databases. You're fast at getting things done, you achieve higher developer velocity, and the managers are happy because they don't have to license more things, send you to more trainings, or carry more risk of something not being delivered, right? >>So it's really the suite-versus-best-of-breed play happening here, which in general was already happening with Oracle's flagship database versus Amazon, as an example, right?
And now the interesting thing is, Oracle was always a one-database company — there can be only one — and they're now genuinely talking about HeatWave. They're a two-database company, with different market spaces but the same value proposition: integrating more things very, very quickly to have a universal database — they call it the converged database — for all the needs of an enterprise across certain application use cases. And that's what's attractive to developers. >>It's ironic, isn't it? I mean, the rumor was that TK, Thomas Kurian, left Oracle 'cause he wanted to put the Oracle database on other clouds and other places, and maybe that was the rift. I'm sure there were other things, but Oracle clearly is now trying to expand its TAM, Ron, with HeatWave into AWS, into Azure. How do you think Oracle's gonna do? You were at CloudWorld — what was the sentiment from customers and the independent analysts? Is this just Oracle trying to screw with the competition, create a little diversion? Or is this serious business for Oracle? What do you think? >>No, I think it has legs. I think it's definitely a testament to Oracle's overall ability to differentiate not only MySQL HeatWave but its overall portfolio. And I think the fact that they have the alliance with Azure in place is definitely demonstrating their commitment to meeting the multi-cloud needs of their customers, as well as what we pointed to in terms of the fact that they're now offering MySQL capabilities within AWS natively — and that it can outperform AWS's own offering. And I think this all demonstrates that Oracle is not letting up; they're not resting on their laurels. Clearly we are living in a multi-cloud world, so why not just make it easier for customers to use cloud databases according to their own specific needs?
And I think, to Holger's point, that definitely aligns with being able to bring on more application developers to leverage these capabilities. 
>>I think one important announcement related to all this was the JSON relational duality capability, where now it's a lot easier for application developers to use a format they're very familiar with — JSON — and not have to worry about going into relational databases to store their JSON application data. So this is, I think, an example of the innovation that's enhancing the overall Oracle portfolio, and certainly all the work with machine learning is paying dividends as well. As a result, I see Oracle continuing to make the inroads we pointed to. But I agree with Mark: the short-term discounting is just a stall tactic. It doesn't change the fact that Oracle is able not only to deliver price-performance differentiators that are dramatic, but also to meet a wide range of customer needs that aren't limited to price-performance considerations. 
>>Being able to support multi-cloud according to customer needs, being able to reach out to the application developer community and address a very specific challenge that has plagued them for many years — bringing it all together, I see this as enabling Oracle to ring true with customers. The customers who were there — and that was basically all of them — even though not all of them were saying the same things, were basically giving positive feedback. And likewise, I think the analyst community is seeing this. It's always refreshing to be able to talk to customers directly, and at Oracle CloudWorld there was a litany of them, so this is just a difference maker, as is being able to talk to strategic partners. The Nvidia partnership, I think, is also a testament to Oracle's ongoing ability to make the ecosystem more user-friendly for the customers out there.
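Ron's point about JSON relational duality is easiest to see as a mapping exercise: the application codes against one JSON-shaped document while the store keeps normalized relational rows. The sketch below is a language-neutral illustration of that idea only — the helper names and the order schema are invented for this example, and this is not Oracle's actual API:

```python
# Schematic illustration of the JSON/relational duality idea: the
# application sees a single nested document, while storage keeps
# normalized parent/child rows. Helper names and the order schema are
# invented for this sketch; this is not Oracle's API.

def doc_to_rows(order: dict) -> tuple[dict, list[dict]]:
    """Shred a nested order document into a header row and line-item rows."""
    header = {"order_id": order["order_id"], "customer": order["customer"]}
    lines = [
        {"order_id": order["order_id"], "sku": line["sku"], "qty": line["qty"]}
        for line in order["lines"]
    ]
    return header, lines

def rows_to_doc(header: dict, lines: list[dict]) -> dict:
    """Reassemble the document view the developer actually codes against."""
    return {
        "order_id": header["order_id"],
        "customer": header["customer"],
        "lines": [{"sku": l["sku"], "qty": l["qty"]} for l in lines],
    }

order = {"order_id": 7, "customer": "Acme",
         "lines": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}

# The round trip is lossless: developers keep their JSON shape while the
# database keeps relational structure and integrity underneath.
assert rows_to_doc(*doc_to_rows(order)) == order
```

The appeal Ron describes is that the database does this shredding and reassembly for you, so the developer never has to write the mapping layer by hand.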
>>Yeah, it's interesting: when you get these all-in-one tools, the Swiss Army knife, you expect them not to be best of breed. That's the surprising thing I'm hearing about HeatWave. I want to talk about Lakehouse, because when I think of lakehouse, I think Databricks, and to my knowledge Databricks hasn't been in the sights of Oracle yet — maybe they're next. But Oracle claims that MySQL HeatWave Lakehouse is a breakthrough in terms of capacity and performance. Mark, what are your thoughts on that? Can you double-click on Lakehouse and Oracle's claims for things like query performance and data loading? What does it mean for the market? Is Oracle really leading in the lakehouse competitive landscape? >>Well, the name of the game is: what problems are you solving for the customer? More importantly, are those problems urgent or important? If they're urgent, customers want to solve them now; if they're important, they might get around to them. So you look at what they're doing with Lakehouse — or, previous to that, machine learning; previous to that, automation; previous to that, OLAP with OLTP — and they're merging all this capability together. If you look at Snowflake or Databricks, they're tackling one problem. You look at MySQL HeatWave, they're tackling multiple problems. So when you say their queries are much better against the lakehouse, that's in combination with other analytics, in combination with OLTP, and with the fact that there are no ETLs. You're getting all this done in real time — it's doing the query across everything in real time. 
>>You're solving multiple user and developer problems, you're increasing their ability to get insight faster, you're getting shorter response times. So yeah, they really are solving urgent problems for customers. And by putting it where the customer lives — this is the brilliance of actually being multi-cloud.
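Mark's "no ETLs, query across everything in real time" point is really about freshness: a copy-then-query pipeline can only answer questions about data as of its last copy. The toy sketch below illustrates that gap with plain Python lists standing in for a transactional store and a warehouse — no real database, and purely illustrative numbers:

```python
# Toy contrast for the "no ETL" point: a traditional pipeline copies OLTP
# rows into a separate analytics store before querying, so results lag
# behind writes; an integrated engine queries the live rows directly.
# Plain lists stand in for the stores here — purely illustrative.

oltp_rows = [{"id": 1, "amount": 50}, {"id": 2, "amount": 70}]

# Nightly ETL: snapshot the transactional data into the warehouse.
warehouse = [dict(r) for r in oltp_rows]

# A new transaction lands after the ETL ran.
oltp_rows.append({"id": 3, "amount": 80})

stale_total = sum(r["amount"] for r in warehouse)   # misses the new row
live_total = sum(r["amount"] for r in oltp_rows)    # integrated-engine view

print(stale_total, live_total)  # → 120 200
```

The gap between the two totals is exactly the window an ETL-based architecture is blind to, which is what "query across everything in real time" eliminates.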
And I know I'm backing up here a second, but by making it work in AWS and Azure, where people already live, where they already have applications, what they're saying is: we're bringing it to you; you don't have to come to us to get these benefits, this value. Overall, I think it's a brilliant strategy. I give Nipun Agarwal huge kudos for what he's doing there. So yes, what they're doing with the lakehouse is going to put Databricks and Snowflake — and everyone else, for that matter — on notice. >>Well, those are the guys, Holger — you and I have talked about this — those are the guys doing sort of the best of breed. They're really focused, and they tend to do well, at least out of the gate. Now you've got Oracle's converged philosophy; we've seen that with the Oracle database, and now it's kicking into gear with HeatWave. This whole thing of suites versus best of breed — in the long term, customers tend to migrate toward suites, but the new shiny toy tends to get the growth. How do you think this is gonna play out in cloud databases? >>Well, it's the never-ending story in software, right? Suites versus best of breed — and so far, in the long run, suites have always won. Sometimes they struggle, because the inherent problem of suites is that you build something larger, it has more complexity, and that means your cycles to get everything working together — to integrate, test, roll out, certify, whatever it is — take longer, right? And that's not the case here; the fascinating part of the effort around MySQL HeatWave is that the team is out-executing the previous best-of-breed players while bringing things together. Now, whether they can maintain that pace remains to be seen.
But the strategy, like Mark was saying — bring the software to the data — is of course interesting and unique, and totally an Oracle issue in the past, right? 
>>Yeah, but it had to be in your database on OCI. And that's the interesting part. The interesting thing on the lakehouse side is that there are three key benefits of a lakehouse. The first one is better reporting and analytics — bringing richer information together. Take the case of SiliconANGLE, right? We want to see engagement for this video, we want to know what's happening. That's a mixed transactional-and-video-media use case — a typical lakehouse use case. The next one is building richer applications: transactional applications that have video and these other elements in them, which are the engaging ones. And the third one — and that's where I'm a little critical and concerned — is that the lakehouse is really the base platform for artificial intelligence, right? To run deep learning, to run things automatically, because you have all the data in one place, created in one way. 
>>And that's where Oracle — I know Ron talked about Nvidia for a moment — doesn't have the strongest story. Nonetheless, the two other main use cases of the lakehouse are very strong. My only concern is that the 400-terabyte limit sounds low. It's an arbitrary limitation, and since this is the first release, they can make it bigger. You don't want your lakehouse to be limited to particular terabyte or even petabyte sizes, because you want the certainty that you can put everything in there that might be relevant, without knowing in advance what questions you'll ask, and then query it. 
>>Yeah. And you know, in the early days of schema-on-read it just became a mess, but now technology has evolved to let us actually get more value out of that data lake — the "data swamp" isn't so much the story anymore.
But — and in a moment I want to come back to how you think the competitors are gonna respond, whether they'll have to take more of a converged approach, AWS in particular — before I do, Ron, I want to ask you a question about Autopilot. I heard Larry Ellison's keynote, and he was talking about how most security issues are human errors, and with autonomy, with the autonomous database and things like Autopilot, we take care of that — it's like autonomous vehicles, they're gonna be safer. And I went, well, maybe, maybe someday. Oracle really emphasizes this: every time you see an announcement from Oracle, they talk about new autonomous capabilities. How legit is it? Do people care? What's new for HeatWave Lakehouse there? How much of a differentiator, Ron, do you really think Autopilot is in this cloud database space? >>Yeah, I think it will definitely enhance the overall proposition. I don't think people are gonna buy Lakehouse exclusively because of Autopilot capabilities, but when they look at the overall picture, I think it will be an added capability bonus to Oracle's benefit. And it's kind of one of these age-old questions: how much do you automate, and what is the balance to strike? I think we all understand, with the autonomous-car analogy, that there are limitations to it. However, I think it's a tool that basically every organization out there needs to at least have, or at least evaluate, because it helps with ease of use, and it helps make automation more balanced — being able to test, all right, let's automate this process and see if it works well, then we can go on and switch on Autopilot for other processes.
>>And then that allows, for example, the specialists to spend more time on business use cases versus manual maintenance of the cloud database and so forth. So I think that is a legitimate value proposition. It's just gonna be a case-by-case basis: some organizations are gonna be more aggressive about putting automation throughout their processes and their organization; others are gonna be more cautious. But it's gonna be, again, something that helps the overall Oracle proposition — something that will be used with caution by many organizations, while others are gonna say, hey, great, this answers a real problem: it eases the use of these databases and delivers the automation benefits, without a major screwup happening in the process of transitioning to more automated capabilities. 
>>Now, I didn't attend CloudWorld — too many red-eyes recently, so I passed. But one of the things I like to do at those events is talk to customers in the spirit of the truth — you have the hallway track, and customers will tell you, hey, here's the good, the bad, and the ugly. So did you guys talk to any MySQL HeatWave customers at CloudWorld, and what did you learn? Mark, did you have any luck having some private conversations? 
>>Yeah, I had quite a few private conversations. One thing before I get to that: I want to disagree with one point Ron made. I do believe there are customers out there buying the MySQL HeatWave service because of Autopilot.
Because Autopilot is really revolutionary in many ways for the MySQL developer: it auto-provisions, it auto-parallel-loads, it does auto data placement and auto shape prediction, and it can tell you which machine learning models are going to give you your best results. And, candidly, I've yet to meet a DBA who didn't want to give up pedantic tasks that are a pain in the kazoo — tasks they'd rather not do, as long as they're done right for them. So yes, I do think people are buying it because of Autopilot, and that's based on some of the conversations I had with customers at Oracle CloudWorld. 
>>In fact, it was like: yeah, we get fantastic performance, but this really makes my life easier — and I've yet to meet a DBA who didn't want to make their life easier. And it does. So yeah, I talked to a few of them. They were excited. I asked them if they ran into any bugs, whether there were any difficulties in moving to it, and the answer was no in both cases. It's interesting to note that MySQL is the most popular database on the planet — well, some will argue it's neck and neck with SQL Server, but if you add in MariaDB and Percona Server, which are forks of MySQL, then yeah, by far and away it's the most popular. As a result, just about everybody has a MySQL database somewhere in their organization. So this is a brilliant situation for anybody going after MySQL, but especially for HeatWave. And the customers I talked to love it; I didn't find anybody complaining about it. 
>>What about the migration? We talked about TCO earlier. Does your TCO analysis include the migration cost, or do you kind of conveniently leave that out? 
>>Well, when you look at migration costs, there are different kinds. By the way, the worst job in the data center is the data migration manager — forget it, no other job is as bad as that one. You get no attaboys for doing it right.
And when you screw up — oh boy. So in real terms, anything that can limit data migration is a good thing, and this approach limits data migration. If you're already a MySQL user, this is pure MySQL as far as you're concerned; it's just a simple transition from one to the other. You may wanna make sure nothing broke, that all your tables are correct and your schemas are okay, but it's all the same — a simple migration, pretty much a non-event. When you migrate data from an OLTP to an OLAP system, that's an ETL, and that's gonna take time. 
>>But you don't have to do that with MySQL HeatWave, so that's gone. When you start talking about machine learning, again, you may have an ETL, you may not, depending on the circumstances — but with MySQL HeatWave you don't, and you don't have duplicate storage. You don't have to copy data from one storage container to another to use it in a different database, which, by the way, ultimately adds much more cost than just the extra service itself. So yeah, I looked at the migration, and again, the users I talked to said it was a non-event. It was literally like moving from one physical machine to another: if they had a version of MySQL running on something else and just wanted to migrate over, or just connect it to the data, it worked just fine. 
>>Okay, so it sounds like you guys feel — and we've certainly heard this; my colleague David Floyer, the semi-retired David Floyer, was always very high on HeatWave — so I think that gives it some real legitimacy coming from a standing start. But I wanna talk about the competition and how they're likely to respond. I mean, if you're AWS and HeatWave is now in your cloud, there are some good aspects to that — the database guys might not like it, but the infrastructure guys probably love it.
Hey, more ways to sell EC2 and Graviton. But the database guys at AWS are gonna respond. They're gonna say, hey, we've got Redshift, we've got AQUA. What are your thoughts on how that's gonna resonate with customers? And I'm interested in what you guys think — I never say never about AWS — are they gonna try to build, in your view, a converged OLAP-and-OLTP database? Snowflake is taking an ecosystem approach, and they've added transactional capabilities to the portfolio, so they're not standing still. What do you guys see in the competitive landscape going forward? Maybe Holger, you could start us off, and anybody else who wants to can chime in. 
>>Happy to. You mentioned Snowflake last, so we'll start there. I think Snowflake is imitating that strategy, right? They built out the original data warehouse in the cloud, and now the task is to really make other data available there, because AI is relevant for everybody — ultimately people keep data in the cloud to run AI on it. So you see the same suite-level strategy; it's just gonna be a little harder because of their original positioning — how many people would know they're doing other stuff? And, as a former manager of developers, I just don't see the speed at Snowflake at the moment to become really competitive with Oracle. On the flip side, putting my Oracle hat on for a moment — back to you, Mark and Ron — what could Oracle still add? Because the big things, the traditional chasms in the database world — they have built everything, right? 
>>So I really scratched my head and gave Nipun a hard time at CloudWorld: what could you still be building? And he was very conservative: let's get the Lakehouse thing done, it's shipping next year, right? And AWS is really hard to call, because the AWS value proposition is these small innovation teams, right?
They build two-pizza teams — teams that can be fed by two pizzas — not large teams, right? And you need large teams to build these suites, with lots of functionality, to make sure everything works together, is consistent, has the same UX on the administration side, can be consumed the same way, has the same API registry — I can't even stop listing where the synergy comes into play with a suite. So it's gonna be really, really hard for them to change that. But AWS is super pragmatic; they'll listen to customers, and if they learn from customers that the suite is the proposition, I would not be surprised if AWS tries to bring things closer together. 
>>Yeah. Well, how about — can we talk about multi-cloud? Again, Oracle is very on-message about this, as you said before, but let's look forward half a year or a year. What do you think about Oracle's moves in multi-cloud in terms of what kind of penetration they're gonna have in the marketplace? You saw a lot of presentations at CloudWorld, and we've looked pretty closely at the Microsoft Azure deal — I think that's really interesting; I've called it a little bit of the early days of a supercloud. What impact do you think this is gonna have on the marketplace? And think about it both within Oracle's customer base — I have no doubt they'll do great there — and beyond its existing install base. What do you guys think? 
>>Ron, do you wanna jump on that? Go ahead. 
>>That's an excellent point. I think it aligns with what we've been talking about in terms of Lakehouse. I think Lakehouse will enable Oracle to pull more customers, more MySQL customers, onto the Oracle platforms. And I think we're seeing all the signs pointing toward Oracle being able to make more inroads into the overall market.
And that includes garnering customers from the leaders — in other words, because they're coming in as an innovator, an alternative to the AWS proposition, the Google Cloud proposition, they have less to lose, and as a result they can really drive the multi-cloud messaging to resonate not only with their existing customers but also — to the question Dave's posing — actually win customers onto their platform. And that includes naturally MySQL, but also OCI and so forth. So that's how I see this playing out. I think, again, Oracle's reporting is indicating that, and I think what we saw at Oracle CloudWorld definitely validates the idea that Oracle can make more waves in the overall market in this regard. 
>>You know, I've floated this idea of supercloud — it's kind of tongue in cheek, but I think there is some merit to it in terms of building on top of hyperscale infrastructure and abstracting some of that complexity. And one of the things I'm most interested in is industry clouds, and Oracle's acquisition of Cerner. I was struck by Larry Ellison's keynote — it ran, I don't know, an hour and a half, and an hour and 15 minutes of it was focused on healthcare transformation. 
>>So, vertical. 
>>Right. And so you've got Oracle with some industry chops, and then you think about what they're building with not only OCI but also MySQL, which you can now run in dedicated regions. You've got ADB on Exadata Cloud@Customer — you can put that on-prem in your data center — and you look at what the other hyperscalers are doing. I say "other" hyperscalers — I've always said Oracle's not really a hyperscaler, but they've got a cloud, so they're in the game. But you can't get BigQuery on-prem, and Outposts is very limited in terms of database support — and again, that will evolve.
But now, Oracle's announced Alloy, where you can white-label their cloud. So I'm interested in what you guys think about these moves, especially the industry cloud. We see, you know, Walmart doing sort of their own cloud, you've got Goldman Sachs doing a cloud. What do you guys think about that, and what role does Oracle play? Any thoughts? >> Yeah, let me jump on that for a moment. Especially with MySQL, by making that available in multiple clouds, what they're doing follows the philosophy they've had in the past with Cloud@Customer: taking the application and the data and putting it where the customer lives. If it's on premise, it's on premise. If it's in the cloud, it's in the cloud. By making MySQL HeatWave essentially plug-compatible with any other MySQL as far as your database is concerned, and then giving you that integration with OLAP and ML and data lake and everything else, what you've got is a compelling offering. You're making it easier for the customer to use. So if I look at the difference between MySQL and the Oracle Database, MySQL is going to capture more market share for them. >> You're not going to find a lot of new users for the Oracle Database. Yeah, there are always going to be new users, don't get me wrong, but it's not going to be huge growth. Whereas MySQL HeatWave is probably going to be a major growth engine for Oracle going forward, not just in their own cloud, but in AWS and in Azure, and on premise over time. It will eventually get there; it's not there now, but it will. They're doing the right thing on that basis. They're taking the services, and when you talk about multicloud, that's making them available where the customer wants them, not forcing the customer to go where you want them, if that makes sense. And as far as where they're going in the future, I think they're going to take a page out of what they've done with the Oracle Database.
They'll add things like JSON and XML and time series and spatial. Over time they'll make it a complete converged database, like they did with the Oracle Database. The difference being that the Oracle Database will scale bigger, handle more transactions, and be somewhat faster, and MySQL will be for anyone who's not on the Oracle Database. They're not stupid, that's for sure. >> They've done JSON already, right? But I'll give you that they could add graph and time series to HeatWave. >> Right, right. Yeah, that's absolutely right. >> A sort of logical move, right? >> Right. But let's not kid ourselves, right? I mean, what has worked in Oracle's favor? 10x, 20x the amount of R&D that's in the MySQL space has been poured into trying to snatch workloads away from Oracle, starting with IBM 30 years ago, Microsoft 20 years ago, and it didn't work, right? Database applications are extremely sticky: when they run, you don't want to touch them, you maintain and grow them, right? So that doesn't mean HeatWave is not an attractive offering, but it will be for net-new things, right? And what works in MySQL HeatWave's favor a little bit is that it's not the massive enterprise applications. You might be running only 30% on Oracle, but the connections and the interfaces into that are like 70, 80% of your enterprise. >> You take it out and it's like the spaghetti ball, where you say, ah, no, I really don't want to do all that, right? You don't have that massive pull with the MySQL HeatWave kind of databases, which are smaller, more tactical in comparison. But still, I don't see them taking that much share. They will be growing because of an attractive value proposition. Quickly on the multi-cloud, right? I think it's not really multi-cloud if you just give people the chance to run your offering on different clouds. You can run it there.
The multi-cloud advantage is when the uber offering comes out, which allows you to do things across those installations, right? I can migrate data, I can query data across clouds, something like Google has done with BigQuery Omni; I can run predictive models, or even train models in different places and distribute them, right? And Oracle is paving the road for that by being available on these clouds. But the multi-cloud capability of a database which knows it's running on different clouds, that is still yet to be built. >> Yeah, and... >> The problem with... >> That's the supercloud concept that I floated, and I've always said kind of Snowflake, with a single global instance, is sort of, you know, headed in that direction, and maybe has a lead. What's the issue with that, Mark? >> Yeah, the problem with that version of multi-cloud is that clouds charge egress fees. As long as they charge egress fees to move data between clouds, it's going to make it very difficult to do a real multi-cloud implementation. Even Snowflake, which runs multi-cloud, has to pass on the egress fees to their customers when data moves between clouds. And that's really expensive. I mean, there is one customer I talked to who is beta testing MySQL HeatWave on AWS for them. The only reason they didn't want to do it until it was running on AWS is that the egress fees to move the data to OCI were so great that they couldn't afford it. Yeah, egress fees are the big issue, but... >> But Mark, the point might be that you might want to route the query and only get the result set back, which is much tinier. That's been the answer before for the low-latency-between-clouds problem, which we sometimes still have, but mostly don't, right? And I think in general, with fees coming down, and based on Oracle's egress fee move, it's very hard to justify those, right? But it's not about moving data as the multi-cloud high-value use case.
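As an aside, Mark's egress-fee point is easy to put numbers on. A toy sketch with made-up per-GB rates (real cloud price lists vary by provider, region, tier, and volume), contrasting shipping a whole dataset across clouds with running the query where the data lives and returning only the result set:

```python
# Hypothetical per-GB egress rate for illustration only; this is not
# any provider's actual price list.
EGRESS_RATE_PER_GB = 0.09

def egress_cost(gb_transferred, rate_per_gb=EGRESS_RATE_PER_GB):
    """Cost of moving data out of one cloud into another."""
    return gb_transferred * rate_per_gb

# Option 1: replicate a 50 TB dataset to the other cloud.
move_dataset = egress_cost(50_000)

# Option 2: run the query remotely and return only the result set
# (say, 2 GB), as suggested above.
remote_query = egress_cost(2)

print(f"move dataset: ${move_dataset:,.2f}")  # move dataset: $4,500.00
print(f"remote query: ${remote_query:,.2f}")  # remote query: $0.18
```

The three-orders-of-magnitude gap is why "route the query, not the data" keeps coming up in these discussions.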
It's about doing intelligent things with that data, right? Putting it into other places, replicating it, and I'm saying the same thing you said before, running remote queries on it, analyzing it, running AI models on it. That's the interesting thing. Administering it across clouds in the same way, taking things out, making sure compliance happens, making sure that when Ron says, "I don't want to be in an American cloud anymore, I want to be in a European cloud," that it gets migrated, right? So those are the interesting high-value use cases which are really, really hard for enterprises to program by hand with developers, and which they would love to have out of the box, and that's the innovation yet to come, that we have yet to see. But the first step to get there is that your software runs in multiple clouds, and that's what Oracle's doing so well with MySQL. >> Guys, amazing. >> Go ahead. Yeah. >> Amazing amount of data, knowledge, and brain power in this market. Guys, I really want to thank you for coming on theCube. Ron, Holger, Mark, always a pleasure to have you on. Really appreciate your time. >> Well, for all the last names, we're very happy. Thanks, Dave, for moderating us. All right. >> We'll see you guys around. Safe travels to all, and thank you for watching this power panel, The Truth About MySQL HeatWave, on theCube, your leader in enterprise and emerging tech coverage.
Ramesh Prabagaran, Prosimo | CUBE Conversation
(upbeat music) >> Hello, welcome to this Cube Conversation here in Palo Alto, California. I'm John Furrier, host of theCube. We have a returning Cube alumni, Ramesh Prabagaran, who is the co-founder and CEO of Prosimo.io. Great to see you, Ramesh. Thanks for coming in to our studio, and welcome to the new layout. >> Thanks for having me here, John. After a series of Zoom conversations, it's great to be live and in the flesh! >> Great to be in person. We also got a new stage for our Supercloud event, which we've been opening up to the community, looking forward to getting your perspective on that soon as well. But I want to keep the conversation really about you guys. I want to get the story down. You guys came out of stealth, Multicloud, Supercloud is right in your wheelhouse. >> Exactly. >> You got to love Supercloud. >> Yeah. As I walked in, I saw Supercloud all over the place, and it just gives you a jolt of energy. >> Well, you guys are in the middle of the action. Your company, I want you to explain this in a minute, is in the middle of this next wave. Because we had the structural change I called Cloud One. Amazon, use case, developers, no need to build a data center, all that goodness happens, higher-level services of abstraction are happening, and then Azure comes in. More PaaS, and then more install base, now they're nipping at the heels. So full-on hyperscale, CapEx growth, great for everybody. Now come new use cases. Cloud to cloud, app to app, you see Databricks, Snowflake, MongoDB, all doing extremely well by leveraging that CapEx, and now it's an ops problem. >> Exactly. >> Now ops and security. >> Yeah. It's speed of applications. >> How are you guys vectoring into that? Explain what you guys do. >> Absolutely. So let me take kind of the customer pain point first, right? Because it's always easier to explain that, and then we explain what it is that we do. So, it's no surprise.
Applications are moving into the cloud, or people are building apps in the cloud, in masses. The infrastructure that's sitting in front of these applications, cutting across networking, security, and the operational piece associated with that, does not move at the same speed. The apps sometimes get upgraded two, three times a day; the infrastructure gets touched one time a week at best. And so increasingly, the cloud platform teams, the developers, are all like, "Hey, why? Why? Why?" Right? "I thought things were supposed to move fast in the cloud." It doesn't. Now, if you double click on that, really, it's two reasons. One, those that want to have consistency with the stack that they had in the data center bring a virtual form factor of that stack and line it up in the cloud, and before you know it, it's cost, it's operational complexity, there are multiple single panes of glass, all the fun stuff associated... >> Just to interject real quick. It is fast in the cloud if you're a developer. >> Exactly. >> So it's kind of like, hurry up, slow down, wait. >> Correct. >> So the developers are shifting left, open source is booming. Things are fine for developers right now. If you're a developer, things are good. >> But the guy sitting in front of that... >> The ops guys, they've got to deal with things like lock-in, choice, security. >> Exactly. And those are really the key challenges. We've seen some that actually said, "Hey, you know what, I don't want to bring my data center stack into the cloud. Let me go cloud-native." And they start to build it up. 14 services from AWS, 15 from Azure, 14 more from GCP, even if you are in a single cloud. They just keep it to that. I need to know how to put this together. Because all these services are great, but how do I put this together? And enterprises don't have just one application, they have hundreds of these applications.
So the requirements of a database are different than a service mesh, different than a serverless application, different than a web application. And before you know it, "How do I put all these things together?" And so we looked at this problem, and we said, "Okay. We subscribe to the fact that cloud-native is the way to go, right, but something needs to be there to make this simple." Right? And so, the first thing that we did was bring all these cloud-native services together, we help orchestrate that, and we said, "Okay, you know what, Mr. Enterprise? We got you covered." Right? But now, it doesn't stop there. That's like, 10% of the value, right? What do you really need? What do you care about now? Because the apps are in the center of the universe, and who's talking to it? It's another application sitting either in the same cloud, or in a different cloud, or it's a user connecting into the application. So now, let's talk about what are the networking, security, and operational requirements required for these apps to talk to each other, or the user to talk to the application. That's really what we focus on. >> Yeah. And I think one of the things that's driving this opportunity for you, and I want to get your reaction to this, is that the modern application movement is all about cloud-native. Okay, they're obviously doing great. Now, kind of the kumbaya moment in enterprise is that the security team and ops teams have to play ball and be friends with the developer, and vice versa. So harmony's coming there, a little harmony. And two, the business is driving apps. IT is transforming over. This is why the Supercloud idea is interesting to Dave and I. Because when we coined that term, multi-cloud was not a market. Everyone has multiple clouds, 'cause they have Microsoft Office, that's now in the cloud, they got SQL Server, I mean it's really kind of Microsoft Cloud. >> Exactly. >> So you have a cloud. But do you have ops teams building on the stack?
What about the network layer? This is where the rubber meets the road. >> Absolutely, yeah. And if you look at the challenges there, just focus on networking and security, right? When applications need to talk to each other, you have a whole bunch of underlying services, but somebody needs to put this thing on top. Because what you care about is "can these groups of users talk to these classes of applications." Or, "these groups of applications, can they talk to each other," right? This whole notion of connectivity is just table stakes. Everybody just assumes it's there, right? It's the next layer up, which is, "how do I bring Zero Trust access? How do I get the observability?" And observability is not just a bunch of pretty donut charts. I have had people look at me in my previous company, the start-up, and say, "Okay, give me all these nice donut charts, but so what? What do you want me to do with this?" And so you have to translate that into real actions, right? "How do I bring Zero Trust capabilities? How do I bring the observability capabilities? How do I understand cloud-native and networking and bring those things together so that you can help solve for the problem?" >> It's interesting, one of the questions I had here to ask you was "what does it mean to be cloud-native, and why now?" And you brought up Zero Trust, trust and verify, these are security concepts. But if you look at what's going on at KubeCon and CNCF and the Linux Foundation, software supply chain's a huge issue, where trust is the issue. They want trust there, so you've got Zero Trust here. What is it? Zero Trust or trust? I mean, what's there? Is one hardware-based, perimeter networking? That kind of perimeter's dead, ton of... >> No, the whole... >> Trust or Zero Trust. >> The whole concept of Zero Trust is don't trust what is underlying, just trust what you're talking to.
So if you and I are talking to each other, John, you need to trust me, I need to trust you, to be able to have this conversation. >> You've been verified. >> Exactly, right? But in the application world, if you talk about two apps that are talking to each other, let's say there is a web application in one AWS region talking to a database in a different region, right? Now, do you want to make sure you are able to build that trust all the way from the application to the application? Or do you want to move the trust boundary to the two entities that are talking to each other, so that irrespective of what goes on underneath the covers, you can always be sure that these two things are trusted? >> So, Ramesh, I was on LinkedIn yesterday, I wrote a comment, Dave Vellante wrote a post on Supercloud, we're talking about it, and I wrote, "Cloud as a commodity," question, and then a bunch of other stuff that we're going to talk about, and Keith Townsend jumped on that, and got on Twitter, put a poll, "Is cloud a commodity? Source: me." So, it started a big thread. And the reaction was interesting. And my point was to be provocative on "Cloud isn't commodity, but there's commodity elements." EC2 and S3, you can look at that and say, "that's commodity IaaS," but Amazon Web Services has done an amazing job with higher-level services. Okay, so how does that translate into the use cases that you see, that you guys are going after and solving? Because it's the same kind of concept: IaaS and SaaS have to work together to solve problems, but that's in an integrated environment, say, in a native cloud. How does that work across clouds? >> Yeah, no, you bring up a great point, John. So, let's take the simple use case, right? Let's keep the user-to-app thing to the side. Let us say two apps need to talk to each other, right? There are multiple ways in which you can solve this problem. You can build highways. That's what our customers call it. I'll build highways.
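The "move the trust boundary to the two entities" idea can be pictured as an authorization check keyed on workload identity rather than network location. A hedged sketch, not Prosimo's actual implementation; the app names and helper below are invented for illustration:

```python
# Zero Trust sketch: the decision depends on who is talking to whom,
# not on which network or cloud either workload happens to sit in.
ALLOWED_PAIRS = {
    ("web-frontend", "orders-db"),     # hypothetical app identities
    ("orders-db", "audit-service"),
}

def may_talk(src_identity: str, dst_identity: str) -> bool:
    """Allow traffic only for explicitly trusted identity pairs."""
    return (src_identity, dst_identity) in ALLOWED_PAIRS

# The underlying path (VPC peering, transit gateway, cross-cloud
# backbone, a "highway") plays no part in the decision:
print(may_talk("web-frontend", "orders-db"))      # True
print(may_talk("web-frontend", "audit-service"))  # False
```

In a real deployment the identities would come from attested credentials such as mutual-TLS certificates rather than plain strings, but the trust boundary sits at the two endpoints either way.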
I don't care what goes on those highways, I'll just build highways. You bring any kind of application workload on it, I just make sure that the highways are good, right? That's kind of the lowest common denominator. It's the path of least resistance. You can get stuff done, but it's not going to move the needle, right? Then you have really modern, kind of service networking, where, okay, I'm looking at every single HTTP API endpoint, whatnot, and I'm optimizing for that. Right? Great if you know what you're doing, but, like, if you have thousands of these applications, it's not going to be really feasible to do that. And so, what we have seen customers do, actually, is employ a mixed approach, where they say, "I'm going to build these highways, the highways are going to make sure that I can go from one place to another, and maybe within regions, across clouds, whatnot, but then, I have specific requirements that my business needs, that actually need tweaking, right? And so I'm going to tweak those things." That's why what we call full stack transit is exactly that, right, which is, I'll build you the guts of it so that, hey, you know what, if somebody screams at you, "Hey, why is my application not accessible?" you don't have that problem. It is always accessible. But then, the requirements for performance, the requirements for Zero Trust, the requirements for segmentation, and all of that are things that... >> That's a hard problem. >> That's a hard problem to solve. >> And you guys are solving that? >> Absolutely, exactly. >> So, let me throw this at you. So, okay, I get that. And by the way, that's exactly what we're seeing. Dave and I were also debating about multi-cloud and what it is. Now, the nirvana definition is, "Well, I have a workload that's going to work the same, and just magically shift to Azure." (Ramesh laughs)
Now, Databricks and Snowflake, they're building their software to run on multi-cloud seamlessly. Now they can do that, 'cause it's their application. What is the multi-cloud use case, so that's a Supercloud use case in your mind, because right now it's not yet there. What is the Supercloud use case that's going to allow this seamless management or workloads. What's your view? >> Yeah, so if you take enterprise, right? Large enterprise in particular. They invariably have some workloads that are on, let's say, if the primary cloud is AWS, there are some workloads in Azure. Maybe they have acquired a new company, maybe a start-up that uses GCP, whatnot. So they have sprinkles of workloads in other clouds. >> So that's the breed kind of thing. >> Yeah, exactly. That's not what causes anybody to wake up in the morning and say, "I need to have a Supercloud strategy." That's not the thing, right? But now, increasingly you're seeing "pick the right cloud for the appropriate workload." That is going to change quite a bit. Because I have my infrastructure heavy workloads in AWS. I have quite a bit of like, analytics and mining type of applications that are better on GCP. I have all of my package applications work well on Azure, right? How do I make sure all of this. And it's not apps of this kind. Even simple things like VDI. VDI always used to be, "I have this instance I run up" and whatnot. Now every single cloud provider is giving you their own flavor of virtual desktop. And so, how do you make sure all of these things work together, right? And once again, what we have seen customers do is they settle on one cloud as their primary, but then you always have sprinkles of workloads across all of the clouds. Now, you could also go down the path, and you're increasingly seeing this, you could go down the path of, "Hey, I'm using cloud as backbone," right? Cloud providers have invested massive amounts of dollars to make sure that the infrastructure reaches there. 
Literally almost to the extent that every user in a metro city is ten milliseconds from the public cloud. And so they have allowed for that. Now, you can actually use cloud backbones to get the availability, the reliability, and whatnot. So these are some new use cases that we have seen actually blow up with customers. >> I was just doing an interview, and the topic was the innovator's dilemma. And one of the panelists said, "It's not the innovator's dilemma, it's the integrator's dilemma." Because if you have commodity, and you have choices on, say, backbones and whatnot for transit, the integration is the key glue now. What's your reaction to that? >> Absolutely. And we have seen, we used to spend quite a bit of time on kind of the day zero problem, right? Like, how do I put this together? Conversations have moved past that, because there are multiple ways in which you can do that right now, right? Conversations are moving to kind of, "this is more of an operational problem for me." It's not just operations in the form of "Hey, I need to find out where the problem is, troubleshoot it, and so forth." But I need to make really high quality decisions. And those decisions are going to be guided by data. We have enterprise customers that acquire new companies. Or they have a new site that they open up. >> It's a mishmash. >> Yeah, exactly. It's a New York based company and they acquire a team out in Sydney, Australia, right? Does your cloud tell you today that you have new users, or new applications, that are in Sydney, and naturally just extend? No, it doesn't. Somebody has to look at the macro problem, look at "Where are all my workloads?", do a bunch of engineering to make that work, right? We took it upon ourselves to say, "Hey, you know what, twenty-four hours later, you're going to get a recommendation in the platform that says, okay, you have a new set of applications, a new set of users coming from Sydney, Australia, what have you done about it?"
Click a button, and then you expand on it. >> It's kind of like how IT became the easy way to run the data center. Before IT you had to be a PhD, and roll out, I mean, you know how it was, right? So you're kind of taking that same approach. Okay, well, Ramesh, great stuff. I want to do a followup, certainly, with you on this. 'Cause you're in the middle of where this wave is going, this structural change, and certainly can participate in that Supercloud conversation. But for your company, what's going on there? Give us an update, customer activity, what's it like, you guys came out of stealth, what's been the reaction, give a plug for the company, who are you going to hire, take a minute to plug it. >> Oh, wonderful, thank you. So, primary use cases are really around cloud networking. How do you go within the cloud, and across clouds, and to the cloud, right? So those are really the key use cases. We go after large enterprises predominantly, but any kind of mid enterprise that is extremely cloud oriented, has lots of workloads in the cloud, is equally applicable there. So we have about 60 of the Fortune 500s that we are engaged with right now. Many of them are paying customers as well. >> How are they buying, service? Is it... >> Yeah. So we provide software that actually sits inside the customer's own administrative control, delivered as a service, that they can use to go- >> So on-premise hosting or in the cloud? >> Entirely in the cloud, delivered as a service, so they don't need to take care of the maintenance and whatnot, but they just consume it from the cloud directly, okay? And so, where we are right now is essentially, I have a bunch of repeatable use cases that many customers are employing us for. So again, building highways, many different ways to build highways, at the same time taking care of the micro-segmentation requirements, and then importantly, this whole NetDevOps, right? This whole NetDevOps is a cultural shift that we have seen.
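As an aside, the "new users from Sydney" recommendation described a moment ago reduces, at its core, to diffing where users and apps show up against where the network footprint already is. A toy sketch, with invented region names and nothing vendor-specific:

```python
# Toy version of the expansion recommendation: flag locations with
# observed activity that the deployed cloud network footprint does
# not yet cover. Region names are hypothetical.
def recommend_expansion(deployed_regions, observed_sources):
    """Return regions with activity but no deployed network presence."""
    return sorted(set(observed_sources) - set(deployed_regions))

deployed = {"us-east-1", "eu-west-1"}
observed = {"us-east-1", "ap-southeast-2"}  # e.g. a newly acquired Sydney team

print(recommend_expansion(deployed, observed))  # ['ap-southeast-2']
```

The real product would derive both sets from telemetry over a window (hence the "twenty-four hours later"), but the shape of the decision is the same.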
So if you are a network engineer, NetDevOps seems like a foreign term, right? But if you are an operational engineer, then NetDevOps, you know exactly what to do. So bringing all those principles together, making sure that the networking teams are empowered to essentially embrace the cloud: the single biggest thing that we have done, I would say done well, is we have built very well on top of the cloud providers. So we don't go against cloud-native services. They have done that really, really well. It makes no sense to go say, "I have a better transit gateway than you." No. Hands down, an AWS Transit Gateway, or an Azure Virtual WAN and whatnot, are some of the best services that they have provided. But what does that mean? >> How do you build software into it? >> Exactly, right? And so how can you build a layer of software on top, so that when you attach that into the applications, right, you can actually get the experience required, you can get the security requirements, and so forth. So that's kind of where we are. We're also humbled by essentially some of the mega partners that have taken a bet on us, sometimes to the extent that, we're a 70-person company, and some of the partners that we are talking to actually are quite humbling, right? >> Hey, lot more resources. >> Exactly, yeah. >> And how many rounds of financing have you done? >> So we have done two rounds of financing, we have raised about $55,000,000 in capital, again, a really great set of investors backing us up, and a strong sense of conviction on kind of where we are going. >> Do you think you're early, or not? 'Cause that's always probably the biggest scary... I can see the smile. Is that what keeps you up at night? >> So, yeah, exactly, I go through these phases internally in my head. >> The vision's right on the money, no doubt about it.
>> So when you win an opportunity, and we have like, a few dozen of these, right, when you win an opportunity, you're like, "Yes, absolutely, this is where it is," right, and then you go for a week and you don't win something, and you're like, "Hey man, why are we not seeing this?" Right, and so you go through these cycles, but I'll tell you with conviction, the fact that customers are moving workloads into the public cloud, not in dozens but in the hundreds and the thousands, essentially means that they need something like this. >> And the cloud-native wave is driving it big time. >> Exactly, right. And so, when the customer has a conversation with AWS, Azure, GCP, and they are privy to all the services, and we go in after that and talk about "How do I put this together and help you focus on your outcomes?", that mentally moves them. >> It's a day zero opportunity, and then you got headroom beyond that. >> Exactly. So that's the positive side of it, and enterprises certainly are sometimes a little cautious about ramping up new technologies and so forth. It's a natural cycle. Fortunately, again, we are humbled by the fact that we have a few dozen of the pioneering customers that are using our platform. That gives you the legitimacy for a start-up. >> You got great pedigree on clients. Real quick, final question. 30 seconds. What's the pain point, for people watching? When do they call you in? What's their environment look like, what are some of the things that give the signals that you guys got to get the call? >> If you have more than, let's say, five or ten VPCs in the cloud, and you have not invested in building a networking platform that gives you the connectivity, the security, the observability, and the performance requirements, you absolutely have to do that, right? Because we have seen with many, many customers, it goes from 5 to 50 to 100 within a week, and so you don't want to be caught essentially in the midst of that. >> One more final final question.
Since you're a seasoned entrepreneur, you've been there, done that previous times, >> Yeah, I've got scars. (laughs) >> Yes, we've all got scar tissue. We've been doing theCube for 12 years, we've seen a lot of stuff. What's the difference now in this market that's different than before? What's exciting you? What's the big change? What's, in your opinion, happening now that's really important that people should pay attention to? >> Absolutely. A lot of it is driven by one, the focus on the cloud itself, right? That's driving a sense of speed like never before. Because in the infrastructure world, yeah you do it today, oh, you do it six months from now, you had some leeway. Here, networking and security teams are being yelled at almost every single day by the cloud guys saying, "You guys are not moving fast enough, fast enough, fast enough." So that thing is different. And it helps to shrink the sales cycle for us. The second big one is, nobody knows, essentially, the new set of use cases that are coming about. We are seeing patterns emerge in terms of new use cases almost every single day. Some days it's like completely on the other end of the spectrum. Like, "I'm only serverless and service mesh." On the other end, it's like, "I have a packaged application, I'm moving it to the cloud." Right? And so, we're learning a lot as well. >> A great time for Supercloud. >> Exactly. >> Do the cloud really well, make it super, bring it to other use cases, stitch it all together, make it easy to use, reduce the complexity, it's just evolution. >> Yeah. And our goal is essentially, enterprise customers should not be focused so much on building infrastructure this way, right? They should focus on users, application services, and let vendors like us worry about the nitty-gritty underneath. >> Ramesh, thank you for this conversation. It's a great Cube conversation. 
In the middle of all the action, Supercloud, multi-cloud, the future is going to be very much cloud-based, IaaS, SaaS, connecting environments. This is the cloud 2.0, Superclouds. And this is what people are going to be working on. I'm John Furrier with theCube, thanks for watching. (soft music)
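The "5 to 50 to 100 VPCs within a week" pattern Ramesh described is easy to watch for. As a toy illustration only (hypothetical function and a made-up default threshold, not Prosimo's product logic), a script could flag the regions whose VPC count has crossed the rough "five or ten" mark he mentions:

```python
def regions_needing_network_platform(vpc_counts, threshold=10):
    """Return regions whose VPC count has crossed the rough 'five or ten
    VPCs' threshold from the interview, sorted by name.

    vpc_counts maps region name -> current number of VPCs. The default
    threshold is an assumption for illustration, not a product parameter.
    """
    return sorted(region for region, count in vpc_counts.items()
                  if count > threshold)

# Example: one region grew from 5 to 50 VPCs within a week.
counts = {"us-east-1": 50, "eu-west-1": 7, "ap-south-1": 12}
print(regions_needing_network_platform(counts))  # ['ap-south-1', 'us-east-1']
```

In practice the counts would come from the cloud provider's inventory APIs; the point of the sketch is only that the sprawl signal is cheap to detect before it becomes a connectivity and security problem.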
George Fraser, Fivetran & Veronika Durgin, Saks | Snowflake Summit 2022
(upbeat music) >> Hey, gang. Welcome back to theCUBE's coverage of Snowflake Summit '22 live on the show floor at Caesar's Forum in Las Vegas. Lisa Martin here with Dave Vellante. Couple of guests joining us to unpack more of what we've been talking about today. George Fraser joins us, the CEO of Fivetran, and Veronika Durgin, the head of data at Saks Fifth Avenue. Guys, welcome to the program. >> Thank you for having us. >> Hello. >> George, talk to us about Fivetran for the audience that may not be super familiar. Talk to us about the company, your vision, your mission, your differentiation, and then maybe the partnership with Snowflake. >> Well, a lot of people in the audience here at Snowflake Summit probably are familiar with Fivetran. We have almost 2000 shared customers with them. So a considerable amount of the data that we're all talking about here flows through Fivetran. But in brief, what Fivetran is, is we're a data pipeline. And that means that we go get all the data of your company in all the places that it lives. So all your tools and systems that you use to run your company. We go get that data and we bring it all together in one place like Snowflake. And that is the first step in doing anything with data is getting it all in one place. >> So, a considerable amount of shared customers. I think I saw this morning on the slide over 5,900, but you're saying you're already at around 2000 shared customers. Lots of innovation I'm sure between both companies, but talk to us about some of the latest developments at Fivetran, in terms of product, in terms of company growth, what's going on? >> Well, one of the biggest things that happened recently with Fivetran is we acquired another data integration company called HVR. And HVR's specialty has always been replicating the biggest, baddest enterprise databases like Oracle and SQL Server, databases that are enormous, that are run within an inch of their capabilities by their DBAs. 
And HVR was always known as the best in the business at that scenario. And by bringing that together with Fivetran, we now really have the full spectrum of capabilities. We can replicate all types of data for all sizes of company. And so that's a really exciting development for us and for the industry. >> So Veronika, head of data at Saks, what does that entail? How do you spend your time? What's your purview? >> So the cool thing about Saks is it's a very old company. Saks is the premier luxury e-commerce platform. And we help our Saks Fifth Avenue customers just express themselves through fashion. So we're trying to modernize a very old company, and we do have the biggest, baddest databases of any flavor you can imagine. So my job is to modernize, to bring us to near real-time data, to make sure data is available to all of our users so they can actually take advantage of it. >> So let's talk about some of those biggest, baddest hairballs and how you deal with that. So, over time, you've built up a lot of data. You've got different data stores. So, what are you doing with that? And what role do Fivetran and Snowflake play in helping you modernize? >> Yeah, Fivetran helps us ingest data from all of those data sources into Snowflake near real-time. It's very important to us. And like one of the examples that I give is within a matter of maybe a few weeks, we were able to get data from over a dozen different data sources into Snowflake in near real-time. And some of those data sources were not available to our users in the past, and everybody was so excited. And the reason they weren't available is because they require a lot of engineering effort to actually build those data pipelines, to manage them and maintain them. >> Lisa: Whoa, sorry. >> That was just a follow-up. So, Fivetran is the consolidator of all that data and- >> That's right. >> Snowflake plays that role also. 
>> We bring it all together, and the place that it is consolidated is Snowflake. And from there you can really do anything with it. And there are really three things, you were touching on it, that make data integration hard. One is volume, and that's the one that people tend to talk about, just size of data. And that is important, but it's not the only thing. It's also latency. How fresh is the data in the locus of consolidation? Before Fivetran, the state of the art was nightly snapshots; once a day was considered pretty good. And we consider now once a minute pretty good, and we're trying to make it even better. And then the last challenge, which people tend not to talk about, it's the dark secret of our industry, is just incidental complexity. All of these data sources have a lot of strange behaviors and rules and corner cases. Every data source is a little bit different. And so a lot of what we bring to the table is that we've done the work over 10 years, and in the case of HVR, since the '90s, to map out all of these little complexities of all these data sources, so that as a user, you don't have to see it. You just connect source, connect destination, and that's it. >> So you don't have to do the M word, migrate, off of all those databases. You can maybe allow them to dial down over time, then create new value using Fivetran and Snowflake. Is that the right way to think about it? >> Well, Fivetran, it's incredibly simple. You just connect it to whatever source, and then in a matter of minutes you have a pipeline. And for us, it's a matter of minutes; for Fivetran, there's hundreds of engineers, so we're extending our data engineering team to now include Fivetran. And we can pick and choose which tables we want to replicate, which fields. And once data lands in Snowflake, now we have data across different sources in one place, in a central place. And now we can do all kinds of different things. 
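The freshness point above, nightly snapshots versus once a minute, comes down to incremental replication: each pass reads only rows changed since a stored high-water mark and upserts them into the destination. A minimal sketch of that loop (generic logic with toy in-memory types, not Fivetran's actual implementation):

```python
def incremental_sync(source_rows, destination, last_synced):
    """Upsert rows changed after the high-water mark `last_synced` into
    `destination` (a dict keyed by primary key), copying them exactly as
    they appear in the source. Returns the new high-water mark."""
    high_water = last_synced
    for row in source_rows:
        if row["updated_at"] > last_synced:
            destination[row["id"]] = row  # upsert by primary key
            high_water = max(high_water, row["updated_at"])
    return high_water

source = [
    {"id": 1, "updated_at": 5,  "status": "shipped"},
    {"id": 2, "updated_at": 12, "status": "pending"},  # changed since mark 10
]
warehouse = {}
mark = incremental_sync(source, warehouse, 10)
print(sorted(warehouse), mark)  # [2] 12
```

Running the loop again with the returned mark copies nothing new, which is what makes a once-a-minute schedule cheap: each pass touches only the delta, not the whole table.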
We can integrate data together, we can do validations, we can do reconciliations. We now have the ability to do point-in-time historical queries. In the past, in a transactional system, you don't see that, you only see data that's right now. But now that we replicate everything to Snowflake, and Snowflake being so powerful as an analytical platform, we can ask, what did it look like two months ago? What did it look like two years ago? >> You've got all that time series data, okay. >> And to address that word you mentioned a moment ago, migrate, this is something people often get confused about. What we're talking about here is not a migration; these source systems are not going away. These databases are the systems powering saks.com and they're staying right there. They're the systems you interact with when you place an order on the site. The purpose of our tool and the whole stack that Veronika has put together is to serve other workloads in Snowflake that need to have access to all of the data together. >> But if you didn't have Snowflake, you would have to push those other data stores, try to have them do things that they sometimes have a tough time doing. >> Yeah, and you can't run analytical workloads. You cannot do reporting on the transactional database. It's not meant for that. It's supporting the capability of an application, and it's configured to be optimized for that. So we always had to offload those specific analytical or reporting functionalities, or machine learning, somewhere else, and Snowflake is excellent for that. It's meant for that, yeah. >> I was going to ask you what you were doing before, you just answered that. What was the aha moment for realizing you needed to work with the power of Fivetran and Snowflake? If we look at, you talked about Saks being a legacy history company that's obviously been very successful at transforming to the digital age, but what was that one thing, as the head of data, you felt this is it? >> Great question. 
I've worked with Fivetran in the past. This is my third company, same with Snowflake. I actually brought Fivetran into two companies at this point. So my first experience with both Fivetran and Snowflake was like, this is where I want to be, this is the stack and the tooling, and just the engineering behind it. So as I moved on to the next company, that was it, I'm bringing tools with me. So that was part of it. And the other thing I wanted to mention, when we evaluate tools for a new platform, we look at things in like three dimensions, right? One is cloud first, we want to have cloud-native tools, and they have to be modular, but we also don't want to have too many tools. So Fivetran certainly checks that off. They're cloud native first, and they also have a very long list of connectors. The other thing is, for us it's very important that data engineering effort is spent on actually analyzing data, not building pipelines and supporting infrastructure. And Fivetran is reliable, it's secure, it has various connectors, so it checks off that box as well. And another thing is that we're looking for companies we can partner with. So companies that help us grow and grow with us; we'll look at a company's culture, their maturity, how they treat their customers and how they innovate. And again, Fivetran checks off that box as well. >> And I imagine Snowflake does as well. Frank Slootman on stage this morning talked about mission alignment. And it seemed to me like, wow, one of the missions of Snowflake is to align with its customers' missions. It sounds like from the conversations that Dave and I have had today, that it's the same with partners, but it sounds like you have that cultural alignment with Fivetran and Snowflake. >> Oh, absolutely. >> And Fivetran has that, obviously with 2000 shared customers. 
Yeah, I think that, well, not quite there yet, but we're close, (laughs) I think that the most important way that we've always been aligned with our customers is that we've been very clear on what we do and don't do. And that our job is to get the data from here to there, that the data be accurately replicated, which means in practice, we often joke, that it is exactly as messed up as it was in the source. No better and no worse, but we really will accomplish that task. You do not need to worry about that. You can well and fully delegate it to us, but then what you do with the data, we don't claim that we're going to solve that problem for you. That's up to you. And anyone who claims that they're going to solve that problem for you, you should be very skeptical. >> So how do you solve that problem? >> Well, that's where modeling comes in, right? You get data from point A to point B, and it's like bad in, bad out. Like, that's it, and that's where we do those reconciliations, and that's where we model our data. We actually try to understand what our business is, how our users talk about data, how they talk about business. And that's where a data warehouse is important. And in our case, it's Data Vault. >> Talk to me a little bit before we wrap here about the benefits to the end user, the consumer. Say I'm on saks.com, I'm looking for a particular item. What is it about this foundation that Saks has built with Fivetran and with Snowflake that's empowering me as a consumer to be able to find what I want, get the transaction done like that? >> So our end goal is to help our customers, right? Make their experience beautiful, luxurious. We want to make sure that what we put in front of you is what you're looking for. So you can actually make that purchase, and you're happy with it. 
So having that data, having that data coming from various different sources into one place enables us to do that near real-time analytics, so we can help you as a customer to find what you're looking for. >> Magic on the back end, delighting customers. >> So the world is still messed up, right? Airlines are out of whack. There's supply imbalances. You've got the situation in Ukraine with oil prices. The Fed missed the mark. So can data solve these problems? If you think about the context of the macro environment, and you bring it down to what you're seeing at Saks, with your relationship with Fivetran and with Snowflake, do you see the light at the end of that confusion tunnel? >> That's such a great question. Very philosophical. I don't think data can solve it. It's the people looking at data and working together that can solve it. >> I think data can help; data can't stop a war. Data can help you forecast supply chain misses and mitigate those problems. So data can help. >> Can be a facilitator. >> Sorry, what? >> Can be a facilitator. >> Yeah, it can be a facilitator of whatever you end up doing with it. Data can be used for good or evil. It's ultimately up to the user. >> It's a tool, right? Do you bring a hammer to a gunfight? No, but it's a tool, and in the right hands, for the right purpose, it can definitely help. >> So you have this great foundation, you're able to delight customers, especially from a luxury brand perspective. I imagine that luxury customers have high expectations. What's next for Saks from a data perspective? >> Well, we want first and foremost to modernize our data platform. We want to make sure we actually bring that near real-time data to our customers. We want to make sure data's reliable, that it's well understood, that we do the data engineering and the modeling behind the scenes so that people that are using our data can rely on it. Because, like, bad data is bad data, but we want to make sure it's very clear. And what's next? 
The sky's the limit. >> Can you describe your data teams? Is it highly centralized? What's your philosophy in terms of the architecture of the organization? >> So right now we are starting with a centralized team. It just works for us as we're trying to rebuild our platform and modernize it. But as we become more mature, we establish our practices, our data governance, our definitions, then I see a future where we decentralize a little bit and actually each team has their own analytical function, or potentially data engineering function as well. >> That'll be an interesting discussion when you get there. >> That's a hot topic. >> It's one of the hardest problems in building a data team, whether it's centralized or decentralized. We're still centralized at Fivetran, but the company is now over 1000 people, and we're starting to feel the strain of that. And inevitably, you eventually have to find a way to find seams and create specialization. >> You just have to be fluid, right? And then go with the company as the company grows and things change. >> Yeah, I've worked with some companies. JPMC is here, they've got a little, I'll call it a skunk works. They probably understate what they're doing, but they're testing that out. A company like HelloFresh is doing some things 'cause their Hadoop cluster just couldn't scale. So they have to begin to decentralize. It is a hot topic these days. And I'm not sure there's a right or wrong. It's really situational. But I think in a lot of situations, it's maybe the trend. >> Yeah. >> Yeah, I think centralized versus decentralized technology is a different question than centralized versus decentralized teams. >> Yes. >> They're both valid, but they're very different. And sometimes people conflate them, and that's very dangerous. Because you might want one to be centralized and the other to be decentralized. >> Well, it's true. 
And I think a lot of folks look at a centralized team and say, "Hey, it's more efficient to have these specialized roles," but at the same time, what's the outcome? If the outcome can be optimized, and it's maybe a little bit more people-expensive, or I don't know. And if they're in the lines of business where there's data context, that might be a better solution for a company. >> So to truly understand the value of data, you have to specialize in that specific area. So I see people deep diving into a specific vertical or whatever that is, and truly understanding what data they have and how to take advantage of it. >> Well, all this talk about monetization and building data products, you're there, right? >> Yeah. >> You're on the cusp of that. And so who's going to build those data products? It's going to be somebody in the business. Today they don't "own the life cycle" of the data. They don't feel responsible for it, but they complain when it's not what they want. And so, I feel as though what Snowflake is doing is actually attacking some of those problems. Not 100% there obviously, but a lot of work to do. >> Great analysts are great navigators of organizations, amongst other things. And one of the best things that's happened as part of this evolution from technology like Hadoop to technology like Snowflake is that the new stack is a lot simpler. There's a lot less technical knowledge that you need. You still need technical knowledge, but not nearly what you used to. And that has made it accessible to more people. People who bring different skills to the table. And in many cases, those are the skills you really need to deliver value from data: not, do you know the inner workings of HDFS, but can you extract from your constituents in the organization a precise version of the question that they're trying to ask? 
We really want them spending their time there; the technical infrastructure is an operational detail, so you can put your teams on those types of questions, not "how do we make it work?" And that's what Hadoop was: "Hey, we got it to work." >> And that's something we're obsessed with. We're always trying to hide the technical complexities of the problem of data centralization behind the scenes. Even if it's harder for us, even if it's more expensive for us, we will pay any cost so that you don't have to see it. Because that allows our customers to focus on higher-impact work. >> Well, this is a case where a technology vendor's R&D is making your life easier. >> Veronika: Easier, right. >> I would presume you'd rather spend money to save time than spend engineering time to save money. >> That's true. And at the end of the day, hiring three data engineers to do custom work that a tool does is actually not saving money. It costs more in the end. But to your point, pulling business people into those data teams gives them ownership, and they feel like they're part of the solution. And it's such a great feeling that they're excited to contribute, they're excited to help us. So I love where the industry's going, like, in that direction. >> And of course, that's the theme of the show, the world around data collaborations. Absolutely critical, guys. Thank you so much for joining Dave and me, talking about Fivetran and Snowflake together, what you're doing to empower Saks to be a data company. I'm going to absolutely have a different perspective next time I shop there. Thanks for joining us. Thank you. >> Dave: Thank you, guys. >> Thank you. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE live from Snowflake Summit '22, from Vegas. Stick around, our next guest joins us momentarily. (upbeat music)
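The point-in-time questions Veronika raised earlier ("what did it look like two months ago?") become answerable once each replicated row version is kept with a validity interval instead of being overwritten. A toy sketch of the idea (hypothetical schema, not Saks' actual model; Snowflake itself also offers this natively through its Time Travel feature):

```python
def table_as_of(history, ts):
    """Reconstruct a table's state at timestamp `ts` from row versions
    carrying [valid_from, valid_to) intervals; valid_to=None marks the
    currently active version."""
    state = {}
    for version in history:
        if version["valid_from"] <= ts and (
                version["valid_to"] is None or ts < version["valid_to"]):
            state[version["id"]] = version["value"]
    return state

# One order whose status changed at time 7.
history = [
    {"id": 1, "value": "pending", "valid_from": 0, "valid_to": 7},
    {"id": 1, "value": "shipped", "valid_from": 7, "valid_to": None},
]
print(table_as_of(history, 5))  # {1: 'pending'}
print(table_as_of(history, 9))  # {1: 'shipped'}
```

The half-open interval convention (inclusive start, exclusive end) guarantees that exactly one version of each row matches any timestamp, which is what makes "as of" reporting on a transactional history unambiguous.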
Uli Homann, Microsoft | IBM Think 2021
(upbeat music) >> Narrator: From around the globe. It's theCUBE with digital coverage of IBM Think 2021. Brought to you by IBM. >> Welcome back to theCUBE coverage of IBM Think 2021 virtual. I'm John Furrier, host of theCUBE. And this is theCUBE virtual, and Uli Homann is here, Corporate Vice President of Cloud and AI at Microsoft. Thanks for coming on. I love this session. Obviously, Microsoft is one of the big clouds. Awesome. You guys partnering with IBM here at IBM Think. First of all, congratulations on all the success with Azure and just the transformation of IBM. I mean, Microsoft's cloud has been phenomenal, and hybrid is playing perfectly into the vision of what enterprises want. And this has certainly been a great tailwind for everybody. So congratulations. So for the first question, thanks for coming on, and tell us the vision for hybrid cloud for Microsoft. It's almost like a perfect storm. >> Yeah. Thank you, John. I really appreciate you hosting me here and asking some great questions. We certainly appreciate being part of IBM Think 2021 virtual, although I do wish to see some people again, at some point. From our perspective, hybrid computing has always been part of the strategy that Microsoft has pursued. We didn't think that public cloud was the answer to all questions. We always believed that there are multiple scenarios where either safety, latency, or other key capabilities impeded the usage of public cloud. Although we will see more public cloud scenarios with 5G and other capabilities coming along, hybrid computing will still be something that is important. And Microsoft has been building capabilities on our own as a first-party solution, like Azure Stack and other capabilities. But we are also partnering with VMware and others to effectively enable the usage of capabilities that our clients have invested in, to bring them forward into a cloud-native application and compute model. 
So Microsoft is continuing to invest in hybrid computing, and we're taking more and more Azure capabilities and making them available in a hybrid scenario. For example, we took our entire database stack, SQL Server, PostgreSQL, and recently our Azure machine learning capabilities, and made them available on a platform so that clients can run them where they need them, in a factory, in an on-premise environment, or in another cloud, for example, because they trust the Microsoft investments in relational technology or machine learning. And we're also extending the management capabilities that Azure provides and making them available for Kubernetes, virtual machine, and other environments, wherever they might run. So we believe that bringing Azure capabilities to our clients is important, and also taking the capabilities that our clients are using into Azure and making them available so that they can manage them end to end is a key element of our strategy. >> Yeah. Thanks, Uli, for sharing that, I really appreciate that. You and I have been in this industry for a while. And you guys have a good view on this, how Microsoft's got perspective riding the wave from the original computer industry. I remember during the client-server days in the 80s, late 80s to early 90s, Open Systems Interconnection was a big part of opening up the computer industry; that was networking, internetworking, and it really created more LANs and more connections for PCs, et cetera. And the world just went on from there. Similar now with hybrid cloud, you're seeing that same kind of vibe. You're seeing the same kind of alignment with distributed computing architectures for businesses, where now it's not just networking and plumbing and connecting LANs and PCs and printers. It's connecting everything. It's almost kind of a whole another world, but a similar movie, if you will. So this is really going to be good for people who understand that market. IBM does, you guys do. 
Talk about the alignment between IBM and Microsoft in this new hybrid cloud space. It's really kind of standardized now, but yet it's just now coming. >> Yeah. So again, fantastic question. So the way I think about this is, first of all, Microsoft and IBM are philosophically very much aligned. We're both investing in key open source initiatives like the Cloud Native Computing Foundation, CNCF, something that we both believe in. We are both partnering with the Red Hat organization. So Red Hat forms a common bond, if you will, between Microsoft and IBM. And again, part of this is how can we establish a system of capabilities that every client has access to, and then build on top of that stack. And again, IBM does this very well with their Cloud Paks, which are coming out now with data and AI and others. And again, as I mentioned before, we're investing in similar capabilities to make sure that core Azure functions are available on that CNCF cloud environment. So open source, open standards are key elements. And then you mentioned something critical, which I believe is misunderstood, or certainly not appreciated enough: this is about connectivity between businesses. And so part of the power of the IBM perspective together with Microsoft is bringing together key business applications for healthcare, for retail, for manufacturing, and really making them work together so that our clients in critical scenarios get the support they need from both IBM as well as Microsoft, on top of this common foundation of the CNCF and other open standards. >> It's interesting. I love that point. I'm going to double down and amplify that later and continue to bring it up. Connecting between businesses is one thread. But now people, because you have an edge, that's also industrial business, but also people. People are participating in open source. People have wearables, people are connected. And also they're connecting with collaboration.
So this kind of brings a whole 'nother architecture, which I want to get into the solutions with you on, on how you see that playing out. But first, I know you're a veteran with Microsoft, many, many years, decades. Microsoft's core competency has been ecosystems: developer ecosystems, customer ecosystems. Today, the services motion is built around ecosystems. You guys have that playbook, and IBM's well versed in it as well. How does that impact your partnerships, your solutions, and how you deal with this open marketplace? >> Well, let's start with the obvious. Obviously Microsoft and IBM will work together in common ecosystems. Again, I'm going to reference the CNCF again as the foundation for a lot of these initiatives. But then we're also working together in the Red Hat ecosystem, because Red Hat has built an ecosystem, and Microsoft and IBM are players in that ecosystem. However, we're also looking at a higher level. A lot of times when people think ecosystems, it's fairly low-level technology. But Microsoft and IBM are talking about partnerships that are focused on industry scenarios. Again, retail, for example, or healthcare and others, where we're building on top of these lower-level ecosystem capabilities and then bringing together the solution scenarios where the strength of IBM capabilities is coupled with Microsoft capabilities to drive this very famous one plus one equals three. And then the other piece that I think we both agree on is the open source ecosystem for software development and software development collaboration. And GitHub is a common anchor that we both believe can feed the world's economy with respect to the software solutions that are needed to really bring the capabilities forward, help improve the world economy and so forth, by effectively bringing together brilliant minds across the ecosystem. And again, it's not just Microsoft and IBM bringing some people, but the rest of the world obviously participating in that as well.
So thinking again: open source, open standards, and then industry-specific collaboration and capabilities being a key part. You mentioned people. We certainly believe that people play a key role, software developers and the GitHub notion being a key one. But there are others where, again, Microsoft with Microsoft 365 has a lot of capabilities in connecting people within the organization and across organizations. And while we're using Zoom here, a lot of people are utilizing Teams, because Teams is on the one side a collaboration platform, but on the other side it's also an application host. And so bringing together people collaboration, supported and powered by applications from IBM, from Microsoft and others, is going to be, I think, a huge differentiation in terms of how people interact with software in the future. >> Yeah, and I think that whole joint development is a big part of this new people equation, where it's not just partnering in market, it's also at the tech level, and you've got open source and just a phenomenal innovation formula there. So let's get into solutions here. I want to get into some of the top solutions you're doing with Microsoft and maybe with IBM. But your title is Corporate Vice President of Cloud and AI, come on, you can't get a better department than that. I mean, what's more relevant than that? It's exciting. Cloud-scale is driving tons of innovation. AI is eating software, changing the software paradigm. We can see that playing out. I've done dozens of interviews just in this past month on how AI, certainly with machine learning and having a control plane with data, is changing the game. So tell us, what are the hot solutions for hybrid cloud? And why is this a different ball game than, say, public cloud? >> Well, so first of all, let's talk a little bit about the AI capabilities and data, because I think there are two categories. You're seeing an evolution of AI capabilities that are coming out.
And again, I just read IBM's announcement about integrating the Cloud Pak with IBM Satellite. I think that's a key capability that IBM is putting out there, and we're partnering with IBM in two directions there: making it run very well on Azure with our Red Hat partners, but on the other side, also thinking through how we can optimize the experience for clients that choose Azure as their platform and IBM Cloud Pak for Data and AI as their technology. But that's a technology play. And then the next layer up is, again, IBM has done a fantastic job building AI capabilities that are relevant for industries. Healthcare being a very good example, retail being another one. And I believe Microsoft and IBM will work on both partnerships on the technology side as well as the AI usage in specific verticals. Microsoft is doing similar things within our Dynamics product line. We're using AI for business applications, for planning, scheduling, optimizations, risk assessments, those kinds of scenarios. And of course we're using those in the Microsoft 365 environment as well. I always joke that despite my 30 years at Microsoft, I still don't know how to really use PowerPoint, and I can't do a PowerPoint slide for the life of me, but with the new Designer, I can actually get help from the system to make beautiful PowerPoint happen. So bringing AI into real-life usage, I think, is the key part. The hybrid scenario is critical here as well, especially when you start to think about real-life scenarios like safety, worker safety in a critical environment, freshness of product. We're seeing retailers deploying cameras and AI inside the retail stores to effectively make sure that the shelves are stocked, that the quality of the vegetables, for example, continues to be high and monitored. And previously people would do this on an occasional basis, running around in the store. Now the store is monitored 24/7, and people get notified when things need fixing.
Another really cool scenario set is quality. We're working with a Finnish steel producer that is effectively looking at the stainless steel as it's being produced. They have cameras on the steel that look for specific marks, and if these marks show up, then they know that the stainless steel will be bad. And I don't know if you've looked at a manufacturing process, but the earlier they can get a failure detected, the better it is, because they can, more often than not, return the product back to the beginning of the funnel and start over. And that's what they're doing. So you can see molten steel, logically speaking, with a camera and AI. And previously humans did this, which is obviously (a) less reliable and (b) dangerous, because this is very, very hot, glowing steel. So increasing safety while at the same time improving quality is something that we see in hybrid scenarios. Again, autonomous driving, another great scenario where perception AI is going to be utilized. So there's a bunch of capabilities out there that really are hybrid in nature and will help us move forward with key scenarios: safety, quality, and autonomous behaviors like driving and so forth. >> Uli, great insight, great product vision, great alignment with IBM's hybrid cloud space, with what all customers are looking for now, and certainly multicloud around the horizon. So great to have you on, and congratulations on your continued success. You've got a great area, cloud and AI, and we'll be keeping in touch. I'd love to do a deep dive sometime. Thanks for coming on. >> John, thank you very much for the invitation and great questions. Great interview. Love it. Appreciate it. >> Okay, CUBE coverage here at IBM Think 2021 virtual. I'm John Furrier, your host. Thanks for watching. (upbeat music)
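The steel-inspection scenario Uli describes, cameras watching for surface marks and pulling a bad coil back to the start of the process, can be sketched in a few lines. This is a toy illustration of the idea only: the frame format, intensity threshold, and rejection limit are assumptions for the sketch, not the producer's actual vision pipeline.

```python
# Toy sketch of camera-based defect detection: scan a grayscale "frame"
# for dark surface marks and flag the coil early if too many appear.
# Threshold and limit are illustrative assumptions.

DEFECT_THRESHOLD = 60   # pixel intensities below this count as a mark (0-255 scale)
MAX_MARKS = 3           # more marks than this sends the coil back for rework

def find_marks(frame):
    """Return (row, col) positions whose intensity falls below the threshold."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, value in enumerate(row)
            if value < DEFECT_THRESHOLD]

def inspect(frame):
    """Decide, per frame, whether the coil should be pulled for rework."""
    marks = find_marks(frame)
    return {"marks": marks, "reject": len(marks) > MAX_MARKS}

frame = [
    [200, 210, 205, 199],
    [ 40, 215, 202,  55],   # two dark marks
    [198,  30, 207, 204],   # a third
    [201, 203,  25, 210],   # and a fourth -> reject
]
result = inspect(frame)
print(result["reject"], len(result["marks"]))  # True 4
```

The point Uli makes about early detection shows up in the structure: the check runs per frame as the steel moves, so a reject decision can be made while returning the product to the beginning of the funnel is still cheap.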
JG Chirapurath, Microsoft
>> Okay, we're now going to explore the vision of the future of cloud computing from the perspective of one of the leaders in the field. JG Chirapurath is the Vice President of Azure Data, AI and Edge at Microsoft. JG, welcome to theCUBE on Cloud, thanks so much for participating. >> Well, thank you, Dave. It's a real pleasure to be here with you, and I just want to welcome the audience as well. >> Well, JG, judging from your title, we have a lot of ground to cover, and our audience is definitely interested in all the topics that are implied there. So let's get right into it. We've said many times in theCUBE that the new innovation cocktail comprises machine intelligence, or AI, applied to troves of data with the scale of the cloud. We're no longer driven by Moore's Law. It's really those three factors, and those ingredients are going to power the next wave of value creation in the economy. So first, do you buy into that premise? >> Yes, absolutely. We do buy into it, and I think one of the reasons why we put data, analytics and AI together is because all of that really begins with the collection of data, managing it, governing it, unlocking analytics in it. And we tend to see things like AI, the value creation that comes from AI, as being on that continuum, having started off with really things like analytics and proceeding to machine learning and the use of data in interesting ways. >> Yes, I'd like to get some more thoughts around data and how you see the future of data, the role of cloud, and maybe how Microsoft's strategy fits in there. I mean, your portfolio: you've got SQL Server, Azure SQL, you've got Arc, which is kind of Azure everywhere for people that aren't familiar with that, you've got Synapse, which of course does all the integration, the data warehouse, and gets things ready for BI and consumption by the business, and the whole data pipeline.
And then all the other services: Azure Databricks, you've got Cosmos in there, you've got Blockchain, you've got open source services like PostgreSQL and MySQL. So lots of choices there. And I'm wondering, how do you think about the future of cloud data platforms? It looks like your strategy is right tool for the right job. Is that fair? >> It is fair, but just to step back and look at it, it's fundamentally what we see in this market today: customers seek a really comprehensive proposition. And when I say a comprehensive proposition, it is sometimes not just about saying that, "Hey, listen, we know you're a SQL Server company, we absolutely trust that you have the best Azure SQL database in the cloud. But tell us more. We've got data that is sitting in Hadoop systems. We've got data that is sitting in PostgreSQL, in things like MongoDB." So that open source proposition today, in data and data management and database management, has become front and center. So our real sort of push there is, when it comes to migration, management and modernization of data, to present the broadest possible choice to our customers, so we can meet them where they are. However, when it comes to analytics, one of the things they ask for is a lot more convergence. It really isn't about having 50 different services. It's really about having that one comprehensive service that is converged. That's where things like Synapse fit in, where you can just land any kind of data in the lake and then use any compute engine on top of it to drive insights from it. So fundamentally, it is that flexibility that we really sort of focus on, to meet our customers where they are, really not pushing our dogma and our beliefs on it, but meeting our customers according to the way they've deployed stuff like this. >> So that's great.
I want to stick on this for a minute, because when I have guests on like yourself, they never want to talk about the competition, but that's all we ever talk about, and that's all your customers ever talk about. Because the counter to that right tool for the right job, which I would say is really kind of Amazon's approach, is the single unified data platform, the mega database that does it all. And that's kind of Oracle's approach. It sounds like you want to have your cake and eat it too. So you've got the right-tool-for-the-right-job approach, but you've got an integration layer that allows you to have that converged database. I wonder if you could add color to that and confirm or deny what I just said. >> No, that's a very fair observation, but I'd say there's a nuance in what I sort of described. When it comes to data management, when it comes to apps, we give customers the broadest choice. Even in that perspective, we also offer convergence. So case in point: when you think about Cosmos DB, under that one sort of service you get multiple engines, but with the same properties: global distribution, the five nines availability. It gives customers the ability to basically choose, when they have to build that new cloud native app, to adopt Cosmos DB and adopt it in a way where they choose an engine that is most flexible for them. However, when it comes to, say, a SQL Server, for example, if you're modernizing it, sometimes you just want to lift and shift it into things like IaaS. In other cases, you want to completely rewrite it. So you need to have the flexibility of choice there that is presented by the legacy of what sits on premises. When you move into things like analytics, we absolutely believe in convergence. So we don't believe that, look, you need to have a relational data warehouse that is separate from a Hadoop system, that is separate from, say, a BI system that is just a bolt-on.
For us, we love the proposition of really building things that are so integrated that once you land data, once you prep it inside the lake, you can use it for analytics, you can use it for BI, you can use it for machine learning. So I think our sort of differentiated approach speaks for itself there. >> Well, that's interesting, because essentially again you're not saying it's an either-or, and you see a lot of that in the marketplace. You've got some companies saying, "No, it's the data lake," and others saying, "No, no, put it in the data warehouse." And that causes confusion and complexity around the data pipeline, and a lot of friction. And I'd love to get your thoughts on this. A lot of customers struggle to get value out of data, and specifically data product builders are frustrated that it takes them too long to go from this idea of, hey, I have an idea for a data service and it can drive monetization, but to get there you've got to go through this complex data life cycle and pipeline and beg people to add new data sources. Do you feel like we have to rethink the way that we approach data architecture? >> Look, I think we do in the cloud. And I think what's happening today, and the place where I see the most amount of rethink and the most amount of push from our customers to really rethink, is the area of analytics and AI. It's almost as if what worked in the past will not work going forward. So when you think about analytics in the enterprise today, you have relational systems, you have Hadoop systems, you've got data marts, you've got data warehouses, you've got the enterprise data warehouse, those large honking databases that you use to close your books with. But when you start to modernize it, what people are saying is that they don't want to simply take all of that complexity that they've built over, say, three, four decades and simply migrate it en masse exactly as it is into the cloud.
What they really want is a completely different way of looking at things. And I think this is where services like Synapse provide a completely differentiated proposition to our customers. What we say there is: land the data in any way, shape or form inside the lake. Once you've landed it inside the lake, you can essentially use Synapse Studio to prep it in the way that you like, use any compute engine of your choice, and operate on this data in any way that you see fit. So case in point: if you want to hydrate a relational data warehouse, you can do so. If you want to do ad hoc analytics using something like Spark, you can do so. If you want to invoke Power BI on that data, you can do so. If you want to bring a machine learning model to this prepped data, you can do so. So inherently, when customers buy into this proposition, what it solves for them, and what it gives them, is complete simplicity: one way to land the data, multiple ways to use it. And it's all integrated. >> So should we think of Synapse as an abstraction layer that abstracts away the complexity of the underlying technology? Is that a fair way to think about it? >> Yeah, you can think of it that way. It abstracts away, Dave, a couple of things. It takes away the complexities related to the type of data. It takes away the complexity related to the size of data. It takes away the complexity related to creating pipelines around all these different types of data. And it fundamentally puts it in a place where it can now be consumed by any sort of entity inside the Azure proposition. And by that token, even Databricks: you can in fact use Databricks in sort of an integrated way with Azure Synapse. >> Right, well, so that leads me to this notion of, and I wonder if you buy into it, so my inference is that a data warehouse or a data lake could just be a node inside of a global data mesh, and then Synapse is sort of managing that technology on top.
Do you buy into that global data mesh concept? >> We do, and we actually do see our customers using Synapse and the value proposition that it brings together in that way. Now, it's not where they start. Oftentimes a customer comes and says, "Look, I've got an enterprise data warehouse, I want to migrate it," or "I have a Hadoop system, I want to migrate it." But from there, the evolution is absolutely interesting to see. I'll give you an example. One of the customers that we're very proud of is FedEx. And what FedEx is doing is completely re-imagining its logistics system, basically the system that delivers, what is it, 3 million packages a day. And in doing so, in these COVID times, with the view of basically delivering on COVID vaccines, one of the ways they're doing it is basically using Synapse. Synapse is essentially that analytic hub where they can get a complete view into the logistics processes, the way things are moving, understand things like delays, and really put all of that together in a way that they can essentially get those packages and these vaccines delivered as quickly as possible. Another example, and it's one of my favorites, is the Peace Parks initiative. It is the premier white rhino conservancy in the world. They essentially are using data that has landed in Azure, images in particular, to basically use drones over the vast area that they patrol and use machine learning on this data to really figure out where there is an issue and where there isn't an issue, so that this park, with about 200 rangers, can scramble surgically versus having to range across the vast area that they cover. So what you see here is, the importance is really getting your data in order, landing it consistently whatever the kind of data it is, building the right pipelines, and then the possibilities of transformation are just endless.
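The "land once, use any engine" pattern JG describes, one copy of the data in the lake consumed by warehouse-style aggregation, BI-style slicing, and ML fitting, can be sketched as a toy model. Everything here is illustrative: a real Synapse deployment would run SQL pools, Spark, and ML models over the same lake files rather than plain Python over a list, and the dataset and numbers are invented for the sketch.

```python
# Toy model of "one landed dataset, multiple consumers":
# the same data feeds an aggregate, a BI slice, and a trend fit.

from collections import defaultdict

# "Landed" data: one copy, multiple consumers.
lake = [
    {"region": "east", "month": 1, "sales": 100},
    {"region": "east", "month": 2, "sales": 120},
    {"region": "west", "month": 1, "sales": 80},
    {"region": "west", "month": 2, "sales": 90},
]

# 1) Warehouse-style aggregate: total sales per region.
totals = defaultdict(int)
for row in lake:
    totals[row["region"]] += row["sales"]

# 2) BI-style slice: the sales trend for one region.
east = sorted((r["month"], r["sales"]) for r in lake if r["region"] == "east")

# 3) ML-style fit on the same data: least-squares slope of sales over months.
months = [r["month"] for r in lake]
sales = [r["sales"] for r in lake]
xbar = sum(months) / len(months)
ybar = sum(sales) / len(sales)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(months, sales))
         / sum((x - xbar) ** 2 for x in months))

print(dict(totals))   # {'east': 220, 'west': 170}
print(east)           # [(1, 100), (2, 120)]
print(round(slope, 1))  # 15.0
```

The design point is that no consumer required a second copy of the data or a separate pipeline; each engine reads the same landed rows, which is the simplicity JG is arguing for.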
>> Yeah, that's very nice how you worked in some of the customer examples, and I appreciate that. I want to ask you, though: some people might say that by putting in that layer, while you clearly add simplification, and I think that's a great thing, there begins over time to be a gap, if you will, between the ability of that layer to integrate all the primitives and all the piece parts, and you lose some of that fine-grained control and it slows you down. What would you say to that? >> Look, I think that's what we excel at, and that's what we completely sort of buy into. And it's our job to basically provide that level of integration and that granularity in the right way. It's an art, I absolutely admit it's an art. There are areas where people crave simplicity and not a lot of knobs and dials and things like that. But there are areas where customers want flexibility. And so, just to give you an example of both of them: in landing the data, in consistency, in building pipelines, they want simplicity. They don't want complexity. They don't want 50 different places to do this; there's one way to do it. When it comes to computing and reducing this data, analyzing this data, they want flexibility. This is one of the reasons why we say, "Hey, listen, you want to use Databricks? If you're buying into that proposition and you're absolutely happy with them, you can plug it in." You want to use BI and essentially do a small data model? You can use BI. If you say, "Look, I've landed into the lake, I really only want to use ML," bring in your ML models and party on. So that's where the flexibility comes in. That's how we sort of think about it. >> Well, I like the strategy, because one of our guests, Zhamak Dehghani, is I think one of the foremost thinkers on this notion of the data mesh, and her premise is that the data builders, data product and service builders, are frustrated because the big data system is generic to context.
There's no context in there. But by having context in the big data architecture and system, you can get products to market much, much, much faster. So, that seems to be your philosophy, but I'm going to jump ahead to my ecosystem question. You've mentioned Databricks a couple of times. There's another partner that you have, which is Snowflake. They're kind of trying to build out their own Data Cloud, if you will, and global mesh; on the one hand they're a partner, on the other hand they're a competitor. How do you sort of balance and square that circle? >> Look, when I see Snowflake, I actually see a partner. This is where I sort of step back and look at Azure as a whole. And in Azure as a whole, companies like Snowflake are vital in our ecosystem. I mean, there are places we compete, but effectively, by helping them build the best Snowflake service on Azure, we essentially are able to differentiate and offer a differentiated value proposition compared to, say, a Google or an AWS. In fact, that's been our approach with Databricks as well, where they are effectively on multiple clouds, and our opportunity with Databricks is to essentially integrate them in a way where we offer the best experience, the best integrations, on Azure. That's always been our focus. >> Yeah, it's hard to argue with the strategy. Our data with our data partner ETR shows Microsoft is both pervasive and impressively maintaining a lot of momentum, spending velocity, within the budget cycles. I want to come back to AI a little bit. It's obviously one of the fastest growing areas in our survey data. As I said, clearly Microsoft is a leader in this space. What's your vision of the future of machine intelligence, and how will Microsoft participate in that opportunity? >> Yeah, so fundamentally, we've built on decades of research around essentially vision, speech and language.
Those have been the three core building blocks, and for a really focused period of time, we focused on essentially ensuring human parity. So if you ever wonder what the keys to the kingdom are, it's the moat we built in ensuring that research posture that we've taken there. What we've then done is essentially a couple of things. We've focused on essentially looking at the spectrum that is AI, from saying, "Hey, listen, it's got to work for data analysts," all the way to developers who are essentially coding and building machine learning models from scratch. So that whole proposition manifests to us as really AI for all skill levels. The other core thing we've done is that we've also said, "Look, it'll only work as long as people trust their data and they can trust their AI models." So there's a tremendous body of work and research we do in things like responsible AI. So if you ask me where we sort of push, it is fundamentally to make sure that we never lose sight of the fact that the spectrum of AI can sort of come together for any skill level, and we keep that responsible AI proposition absolutely strong. Now, against that canvas, Dave, I'll also tell you that as Edge devices get way more capable, where they can take input on the Edge, say from a camera or a mic or something like that, you will see us pushing a lot more of that capability onto the edge as well. But to me, that's sort of a modality; the core really is all skill levels and that responsibility in AI. >> Yeah, so that brings me to this notion of, I want to bring in Edge and hybrid cloud, understand how you're thinking about hybrid cloud. Multicloud, obviously one of your competitors Amazon won't even say the word multicloud. You guys have a different approach there, but what's the strategy with regard to hybrid?
Do you see the cloud, you're bringing Azure to the edge? Maybe you could talk about that, and talk about how you're different from the competition. >> Yeah, and I'll even be the first one to say that the word Edge itself is a little bit conflated. But I will tell you, just focusing on hybrid: this is one of the places where, I would say, 2020, if I were to look back, from a COVID perspective in particular, has been the most informative. Because we absolutely saw customers digitizing, moving to the cloud, and we really saw hybrid in action. 2020 was the year that hybrid sort of really became real from a cloud computing perspective. And an example of this is we understood that it's not all or nothing. So sometimes customers want Azure consistency in their data centers. This is where things like Azure Stack come in. Sometimes they basically come to us and say, "We want the flexibility of adopting a flexible set of platforms, let's say containers orchestrated with Kubernetes, so that we can essentially deploy wherever we want." And so when we designed things like Arc, it was built with that flexibility in mind. So here's the beauty of what something like Arc can do for you: if you have a Kubernetes endpoint anywhere, we can deploy an Azure service onto it. That is the promise. Which means, if for some reason the customer says, "Hey, I've got this Kubernetes endpoint in AWS, and I love Azure SQL," you will be able to run Azure SQL inside AWS. There's nothing that stops you from doing it. So inherently, remember, our first principle is always to meet our customers where they are. So from that perspective, multicloud is here to stay. We are never going to be the people that say, "I'm sorry." We will never say (speaks indistinctly) multicloud, but it is a reality for our customers. >> So I wonder if we could close, thank you for that.
By looking back and then ahead. And I want to put forth, maybe it's a criticism, but maybe not. Maybe it's the art of Microsoft. But first, Microsoft did an incredible job at transitioning its business. Azure is omnipresent, as we said, our data shows that. So two-part question. First, Microsoft got there by investing in the cloud, really changing its mindset, I think, and leveraging its huge software estate and customer base to put Azure at the center of its strategy. And many have said, me included, that you got there by creating products that are good enough. You do a one dot oh, it's still not that great, then a two dot oh, and maybe not the best, but acceptable for your customers. And that's allowed you to grow very rapidly and expand your market. How do you respond to that? Is that a fair comment? Are you more than good enough? I wonder if you could share your thoughts. >> Dave, you hurt my feelings with that question. >> Don't hate me, JG. (both laugh) We're getting it out there all right, so. >> First of all, thank you for asking me that. I am absolutely the biggest cheerleader you'll find at Microsoft. I absolutely believe that I represent the work of almost 9,000 engineers, and we wake up every day worrying about our customer and worrying about the customer condition, to absolutely make sure we deliver the best on the first attempt that we do. So when you take the plethora of products we deliver in Azure, be it Azure SQL, be it Azure Cosmos DB, Synapse, Azure Databricks, which we did in partnership with Databricks, Azure Machine Learning, and recently, when we premiered the world's first comprehensive data governance solution in Azure Purview, I would humbly submit to you that we are leading the way, and we're essentially showing how the future of data, AI and the Edge should work in the cloud. >> Yeah, I'd be disappointed if you capitulated in any way, JG. So, thank you for that.
And that's kind of the last question, looking forward, and how you're thinking about the future of cloud. Last decade, a lot was about cloud migration, simplifying infrastructure management and deployment, SaaSifying the enterprise, a lot of simplification and cost savings and, of course, redeployment of resources toward digital transformation and other valuable activities. How do you think this coming decade will be defined? Will it be sort of more of the same, or is there something else out there? >> I think that the coming decade will be one where customers start to unlock outsize value out of this. In the last decade, people laid the foundation. People essentially looked at the world and said, "Look, we've got to make a move. We're largely hybrid, but we're going to start making steps to basically digitize and modernize our platforms." I will tell you that with the amount of data that people are moving to the cloud, just as an example, you're going to see the use of analytics and AI for business outcomes explode. You're also going to see a huge sort of focus on things like governance. People need to know where the data is, what the data catalog contains, how to govern it, how to trust this data, and, given all of the privacy and compliance regulations out there, essentially their compliance posture. So I think the unlocking of outcomes versus simply, hey, I've saved money. Second, really putting this comprehensive sort of governance regime in place, and then finally security and trust. It's going to be more important than ever before. >> Yeah, nobody's going to use the data if they don't trust it, so I'm glad you brought up security. It's a topic that is number one on the CIO list. JG, great conversation. Obviously the strategy is working, and thanks so much for participating in Cube on Cloud. >> Thank you, thank you, Dave, I appreciate it, and thank you to everybody who's tuning in today.
>> All right then keep it right there, I'll be back with our next guest right after this short break.
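The Arc promise JG describes in this conversation, that any Kubernetes endpoint anywhere (even one in AWS) can receive an Azure service such as Azure SQL, can be sketched with the Azure CLI. This is an illustrative provisioning sketch, not something from the interview: the cluster, resource group and namespace names are hypothetical placeholders, and exact extension names and flags vary by CLI version.

```shell
# Illustrative sketch: onboard an existing Kubernetes cluster (running
# anywhere, e.g. EKS on AWS) to Azure Arc. All names are placeholders.
az extension add --name connectedk8s
az connectedk8s connect \
  --name my-eks-cluster \
  --resource-group my-arc-rg

# With the cluster Arc-connected, Azure Arc-enabled data services can
# place an Azure SQL Managed Instance onto that same cluster.
az extension add --name arcdata
az sql mi-arc create \
  --name my-sql-mi \
  --k8s-namespace arc-data \
  --use-k8s
```

In practice a data controller must also be deployed into the cluster before the SQL instance, and the cluster needs connectivity back to Azure; the point of the sketch is only that the target Kubernetes endpoint can live in any cloud, which is the multicloud reality JG refers to.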
Greg Altman, Swiff-Train Company & Puneet Dhawan, Dell EMC | Dell Technologies World 2020
>> Narrator: From around the globe, it's theCUBE, with digital coverage of Dell Technologies World, Digital Experience, brought to you by Dell Technologies. >> Welcome to theCUBE's coverage of Dell Technologies World 2020, the Digital Experience. I am Lisa Martin and I've got a couple of guests joining me. Please welcome Puneet Dhawan, the Director of Product Management, Hyper-converged Infrastructure for Dell Technologies. Puneet, great to see you today. >> Thank you for having me over. >> And we've got a customer that's going to be articulating all the value that Puneet's going to talk about. Please welcome Greg Altman, the IT infrastructure manager from Swiff-Train. Hey, Greg, how are you today? >> I'm doing well. Thank you. >> Excellent. All right guys. So Puneet, let's start with you, give us a little bit of an overview of your role. You lead product management for Dell Technologies' partner-aligned HCI systems. Talk to us about that? >> Sure, absolutely. Um, so, you know, it's largely about providing customers the choice. My team specifically focuses on developing hyper-converged infrastructure products for our customers that are aligned to key technologies from our partners, such as Microsoft, Nutanix, et cetera. And that, you know, falls very nicely with meeting our customers on what technology they want to pick, what technology they want to go with, whether it's VMware, Microsoft, Nutanix, we let that sort of come from the customers. >> Let's dig into Microsoft. Talk to us about Azure Stack HCI. How is Dell Tech working with them to position this in the market? >> Sure, um, this is largely about following the customer journey towards digital transformation. So both in terms of where they are in digital transformation and how they want to approach it.
So for example, we have a large customer base who's looking to modernize their legacy Hyper-V architectures, and that's where Azure Stack HCI fits in very nicely. Not only are our customers able to modernize those legacy architectures using the architectural benefits of simplicity, high performance, simple management and scalability (Greg breathes heavily) of HCI for Hyper-V, at the same time they can connect to Azure to get the benefits of the public cloud. Now on the other end, we have a large customer base who started off in Azure, you know, they have cloud native applications, you know, kind of born in the cloud. But they're also looking to bring some of the applications down to on-prem, for things like disconnected scenarios, regulatory concerns, data locality reasons. And for those customers, Microsoft and Dell have a partnership around Dell EMC Integrated Solutions for Azure Stack Hub. And that's what essentially brings the Azure ecosystem on-prem, so it's like running cloud on your own premises. >> So you mentioned a second ago giving customers choice, and we always talk about that at pretty much every event that we do. So tell me a little bit about the long-standing partnership that Dell Technologies has had with Microsoft for decades. How is that helping you to really differentiate the technology and then show the customers the different options these two companies can deliver together? >> Sure, so we've had a very long-standing partnership, actually over three decades now. Across the spectrum, whether we talk about our partnership more on the Windows 10 side and the modernization of the workforce, to the level of hybrid cloud and cloud solutions, and even helping customers, you know, run their applications on Azure, to our large services offerings. Over the past several years, we have realized how important hybrid cloud and multicloud are for customers.
And that's where we have taken our partnership to the next level, to co-develop, co-engineer and bring to market together our full portfolio of Azure Stack hybrid solutions. And that's where, as I've said, we're meeting customers where they are: either bringing Azure on-prem, or helping customers modernize their on-prem architectures using Azure Stack HCI. So, you know, there's a whole lot of core development we have done together to simplify how customers manage on-prem infrastructure on a day-to-day basis, how they install it, even how they support it. You know, we have joint support agreements with Microsoft that encompass the entirety of the portfolio, so that customers have one place to go, which is Dell Technologies, to get not only the product, either in the US or worldwide, through a very secure supply chain from Dell EMC, but at the same time all their support and consulting services, whether they're on-prem or in the cloud. We offer all those services in very close partnership with Microsoft. >> Terrific. Great. Let's switch over to you now, Greg, to talk about what Swiff-Train is doing with its Azure Stack HCI. Tell our audience a little bit about Swiff-Train, what you guys are, what you do. >> Well, Swiff-Train is a floor covering wholesaler, we sell flooring across Texas, Oklahoma, Louisiana, Arkansas, even into Florida. And we're an 80 year old company, 80 plus. And we've been moving forward with kind of hybridizing our infrastructure, making use of cloud where it makes sense. And when it came to our on-prem infrastructure, it was old, well, five, six years old, running Windows Server 2012 and 2016, and it was time to upgrade. And when we look at doing a large scale upgrade like that, we called Dell and said, you know, this is what we're trying to do, and what are the new technologies we can use that make the migration work easier. And that's where we wound up with Azure Stack.
>> So from a modernization perspective, you mentioned an 80-plus-year-old company; I was looking on the website, founded in 1937. I always like to talk to companies like that, because modernizing when you've been around for that long is challenging, it's challenging culturally, it's challenging historically. But talk to us a little bit about some of the specifics that you guys were looking to Dell and Microsoft to help modernize. And was this really to drive things like, you know, operational simplicity, and to allow the business to have more agility so that it can expand into some of those other cities, like we talked about? >> Absolutely. We were dealing with a long maintenance window, five or six hours every week, patching, updating. Since we moved to Azure Stack HCI, we have virtually zero downtime. That allows our night shifts or weekend crews to be able to keep working. And the system is just bulletproof. It just does not go down. And with the lifecycle management tools that we get with Windows Admin Center and Dell's OpenManage plug-in, I log into one pane of glass in the morning, and I look and I say, "Hey, all my servers are going great. Everything's in the green." I know that that day I'm not going to have any infrastructure issues, and I can deal with other issues that make the business money. >> And I'm sure they appreciate that. Tell us a little bit about the actual implementation and the support, as Puneet talked about all of the core development and the joint support that these two powerhouses deliver. Tell us about that implementation. And then for your day to day, what's your interaction with Dell and or Microsoft like? >> Well, for the implementation, we worked with our Dell representative. And we came up with a sizing plan. This is what we needed to do, we had eight or nine physical servers that we wanted to get rid of. And we wanted to compress down.
We definitely went from eight or nine 2U servers down to three rack units of space, including the extra switches and stuff that we had to do. So I mean, we were able to get rid of a lot of server space, or rack space. And as far as the implementation, it was really easy. Dell literally has a book, you follow the book and it's that simple. (Puneet chuckles) >> I like that. I think for more of us these days, can somebody just write a book that we can follow? That would be fantastic. One more question, Greg, for you, before we go back to Puneet. As Puneet talked about at the beginning when describing his role, you know, Dell Technologies works with a lot of other vendors. Why Azure Stack HCI for Swiff-Train? >> Well, it made sense for us. We were already moving, several of our websites were already moved to Azure, and we've been a Hyper-V user for many years. So it was just kind of a natural evolution to migrate in that direction, because it kind of pulls all of our management tools into one, well, you know, a one pane of glass type of scenario. >> Excellent. All right, Puneet, back to you, with some of the things that you talked about before, and that Greg sort of articulated, about simplifying the day-to-day. Greg, I saw in my notes that you had this old, aging infrastructure, you were spending five hours a week patching and maintaining, and that, you say, is now virtually eliminated. Puneet, Dell Technologies and Microsoft have done quite a bit of work to simplify the operational experience. Talk to us about that, and what are some of the measurable improvements that you guys have made? >> Sure. It all starts with how we approach the problem, and we have always taken a very product-centric approach with Azure Stack HCI. You know, unlike some of our competition, which has followed a reference architecture approach, where you can put Windows Server 2019 and the hyper-converged stack on your own servers, we have followed a very different approach, where we have learned quite a lot. You know, we are the number one vendor in the HCI space, and we know a thing or two about HCI and what customers really need there. So that's why, from the very beginning, we have taken a product-centric approach, and doing that allows us to have productized offers in terms of our AX nodes that are specifically designed and built for Azure Stack HCI. And on top of that, we have done very specific integration with the management stack, with Windows Admin Center, which is the new management tool from Microsoft to manage both on-prem hyper-converged infrastructure and your Windows servers, as well as any VMs that you're running on Azure, to provide customers a very seamless, you know, single pane of glass for both on-prem infrastructure as well as public cloud services. And in doing that, our customers have really appreciated how simple it is to keep their clusters running and to reduce the maintenance windows. Based on some of our internal testing that we have done, IT administrators can reduce the time they spend on maintaining the clusters by over 90%, with an over 40% reduction in the maintenance window itself. And all that leads to your clusters running in a healthy state. So you don't have to worry about pulling the right drivers and the right firmware from 10 different places, and making sure whether they are qualified or not to run together. We provide one single pane of glass that customers can click on and, you know, see whether their clusters are compliant or not, and if needed, go update. And all this has been made possible by joint engineering with Microsoft. >> Can you just describe the difference between an all-in-one validated HCI solution, which is what you're delivering, versus competitors that are only delivering a reference architecture?
>> Absolutely. So if you're running just a reference architecture, you are running an operating system stack on a server. We know that when it comes to running HCI, that means also running business critical applications in a clustered environment. You need to make sure that all the hardware, the drivers, the firmware, the hard drives, the memory configuration, the network configurations, all of that can become very complex very easily. And if you have reference architectures, there is no way to know whether the components running in your node are certified or not. How do you tell, then, if a part fails, which part to send, you know, for a replacement? If you're just running a reference architecture, you have no way to say whether the part, the hard drive that failed, or the one that was sent to the customer as a replacement, is certified for Azure Stack HCI or not. You know, how do you really make a determination of what is the right firmware that needs to be applied to a cluster, or what are the right drivers to apply to the cluster, that are compliant and tested for Azure Stack HCI? None of these things are possible if you just have a reference architecture approach. That's why we have been very clear that our approach is a product-based approach. And, you know, very frankly, that's the feedback we've provided to Microsoft too, and we've been working very, you know, closely together. And you see that now in terms of the new Azure Stack HCI that Microsoft announced at Inspire this year, which brings Microsoft into the mainstream HCI space as a product offering, and not just as a feature or a few features within the Windows Server program. >> Greg, I saw in the notes, with respect to what Swiff-Train has done with Azure Stack HCI, that you have reduced rack space by 50%, you talked about some of the rack space benefits. But you've also reduced energy consumption by 70%.
Those are big, impactful numbers, impacting not just your day-to-day but the overall business. >> That's true. >> Last question for you, Greg. For your peers watching in any industry, what are your top recommendations for going with a validated all-in-one solution? >> Well, we looked at doing the reference architecture path, if you will, because we're hands-on, we like to build things, and I looked at it, and like Puneet said, drivers and memory and making sure that everything is going to work well together. And not only that everything is going to work well together, but when something fails, then you get into the finger pointing between vendors, your storage vendor, your process vendor, and that's not something that we need to deal with. We need to keep a business running. So we went with Dell, it's one box, you know, one box per unit, and then you stack two of them together and you have a cluster. >> You make it sound so easy. >> Let us question-- >> I put together children's toys that were harder than building this stack, I promise you, and I did it in an afternoon. >> Music to my ears, Greg, thank you. (Greg giggles) >> It was that easy. >> That is gold. >> Easier to put together Azure Stack HCI than, probably, even opening the box of some children's toys, I can imagine. (all chuckling) >> We should use that as a tagline. >> Exactly. You should, I think you have a new tagline there. Greg, thank you. Puneet, one last question for you. You have a Dell Technologies World session on hybrid cloud benefits with Dell and Microsoft. Give us a flavor of what some of the things are that the audience will have a chance to learn. >> Yeah, this is a great session with Microsoft that essentially provides our customers an overview of our joint hybrid cloud solutions, both for Microsoft Azure Stack Hub and Azure Stack HCI, as well as our joint solutions on VMware in Azure. But much more importantly, we also talk about what's coming next, now especially with Microsoft Azure Stack HCI as a full blown product: a hybrid, you know, HCI offering that will be available as an Azure service. So customers could run on-prem infrastructure that is hyper-converged but managed and billed for as an Azure service, so that they always have the latest and greatest from Microsoft. And all the product differentiation we have created in terms of a product-centric approach and simpler lifecycle management will all be applicable in this new hybrid cloud solution as well. And that lays essentially a great foundation for our customers who have standardized on Hyper-V, who are much more aligned to Azure, to not worry about the infrastructure on-prem, but start taking advantage of both the modernization benefits of HCI and, much more importantly, start coupling that with the hybrid ecosystem that we are building with Microsoft, whether it's running Azure Kubernetes Service on top to modernize the new applications, or bringing Azure data services such as Azure SQL on top, so that you have a consistent, vertically aligned hybrid cloud infrastructure stack that is not only easy to manage, but is modern, is available as a pay as you go option, and is tightly integrated into Azure, so that you can manage all your on-prem as well as public cloud resources on one single pane of glass, thereby providing customers a whole lot more simplicity and operational efficiency. >> And as we said, the new tagline, delivered beautifully from Greg's mouth: easier to put together than many children's toys. Puneet, thank you so much for sharing with us what's going on with Azure Stack HCI, and what folks can expect to learn and see at the Dell Tech World virtual experience. >> Thank you. >> And Greg, thank you for sharing the story, what you're doing, helping your peers learn from you. And I'm going to say, on behalf of Dell Technologies, that awesome new tagline. That was cool. (Greg chuckles) (Lisa chuckles) >> Thank you. 'Preciate your time. >> We're going to use it for sure. (Greg chuckles) >> All right, for Puneet Dhawan and Greg Altman, I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World, the Digital Experience. (soft music)
Evan Weaver & Eric Berg, Fauna | Cloud Native Insights
(bright upbeat music) >> Announcer: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders around the globe, these are Cloud Native Insights. >> Hi, I'm Stu Miniman, the host of Cloud Native Insights. We talk about cloud native, we're talking about how customers can take advantage of the innovation and agility that's out there in the cloud. One of the undercurrents, not so hidden if you've been watching the program so far, is that we've talked a bit about serverless, something that's helping remove friction and allowing developers to take advantage of technology and definitely move really fast. So I'm really happy to welcome to the program two guests from Fauna. First of all, I have the CTO and Co-founder, who's Evan Weaver. And also joining him is the new CEO, Eric Berg. Both from Fauna, talking serverless, talking data as an API, and talking the modern database. So first of all, thank you both for joining us. >> Thanks for having us, Stu. >> Hi, good to be here. >> All right, so Evan, we're going to start with you. I love talking to founders always. If you could take us back a little bit, Fauna was a project first before it was a company, and you of course were an early employee at Twitter. So if you could just bring us back a little bit, what created the Fauna project, and bring us through a brief history if you would. >> So I was employee 15 at Twitter, I joined in 2008. And I had a database background, I was sort of a performance analyst, and worked on Ruby on Rails sites at CNET Networks with the team that went on to found GitHub, actually. Now I went to Twitter 'cause I wanted Twitter the product to stay alive. And for no greater ambition than that. And I ended up running the back end engineering team there and building out all the distributed storage for the core business objects, tweets, timelines, the social graph, image storage, the cache, that kind of thing. And this was early in the cloud era. APIs were new and weird.
You couldn't get Amazon EC2 off the shelf easily. We were racking hardware in a colocation center. And there were no databases or platforms for data of any kind that really let us, the Twitter engineering team, focus on building the product. And we did a lot of open source work there, some of which has influenced Fauna. Originally, Twitter's open source was hosted on the Fauna GitHub account, which predated Twitter, like you mentioned. And I was there for four years, built out the team, basically scaled the site, especially scaled the Twitter.com API. And we just never found a platform which was suitable for what we were trying to accomplish. Like, a lot of what Twitter did was itself a platform. We had developers all over the world using the Twitter API to interact with tweets. And we were frustrated that we basically had to become specialists in data systems because there wasn't a data API we could just build the product on. And ultimately, the data API that we wished we had is now Fauna.
It was our responsibility as a platform provider. We saw lots of successful companies being built on the API, but obviously, it was limited specifically to interacting with tweets. And we also saw peers from Twitter who went on to found companies, other people we knew in the startup scene, struggling to just get something out the door, because they had to do all this undifferentiated heavy lifting, which didn't contribute to their product at all, if they did succeed and they struggled with scalability problems and security problems and that kind of thing. And I think it's been a drag on the market overall, we're essentially, in cloud services. We're more or less built for the enterprise for mature and mid market and enterprise companies that already had resources to put behind these things, then wasn't sort of the cloud equivalent of the web, where individuals, people with fewer resources, people starting new projects, people doing more speculative work, which is what we originally and Jack was doing at Twitter, it just get going and build dynamic web applications. So I think the move to cloud kind of left this gap, which ultimately was starting to be filled with serverless, in particular, that we sort of backtracked from the productivity of the '90s with the lamp era, you can do everything on a single machine, nobody bothered you, you didn't have to pay anyone, just RPM install and you're good to go. To this Kubernetes, containers, cloud, multi site, multi region world where it's just too hard to get a basic product out the door and now serverless is sort of brought that around full circle, we see people building those products again, because the tools have probably matured. >> Well, Evan, I really appreciate you helping set the table. I think you've clearly articulated some of the big challenges we're seeing in the industry right now. Eric, I want to bring you into the conversation. 
So you relatively recently brought in as CEO, came from Okta a company that is also doing quite well. So give us if you could really the business opportunity here, serverless is not exactly the most mature market, there's a lot of interest excitement, we've been tracking it for years and see some good growth. But what brought you in and what do you see is that big opportunity. >> Yeah, absolutely, so the first thing I'll comment on is what, when I was looking for my next opportunity, what was really important is to, I think you can build some of the most interesting businesses and companies when there are significant technological shifts happening. Okta, which you mentioned, took advantage of the fact of SaaS application, really being adopted by enterprise, which back in 2009, wasn't an exactly a known thing. And similarly, when I look at Fauna, the move that Evan talked about, which is really the maturation of serverless. And therefore, that as an underpinning for a new type of applications is really just starting to take hold. And so then there creates opportunities that for a variety of different people in that stack that to build interesting businesses and obviously, the databases is an incredibly important part of that. And the other thing I've mentioned is that, a lot of people don't know this but there's a very good chunk of Okta's business, which is what they call their customer identity business, which is basically, servicing of identity is a set of API's that people can integrate into their applications. And you see a lot of enterprises using this as a part of their digital transformation effort. And so I was very familiar with that model and how prevalent, how much investment, how much aid was out there for customers, as every company becoming a software company and needing to rethink their business and build applications. 
And so you put those two trends together and you just see that serverless is going to be able to meet the needs of a lot of those companies. And as Evan mentioned, databases in general have traditionally come with a lot of complexity from an operational perspective. And so when you look at the technology and some of the problems that Fauna has solved, in terms of really removing all of that operational burden when it comes to starting with and scaling a database, not only locally but globally, it's sort of a no-brainer: everybody would love to have a database that scales, that is reliable and secure, that they don't have to manage. >> Yeah, Eric, one follow-up question for you. Think back a few years ago: you'd talk to companies and it was like, okay, the database is the center of my business, it's a big expense, I have a team that works on it. There's been so much change in the database market since then; most customers I talk to have lots of solutions out there. I'm using Mongo, I've got Snowflake, Amazon has flavors of things I'm looking at. Snowflake just filed for their IPO, so we see the growth in the space. So obviously serverless is a differentiation, but there are a couple of solutions out there, like Amazon's Aurora Serverless. How does Fauna look to differentiate? Could you give us a little bit of a comparison to the market out there? >> Sure, yeah. So at the super high level, just to clarify, there tend to be two types of databases: operational databases, and data warehouses, which Snowflake is an example of. And as you probably already know, the former CEO of Snowflake, Bob Muglia, is actually the chairman of Fauna, so we have a lot of good insight into that business. But Fauna is very much on the operational database side, the other half of that market if you will, so really focused on being the core operational store for your application.
And I think Evan mentioned it a little bit: there's been a lot of transformation. If we rewind all the way back to the early '90s, Oracle and Microsoft SQL Server were kind of the big players. And then as those architectures hit limits, when applications moved to the web, you had this whole rise of a lot of different NoSQL solutions. But those solutions sort of gave up on some of the promises of a relational database in order to achieve the ability to scale and the performance required on the web. That then required a little bit more sophistication and intelligence, in order to create logic in your application that could make up for the fact that those databases didn't actually deliver on the promises of traditional relational databases. And so, enter Fauna, and it's really a combination of those two things: providing the trust, the security and the reliability of a traditional relational database, but offering it as serverless, as we talked about, at the scale that you need for a web application. And so it's a very interesting combination of capabilities that we think, as Evan was talking about, allows people who don't have large DevOps teams, or very sophisticated developers who can code around some of the limitations of these other databases, to really be able to use a database for what they're looking for: what I write to it is what I'm going to read from it, and we maintain that commitment and make it super easy. >> Yeah, it's important to note that part of the reason the operational database, the database for mission-critical business data, has remained a cost center is because the conventional wisdom was that something like Fauna was impossible to build.
People said you literally cannot, in information science, create a global API for data which is transactional and consistent and suitable to rely on for mission-critical data: user login, banking payments, user-generated content, social graphs, internal IT data, anything that's irreplaceable. People said there can be no general service that can do this ubiquitously at global internet scale; you have to do it specifically. So it's sort of like if we had no power company, and instead you could call up Amazon and they'd drive a truck with a generator to your house and hook you up. And you're like, right on, I didn't have to install the generator myself. But it's not a good experience. It's still a pain in the neck, it's still specific to the location you're at. It's not getting utility computing from the cloud the way it's been a dream for many decades, that we'd get all our services through brokers and APIs and the web. And it's finally real with serverless. I want to emphasize that the Fauna technology is new and novel, based on and inspired by our experience at Twitter and also academic research with some of our advisors, like Dr. Daniel Abadi. It's one of the things that attracted our chairman, formerly of Snowflake, to our company: that we'd solved groundbreaking problems in information science in the cloud, just the way Snowflake had. >> Yeah, well and Evan... yeah, please go on, Eric. >> Yeah, I'm just going to add one thing to that. You mentioned MongoDB, and I think they're one of the great examples of a database company over the last decade that's been able to build a standalone business.
And if you look at it from a business model perspective, the thing that was really successful for them is they didn't go in and try to do rip-and-replace, big database migrations; they started evolving with a new class of developers and new applications that were being developed, and then rode that into a land-and-expand model into enterprises over time. And so when you think about Fauna, the business value proposition is harnessing the technological innovation that Evan talked about, and combining it with a bottoms-up, developer-first business motion that rides this technological shift into a presence in the database market over time. >> Well, Evan, I just want to go back to that "it's impossible" comment that you made. A lot of people learn about a technology and they feel that that's just the way the technology works. Serverless is obviously often misunderstood, starting from the name itself. We had a conversation with Andy Jassy, the CEO of AWS, a couple of years ago, and he said, "If I could rebuild AWS from the ground up today, it would be using all serverless." That doesn't mean only Lambda; they're rebuilding a lot of their pieces underneath with serverless. And looking at the container world, it's only in the last year or so that we're talking about people using databases with Kubernetes and containers. So what is it that allows you to have, as you said, the consistency, so we're talking about ACID there, and not worry about things like cold starts, which are something lots of people are concerned about when it comes to serverless? Help us understand a little bit about what you do and the underlying technologies that you leverage. >> Yeah, databases are always the last to evolve, because they're the riskiest to change and the hardest to build.
And basically, through the cloud era, we've done this lift and shift of existing on-premises solutions, especially databases, into cloud machines. But it's still the metaphor of the physical computer; that's the overriding unit of granularity, the mental concept. Everything, like you mentioned, containers: we had machines, then we had VMs, now we have containers, but it's still a computer. And the database goes in that one computer, in one spot, and it sits there and you've got to talk to it, wherever that is in the world, no matter how far away it is from you. And people said, well, the relational database is great. You can use locks within a single machine to make sure that you're not conflicting your data when you update it; you can have transactionality, you can have serializability. But what do you do if you want to make that experience highly available at global scale? We went through a series of evolutions as an industry: from the initial on-prem RDBMS, to things like Google's Percolator scheme, which essentially scales that up to data center scale and puts different parts of the traditional database on different physical machines on low-latency links, but otherwise doesn't change the consistency properties; then to things like Google Spanner, which relies on synchronized atomic clocks to guarantee consistency. Well, not everyone has synchronized atomic clocks just lying around. And there are also issues with noisy neighbors and tenancy, because you have to make sure that you can always read the clock in a consistent amount of time, not just have the time accurate in the first place. And Fauna is based on, inspired by, and evolved from an algorithm called Calvin, which came out of Daniel Abadi's lab at Yale.
And what Calvin does is invert the traditional database relationship and say: instead of doing a bunch of work on the disk and then figuring out which transactions won by seeing what time it is, we will create a global, predetermined order of transactions, which is arbitrary, by journaling and replicating them. And then we will essentially derive the time from the transactions which have already been committed to disk. Once we know the order, we can say which ones won and which didn't, which happened before and which happened after, and present the appearance of consistency to all possible observers. And when this paper came out, about a decade ago now I think, it was very opaque. There were a lot of hand-waving exercises left to the reader, and some scary statements about how it wasn't suitable for things that SQL in particular requires. My co-founder, Fauna's chief architect, and I met at Twitter; he worked on my team in one of the database groups. When we were building Fauna, doing our market discovery and prototyping, we knew we needed to be a global API. We knew we needed low latency and high performance at global scale. We looked at Spanner, and Spanner couldn't do it. But we found that this paper proposed a way that could, and we could see, based on our experience at Twitter, that you could overcome all these obstacles which had caused the paper to be neglected by industry. It took us quite a while to implement it at industrial quality and scale, to qualify it with analysts and others, and prove to the world that it was real. And Eric mentioned Mongo; we did a lot of work with Cassandra as well at Twitter, we were early in the Cassandra community. I wrote the first tutorial for Cassandra, before DataStax was founded. These vendors were telling people that you could not have transactionality and scale at the same time, that it was literally impossible. Then we had this incrementalism, like the things with Spanner.
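The inversion Evan describes can be sketched in a few lines. This is a toy illustration of the deterministic-ordering idea behind Calvin, not Fauna's actual implementation: transactions are first appended to an agreed log, and every replica then applies them in that pre-agreed order, so all replicas reach the same state without coordinating per transaction.

```python
# Toy sketch of Calvin-style deterministic ordering (illustrative, not Fauna's code).
# The "journal" fixes a global order up front; replicas just replay it.

def sequence(transactions):
    """Stand-in for the replicated journal: assigns each txn a global position."""
    return list(enumerate(transactions))

def apply_log(log, state):
    """Every replica runs this; identical input order -> identical final state."""
    for _position, txn in log:
        txn(state)
    return state

# Two example transactions touching the same key; their outcome depends on order,
# which is why agreeing on the order up front is enough for consistency.
def credit_10(state):
    state["balance"] = state.get("balance", 0) + 10

def double(state):
    state["balance"] = state.get("balance", 0) * 2

log = sequence([credit_10, double])
replica_a = apply_log(log, {})
replica_b = apply_log(log, {})
assert replica_a == replica_b == {"balance": 20}  # every replica agrees
```

Because the order is decided before execution, no replica ever has to ask "what time is it?" to resolve a conflict; that is the property the atomic-clock approaches buy with hardware instead.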
And it wasn't till Fauna that anyone had proved to the world that that just wasn't true. It was more marketing around their failure to solve the information science problem than something fundamental. >> Eric, I'm wondering if you're able to share, just order of magnitude, how many customers you have out there. And from a partnership standpoint, we'd like to understand a little bit how you work with or fit into the public cloud ecosystems. I noticed that Alphabet's venture fund was one of the contributors to the last raise, and obviously there's some underlying Google technology there. So if you could: customers and ecosystem. >> Yeah, so as I mentioned, we've had a very aggressive product-led, developer-focused go-to-market. And so we have tens of thousands of people now on the service, using Fauna at different levels. And now we're focused on how we continue to build that momentum. Again, going back to the model of focusing on a developer-led motion, really what we're doing there is taking everything that Evan just talked about, which is real and very differentiated in terms of the core tech in the back end, and combining that with a developer experience that makes it extremely easy to use. And really, we think that's the magic in terms of what Fauna is bringing. So we've got tens of thousands of users, and we've got more signing up every day, coming to the service. We have a generous free plan, and then they can migrate up to higher paying plans as they consume over time. And on the ecosystem side, we're aggressively playing in the broader serverless ecosystem. As Evan mentioned, the database is sometimes the last thing to change; it's also not necessarily the first thing a developer starts from when they think about building their application or their website.
And so we're plugging into the larger serverless ecosystem, where people are making their choices about their compute platform, or maybe a development platform. I know you've talked to the folks over at Netlify and Vercel, who are big in the JAMstack community and providing really great workflows for new web and application developers on those platforms. And then at the compute layer, obviously Amazon, Google, and Microsoft all have serverless compute solutions, and Cloudflare is doing some really interesting things out at the edge. So there's a variety of people up and down that stack, if you will, that we're plugging into when people are thinking about this new generation of applications, to make sure that Fauna is the default database of choice. >> Wonderful. Last question, Evan, if I could. I love it when I've got somebody with your background. With so many different technologies maturing, give us a little bit as to some of the challenges you see in the serverless ecosystem. What's being attacked, and what do we still need to work on? >> I mean, serverless is in the same place that LAMP was in, in the early '90s. We have the open ecosystem with the JAMstack players that Eric mentioned. We have closed, proprietary ecosystems like the AWS stack or the Google Firebase stack. To your point, Google has also invested in us, so they're placing their bets widely. But it's seeing the same kind of criticism that LAMP, the Linux, Apache, MySQL, PHP or Perl stack, got: it's not mature, it's a toy, no one will ever use this for real business. We can't switch from DB2 or MUMPS to MySQL; no one is doing that. But the movement and the momentum in serverless is real.
And the challenge now is for all the vendors, in collaboration with the community of developers, to mature the tools as the products and applications being built on the new, more productive stack also mature. So we have to keep ahead of our audience and make sure we start delivering, and this is partly why Eric is here, those mid-market and ultimately enterprise requirements, so that a business built on top of Fauna today can grow like Twitter did, from small to giant. >> Yeah, I'd add on to that. This is reminiscent for me of back in 2009 at Okta: we were one of the early ISVs that built on and relied 100% on AWS. At that time it was still very commonplace for people to rack and stack their own boxes and use colo, and we used to have conversations about, I wonder how long it's going to be before we exceed the cost of this AWS thing and we go and run our own data centers. And it would be laughable to even consider that today, right? No one would even think about it. And I think serverless is in a similar situation, where the consumption model is very attractive to get started, and some people sit there wondering, is it going to be too expensive as I scale? And as Evan mentioned, when you think about the innovation that we can anticipate, both technologically and economically, it's just going to be the default model, and people are going to wonder why they used to spend all this time managing these machines if they didn't have to. >> Evan and Eric, thank you so much. It's great to hear the progress that you've made; we're big supporters of the serverless ecosystem, so I'm excited to watch the progress there. Thanks so much. >> Thanks, Stu. >> Thanks for having us, Stu. >> All right, and I'm Stu Miniman. Stay tuned. Every week we are putting out the Cloud Native Insights. Thank you for watching. (bright upbeat music)
Elton Stoneman & Julie Lerman | DockerCon 2020
>> Speaker: From around the globe, it's theCUBE with digital coverage of DockerCon Live 2020, brought to you by Docker and its ecosystem partners. >> Hello, how are you doing? Welcome to DockerCon. We're kind of halfway through now, I guess. Thank you for joining us for this session. So my name is Elton, I'm a Docker Captain, and I'm joined by Julie, who is also a Docker Captain. This session was actually Julie's idea. We were talking about this learning journey with Docker, and how it's a light bulb moment for lots of people, and Julie came up with this great idea for the session. So I'll let Julie introduce herself, and tell you a bit about what we're going to talk about. >> Thanks, Elton. So I'm Julie Lerman. I'm a software coach, I'm a developer. I've been a developer for over 30 years. I work independently, and I'm a Docker Captain, also a Microsoft Regional Director. I wouldn't let them put it on there, because it makes people think I work for Microsoft, but I don't. (he laughs) >> Yeah, so it's a weird title. So the Microsoft Regional Director, it's kind of like an uber-MVP. So I'm an MVP, and that's fine; that's just a community recognition, just like you get with a Docker Captain. So MVP is kind of the micro version; Julie's an MVP too. But then you get the Regional Director, which is something that MVPs get. >> Doesn't matter. >> I'm not surprised, Julie. >> Stop, a humble man. (he laughs) >> We've been using Docker for 10 years between us. >> You probably... how long ago was your Docker aha moment? >> So 2014 is when I first started using Docker. I was working on a project, where I was consulting for a team who were building an Android tablet, and they were building the whole thing: they spec'd out the tablet, they got it built over in the Far East. They were building their own OS, their own apps to run on it, and of course all the stack behind it.
But it was all talking to services that were running in the cloud; they wanted to use Azure for that, and .NET, which was the on-prem technology historically. So I came in to do the .NET stuff that was running in Azure, but I got really friendly with the Linux guys. It was very DevOps: it was one team who did the whole thing. And they were using Docker for their build tools and for the CI tools, and they were running their own Git server, and it was all in containers.
So my aha moment, I would say was like, four years ago, after Microsoft moved SQL Server over to Linux, and then put it inside a Docker image. So that was my very first experience, just saying, oh, what does this do and I downloaded the image. And Docker run. And then like literally I was like, holy smokes. SQL Servers already installed. The containers up like that, and then it's got to run a couple of Bashan SQL scripts to get all the system tables, and databases and things like that. So that's another 15 seconds. But that was literally for me. The not really aha, it was more like OMG, and I'll keep the EFF out just to keep it clean here. It was my OMG moment with Docker. So getting that start, then I worked with the SQL Server image and container and did some different things, with that in applications. And then eventually, expanded my knowledge out bit by bit, and got a deeper understanding of it and tried more things. So I get to a comfort level and then add to it and add to it. >> Yeah. And I think that the great thing about that is that as you're going on that journey that aha moments keep coming, along we had another aha moment this week, with the new announcement that you can use your Docker compose files, and use your Docker commands to spin stuff up running in as your container instances. So like that you've kept up that learning journey is there if you want to go down, How do I take my monolithic application, and break up into pieces and run those in containers? Like suddenly the fact that you can just glue all these things together in run it on one platform, and manage everything in the same way? And these light bulbs keep on coming. So, you've seen the modernization things that people are doing that's a lot of the work that I do now, and taking these big applications, you just write a Docker file, and you've got your 15 year old .NET application running in the container. And you can run that in the cloud with no changes to code, and not see them. 
But that's super powerful for people. >> And I think one of the really important things, especially for people like you and me, who are also teachers, is to try to really remember that moment. Because I know a lot of times, when people are deeply expert in something, they forget how hard it was, or what it felt like not to understand that context. So I still have held on to that. So when I talk, I like to do introductions, I like to help people get that aha moment. And then I say, okay, now go on to the really expert people; you're ready to learn more. And it's really important not just for teachers, conference speakers, book authors, Pluralsight authors, etcetera, but for lots of other people who are working on teams: they might already be somebody who's gotten there with Docker, and they want to help their teammates understand Docker. So I think it's really important for everybody who wants to share that to have a little empathy, and remember what that was like, and understand that sometimes it just takes explaining it a different way: maybe just tweaking your expression, or some of the words, or your analogies. >> Yeah, that's definitely true. And you often find it's a technology that people really become affectionate for; they have a real deep feeling for Docker once they start using it. And you get these internal champions in companies who say, "This is the stuff I've been using, I've been using this at home" or whatever. And they want to bring it into their project, and it's pretty cool to be able to say to them: take the team on the same journey that you've been on. You've been on a journey which was probably slightly more investment for you, because you had to learn it from scratch, but now you can relay that back into your own project. You don't have to take everyone from scratch like you did. You can say, here's the Dockerfile for our own application; this is how it works.
And bringing things into the terms that people are using every day, I think, is something that's super powerful. Why... because you're completely strange. (he laughs) >> Oh, I was being really cool about your video. (both laugh) Maybe it's just how it's streaming back to me. I think the teacher thing again: we'll work a little harder and bump our knees and stub our toes, or tear our hair out, or whatever pain we have to go through with that learning, because it's also kind of obsessive. And you can steer people away from those things, although it's also helpful to let them be aware: this might happen, and if it does, it's because of this. But that's not the happy path. >> Yeah, absolutely. And I think it's really interesting, talking to people about what problem they're trying to solve. It's interesting, you talk about DevOps there, and how that's sort of not an area that you've done a lot of stuff in. Working with a couple of organizations where they're really trying hard to move to that model, and trying to break down the barriers between the team who build the software and the team who run the software: they have those barriers from 20 years of working that way, and it's really hard to wear that stuff down. It's a big cultural shift, and it needs a lot of investment. But if you can make a technological change as well, if you can get people using the same tools, the same languages, the same processes to do things, that makes it so much easier. Like now my operators are using Dockerfiles, and the security team are going into the Dockerfile and auditing it, or the DevOps team are building up my Compose file, and everyone's using the same thing. It really helps a lot to bind people together to work on the same area.
>> I also do a lot of work in domain-driven design, and that whole idea of collaboration, and bringing together teams that don't normally work together, and enabling them to find a way to collaborate, giving them tools for collaboration, just like what you're saying with having the same terms and using the same tools. So that's really powerful. You gave me a great example of one of your clients' aha moments with Docker. Do you remember which that was? The money one, yes, it's a very powerful aha. >> Yes. >> You cherish that. >> The company that I'd worked for before, when I was still doing consultancy work, and they knew I'd gone into containers; I was working for Docker at the time. And I went in, it wasn't a sales pitch or anything, I was just doing them a favor to talk about what containers would look like for their operation: big, heavy Windows users, a huge number of environments, lots of VMs that are all running stuff to get the isolation and give them what they needed. And I did this presentation for IT. So it wasn't a technical thing. It was very high level; it was about how containers kind of work. And I'm fundamentally a technical person, so I probably had more detail in there than you would get from a sales pitch, but it was very much about: you can take your applications, you can wrap them up and run these things in containers, you still get the isolation, you can run loads more of them on the same hardware that you've got, and you don't pay a Windows license for each of those containers; you pay a license for the server that they're running on. >> That's it, that's the moment. >> And the head of IT said, that's going to save us millions of dollars. (he laughs) And that was his aha moment. >> I'm going to wrap that into my conference session about getting to the Docker aha moment. My experience is less that, but wow, I mean, that's so powerful.
When you're talking to C-level people about making those kinds of changes, you need to have their buy-in. So as a developer, and somebody who works with developers, that's kind of my audience, my experience has been more from giving conference presentations. I'll start out in a room of people, and I have to say, when I'm at a .NET-focused conference, I find that the "not there yet with Docker" part of the audience is a big one. So I kind of do a poll at the beginning of the talk: who's heard of Docker (obviously, they're in the room), but is curious because you still don't really understand it? And that's usually the bulk of the room. And what I like to ask at the end is, of all of you in that first group, do you feel like you get it now? Like you just get what it is and what it does, as opposed to, I don't know what this thing is, it's for rocket scientists. That's how I felt about it. I was like, I'm just a developer; it wasn't my thing. But now, I'm still not doing DevOps. I use Docker as a really important tool during development and test, and that's actually what I'm going to be talking about in my session a little later... oh, like the next hour. It's about using Docker, my aha Docker, SQL Server in an image, but using that in development. It's not about the DevOps and the CI/CD and Kubernetes. I can spell it. (he laughs) Especially when I get to say k8s; like, I even know the cool lingo (mumbles) on Twitter. (he laughs)
And if you really want to take advantage of that, you kind of have to get down to the principles, understand them all, and go on a proper kind of learning journey. But if you don't want to do that, you can kind of stop wherever it makes sense for you. So even when I'm talking to different audiences, there's a lot of that. Strangely enough, I did a session earlier this morning, and it was quite a specific topic: building applications in containers. So it's about using containers to compile your app and then package it, so you can build anywhere. But even in a session like that, for the first maybe two minutes I give a lightning-quick overview of what containers are and how you use them. Because it's exactly like you say, people will come to a session if it's got Docker or Kubernetes in the title, but if they don't have the entry requirements, if they've never really used this stuff and we start way up here, it's a big jump for them. So I try and always have that introductory slide. >> I had to do that on the fly. >> Sorry? >> I've done that on the fly at a conference, because, yes, I was doing, like, ASP.NET Core with Entity Framework and containers, and 80% of the room really didn't know anything about Docker. So instead of talking for five minutes about Docker and then demoing the rest, I ended up spending more time talking about Docker, to make sure everybody was really with it. You could tell the difference when they were like, oh, when they understood enough to be able to follow along and understand the value of what I was there to show. This is also making me remember the first time I actually used Docker Compose, because for quite a while I was just using the SQL Server Docker image on my development machine. And because I wasn't deploying, I was learning and exploring, and I was on my development machine, so I didn't need to do anything else. So the first time I really started orchestrating, that was yet another aha moment.
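The "compile your app in a container, then package it" pattern from Elton's morning session is what Docker calls a multi-stage build. Here is a hedged sketch; the image tags and the project name are illustrative assumptions rather than anything shown in the session.

```dockerfile
# Hypothetical multi-stage build: compile in one container, package in another.
# Image tags and the MyApp project name are assumptions for illustration.

# Stage 1: build the app inside an SDK image, so no toolchain is needed on the host.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /out

# Stage 2: copy only the compiled output into a small runtime-only image.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

Anyone with Docker installed can run `docker build` against this and get the same artifact, which is the "build anywhere" point being made.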
But I was ready for it then. I think, you know, if you start with Docker Compose and you haven't done the other parts first, maybe it wouldn't land the same way, but I was ready, because I'd already gotten used to using the tooling and really understanding what was going on with containers. Then Docker Compose was like, yeah. (he laughs) >> It's just the next one in the line. There's a great comment actually in the chat, from someone in the chat. >> From chat? >> Yeah, from Steve, saying that he could see there would be an aha moment for him about security. And actually, that's absolutely right. When security people first want to get their heads around containers, they get worried that if someone can compromise the app in the container, they might get a breakout and get to all the other containers. And suddenly, instead of having one VM compromised, you have 100 containers compromised. But actually, when you dig into it, it's so much easier to get this kind of defense in depth when you're building in containers, because you base it on an image that's owned by the team who produced the platform, and then teams will have their own images that are built with best practices. You can sign your images, so your platform doesn't run anything that isn't signed, and you have a full history, so exactly what's in the source code is what's in production. There are all sorts of ways you can layer on security that attract that side of the audience. >> I've been looking at you this whole time, and I forgot about the live chat. There's the live chat. (he laughs) There's Scott Johnston in the live chat. >> Yes. >> People talking about Kubernetes and Swarm. I'm scrolling through quickly to see if anybody's saying, well, my aha moment was... >> There was a good one from Fatima earlier on, pointing out deploying with almost no configuration onto a VM, and she couldn't believe it, never looked back. >> Yeah.
>> That's exactly it. One command, and if your image is mostly built and has some sensible defaults, it just all works. And everyone's (mumbles). >> Yeah, and the thing that I'm doing in my session is what I love: the fact that for the development team, development, testing, everybody on the team, and then again on up the pipeline to CI/CD, it's just a matter of, not only do you have your source code, but in your source code you've got your Docker Compose file, and your Docker Compose file just makes sure that you have the development environment that you need. All the frameworks, everything that you need is just there, without having to go out and find it and install it. >> There's no gap between the development environment, the CI build, and production. So I'm hearing, well, you don't hear it, but I can hear that we need to wrap up. >> Oh, yeah. >> Get yourself prepared for your next session, which everyone should definitely watch, and I'll be watching along with everyone else. So thanks, everyone, for joining. Thanks, Julie, for a great idea for a conversation. I wish we could all have a beer together. >> Yeah, we live many thousands of miles away from one another. >> Well, hopefully next year there will be a different format, and we can all meet. >> And I do need to point out, the last time we were together, Elton, I got a copy of your book and you signed it. (both laugh) And we took a picture of it. >> There are still more books on the stand. >> Yeah, I know that's an old book, but it's the one that you signed. Thank you so much. >> Thanks, everyone, for joining, and enjoy the rest of the conference at home. >> Bye. (soft music)
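The Compose file Julie describes keeping in source control alongside the code is typically just a few lines. This is a hedged sketch of a dev setup with SQL Server; the image tag, password, and ports are illustrative placeholders, not details from the session.

```yaml
# Hypothetical docker-compose.yml kept in source control for local development.
# Image tag, password, and port mappings are placeholders for illustration.
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Dev_Only_Passw0rd!"   # dev-only placeholder, never for production
    ports:
      - "1433:1433"
  web:
    build: .
    depends_on:
      - db
    ports:
      - "8080:80"
```

A single `docker-compose up` gives every developer, and the CI build, the same environment with nothing installed on the machine itself.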
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
Julie | PERSON | 0.99+ |
Michelle Noorali | PERSON | 0.99+ |
Scott Johnston | PERSON | 0.99+ |
Julie Lerman | PERSON | 0.99+ |
Elton | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Alan | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
two minutes | QUANTITY | 0.99+ |
80% | QUANTITY | 0.99+ |
100 containers | QUANTITY | 0.99+ |
Docker | TITLE | 0.99+ |
android | TITLE | 0.99+ |
five minutes | QUANTITY | 0.99+ |
Docker | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
15 seconds | QUANTITY | 0.99+ |
one platform | QUANTITY | 0.99+ |
over 30 years | QUANTITY | 0.99+ |
DockerCon | EVENT | 0.99+ |
SQL Servers | TITLE | 0.99+ |
SQL Server | TITLE | 0.99+ |
first experience | QUANTITY | 0.98+ |
four years ago | DATE | 0.98+ |
SQL | TITLE | 0.98+ |
Linux | TITLE | 0.98+ |
Windows | TITLE | 0.98+ |
both | QUANTITY | 0.97+ |
first | QUANTITY | 0.96+ |
first time | QUANTITY | 0.96+ |
Android | TITLE | 0.96+ |
both laughs | QUANTITY | 0.96+ |
this week | DATE | 0.96+ |
one | QUANTITY | 0.95+ |
DockerCon Live 2020 | EVENT | 0.95+ |
Elton Stoneman | PERSON | 0.94+ |
each | QUANTITY | 0.94+ |
.NET | TITLE | 0.94+ |
ORGANIZATION | 0.93+ | |
Uber | ORGANIZATION | 0.93+ |
Kubernetes | TITLE | 0.93+ |
one team | QUANTITY | 0.93+ |
Docker compose | TITLE | 0.93+ |
Entity Framework | TITLE | 0.91+ |
millions of dollars | QUANTITY | 0.91+ |
thousands of miles | QUANTITY | 0.9+ |
first group | QUANTITY | 0.86+ |
years 10 years | QUANTITY | 0.85+ |
one command | QUANTITY | 0.81+ |
Ted Kummert, UiPath | The Release Show: Post Event Analysis
>> Narrator: From around the globe, it's theCUBE! With digital coverage of UiPath Live: The Release Show. Brought to you by UiPath. >> Hi everybody, this is Dave Vellante, welcome back to our RPA Drill Down. Ted Kummert is here, he is Executive Vice President for Products and Engineering at UiPath. Ted, thanks for coming on, great to see you. >> Dave, it's great to be here, thanks so much. >> Dave, your background is pretty interesting. You started as a Silicon Valley engineer, they pulled you out, you did a huge stint at Microsoft. You got experience in SaaS, you've got VC chops with Madrona. And at Microsoft you saw it all, the NT and CE space, workflow, even MSN, you did stuff with MSN, and then the all-important data. So I'm interested in what attracted you to UiPath. >> Yeah Dave, I feel super fortunate to have worked in the industry in this span of time, it's been an amazing journey, and I had a great run at Microsoft, it was fantastic. You mentioned one experience in the middle there: when I first went to the server business, the enterprise business, I owned our integration and workflow products, and I would say that's when I first encountered this idea. Often in the software industry there are ideas that have been around for a long time, and what we're doing is refining how we're delivering them. And we had ideas we talked about in terms of business process management, business activity monitoring, workflow. The ways to efficiently enable somebody to express a business process in a piece of software. Bring systems together, make everybody productive, bring humans into it. These were the ideas we talked about. Now in reality there were some real gaps, because what happened in the technology was pretty different from what the actual business process was. And so let's fast-forward: I met Madrona Venture Group, a Seattle-based venture capital firm. We actually made a decision to participate in one of UiPath's fundraising rounds.
And that's when I first really became acquainted with the company, and got to have more than an intellectual understanding of RPA. 'Cause when I first saw it, I said, "Oh, I think that's desktop automation." I didn't look very close, maybe that's going to run out of runway, whatever. And then I got more acquainted with it and figured out, oh, there's a much bigger idea here. And the power is that by really considering the process and the implementation from where the humans work, you have an opportunity to automate the real work. Not that what we were doing before wasn't significant, it's just that this is that much more powerful. And that's when I got really excited. And then the company's statistics and growth and everything else just speak for themselves. In terms of an opportunity to work on, I believe, one of the most significant platforms going in the enterprise today, and at one of the fastest-growing companies around, it was almost an automatic decision to come to the company. >> Well, you know, you bring up a good point. You think about software historically through our industry, a lot of it was, okay, here's this software, now figure out how to map your processes to make it all work. And today the processes, especially when you think about this pandemic, the processes are unknown. And so the software really has to be adaptable. So I'm wondering, and essentially we're talking about a fundamental shift in the way we work, is there really a fundamental shift going on in how we write software, and how would you describe that? >> Well, there certainly is, and in a way that's the job of what we do when we build platforms for enterprises: try and give our customers a new way to get work done that's more efficient and helps them build more powerful applications. And that's exactly what RPA does. On the efficiency, it's not that this is the only way in software to express a lot of this, it just happens to be the quickest, in most ways.
Especially as you start thinking about initiatives like our StudioX product, and what we talk about as enabling citizen developers. It's an expression that allows customers to do what they could have done otherwise, just much more quickly and efficiently. And the value of that is always high; certainly in an unknown era like this, it's even more valuable. There are specific processes we've been helping automate in healthcare and in financial services, with things like SBA loan processing, that we weren't thinking about six months ago, or they weren't thinking about six months ago. We're all thinking about how we're reinventing the way we work, as individuals and corporations, because of what's going on with the coronavirus crisis. Having a platform like this, that gives you agility in mapping the real work to what your computer estate and applications all know how to do, is even more valuable in a climate like that. >> What attracted us originally to UiPath, we knew Bobby Patrick, the CMO, and he said, "Dave, go download a copy, go build some automations and go try it with some other companies." So that really struck us as, wow, this is actually quite simple. Yet at the same time, you've of course been automating all these simple tasks, but now you've got real aspirations. You're glomming on to this term of Hyperautomation, you've made some acquisitions, you've got a vision that really has taken you beyond 'paving the cow path,' as I sometimes say, of all these existing processes. It's really trying to discover new processes and opportunities for automation, which, you would think after 50 or whatever years we've been in this industry, we'd have attacked a lot of it, but wow, it seems like we have a long way to go, again, especially with what we're learning through this pandemic. Your thoughts on that? >> Yeah, I'd say Hyperautomation, it's actually a Gartner term, it's not our term. But there is a bigger idea here, built around the core automation platform.
So let's talk for a second about what's in the core platform, and then what Hyperautomation really means around that. And I think of that as the bookends: how do I discover and plan, how do I improve my ability to do more automations and find the real opportunities that I have, and then how do I measure and optimize? And that's a lot of what we delivered in 20.4 as new capability. So let's talk about discover and plan. One aspect of that is the wisdom of the crowd. We have a product we call Automation Hub that is all about that: enabling the people who have ideas, they're the ones doing the work, they have the insight into what the efficiencies can be, to either capture and document those ideas with our Task Capture utility, or just directly document them. And then people across the company can collaborate, eventually moving on to building the best ideas out of that. So there's capturing the crowd, and then there's a more scientific way of capturing what the opportunities actually are. So we've got two products we introduced. One is Process Mining, and Process Mining is about going outside-in from, let's call it the larger processes, the more end-to-end processes in the enterprise, things like order-to-cash and procure-to-pay, helping you understand, by watching the events and doing the analytics around that, where your bottlenecks and your opportunities are. And then Task Mining says, let's watch an individual, or a group of individuals, and what their tasks are. Let's watch the log of events there, apply some machine learning processing to that, and say, here are the repetitive things we've found, really helping you scientifically discover what your opportunities are. And these ideas have been around for a long time; process mining is not new. But the connection to an automation platform, we think, is a new and powerful idea, and something we plan to invest a lot in going forward. So that's the first bookend.
And then the second bookend is really about attaching rich analytics. So how do I measure it? There's, operationally, how are my robots doing, and then there's everything down to return on investment: how do I understand how they're performing versus what I would have spent if I was continuing to do things the old way? >> Yeah, that's big, 'cause (laughing) the hero reports for the executives say, "Hey, this is actually working." But at the same time you've got to take a systems view. You don't want to just optimize one part of the system to the detriment of others. So you talk about process mining, which is kind of discovering the backend systems, ERP and the like, where the task mining, it sounds like, is more the collaboration and front end. So that whole systems thinking really applies, doesn't it? >> Yeah, very much so. Another part of what we talked about, then, in the system is, how do we capture the ideas, and how do we enable more people to build these automations? And that really gets down to, as we talk about it in our company-level vision, a robot for every person. Every person should have a digital assistant. It can help you with things you do less frequently, it can help you with things you do all the time to do your job. And how do we help you create those? We've released a new tool we call StudioX. So for our RPA developers we have Studio, and StudioX is really trying to enable the citizen developer. It's not unlike the arc that we saw in business intelligence: there was the era where analytics and reporting were the domain of experts, and they produced formalized reports that people could consume. But the people that had the questions would have to work with them, and couldn't do the work themselves. And then along come QlikView and Tableau and Power BI, enabling the self-service model, and all of a sudden people could do that work themselves, and that enabled powerful things.
We think the same arc happens here, and StudioX is really our way of enabling the citizen developer with the ideas to get some automation work done on their own. >> You've got a lot in this announcement: things like document understanding, bring-your-own-AI with AI Fabric. How are you able to launch so many products and have them fit together? You've made some acquisitions. Can you talk about the architecture that enables you to do that? >> Yeah, clearly in terms of ambition, and I've been there for 10 weeks, but in terms of ambition you don't have to have been there when they started the release, after Forward III in October, to know that this is the most ambitious thing this company has ever done from a release perspective. Just in terms of the surface area we're now delivering across as an organization, it's substantive. We're talking about 1,000 feature improvements, hundreds of discrete features, new products, as well as our Automation Cloud now becoming generally available. So we've had muscle-building over this past period to become world-class at offering SaaS, in addition to on-premises. And then we've got this big surface area, and architecture is a key component of how you can do this. How do you deliver the same software efficiently on-premises and in the cloud? You do that by having the right architecture and making the right bets. And certainly, if you look at how companies are doing this today, it's really all about a cloud-native platform. But it's about an architecture such that we can do that efficiently. So there's a lot about just your technical strategy. And then it's just about a ton of discipline and customer focus, which keeps you focused on the right things. StudioX was a great example where we were led by customers through a lot of what we actually delivered; a couple of the major features in it, certainly the out-of-box templates and the studio governance features, came out of customer suggestions.
I think we had about 100 customer suggestions sitting in the backlog, a lot of which we've already done, and we were really disciplined and really focused on what customers were telling us. So: make sure you have the right technical strategy and architecture, really follow your customers, and stay disciplined and focused on what matters most as you execute on the release. >> What can we learn from previous examples? I think about, for instance, SQL Server, which you obviously have some knowledge of. It started out with pretty simple workloads, and at the time we all said, "Wow, it's a lot more powerful to come from below than it is for a Db2 or an Oracle to sort of go down-market." Microsoft proved that, and obviously built in the robustness necessary. Is there a similar metaphor here with regard to things like governance and security, just in terms of where UiPath started and where you see it going? >> Well, I think the similarities have more to do with the fact that we have an idea of a bigger platform that we're now delivering against. In the database market, SQL Server started out as more of just a transactional database product, and ultimately grew to all of the workloads in the data platform: transactional apps, data warehousing, as well as business intelligence. I see the same analogy here, of thinking more broadly about the needs, and what an integrated platform can do to enable great things for customers. I think that's a very consistent thing. And I think another consistent thing is: know who you are. SQL Server knew exactly who it had to be when it entered the database market, that it was going to set a new benchmark on simplicity and TCO, and that was going to be the way it differentiated. In this case, we're out ahead of the market. We have a vision that's broader than a lot of the market today. I think we see a lot of people coming into this space, but we see them building to where we were, and we're out ahead.
So we are operating from a leadership position, and I'm not going to tell you one is easier than the other; in both you have to execute with great urgency. But we're really executing out ahead, so we've got to keep thinking, and there are no tail lights to follow. We have to be the ones really blazing the trail on what all of this means. >> I want to ask you about this incorporation of existing systems. Some markets, they take off, it's kind of a one-shot deal, and the market just embeds. I think you guys have bigger aspirations than that. I look at it like a ServiceNow, misunderstood early on, built the platform, and now it's really a fundamental part of a lot of enterprises. I also look at things like EDW, which, again, you have some experience in; in my view it failed to live up to a lot of its promises, even though it delivered a lot of value. You look at some of the big data initiatives, you know, EDW still plugs in, it's the system of record, okay, that's fine. How do you see RPA evolving? Do we have to embrace existing business process systems, or is this largely a do-over, in your opinion? >> Well, I think it's certainly about a new way of building automation, and it's starting to incorporate and include the other ways. For instance, in the current release we added support for long-running workflow, which is about human-workflow-based scenarios: now the human is collaborating with the robot, and we built those capabilities. So I do see us combining some of the old and new ways. I think one of the most significant things here is also the impact that AI- and ML-based technologies and skills can have on the power of the automations that we deliver.
We've certainly got a surface area there. I think about our AI and ML strategy in two parts: we are building first-class, first-party skills that we include in the platform, and we're building a platform for third parties and customers, so that what their data science teams have delivered can also be part of our ecosystem and part of automations. And so, things like document understanding: how do I easily and accurately extract data from structured, semi-structured, and completely unstructured documents, and include that in my automations? Computer vision, which gives us the ability to automate at a UI level across other types of systems than, say, a Windows or browser-based application. And task mining is built on a very robust, multi-layer ML system, and I think the innovation opportunities there just continue. If you think about it at a macro level, there are aspects of machine learning that are about captured human knowledge; well, what exactly is an automation, other than something where you're capturing a lot of human knowledge? The impact of ML and AI is going to be significant going out into the future. >> Yeah, I want to ask you about that. I think a lot of people are just afraid of AI as a separate thing they have to figure out how to operationalize. And I think companies like UiPath are really in a position to embed AI into applications everywhere, so that maybe those folks that haven't climbed on the digital bandwagon, who with this pandemic are now realizing, "Wow, we'd better accelerate this," can actually tap machine intelligence through your products and others as well. Your thoughts on that narrative? >> Yeah, I agree with that point of view. AI and ML are still a maturing discipline across the industry.
And you have to build new muscle, muscle in data science, and it forces you to think about data, and how you manage your data, in a different way. And that's a journey we've been on as a company, to not only build our first-party skills, but also to build the platform. It's what's given us the knowledge to figure out what we need to include so our customers can bring their own skills to our platform. And I do think this is a place where we're going to see the real impact of AI and ML in a broader way, based on the kinds of apps and the kinds of skills we can bring to bear. >> Okay, last question. You're ten weeks in; when you're 50, 100, 200 weeks in, what should we be watching, what do you want to have accomplished? >> Well, we're listening, we're obviously listening closely to our customers. Right now we're still having a great week, 'cause there's nothing like shipping new software, so right now we're actually thinking deeply about where we're headed next. We see lots of opportunities in a robot for every person, and in that initiative we've launched a bunch of important new capabilities, and we're going to keep working with the market to understand how we can add additional capability there. We've just had the GA of our Automation Cloud, and I think you should expect more and more services in our Automation Cloud going forward. In this area we talked about, AI and ML and those technologies, I think you should expect more investment and innovation from us and the community, helping our customers. And as we talked about this convergence of the ways we bring systems together, through integration and business process, I think you'll see a convergence of more of those methods into the platform.
I look ahead to the next releases, and I want to see us making some very significant releases that advance all of those things, and continue our leadership in what we now talk about as the Hyperautomation platform. >> Well, Ted, lots of innovation opportunities, and of course everybody's hopping on the automation bandwagon, everybody's going to want a piece of your RPA hide, and you're in the lead. We're really excited for you, and we're excited to have you on theCUBE, so thanks very much for your time and your insight. Really appreciate it. >> Yeah, thanks Dave, great to spend this time with you. >> All right, thank you for watching everybody, this is Dave Vellante for theCUBE and our RPA Drill Down series. Keep it right there, we'll be right back right after this short break. (calming instrumental music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Ted Kummert | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Valenti | PERSON | 0.99+ |
Dave Velanti | PERSON | 0.99+ |
10 weeks | QUANTITY | 0.99+ |
Ted | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Madrona Venture Group | ORGANIZATION | 0.99+ |
ten weeks | QUANTITY | 0.99+ |
100 | QUANTITY | 0.99+ |
October | DATE | 0.99+ |
UiPath | ORGANIZATION | 0.99+ |
MSN | ORGANIZATION | 0.99+ |
Seattle | LOCATION | 0.99+ |
SQL Server | TITLE | 0.99+ |
50 | QUANTITY | 0.99+ |
SQL Server | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
first bookend | QUANTITY | 0.99+ |
two parts | QUANTITY | 0.98+ |
Madrona | ORGANIZATION | 0.98+ |
Venture Capital Firm | ORGANIZATION | 0.98+ |
second bookend | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
200 weeks | QUANTITY | 0.98+ |
SQL | TITLE | 0.98+ |
One | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
two products | QUANTITY | 0.98+ |
Tableau | TITLE | 0.98+ |
Oracle | ORGANIZATION | 0.97+ |
one experience | QUANTITY | 0.97+ |
Power BI | TITLE | 0.97+ |
about 100 | QUANTITY | 0.97+ |
Windows | TITLE | 0.96+ |
EDW | ORGANIZATION | 0.96+ |
Gartner | ORGANIZATION | 0.96+ |
ClickView | TITLE | 0.95+ |
CE Space | ORGANIZATION | 0.94+ |
one part | QUANTITY | 0.94+ |
100's | QUANTITY | 0.94+ |
Executive Vice President | PERSON | 0.92+ |
six months ago | DATE | 0.92+ |
Forward III | TITLE | 0.91+ |
coronavirus crisis | EVENT | 0.91+ |
first party | QUANTITY | 0.91+ |
SAS | ORGANIZATION | 0.86+ |
One aspect | QUANTITY | 0.86+ |
UiPath | PERSON | 0.86+ |
Bobby Patrick CMO | PERSON | 0.83+ |
one shot | QUANTITY | 0.83+ |
20.4 | QUANTITY | 0.81+ |
StudioX | TITLE | 0.81+ |
Workflow | ORGANIZATION | 0.8+ |
first class | QUANTITY | 0.79+ |
StudioX | ORGANIZATION | 0.79+ |
Hub | ORGANIZATION | 0.78+ |
theCUBE | ORGANIZATION | 0.78+ |
Hyperautomation | ORGANIZATION | 0.77+ |
UiPath Live | TITLE | 0.77+ |
about 1,000 feature improvements | QUANTITY | 0.74+ |
about six months ago | DATE | 0.73+ |
pandemic | EVENT | 0.7+ |
second | QUANTITY | 0.66+ |
Studio | TITLE | 0.66+ |
NT | ORGANIZATION | 0.65+ |
SBA | ORGANIZATION | 0.61+ |
Silicon Valley | LOCATION | 0.55+ |
Dave Brown, Amazon | AWS Summit Online 2020
>> Narrator: From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Everyone, welcome to theCUBE's special coverage of the AWS Summit: San Francisco, North America, all over the world, and most parts of Asia Pacific. AWS Summit is the hashtag. This is part of theCUBE Virtual Program, where we're going to be covering Amazon Summits throughout the year. I'm John Furrier, host of theCUBE. And of course, we're not at the events; we're here in the Palo Alto studios, with our COVID-19 quarantine crew. And we've got a great guest here from AWS, Dave Brown, Vice President of EC2, who leads the team on elastic compute and its business: where it's evolving and, most importantly, what it means for customers and the industry. Dave, thanks for spending the time to come on theCUBE virtual program.
And the Migration Acceleration Program for Windows, which has a storied history. For many, many years, you guys have been powering most of the Windows workloads, ironic that you guys are not Microsoft, but certainly had success there. Let's start with AppFlow. Okay, this was recently announced on the 22nd of April. This is a new service. Can you take us through why this is important? What is the service? Why now, what was the main driver behind AppFlow? >> Yeah, absolutely. So with the launch of AppFlow, what we're really trying to do is make it easy for organizations and enterprises to really control the flow of their data, between the number of different applications that they use on premise, and AWS. And so the problem we started to see was, enterprises just had this data all over the place, and they wanted to do something useful with it. Right, we see many organizations running Data Lakes, large scale analytics, big machine learning on AWS, but before you can do all of that, you have to have access to the data. And if that data is sitting in an application, either on-premise or elsewhere in AWS, it's very difficult to get out of that application, and into S3, or Redshift, or one of those services, before you can manipulate it, that was the challenge. And so the journey kind of started a few years ago, when we actually launched a service on the EC2 networking side, called Private Link. And it was really, it provided organizations with a very secure way to transfer network data, both between VPCs, and also between VPC and on-prem networks. And what this highlighted to us, is organizations saying, that's great, but I actually don't have the technical ability, or the team, to actually do the work that's required to transform the data from, whether it's Salesforce, or SAP, and actually move it over Private Link to AWS.
And so we realized, while Private Link was useful, we needed another layer of service that actually provided this, and one of the key requirements was an organization must be able to do this with no code at all. So basically, no developer required. And I want to be able to transfer data from Salesforce, my Salesforce database, and put that in Redshift together with some other data, and then perform some function on that. And so that's what AppFlow is all about. And so we came up with the idea a little bit more than a year ago, that was the first time I sat down, and actually reviewed the content for what this was going to be. And the team's been hard at work, and launched on the 22nd of April. And we actually launched with 14 partners as well, that provide what we call connectors, which allow us to access these various services, and companies like Salesforce and ServiceNow, Slack, Snowflake, to name a few. >> Well, certainly you guys have a great ecosystem of SaaS partners, and that's, you know, well documented in the industry, that you guys are not going to be competing directly with a lot of these big SaaS players, although you do have a few services for customers who want end to end, Jassy continues to pound that home on my CUBE interviews. But I think this, >> Absolutely. is notable, and I want to get your thoughts on this, because this seems to be the key unlocking of the value of SaaS and Cloud, because with data traversal, data transfer, there's costs involved, and also moving traffic over the internet is insecure, and unreliable. So a couple questions I wanted to just ask you directly. One is, did AppFlow come out of the AWS Private Link piece of it? And two, is it one-directional or bi-directional? How is that working? Because I'm guessing that Private Link became successful because no one wants to move on the internet. They wanted direct connects. Was there something inadequate about that service? Was there more headroom there?
And is it bi-directional for the customer? >> So let me take the second one: it's absolutely bi-directional. So you can transfer that data between an on-premise application and AWS, or AWS and the on-premise application. Really, anything that has a connector can support the data flow in both directions. And with transformations: data in one data source may need to be transformed before it's actually useful in a second data source. And so AppFlow takes care of all that transformation as well, in both directions. And again, with no requirement for any code on behalf of the customer. Which really unlocks it for a lot of the more business-focused parts of an organization, who maybe don't have immediate access to developers. They can use it immediately, just literally with a few transformations via the console, and it's working for you. In terms of, you mentioned sort of the flow of data over the internet, and the need for security of data. It's critically important, and you can look at just what we do as a company: we have very, very strict requirements around the flow of data, and what services we can use internally, and where's any of our data going to be going. And I think it's a good example of how many enterprises are thinking about data today. They don't even want to trust even HTTPS, and encryption of data on the internet. I'd rather just be in a world where my data never ever traverses the internet, and I just never have to deal with that. And so, the journey all started with Private Link there, and it was an interesting feature, 'cause it really was changing the way that we asked our customers to think about networking. Nothing like Private Link has ever existed in the sort of standard networking that an enterprise would normally have. It's kind of only possible because of what VPC allows you to do, and what the software-defined network on AWS gives you. And so we built Private Link, and as I said, customers started to adopt it.
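To make the PrivateLink discussion concrete, here is a minimal sketch of how an interface endpoint might be requested with boto3. The function only assembles the request arguments, so it runs without an AWS account; every ID and the service name below are hypothetical placeholders, not values from the conversation.

```python
# Sketch: build the request for an Interface (PrivateLink) VPC endpoint,
# so traffic to a partner service never traverses the public internet.
# All IDs and the service name are hypothetical placeholders.

def private_link_endpoint_request(vpc_id, service_name, subnet_ids, security_group_ids):
    """Return kwargs for boto3's ec2.create_vpc_endpoint call."""
    return {
        "VpcEndpointType": "Interface",   # PrivateLink endpoints are Interface endpoints
        "VpcId": vpc_id,
        "ServiceName": service_name,      # the endpoint service exposed by the provider
        "SubnetIds": subnet_ids,          # one subnet per AZ for availability
        "SecurityGroupIds": security_group_ids,
        "PrivateDnsEnabled": True,        # resolve the service's DNS name to private IPs
    }

request = private_link_endpoint_request(
    vpc_id="vpc-0123456789abcdef0",
    service_name="com.amazonaws.vpce.us-east-1.vpce-svc-0example",
    subnet_ids=["subnet-aaaa1111", "subnet-bbbb2222"],
    security_group_ids=["sg-cccc3333"],
)
# In a real account you would then call:
#   boto3.client("ec2").create_vpc_endpoint(**request)
```

The endpoint then gives the consumer VPC private IPs for the provider's service, which is the "data never traverses the internet" property Dave describes.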
They loved the idea of being able to transfer data, either between VPCs, or between on-premise. Or between their own VPC and maybe a third-party provider; Snowflake, for example, has been a very big adopter of Private Link, and they have many customers using it to get access to Snowflake databases in a very secure way. And so that's where it all started, and in those discussions with customers, we started to see that they wanted us to up-level a little bit. They said, "We can use Private Link, it's great, "but one of the problems we have is just the flow of data." And how do we move data in a very secure, in a highly available way, with no sort of bottlenecks in the system. And so we thought Private Link was a great sort of underlying technology that empowered all of this, but we had to build the system on top of that, which is AppFlow. That says, we're going to take care of all the complexity. And then we had to go to the ecosystem, and say to all these providers, "Can you guys build connectors?" 'Cause everybody realized it's super important that data can be shared, and so that organizations can really extract the value from that data. And so the 14 of them at launch, and we have many, many more down the road, have come to the party with connectors, and full support of what AppFlow provides. >> Yeah, us DevOps purists are always pounding the fist on the table, now a virtual table: APIs and connectors. This is the model, so people are integrating. And I want to get your thoughts on this. I think you said low code, or no code, on the developer simplicity side. Is it no code, or low code? Can you just explain quickly and clarify that point? >> It's no code for getting started, literally, for the basic to medium complexity use cases. It's not code, and for a lot of customers we spoke to, that was a bottleneck. Right, they needed something from the data.
It might have been the finance organization, or it could have been human resources; somebody else in the organization needed that. They don't have a developer that helps them typically. And so we find that they would wait many, many months, or maybe even never get the project done, just because they never ever had access to that data, or to the developer to actually do the work that was required for the transformation. And so it's no code for almost all use cases. Where it literally is, select your data source, select the connector, and then select the transformations. And some basic transformations, renaming of fields, transformation of data in simple ways. That's more than sufficient for the vast majority of use cases. And then obviously through to the destination, with the connector on the other side, to do the final transformation, to the final data source that you want to migrate the data to. >> You know, you have an interesting background, I was looking at your history, and you've essentially been a web services kind of guy all your life. From a code standpoint, software environment, and now, I'll say EC2 is the crown jewel of AWS, and doing more and more with S3. But what's interesting, as you build more of these layers of services in there, there's more flexibility. So right now, in most of the customer environments, there's a debate around, do I build something monolithic, and/or decoupled, okay? And I think there's a world where there's a mutually, not mutually exclusive, I mean, you have a mainframe, you have a big monolithic thing, if it does something. But generally people would agree that a decoupled environment is more flexible, and more agile. So I want to kind of get to the customer use case, 'cause I can really see this being really powerful, AppFlow with Private Link, where you mentioned Snowflake. I mean, Snowflake is built on AWS, they're doing extremely, extremely well, like any other company that builds on AWS. Whether it's theCUBE Cloud, or it's Snowflake.
As we tap those services, customers, we might have people who want to build on our platform on top of AWS. So I know a bunch of startups that are building within the Snowflake ecosystem, a customer of yours. >> Yeah. >> So they're technically a customer of Amazon, but they're also in the ecosystem of say, Snowflake. >> Yes. >> So this brings up an interesting kind of computer science problem, which is architecturally, how do I think about that? Is this something where AppFlow could help me? Because I certainly want to enable people to build on a platform, that I build if I'm doing that, if I'm not going to be a pure SaaS turnkey application. But if I'm going to bring partners in, and do integration, use the benefits of the goodness of an API or Connector driven architecture, I need that. So explain to me how this helps me, or doesn't help me. Is this something that makes sense to you? Does this question make sense? How do you react to that? >> I think so, I think the question is pretty broad. But I think there's an element in which I can help. So firstly, you talk about sort of decoupled applications, right? And I think that is certainly the way that we've gone at Amazon, and been very, very successful for us. I think we started that journey back in 2003, when we decoupled the monolithic application that was amazon.com. And that's when our service journey started. And a lot of that sort of inspired AWS, and how we built what we built today. And we see a lot of our customers doing that, moving to smaller applications. It just works better, it's easier to debug, there's ownership at a very controlled level. So you can get all your engineering teams to have very clear and crisp ownership. And it just drives innovation, right? 'Cause each little component can innovate without the burden of the rest of the ecosystem. And so that's what we really enjoy. 
I think the other thing that's important when you think about design, is to see how much of the ecosystem you can leverage. And so whether you're building on Snowflake, or you're building directly on top of AWS, or you're building on top of one of our other customers and partners: if you can use something that solves the problem for you, versus building it yourself, well, that just leaves you with more time to actually go and focus on the stuff that you need to be solving, right? The product you need to be building. And so in the case of AppFlow, I think if there's a need for transfer of data between, for example, Snowflake and some data warehouse that you as an organization are trying to build on Snowflake infrastructure, AppFlow is something you could potentially look at. It's certainly not something that you could just use for anything; it's very specific and focused on the flow of data between services from a data analytics point of view. It's not really something you could use from an API point of view, or messaging between services. It's more really just facilitating that flow of data, and the transformation of data, to get it into a place that you can do something useful with it. >> And you said-- >> But like any of our services-- (speakers talk over each other) it could be used at any layer in the stack. >> Yes, it's a level of integration, right? There's no code to low code, depending on how you look at it, cool. Customer use cases: you mentioned large scale analytics, I thought I heard you say, machine learning, Data Lakes. I mean, basically, anyone who's using data is going to want to tap some sort of data repository, and figure out how to scale data when appropriate. There's also contextual, relevant data that might be specific to, say, an industry vertical, or a database. And obviously, AI becomes the application for all this. >> Exactly. >> If I'm a customer, how does AppFlow relate to that? How does that help me, and what's the bottom line?
>> So I think there's two parts to that journey, depending on where customers are. We do have millions of customers today that are running applications on AWS. Over the last few years, we've seen the emergence of Data Lakes, really just the storage of a large amount of data, typically in S3, that companies then want to extract value out of, and use in certain ways. Obviously, we have many, many tools today, from Redshift to Athena, that allow you to utilize these Data Lakes, and be able to run queries against this information. Things like EMR, one of our oldest services in the space. And so doing some sort of large scale analytics, and more recently, services like SageMaker are allowing us to do machine learning. And so being able to run machine learning across an enormous amount of data that we have stored in AWS. And there's some stuff in the IoT workload space as well, that's emerging. And many customers are using it. There's obviously many customers today that aren't using it on AWS, potential customers for us, that are looking to do something useful with data. And so one part of the journey is taking on all of that infrastructure, and we have a lot of services that make it really easy to do machine learning, and do analytics, and that sort of thing. And then the other problem, the other side of the problem, which is what AppFlow is addressing, is, how do I get that data to S3, or to Redshift, to actually go and run that machine learning workload? And that's what it's really unlocking for customers. And it's not just the one-time transfer of data; the other thing that AppFlow actually supports is the continuous updating of data. And so if you decide that you want to have that view of your data in S3, for example, in a Data Lake, that's kept up to date, within a few minutes, within an hour, you can actually configure AppFlow to do that.
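A continuously updated flow like the one described here can be sketched against the AppFlow API. The snippet below only assembles the arguments for boto3's `appflow` `create_flow` call; the bucket, connector profile, object, and field names are hypothetical, and the exact request shapes are illustrative rather than authoritative, so check the current AppFlow API reference before relying on them.

```python
# Sketch: arguments for an AppFlow flow that keeps an S3 data lake
# continuously up to date from Salesforce on a schedule. All names are
# hypothetical and the structures are illustrative of the AppFlow API.

def scheduled_salesforce_to_s3_flow(bucket, connector_profile):
    """Return kwargs for boto3's appflow.create_flow call."""
    return {
        "flowName": "accounts-to-data-lake",
        "triggerConfig": {
            "triggerType": "Scheduled",  # vs. "OnDemand" for a one-time transfer
            "triggerProperties": {
                # Schedule expression format per AppFlow (illustrative).
                "Scheduled": {"scheduleExpression": "rate(1hours)"}
            },
        },
        "sourceFlowConfig": {
            "connectorType": "Salesforce",
            "connectorProfileName": connector_profile,
            "sourceConnectorProperties": {"Salesforce": {"object": "Account"}},
        },
        "destinationFlowConfigList": [{
            "connectorType": "S3",
            "destinationConnectorProperties": {
                "S3": {"bucketName": bucket, "bucketPrefix": "salesforce/accounts"}
            },
        }],
        # A console-style "no code" transformation: map (rename) one field.
        "tasks": [{
            "sourceFields": ["Name"],
            "taskType": "Map",
            "destinationField": "account_name",
            "connectorOperator": {"Salesforce": "NO_OP"},
        }],
    }

flow = scheduled_salesforce_to_s3_flow("my-data-lake-bucket", "my-salesforce-profile")
# With credentials and a configured connector profile, you would create it via:
#   boto3.client("appflow").create_flow(**flow)
```

The `tasks` list is where the console's field renaming and simple transformations end up, and the `Scheduled` trigger is what keeps the S3 copy fresh without anyone re-running the job.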
And so the data source could be Salesforce, it could be Slack, it could be whatever data source you want to blend. And you continuously have that flow of data between those systems. And so when you go to run your machine learning workload, or your analytics, it's all continuously up to date. And you don't have this problem of, let me get the data, right? And when I think about some of the data jobs that I've run in my time, back in the day as an engineer, on early EC2, a small part of it was actually running the job on the data. A large part of it was, how do I actually get that data, and is it up to date? >> Up-to-date data is critical. I think that's the big feature there: this idea of having the data connectors really makes the data fresh, because you go through the modeling, and you realize, if I missed a big patch of data, the machine learning's not effective. >> Exactly. >> I mean, it's only-- >> Exactly, and the other thing is, it's very easy to bring in new data sources, right? You think about how many companies today have an enormous amount of data just stored in silos, and they haven't done anything with it. Often it'll be a conversation somewhere, right? Around the coffee machine, "Hey, we could do this, and we can do this." But they haven't had the developers to help them, and haven't had access to the data, and haven't been able to move the data, and to put it in a useful place. And so, I think what we're seeing here with AppFlow is really the unlocking of that. Because going from that initial conversation, to actually having something running, literally requires no code. Log into the AWS console, configure a few connectors, and it's up and running, and you're ready to go. And you can do the same thing with SageMaker, or any of the other services we have on the other side, that make it really simple to run some of these ideas that just historically have been too complicated. >> Alright, so take me through that console piece.
Just walk me through, I'm in, you sold me on this. I just came out of a meeting with my company, and I said, "Hey, you know what? "We're blowing up this siloed approach. "We want to kind of create this horizontal data model, "where we can mix "and match connectors based upon our needs." >> Yeah. >> So what do I do? I'm using SageMaker, using some data, I got S3, I got an application. What do I do? I'm connecting what, S3? >> Yeah, well-- >> To the app? >> So the simplest thing is, and the simplest place to find this actually, is on Jeff Barr's blog, that he did for the release, right? Jeff always does a great job in demonstrating how to use our various products. But it literally is going into the standard AWS console, which is the console that we use for all of our services. I think we have 200 of them, so it is getting kind of challenging to find them all in that console, as we continue to grow. And find AppFlow. AppFlow is a top-level service, and so you'll see it in the console. And the first thing you've got to do, is configure your source connector. And so it's a connector that says, where's the data coming from? And as I said, we have 14 partners, you'll be able to see those connectors there, and see what's supported. And obviously, there's the connectivity: do you have access to that data, and where is the data running? AppFlow runs within AWS, and so you need to have either VPN, or Direct Connect back to the organization, if the data source is on-premise. If the data source happens to be in AWS, it'll obviously be in a VPC, and you just need to configure some of that connectivity functionality. >> So no code if the connectors are there, but what if I want to build my own connector? >> So building your own connector, that is something that we're working with third parties on right now. I could be corrected, but I'm not 100% sure whether that's available.
It's certainly something I think we would allow customers to do: extend either the existing connectors, or add additional transformations as well. And so you'd be able to do that. But the transformations that the vast majority of our customers are using are literally just in the console, with the basic transformations. >> It comes down to the bigger apps that people have, and just building those connectors. How does a partner get involved? You got 14 partners now, how do you extend the partner base? Contact an Amazon partner manager, or do you send an email to someone? How does someone get involved? What are you recommending? >> So there are a couple of ways, right? We have an extensive partner ecosystem that the vast majority of these ISVs are already integrated with. And so, we have the 14 we launched with, and we also pre-announced SAP, which is going to be a very critical one for the vast majority of our customers. Having deep integration with SAP data, and being able to bring that seamlessly into AWS. That'll be launching soon. And then there's a long list of other ones that we're currently working on. And they're currently working on them themselves. And then the other one is going to be, like with most things at Amazon, feedback from customers. And so what we hear from customers, and very often you'll hear from third-party partners as well, who'll come and say, "Hey, my customers are asking me "to integrate with AppFlow, what do I need to do?" And so, you know, just reaching out to AWS, and letting them know that you'd be interested in integrating, if you're not part of the partner program. The team would be happy to engage, and bring you on board, so-- >> (mumbles) on playbook, get the top use cases nailed down, listen to customers, and figure it out. >> Exactly. >> Great stuff Dave, we really appreciate it. I'm looking forward to digging into AppFlow, and I'll check on Jeff Barr's blog. Sure, April 22 was the launch day, he probably has it up there.
One of the things I want to just jump into, now moving into the next topic, is the cost structure. A lot of pressure on costs. This is where I think this Migration Acceleration Program for Windows is interesting. Andy Jassy always likes to boast on stage at re:Invent about the number of Windows workloads running on Amazon Web Services. This has been a big part of the customer base, I think, for over 10 years that I can think of him talking about this. What is this about? Are you still seeing uptake on Windows workloads, or, I mean,-- >> Absolutely. >> Azure has got some market share, >> Absolutely. >> but now, it doesn't really kind of square in my mind, what's going on here. Tell us about this migration service. >> Yeah, absolutely, on the migration side. So Windows is absolutely, we still believe AWS is the best place to run a Windows workload. And we have many, many happy Windows customers today. And it's a very big, very fast-growing part of our business today. I was part of the original team back in 2008 that launched, I think it was Windows Server 2008, back then on EC2. And I remember sort of working out all the details of how to do all the virtualization with Windows, obviously back then we'd done Linux. And getting Windows up and running, and working through some of the challenges that Windows had as an operating system in the early days. And it was October 2008 that we actually launched Windows as an operating system. And it's just been, we've had many, many happy Windows customers since then. >> Why is Amazon so peak to run workloads from Windows so effectively? >> Well, I think, sorry, what did you say, peaked? >> Why is Amazon so well positioned to run the Windows workloads? >> Well, firstly, I mean, I think Windows is really just the operating system, right? And so if you think about that as the very last little bit of your sort of virtualization stack, and then being able to support your applications.
What you really have to think about is everything below that, both in terms of the compute, so the performance you're going to get, the price performance you're going to get. With our Nitro Hypervisor, and the Nitro System that we developed, or launched, back in 2018, we really are able to provide you with the best price performance, and have the very least overhead from a hypervisor point of view. And then what that means is you're getting more out of your machine for the price that you pay. And then you think about the rest of the ecosystem, right? Think about all the other services, and all the features, and just the breadth and the extensiveness of AWS. And that's critically important for all of our Windows customers as well. And so you're going to have things like Active Directory, and these sort of things that are very Windows-specific, and we can absolutely support all of those natively, in the Windows operating system as well. We have things like various agents that you can run inside the Windows box to do more maintenance and management. And so I think we've done a really good job in bringing Windows into the larger and broader ecosystem of AWS. And it really is just a case of making sure that Windows runs smoothly, and that's just the last little bit on top of that. And so many customer enterprises run Windows today. When I started out my career, I was developing software in the banking industry, and it was very much a Windows environment. They were running critical applications. And so we see it's critically important for customers who run Windows today to be able to bring those Windows workloads to AWS. >> Yeah, and that's certainly-- >> We are seeing a trend. Yeah, sorry, go ahead. >> Well, they're certainly out there from a market share standpoint, but this is a cost driver, you guys are saying, and I want you to just give an example, or just illustrate why it costs less. How is it a cost savings?
Is it just services, cycle times on EC2? I mean, what's the cost savings? I'm a customer like, "Okay, so I'm going to go to Amazon with my workloads." Why is it a cost saving? >> I think there are a few things. The one I was referring to in my previous comment was the price performance, right? And so if I'm running on a system where the hypervisor is using a significant portion of the physical CPU that I want to use as well, well, there's an overhead to that. And so from a price performance point of view, if I go and benchmark a CPU, and I look at how much I pay for that per unit of that benchmark, it's better on AWS. Because with our Nitro System, we're able to give you 100% of the server. And so you get a performance win there. So that's the first thing, price performance, which is different from just the price. But there's a saving there as well. The other one, and this gets into the migration program as well: a large part of what we do with our customers, when they come to AWS, is we take a long look at their license strategy. What licenses do they have? And a key part of bringing Windows workloads to AWS is license optimization. What can we do to help you optimize the licenses that you're using today for Windows, for SQL Server, and really try and find efficiencies in that? And so we're able to secure significant savings for many of our customers by doing that. And we have a number of tools that they use as part of the migration program to do that. And so that helps save there. And then finally, we have a lot of customers doing what we call modernization of their applications. And so they really embrace Cloud, and some of the benefits that you get from Cloud. Especially elasticity, so being able to scale for demand. It's very difficult to do that when you're bound by licenses for your operating system, because for every box you run, you have to have a license for it.
And so, turning auto scaling on, you've got to make sure you have enough licenses for all these Windows boxes you're spinning up. And so, with the push the Cloud's bringing, we've seen a lot of customers move applications from Windows to Linux, or even move SQL Server from Windows to SQL Server on Linux, or another database platform, and do a modernization there, which allows them to benefit from the elasticity that Cloud provides, without having to constantly worry about licenses. >> So final question on this point: migration service implies migration from somewhere else. How do they get involved? What's the onboarding process? Can you give a quick detail on that? >> Absolutely, so we've been helping customers with migrations for years. We've launched a migration program, or Migration Acceleration Program, MAP. We launched it, I think, about 2016; 2017 was the first part of that. It was really just a bringing together of the various things we'd learned, the tools we built, the best strategies to do a migration. And we said, "How do we help customers looking "to migrate to the Cloud?" And so that's what MAP's all about. It's just three phases: we'll help you assess the migration, we'll help you do a lot of planning, and then ultimately, we help you actually do the migration. We partner with a number of external partners, and ISVs, and GSIs, who also work very closely with us to help customers do migrations. And so what we launched in April of this year, with the Windows migration program, is really just more support for Windows workloads, as part of the broader Migration Acceleration Program. And there's benefits to customers: it's a smoother migration, it's a faster migration in almost all cases, we're doing license assessments, and so there's cost reduction in that as well. And ultimately, there's other benefits as well that we offer them, if they partner with us in bringing the workload to AWS.
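The price-performance argument about hypervisor overhead can be made concrete with a toy calculation. Every number here is invented for illustration, not AWS pricing or a real benchmark.

```python
# Toy illustration of the price-performance point: if a hypervisor reserves
# part of the physical CPU for itself, you pay the same hourly price for
# fewer usable units of benchmark performance. All numbers are invented.

def price_per_benchmark_unit(hourly_price, benchmark_units, hypervisor_overhead):
    """Effective price per unit of usable benchmark performance."""
    usable_units = benchmark_units * (1.0 - hypervisor_overhead)
    return hourly_price / usable_units

# Same hourly price and raw benchmark score, different hypervisor overhead:
thin_hypervisor = price_per_benchmark_unit(1.00, 100, 0.00)   # ~all CPU to the guest
thick_hypervisor = price_per_benchmark_unit(1.00, 100, 0.10)  # 10% lost to the hypervisor

# thin_hypervisor == 0.01; thick_hypervisor is about 11% higher per unit,
# even though the sticker price per hour is identical.
```

This is the sense in which "100% of the server to the guest" is a cost saving rather than just a performance feature: the denominator of price-per-performance gets bigger.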
And so getting involved is really just reaching out to one of our AWS sales folks, or one of your account managers, if you have an account manager, and talking to them about workloads that you'd like to bring in. And we even go as far as helping you identify which applications are easiest to migrate. And so you can kind of get going with some of the easier ones, while we help you with some of the more difficult ones, and strategize about removing those roadblocks to bring your services to AWS. >> Takes the blockers away. Dave Brown, Vice President of EC2, the crown jewel of AWS, breaking down AppFlow, and the migration to Windows services. Great insights, appreciate the time. >> Thanks. >> We're here with Dave Brown, VP of EC2, as part of the virtual CUBE coverage. Dave, I want to get your thoughts on an industry topic. Given what you've done with EC2, and the success, and with COVID-19, you're seeing that scale problem play out on the world stage for the entire population of the globe. This is now turning non-believers into believers of DevOps, web services, real time. I mean, this is now a moment in history, with the challenges that we have. Even when we come out of this, whether it's six months or 12 months, the world won't be the same. And I believe that there's going to be a Cambrian explosion of applications, and an architecture that's going to look a lot like Cloud, Cloud-native. You've been doing this for many, many years, a key architect of EC2 with your team. How do you see this playing out? Because a lot of people are going to be squirreling in rooms, when this comes back. They're going to be video conferencing now, but when they have meetings, they're going to look at the window of the future, and they're going to be exposed to what's failed. And saying, "We need to double down on that, "we have to fix this." So there's going to be winners and losers coming out of this pandemic, really quickly.
And I think this is going to be a major opportunity for everyone to rally around this moment, to reset. And I think it's going to look a lot like this decoupled, this distributed computing environment, leveraging all the things that we've talked about in the past. So what's your advice, and how do you see this evolving? >> Yeah, I completely agree. I mean, I think, just the speed at which it happened as well. And the way in which organizations, both internally and externally, had to reinvent themselves very, very quickly, right? We've been very fortunate within Amazon; moving to working from home was relatively simple for the vast majority of us. Obviously, we have a number of our employees that work in data centers and fulfillment centers that have been on the front lines, and been doing a great job. But for the rest of us, it's been virtual video conferencing, right? All about meetings, and being able to use all of our networking tools securely, either over the VPN, or the non-VPN infrastructure that we have. And many organizations had to do that. And so I think there are a number of different things that have impacted us right now. Obviously, virtual desktops have been a significant sort of growth point, right? Folks don't have access to the physical machine anymore, they're now all having to work remote, and so a service like WorkSpaces, which runs on EC2 as well, has been a critical service to support many of our largest customers. Our Client VPN service, which we have within EC2 on the networking side, has also been critical for many large organizations, as they see more of their staff working remotely every day, and has been able to support a lot of customers there. Just more broadly, what we've seen with COVID-19 is we've seen some industries really struggle, obviously the travel industry, people just aren't traveling anymore. And so there's been immediate impact to some of those industries.
There've been other industries that support functions like video conferencing, or the entertainment side of the house, that have seen a bit of growth over the last couple of months. And education has been an interesting one for us as well, where schools have been moving online. And behind the scenes in AWS, and on EC2, we've been working really hard to make sure that our supply chains are not interrupted in any way. The last thing we want to do is have any of our customers not be able to get EC2 capacity, when they desperately need it. And so we've made sure that capacity is fully available, even all the way through the pandemic. And we've even been able to support customers with, I remember one customer who told me the next day they were going to have more than a hundred thousand students coming online. And they suddenly had to grow their business by some crazy number. And we were able to support them, and give them the capacity, which is way outside of any sort of demand--. >> I think this is the Cambrian explosion that I was referring to, because a whole new set of new things have emerged. New gaps in businesses have been exposed, new opportunities are emerging. This is about agility. It's real time now. It's actually happening for everybody, not just the folks on the inside of the industry. This is going to create a reinvention. So it's ironic, I've heard the word reinvent mentioned more times over the past three months than I've heard it in reference to Amazon, 'cause that's your annual conference, re:Invent, but people are resetting and reinventing. It's actually a tactic, it's going on. So they're going to need some Clouds. So what do you say to that? >> So, I mean, the first thing is making sure that we can continue to be highly available, continue to have the capacity. The worst scenario is not being able to have the capacity for our customers, right?
We did see that with some providers, and that, honestly, on our side is just years and years of experience of being able to manage supply chain. And the second thing is obviously making sure that we remain available, that we don't have issues. And so, you know, with all of our stuff going remote and working from home, all my teams are working from home. Being able to support AWS in this environment, we haven't missed a beat there, which has been really good. We were well set up to be able to absorb this. And then obviously, remaining secure, which is our highest priority. And then innovating with our customers, and that's both products that we're going to launch over time, but in many cases, like that education scenario I was talking about, being able to find that capacity, in multiple regions around the world, literally on a Sunday night, because they found out that afternoon that Monday morning, all schools would be virtual, and they were going to use their platform. And so they've been able to respond to that demand. We've seen a lot more machine learning workloads, we've seen an increase there as well, as organizations are running more models, both within the health sciences area, but also in the financial areas. And also in just general business, wherever it might be. Everybody's trying to respond to, what is the impact of this? And better understand it. And so machine learning is helping there, and so we've been able to support all those workloads. And so there's been an explosion. >> I was joking with my son, I said, "This world is interesting." Amazon really wins, that stuff's getting delivered to my house, and I want to play video games and Twitch, and I want to build applications, and write software. Now I could do that all in my home. So you went all around. But all kidding aside, this is an opportunity to define agility, so I want to get your thoughts, because I'm a big fan of Amazon.
As everyone knows, I'm kind of a pro-Amazon person, and as other Clouds kind of try to level up, they're moving in the same direction, which is good for everybody, good competition and all. But S3 and EC2 have been the crown jewels. And building more services around those, and creating these abstraction layers, and new sets of services to make it easier, I know has been a top priority for AWS. So can you share your vision on how you're going to make EC2, and all these services, easier for me? So if I'm a coder, I want literally no code, low code, infrastructure as code. I need to make Amazon more programmable and easier. Can you just share your vision on, as we talk about the virtual summits, as we cover the show, what's your take on making Amazon easier to consume and use? >> It's been something we've thought a lot about over the years, right? When we started out, we were very simple. The early days of EC2, it wasn't that rich a feature set. And it's been an interesting journey for us. We've obviously become a lot more; we've launched a lot of features, which naturally brings some more complexity to the platform. We have launched things like Lightsail over the years. Lightsail is a hosting environment that gives you that EC2-like experience, but it's a lot simpler. And it's also integrated with a number of other services like RDS, and ELB as well, basic load balancing functionality. And we've seen some really good growth there. But what we've also learned is customers enjoy the richness of what EC2 provides, and what the full ecosystem provides, and being able to use the pieces that they really need to build their application. From an S3 point of view, from a broad ecosystem point of view, it's providing customers with the features and functionality that they really need to be successful. From the compute side of the house, we've done some things. Obviously, Containers have really taken off.
And there's a lot of frameworks, whether it's EKS, our Kubernetes service, or the Docker-based ECS, that have made that a lot simpler for developers. And then obviously, in the serverless space, Lambda's a great way of consuming EC2, right? I know it's serverless, but there's still an EC2 instance under the hood. And being able to bring a basic function and run those functions serverless, a lot of customers are enjoying that. The other complexity we're going after is on the networking side of the house. I find that a lot of developers out there, they're more than happy to write the code, they're more than happy to bring their application to AWS. But they struggle a little bit more on the networking side; they really do not want to have to worry about whether they have a route to an internet gateway, and if their subnets are defined correctly, to actually make the application work. And so, we have services like App Mesh, and the whole service mesh space is developing a lot, to really make that a lot simpler, where you can just bring your application, and call another application that just uses service discovery. And so those higher level services are definitely helping. In terms of no code, I think that App Mesh, sorry not App Mesh, AppFlow is one of the examples of already giving organizations something at that level, that says I can do something with no code. I'm sure there's a lot of work happening in other areas. It's not something I'm actively thinking about right now, in my role leading EC2, but I'm sure as the use cases come from customers, you'll see more from us in those areas. They'll likely be more specific, though, 'cause as soon as you take code out of the picture, you're going to have to get pretty specific in the use case, to really get the depth and the functionality the customers will need. >> Well, it's been super awesome to have your valuable time here on the virtual Cube for covering Amazon Summit, Virtual Digital Event that's going on.
And we'll be going on throughout the year. Really appreciate the insight. And I think it's right on the money. I think in six to 12 months the world is going to have a surge in resetting, reinventing, and growing. So I think a lot of companies who are smart are going to reset, reinvent, and set a new growth trajectory. Because it's a Cloud-native world, it's Cloud-computing, this is now a reality, and I think there's proof points now. So the whole world's experiencing it, not just the insiders and the industry, and it's going to be an interesting time. So really appreciate that. >> Great. >> Appreciate you coming on. >> Thank you very much for having me. It's been good. >> I'm John Furrier, here inside theCUBE Virtual, our virtual Cube coverage of AWS Summit 2020. We're going to have ongoing Amazon Summit Virtual Cube coverage. We can't be on the show floor, so we'll be on the virtual show floor, covering and talking to the people behind the stories, and of course, the most important stories, on SiliconANGLE and thecube.net. Thanks for watching. (upbeat music)
Breaking Analysis: Cloud Momentum Building for the Post COVID Era
>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> Analysis from company earnings reports and customer survey data continues to show that Microsoft Azure and GCP are closing the gap on AWS's cloud dominance. Now, while reporting definitions of the cloud remain fuzzy, it's very clear that the cloud's steady march into the stronghold of on-premises computing continues. The global Coronavirus pandemic has only strengthened the cloud's position in the overall marketplace. Now, as you might recall, we reported last week the story of the haves and the have nots, and that's playing out in several sectors. And in this breaking analysis we're going to take a closer look at the big three cloud players, and we'll do a brief investigation of AWS specifically in a short drill down. Welcome everyone, to theCUBE insights powered by ETR. Today we're going to try to really accomplish three things. First, we want to quantify how the cloud is impacting the on-prem business. As we enter this decade, let's take a snapshot of some of the vendors that are well positioned, and maybe some of those that are facing greater headwinds. The second thing we want to do is update you on the latest market share data for the big three cloud players. And then finally, I want to dig into the business of AWS in a little bit more depth to see where they're seeing the most strength, and where, perhaps, maybe there are some cracks in their substantial armor. Now, let's look at the IT landscape where we are in 2020. The first data point that we want to share really tells a familiar story, and really drafts off the theme that we've set for the past several weeks, which is the bifurcation in the marketplace. Now, if you take a look at this chart, what it's really showing is ETR's version of the Gartner Magic Quadrant, but it uses survey data to plot the vendors.
So the y-axis is the metric of net score, which is a measurement of spending momentum. And just to review, each quarter ETR surveys more than 1,200 CIOs and IT professionals, and asks them, essentially, are they spending more or less on a particular supplier. And what we do is we subtract the less from the more, and the remainder is the net score. So it's sort of like NPS, and I'll go into that a little bit later. But that's the vertical axis. Now the x-axis is called market share. You know, it's really not market share like IDC measures; rather it's a measure of pervasiveness in the survey, and it's calculated by dividing the mentions of a particular company by the total mentions in the overall survey. And you see that's plotted on the horizontal axis. So several points here that I want to note. First is, remember, this is April survey data, from more than 1,200 buyers, and you can see we've plotted several companies, including the big three cloud players. You've got Microsoft and AWS in the upper right and Google with much lower presence but decent spending momentum. And we've plotted a number of other enterprise players, including several on-prem leaders, like Dell EMC, IBM, Oracle, and Cisco. And we've also included some of the companies that are showing real promise from a momentum standpoint, and penetration. These are business models that we like, and they include Snowflake, the analytic database disruptor; UiPath, who's the RPA specialist; Okta and CrowdStrike, who are really killing it in security; and Datadog, who provides cloud monitoring services. And as you can see, we've superimposed in the upper right a table showing the net scores and market shares for each of the companies. And the story here very clearly quantifies that cloud is winning, and we think it's likely to continue to grow fast and penetrate the enterprise. Now, as we've reported many times, downturns tend to be good for cloud.
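The two survey metrics described here reduce to simple arithmetic. A minimal sketch, using hypothetical tallies (the numbers below are illustrative, not ETR's actual survey results):

```python
def net_score(pct_spending_more, pct_spending_less):
    """ETR net score: percent of respondents spending more on a vendor,
    minus the percent spending less (NPS-style, expressed in points)."""
    return pct_spending_more - pct_spending_less

def survey_market_share(vendor_mentions, total_mentions):
    """ETR 'market share': pervasiveness in the survey, i.e. the vendor's
    share of all mentions -- not an IDC-style revenue share."""
    return vendor_mentions / total_mentions

# Hypothetical vendor: 62% of respondents spending more, 8% spending less,
# mentioned 310 times out of 5,000 total survey mentions.
print(net_score(62.0, 8.0))                      # 54.0
print(round(survey_market_share(310, 5000), 3))  # 0.062
```

The two numbers position a vendor on the chart: net score on the vertical axis, survey market share on the horizontal.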
But the on-prem leaders, you know, as you can see by Cisco's position, for example, they're not going to just roll over. And we'll be covering winning strategies for legacy players in a later segment. But let me just say this: if you're a customer with a lot of on-prem infrastructure, and you're building out data centers, unless you're a big cloud provider, you're probably going to be on the wrong side of history here. Okay. Let's take a closer look at the big three. I want to update you on their IaaS and PaaS numbers as best we can. All recently reported earnings, and this chart shows the data for each of the companies. Now as you can see, each of them has substantial businesses, with AWS by far the largest and GCP growing the fastest. What's notable is that AWS in 2018 was 2.7x larger than Azure, and today that delta is under 2x based on our Q1 estimates. And it's just about 2x on a trailing 12 month basis. Now, I've got to caution you that the AWS numbers are the cleanest. AWS reports, religiously, an easy-to-understand revenue and operating profit number for its cloud business, every quarter. Microsoft and Google are much fuzzier. You know, for example, you read through Microsoft's 10-K reports and you'll see that their intelligent cloud revenue comprises public and private clouds, hybrid, SQL Server, Windows Server, System Center, GitHub, enterprise support and consulting services and, oh yeah, Azure. So we have to estimate how much of that hairball is actually comparable directly to AWS. Now, Google similarly just started breaking out its cloud revenue, and bundles more than just IaaS and PaaS into its cloud numbers. Now, having said that, both Microsoft and Google do give little Hansel and Gretel tidbits of guidance, in the form of growth rates or commentary on growth rates, in their respective IaaS and PaaS businesses, i.e., Azure and GCP. So this is our best estimate, given all that is reported and what we know from survey data.
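The closing gap is just compound growth arithmetic: a faster-growing smaller business shrinks the ratio even while the leader keeps growing. A quick sketch with assumed figures (these are illustrative starting revenues and growth rates, not the companies' reported numbers):

```python
# Assumed quarterly revenues ($B) and annual growth rates -- illustrative only.
aws, azure = 10.0, 3.7
aws_growth, azure_growth = 0.33, 0.60

for _ in range(2):
    print(round(aws / azure, 2))   # ratio at the start of each year
    aws *= 1 + aws_growth
    azure *= 1 + azure_growth

print(round(aws / azure, 2))       # a 2.7x gap shrinks to under 2x in two years
```

Running this prints 2.7, then 2.25, then 1.87, mirroring the 2.7x-to-under-2x trajectory described in the analysis.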
Now, I also want to point out that these clouds are really different in quality, and they have different fits for different use cases. For example, Microsoft is building out a cloud really to support its huge install base of customers, and really make it easy for them to tap into the Microsoft Cloud services, but it may not be the most robust cloud, as has been widely reported and analyzed in the press. You know, Microsoft is struggling to provide adequate capacity for its customers. It's kind of using the COVID-19 pandemic as a bit of a heat shield on this issue. Microsoft put out a blog post essentially saying that it'll prioritize first responders, health workers, and essential businesses during the COVID-19 pandemic, oh, and Teams customers. So okay, that's one of those caveat emptor situations, you know, if you're not in one of these camps, you know, or frankly, maybe if you are. But it's unquestionable that Microsoft has strong momentum across its vast portfolio, including cloud. And really that's what I want to get into next. So let's take a look at some data. We've been reporting for quite some time, based on the ETR surveys, that the big cloud players have very, very strong momentum as measured by net scores. So what this chart shows is the most recent survey results, again, more than 1,200 IT buyers, 1,269 to be exact. And you can see broadly that all the big three are well in the green for net scores, as we show in the upper right hand box, and well over 50% net scores for all three, and Microsoft Azure is in the 70% range. So very, very strong demand across the board. Now remember, ETR is asking buyers to comment on the areas with which they are familiar. So a buyer might be interpreting cloud to include all those things in Microsoft and Google that may not be directly comparable to the AWS responses, but it doesn't matter.
The point is, they all have momentum, and you can see, even though there's a slight dip in the most recent survey, which ran during the peak of the shutdown in the US, so even with a small dip relative to other parts of the survey, cloud is very, very strong. Now, let's dig into the data a bit more, and take a look at the Fortune 500 drill down. So of course, this is an indicator of larger companies. And you can see AWS overtakes Azure in this segment by a small margin, you know, noting the same caveats that I mentioned earlier. But the strength of the net scores for all three is meaningful, as they all increased within these larger buying bases. Now let's take a look at this next chart. If we extend that cut to include the Fortune 1000, you can see here that all three companies again continue to show strength. But you know, there's a convergence, which really says to me that this multi cloud picture has emerged, and that CIOs are really now starting to see that, whether it's through M and A, or maybe it was shadow IT or whatever, they're faced with a variety of choices that are increasingly viable. And despite my previous and sometimes snarky comments that multi cloud has been more of a symptom of multi vendor versus a clear CIO strategy, that perhaps is beginning to change, especially as they're asked to clean up what I've often called the crime scene. Now, I want to close by taking a little bit of a closer look at the AWS business specifically. And I want to come back to this notion of net score and explain it a little bit. So what we show here on this wheel chart is really a breakdown of responses across more than 600 AWS customers in the April survey; remember again, this survey ran at the height of the lockdown in the US. It's a global survey, with well over 100 responses outside of the United States. But really, what's relevant here is the strength of the AWS business overall.
This chart shows how net score is essentially derived. ETR asks customers, are you adopting new? Are you increasing spend, meaning increasing by 6% or more? Are you keeping spending flat? Are you decreasing spending by more than 6%? Or are you chucking the platform, i.e. replacement? So look at this, we're talking about nearly 70% of customers spending more in 2020 on AWS than they spent last year, and only 4% spending less. That's pretty impressive for a player with a $38 billion business. Now the next data point I want to share really shows where the action is across the AWS portfolio, so let's take a look at this. The chart here shows the responses from an N of more than 700, and the net score, or spending momentum, across the AWS portfolio, with a comparison across three survey dates: last April, January 2020, and April 2020. And as you can see, there's very elevated spending momentum across most of the AWS key business lines, including cloud functions, data warehouse (EDW), AI and machine learning, and Workspaces with the work from home pivot. And, you know, there are some areas that are maybe less robust, but nothing in the red zone, meaning, you know, net scores below, let's say, 25%. And as you can see, there's really nothing close to that in the AWS portfolio. So you're seeing very strong momentum for AWS specifically, and of course the cloud in general. Now, as I said, the pandemic has been good for cloud; downturns generally are a tailwind. So if you're building data centers, it's probably not a good use of capital, you know, so server huggers, beware. There's an attractiveness, more so than ever with this COVID-19 pandemic, to that dial up, dial down service. Watch for software companies starting to use that model, whereas today, they often try to lock you into a, you know, one year or a two year or three year license.
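The wheel-chart derivation can be written out directly from the five response buckets. The split below is hypothetical, chosen only to be consistent with the "nearly 70% spending more, only 4% spending less" figures above:

```python
def net_score_from_buckets(pcts):
    """Net score from ETR's five response buckets (percentages of respondents):
    (adopting new + increasing >=6%) minus (decreasing >6% + replacing)."""
    return (pcts["adopt"] + pcts["increase"]) - (pcts["decrease"] + pcts["replace"])

# Hypothetical split (sums to 100); only the more/less totals echo the analysis.
aws_2020 = {"adopt": 10.0, "increase": 59.0, "flat": 27.0,
            "decrease": 3.0, "replace": 1.0}

spending_more = aws_2020["adopt"] + aws_2020["increase"]    # 69.0 -> "nearly 70%"
spending_less = aws_2020["decrease"] + aws_2020["replace"]  # 4.0
print(net_score_from_buckets(aws_2020))                     # 65.0
```

Note that the flat-spend bucket drops out entirely: only the extremes move the score, which is why an elevated net score signals genuine momentum rather than mere retention.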
Increasingly, we're seeing companies investigate and actually go to market with a true cloud model. Okay, thanks for watching this episode of theCUBE Insights powered by ETR. Remember, these breaking analysis segments are all available as podcasts. Check out siliconangle.com, I publish there weekly, they have all the news. I also publish on Wikibon. So don't forget to check out etr.plus as well, and get in touch with me @dvellante. Or you can email me at david.vellante@siliconangle.com. Stay safe everybody, and we'll see you next time. (gentle music)
Joe Gonzalez, MassMutual | Virtual Vertica BDC 2020
(bright music) >> Announcer: It's theCUBE. Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. >> Hello everybody, welcome back to theCUBE's coverage of the Vertica Big Data Conference, the Virtual BDC. My name is Dave Volante, and you're watching theCUBE. And we're here with Joe Gonzalez, who is a Vertica DBA at MassMutual Financial. Joe, thanks so much for coming on theCUBE. I'm sorry that we can't be face to face in Boston, but at least we're being responsible. So thank you for coming on. >> (laughs) Thank you for having me. It's nice to be here. >> Yeah, so let's set it up. We'll talk, you know, a little bit about MassMutual. Everybody knows it's a big financial firm, but what's your role there and kind of your mission? >> So my role is Vertica DBA. I was hired in January of last year to come on and manage their Vertica cluster. They'd been on Vertica for probably about a year and a half before that, started out on an on-prem cluster and then moved to AWS Enterprise in the cloud, and brought me on just as they were considering transitioning over to Vertica's EON mode. And they didn't really have anybody dedicated to Vertica, nobody who really knew and understood the product. And I'd been working with Vertica for probably about six, seven years at that point. I was looking for something new and landed a really good opportunity here with a great company. >> Yeah, you have a lot of experience in Vertica. You had a role in market research, so you're a data guy, right? I mean that's really what you've been doing your entire career. >> I am. I worked with Pitney Bowes, in the postage industry; I worked with healthcare auditing, after seven years in market research. And then I've been with MassMutual for a little over a year now, yeah, quite a lot.
So tell us a little bit about kind of what your objectives are at MassMutual, what you're kind of doing with the platform, what applications you're supporting; paint a picture for us if you would. >> Certainly. So my role is, MassMutual just decided to make Vertica its enterprise data warehouse. So they've really bought into Vertica. And we're moving all of our data there; probably a good 80, 90% of MassMutual's data is going to be on the Vertica platform, in EON mode. And we have a wide usage of that data across the corporation. Right now we're about 50 terabytes and growing quickly. And a wide variety of users. So there's a lot of ETLs coming in overnight, loading a lot of data, transforming a lot of data. And a lot of reporting tools are using it, so currently Tableau and MicroStrategy. We have Alteryx using it, and we also have APIs running against it throughout the day, 24/7, with people coming in, especially now these days with, you know, some financial uncertainty going on. A lot of people coming and checking their 401k's, checking their insurance and status and whatnot. So we have to handle a lot of concurrent traffic on top of the normal big queries. So it's a quite diverse cluster. And I'm glad they're really investing in using Vertica as their overall solution for this. >> Yeah, I mean, these days your 401k looks like this, right? (laughing) Afraid to look. So I wonder, Joe, if you could share with our audience. I mean, for those who might not be as familiar with the history of Vertica, and specifically about MPP: historically you had traditional RDBMSs, whether it's Db2 or Oracle, and then you had a spate of companies that came out with this notion of MPP. Vertica is the one that, I think, is probably one of the few, if only, brands that survived. But what did that bring to the industry, and why is that important for people to understand, just in terms of whatever it is, scale, performance, cost? Can you explain that?
To me, it actually brought scale at good cost. And that's why I've been a big proponent of Vertica ever since I started using it. There's a number, like you said, of different platforms where you can load big data and store and house big data. But the purpose of having that big data is not just for it to sit there, but to be used, and used in a variety of ways. And that's from, you know, something small, like the first installation I was on was about 10 terabytes. And, you know, I've worked with data warehouses up to 100 terabytes, and, you know, there's Vertica installations with, you know, hundreds of petabytes on them. You want to be able to use that data, so you need a platform that's going to be able to access that data and get it to the clients, get it to the customers as quickly as possible, and not pay an arm and a leg for the privilege to do so. And Vertica allows companies to do that: not only get their data to clients and, you know, in-company users quickly, but save money while doing so. >> So, but so, why couldn't I just use a traditional RDBMS? Why not just throw it all into Oracle? >> One, cost. Oracle is very expensive, while Vertica's a lot more affordable than that. But the column-store structure of Vertica allows for a lot more optimized queries. Some of the queries that you can run in Vertica in 2, 3, 4 seconds will take minutes and sometimes hours in an RDBMS like Oracle or SQL Server. They have the capability to store that amount of data, no question, but the usability really lacks when you start querying tables that are 180 billion rows, or tables in Vertica that are over 1,000 columns. Those will take hours to run on a traditional RDBMS, and then running them in Vertica, I get my queries back in a sec.
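The column-store advantage he describes can be shown with a toy sketch. This is a plain-Python analogy, not Vertica's actual storage engine: an analytic query that aggregates one field only has to read that field when data is laid out by column, instead of walking every full row.

```python
# Toy row store vs. column store -- illustrative only, not Vertica internals.
# 1,000 rows of (id, name, score); the query is "sum all scores".
rows = [(i, f"user{i}", i % 100) for i in range(1000)]

# Row-oriented: every tuple must be touched to reach the one field we need.
row_total = sum(r[2] for r in rows)

# Column-oriented: each field lives in its own contiguous array, so the
# aggregation touches only the 'score' column.
score_column = [r[2] for r in rows]   # built once at load time in a real system
col_total = sum(score_column)

print(col_total)               # 49500
print(row_total == col_total)  # True -- same answer, a fraction of the data read
```

With three columns the saving is modest, but on the 1,000-plus-column tables he mentions, a query touching a handful of columns skips almost all of the table's data, which is where the seconds-versus-hours difference comes from.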
You know what's interesting to me, Joe, and I wonder if you could comment: it seems that Vertica has done a good job of embracing, you know, riding the waves, whether it was HDFS and big data in the early part of the big data era, or machine learning, machine intelligence, whether it's, you know, TensorFlow and other data science tools. And the cloud is the other one, right? A lot of times cloud is super disruptive, particularly to companies that started on-prem, yet it seems like Vertica somehow has been able to adapt and embrace some of these trends. Why, from your standpoint, first of all, from your standpoint as a customer, is that true? And why do you think that is? Is it architectural? Is it the mindset, the engineering? I wonder if you could comment on that. >> It's absolutely true. I started out, again, on an on-prem Vertica data warehouse, and we kind of, you know, rolled along with them; you know, more and more people have been using data, they want to make it accessible to people on the web now. And having the option to provide that data from an on-prem solution or from AWS is key, and now Vertica is offering even a hybrid solution, if you want to keep some of your data behind a firewall, on-prem, and put some in the cloud as well. So Vertica has absolutely evolved along with the industry in ways that no other company really has that I've seen. And I think the reason for it, and the reason I've stayed with Vertica, and specifically have remained a Vertica DBA for the last seven years, is because of the way Vertica stays in touch with its people. I've been working with the same people for the seven, eight years I've been using Vertica; they're family. I'm part of their family, and you know, I'm good friends with some of these people. And they really are in tune not only with the customer but with what they're doing.
They really sit down with you and have those conversations about, you know, what are your needs? How can we make Vertica better? And they listen to their clients. You know, just having access to the data engineers who develop Vertica, to be reached on a phone call or whatnot, I've never had that with any other company. Vertica makes that available to their customers when they need it. So the personal touch is huge for them. >> That's good, it's always good to get the confirmation from the practitioners, not just hear from the vendor. I want to ask you about the EON transition. You mentioned that MassMutual brought you in to help with that. What were some of the challenges that you faced? And how did you get over them? And why EON? You know, what was the goal, the outcome, and some of the challenges maybe that you had to overcome? >> Right. So MassMutual had an interesting setup when I first came in. They had three different Vertica clusters to accommodate three different portions of their business. One for the data scientists, who use the data quite extensively in very large, very intense queries, for their work with predictive analytics and whatnot. There was a separate one for the APIs, which needed, you know, sub-second query response times. In the enterprise solution, they weren't always able to get the performance they needed, because the fast queries were being overrun by the larger queries that needed more resources. And then they had a third for starting to develop this enterprise data platform, you know, looking into their future. So the first challenge was bringing all those three together, back into a single cluster, and allowing our users to have both the heavy queries and the API queries running at the same time, on the same platform, without having to completely separate them out onto different clusters.
EON really helps with that because it allows you to store that data in the S3 communal storage and have the main cluster set up to run the heavy queries. And then you can set up subclusters that still point to that S3 data but separate out the compute, so that the APIs really have their own resources to run in and are not interfered with by the other processes. >> Okay, so I'm hearing a couple of things. One is you're sort of busting down data silos, so you're able to have a much more coherent view of your data, which I would imagine is critical. Certainly companies like MassMutual have been around for 100 years, and so you've got all kinds of data dispersed. So to the extent that you can break down those silos, that's important, but also being able to, I guess, have granular increments of compute and storage is what I'm hearing. What does that do for you? Does it make things more efficient? Are there other business benefits? Maybe you could elucidate. >> Well, one, cost is again a huge benefit. The cost of running three different clusters, even in AWS, in the enterprise solution was a little costly, you know, you had to have your dedicated servers here and there. So you're paying for, like, you know, 12, 15 different servers, for example. Whereas bringing them all back into EON, I can run everything on a six-node production cluster. And, you know, when things are busy, I can spin up the three-node subcluster for the APIs, only pay for it when I need it, and then bring them back into the main cluster when things have slowed down a bit, and they can get the performance that they need. So that saves a ton on resource costs. You know, you're not paying for the storage three times over, you're paying for one S3 bucket, and you're only paying for the nodes, the EC2 instances, that are up and running when you need them, and that is huge. And again, like you said, it gives us the ability to silo our data without having to completely separate our data into different storage areas.
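The savings Joe describes, replacing roughly 13 always-on dedicated servers with a 6-node main cluster plus a 3-node subcluster that only runs when needed, can be sketched as simple back-of-the-envelope arithmetic. The hourly rate and busy-hours figures below are illustrative assumptions, not MassMutual's actual numbers.

```python
# Back-of-the-envelope cost comparison for the setup described above.
# HOURLY_RATE and busy_hours are made-up illustrative values.

HOURLY_RATE = 2.0          # assumed cost per node-hour
HOURS_PER_MONTH = 730

# Before: three separate always-on clusters, ~13 dedicated nodes total.
always_on_nodes = 13
cost_before = always_on_nodes * HOURLY_RATE * HOURS_PER_MONTH

# After (EON): a 6-node main cluster that is always on, plus a 3-node
# API subcluster that only runs during busy hours.
busy_hours = 200           # assumed busy hours per month
cost_after = (6 * HOURLY_RATE * HOURS_PER_MONTH
              + 3 * HOURLY_RATE * busy_hours)

savings = 1 - cost_after / cost_before
print(f"before ${cost_before:,.0f}, after ${cost_after:,.0f}, "
      f"~{savings:.0%} saved")
```

Even with these rough assumptions the compute bill roughly halves, before counting the storage consolidation into a single S3 bucket.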
Which is a big benefit. It gives us the ability to query everything from one single cluster without having to synchronize it to, you know, three different ones, where this one's going to have theirs and that one's going to have theirs. Everyone's still looking at the same data, and we can replicate that in QA and Dev so that people can do their testing outside of production as well. >> So EON, obviously a very important innovation. And of course, Vertica touts its separation of compute and storage, and, you know, they're not the only one that does that, but they are really, I think, the only one that does it for on-prem and virtually across clouds. So my question is, and I think you're doing a breakout session at the Virtual BDC, we were going to be in Boston, now we're doing it online. If I'm in the audience, I'm imagining I'm a junior DBA at an organization that maybe doesn't have a Joe. I haven't been an expert for seven years. How hard is it for me, what do I need to do to get up to speed on EON? It sounds great, I want it. I'm going to save my company money, but I'm nervous 'cause I've only been a Vertica DBA for, you know, a year, and I'm sort of, you know, not as experienced as you. What are the things that I should be thinking about? Do I need to hire somebody? Do I need to bring in a consultant? Can I learn it myself? What would you advise? >> It's definitely easy enough that if you have at least a little bit of Vertica experience, you can learn it yourself, okay? 'Cause the concepts are still there. There are some, you know, little nuances where you do need to be aware of certain changes between the Enterprise and EON editions.
But I would also say consult with your Vertica account manager, and let them bring in the right people from Vertica to help you get up to speed. And if you need to, there are also resources available as far as consultants go that will help you get up to speed very quickly. We did work together with Vertica and with one of their partners, Clarity, in helping us to understand EON better and set it up the right way. You know, how do we pick the number of shards for our data warehouse? They helped us evaluate all that and pick the right number of shards and the right number of nodes to get set up and going. And, you know, they helped us figure out the best ways to get our data over from the Enterprise Edition into EON very quickly and very efficiently, so you don't have to do it all by yourself. >> I wanted to ask you about organizational, you know, issues, because, you know, the practitioners always tell me, "Look, the technology comes and goes, that's kind of the easy part, we're good at that. It's the people, it's the processes, the skill sets." What does your, you know, team regime look like? And do you have any sort of ideal team makeup or, you know, ideal advice? Is it two-pizza teams? What kind of skills? What kind of interaction and communications with senior leadership? I wonder if you could just give us some color on that. >> One of the things that makes me extremely proud to be working for MassMutual right now is that they do what a lot of companies have not been doing, and that is investing in IT. They have put a lot of thought, a lot of money, and a lot of support into setting up their enterprise data platform and putting Vertica at the center. And not only did they put the money into getting the software that they needed, like Vertica, you know, MicroStrategy, and all the other tools that we use with it, they put the money into the people. Our managers are extremely supportive of us.
We hired about 40 to 45 different people within a four-month time frame, data engineers, data analysts, data modelers, a nice mix of people who can help shape your data and bring the data in and help the users use the data properly, and allow me, as the database administrator, to make sure that they're doing what they're doing most efficiently, and to focus on my job. So you have to have that diversity among the different data skills in order to make your team successful. >> That's awesome. Kind of a side question, and it's really not Vertica's wheelhouse, but I'm curious. You know, in the early days of the big data movement, a lot of the data scientists would complain, and they still do, that "80% of my time is spent wrangling data." The tools for the data engineer, the data scientists, the database experts, they're all different. Is that changing, and to what degree is that changing? Kind of what inning are we in, just in terms of a more facile environment for all those roles? >> Again, I think it depends on the company, you know, what resources they make available to the data scientists. And the data scientists, we have a lot of them at MassMutual, and they're very much into doing a lot of machine learning, model training, predictive analytics. And they are, you know, used to doing it outside of Vertica too, you know, pulling that data out into Python and Scala, Spark, and tools like that. And they're also now just getting into using Vertica's in-database analytics and machine learning, which is a capability that, you know, definitely nobody else out there has. So being able to have somebody who understands Vertica, like myself, and being able to train other people to use Vertica in the way that is most efficient for them is key.
But also just having people who understand not only the tools that you're using, but how to model data, how to architect your tables, your schemas, the interaction between your tables and schemas and whatnot, you need to have that diversity in order to make this work. And our data scientists have benefited immensely from the structure that MassMutual put in place with our data management and delivery team. >> That's great. I think I saw somewhere in your background that you've trained about 100 people in Vertica. Did I get that right? >> Yes. Since I started here, I've gone to our Boston location, our Springfield location, and our New York City location and trained, probably at this point, about 120 to 140 of our Vertica users. And I'm trying to do, you know, a couple of follow-up sessions per year. >> So adoption, obviously, is a big goal of yours. Getting people to adopt the platform, but then more importantly, I guess, deliver business value and outcomes. >> Absolutely. >> Yeah, I wanted to ask you about encryption. You know, in a perfect world, everything would be encrypted, but there are trade-offs. Are you using encryption? What are you doing in that regard? >> We are actually just getting into that now, due to the New York regulations and the CCPA that are now in place. We do have a lot of Personally Identifiable Information in our data store that does require encryption. So we are going through a months-long process that started in December, I think, actually a bit earlier than that, to identify all the columns, not only in our Vertica database, but in, you know, the other databases that we do use, you know, we have Postgres databases, SQL Server, Teradata for the time being, until that moves into Vertica. And we identify where that data sits, what downstream applications pull that data from the data sources and store it locally as well, and start encrypting that data.
And because of the tight relationship between Voltage and Vertica, we settled on Voltage as the major platform to start doing that encryption. So we're going to be implementing that in Vertica probably within the next month or two, and rolling it out to all the teams that have data that requires encryption. We're going to start rolling it out to the downstream application owners to make sure that they are encrypting the data as they pull it over. And we're also using another product for several other applications that don't mesh as well with Voltage. >> Voltage being Micro Focus's encryption solution, correct? >> Right, yes. >> Yes, of course, Micro Focus, for the audience, is the company that owns Vertica, and Vertica is a separate brand. So I want to ask you, kind of to close, what success looks like. You've been at this for a number of years, coming into MassMutual, which was great to hear. I've had some past experience with MassMutual, it's an awesome company, I've been to the Springfield facility and to Boston as well, and I have great respect for them, and they've really always been a leader. So it's great to hear that they're investing in technology as a differentiator. What does success look like for you? Let's say you're at MassMutual for a few years, you're looking back, what does success look like? Go. >> A good question. It's changing every day, just, you know, with more and more, you know, applications coming onboard, more and more data being pulled in, more uses being found for the data that we have. I think success for me is making sure that Vertica, first of all, is always up and is always running at its most optimal to keep our users happy. I think when I started, you know, we had a lot of processes that were running, you know, six, seven hours, some of them were taking, you know, almost a day long, because they were so complicated. We've got those running in under an hour now, some of them running in a matter of minutes.
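The first step of the months-long effort Joe describes, identifying PII columns across several databases before encrypting them, can be sketched as a small inventory pass. HMAC tokenization below is a standard-library stand-in for a real format-preserving encryption product such as Voltage SecureData, and the schema, column names, and key are all made up for illustration.

```python
# Sketch of the column-inventory step described above: scan table
# metadata for columns whose names suggest Personally Identifiable
# Information, then tokenize their values. This is a stdlib stand-in,
# not the Voltage product's actual API.
import hashlib
import hmac

PII_MARKERS = {"ssn", "dob", "name", "address", "phone", "email"}

def find_pii_columns(schema):
    """Return (table, column) pairs whose name suggests PII."""
    return [(table, col)
            for table, cols in schema.items()
            for col in cols
            if any(marker in col.lower() for marker in PII_MARKERS)]

def tokenize(value, key=b"demo-key"):
    """Deterministic, irreversible stand-in for column encryption."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical catalog spanning two tables.
schema = {
    "policies": ["policy_id", "holder_name", "holder_ssn", "premium"],
    "claims": ["claim_id", "policy_id", "amount"],
}

worklist = find_pii_columns(schema)
print(worklist)                 # columns flagged for encryption
print(tokenize("123-45-6789"))  # stable token for a sample value
```

In practice the worklist would be reviewed by data owners before anything is transformed, and the same inventory would be run against each downstream store that copies the data.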
I want to keep that optimization going for all of our processes. Like I said, there are a lot of users using this data, and it's been hard over the first year of me being here to get to all of them. Thankfully, you know, I'm getting a bit of help now. I have a couple of DBAs that I'm training up to help out with these optimizations, you know, fixing queries, fixing projections to make sure that queries run as quickly as possible. So getting that to its optimal stage is one. Two, getting our data encrypted and protected, so that even if, for whatever reason, somehow somebody breaks into our data, they're not going to be able to get anything at all, because our data is 100% protected. And I think more companies need to be focusing on that as well. And third, I want to see our data science teams using more and more of Vertica's in-database predictive analytics and in-database machine learning products, and really making their jobs more efficient by doing so. >> Joe, you're an awesome guest. I mean, we always, like I said, love having the practitioners on and getting the straight skinny from the pros. You're welcome back anytime, and as I say, I wish we could have met in Boston, maybe next year at the BDC. But it's great to have you online, and thanks for coming on theCUBE. >> And thank you for having me, and hopefully we'll meet next year. >> Yeah, I hope so. And thank you everybody for watching. Remember, theCUBE is running concurrently with the Vertica Virtual BDC, it's vertica.com/bdc2020, if you want to check out all the keynotes and all the breakout sessions. I'm Dave Volante for theCUBE. We'll be back with more interviews right after this. Thanks for watching. (bright music)
The Data-Driven Prognosis
>> Narrator: Hi, everyone, thanks for joining us today for the Virtual Vertica BDC 2020. Today's breakout session is entitled Toward Zero Unplanned Downtime of Medical Imaging Systems Using Big Data. My name is Sue LeClaire, Director of Marketing at Vertica, and I'll be your host for this webinar. Joining me is Mauro Barbieri, lead architect of analytics at Philips. Before we begin, I want to encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation, and we'll answer as many questions as we're able to during that time. Any questions that we don't get to, we'll do our best to answer offline. Alternatively, you can also visit the Vertica forums to post your question there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double-arrow button in the lower right corner of the slide. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. So let's get started. Mauro, over to you. >> Thank you, good day everyone. So medical imaging systems such as MRI scanners, interventional guided therapy machines, CT scanners, and X-ray systems need to provide hospitals optimal clinical performance but also a predictable cost of ownership. Clinicians understand the need for maintenance of these devices, but they just want it to be non-intrusive and scheduled. And whenever there is a problem with the system, the hospital expects Philips service to resolve it fast, at the first interaction with them. In this presentation you will see how we are using big data to increase the uptime of our medical imaging systems. I'm sure you have heard of the company Philips.
Philips is a company that was founded 129 years ago, in 1891, in Eindhoven in the Netherlands, and it started by manufacturing light bulbs and other electrical products. The two brothers Gerard and Anton took an investment from their father Frederik, and they set up to manufacture and sell light bulbs. And as you may know, a key technology for making light bulbs was glass and vacuum. So when you're good at making glass products, vacuum, and light bulbs, then it is an easy step to start making radio valves, like they did, but also X-ray tubes. So Philips actually entered very early into the market of medical imaging and healthcare technology. And this is our core as a company, and it's also our future. So, healthcare, I mean, we are in a situation now in which everybody recognizes the importance of it. And we see incredible trends in a transition from what we call volume-based healthcare to value-based, where the clinical outcomes are driving improvements in the healthcare domain. Where it's not enough to respond to healthcare challenges, but we need to be involved in preventing problems and maintaining the population's wellness; and from a situation in which we episodically are in touch with healthcare, we need to continuously monitor and continuously take care of populations. And from healthcare facilities and technology available to a few select and rich countries, we want to make healthcare accessible to everybody throughout the world. And this, of course, poses incredible challenges. And this is why we are transforming Philips to become a healthcare technology leader. Philips has been a conglomerate active in many sectors, realizing many kinds of technologies, and we have been focusing on healthcare. And we have been transitioning from creating and selling products to making solutions that address clinical challenges, and from selling boxes to creating long-term relationships with our customers.
And so, if you have known the Philips brand from shavers, from televisions, to light bulbs, you probably now also recognize the involvement of Philips in the healthcare domain: in diagnostic imaging, in ultrasound, in image guided therapy systems, in digital pathology, non-invasive ventilation, as well as patient monitoring, intensive care, telemedicine, but also radiology, cardiology and oncology informatics. Philips has become a powerhouse of healthcare technology. To give you an idea of this, these are the numbers from 2019: almost 20 billion in sales, 4% comparable sales growth with respect to the previous year, and about 10% of the sales reinvested in R&D. This is also shown in the number of patent rights: last year we filed more than 1,000 patents in the healthcare domain. And the company has about 80,000 employees, active globally in over 100 countries. So, let me focus now on the type of products that are in the scope of this presentation. This is a Philips magnetic resonance imaging scanner, also called Ingenia 3.0 Tesla. It is an incredible machine. Apart from being very beautiful, as you can see, it's a very powerful technology. It can make high-resolution images of the human body without harmful radiation. And it's a complex machine. First of all, it's massive: it weighs 4.6 thousand kilograms. And it has superconducting magnets cooled with liquid helium at -269 degrees Celsius. And it's actually full of software, millions and millions of lines of code. And it occupies three rooms. What you see in this picture is the examination room, but there is also a technical room which is full of equipment, of custom hardware and machinery that is needed to operate this complex device. This is another system, an interventional guided therapy system, where the X-ray is used during interventions with the patient on the table.
You see on the left what we call the C-arm, a robotic arm that moves and can take images of the patient while they are being operated on. It's used for cardiology interventions, neurological interventions, cardiovascular interventions. There's a table that moves in very complex ways, and again it occupies two rooms: this room that we see here, but also a room full of cabinets, hardware and computers. Another characteristic of this machine is that it has to operate while it is used during medical interventions, and so it has to interact with all kinds of other equipment. This is another system, a computed tomography scanner, the IQon, which is unique due to its special detection technology. It has an image resolution up to 0.5 millimeters, making thousand-by-thousand-pixel images. And it is also a complex machine. This is a picture of the inside of a comparable device, not really an IQon, but it has, again, a rotating gantry, which weighs two and a half tons. So, it's a combination of an X-ray tube on top, high-voltage generators to power the X-ray tube, and an array of detectors to create the images. And this rotates at 220 revolutions per minute, making 50 frames per second to make 3D reconstructions of the body. So a lot of technology, complex technology, and this technology is made for this situation. We make it for clinicians who are busy saving people's lives. And of course, they want optimal clinical performance. They want the best technology to treat the patients. But they also want a predictable cost of ownership. They want predictable system operations. They want their clinical schedules not interrupted. So, they understand these machines are complex, full of technology, and that these machines may require maintenance, may require software updates, sometimes may even require some parts, hardware parts, to be replaced, but they don't want to have it unplanned. They don't want to have unplanned downtime.
They would hate having to send patients home and having to reschedule visits. So they understand maintenance. They just want to have it scheduled, predictable, and non-intrusive. So already a number of years ago, we started a transition from what we call reactive maintenance service of these devices to proactive. Let me show you what we mean by this. Normally, if a system in the field has an issue, the traditional reactive workflow would be that the customer calls a call center and reports the problem. The company servicing the device would dispatch a field service engineer. The field service engineer would go on site, do troubleshooting, literally smell, listen for noise, watch for blinking LEDs or other unusual signs, troubleshoot the issue, find the root cause, and perhaps decide that a spare part needs to be replaced. They would order a spare part. The part would have to be delivered to the site: either immediately, or the engineer would need to come back another day when the part is available, to perform the repair. That means replacing the part, doing all the needed tests and validations, and finally releasing the system for clinical use. So as you can see, there are a lot of steps, and also handovers of information between different people, between different organizations even. Would it be better to actually keep monitoring the installed base, keep observing the machine, and, based on the information collected, detect, or even predict, when an issue is going to happen? And then, instead of reacting to a customer calling, proactively approach the customer, scheduling preventive service, and therefore avoid the problem. This is actually what we call proactive service, and this is what we have been transitioning to using big data. And big data is just one ingredient. In fact, there are more things that are needed.
The devices themselves need to be designed for reliability and predictability. If the device is a black box that does not communicate its status to the outside world, if it does not transmit data, then of course it is not possible to observe and therefore predict issues. This of course requires a remote service infrastructure, or an IoT infrastructure as it is called nowadays: the capacity to connect the medical device with a data center in an enterprise infrastructure, collect the data, and perform the remote troubleshooting and the predictions. Also the right processes and the right organization need to be in place, because an organization that is, you know, waiting for the customer to call, and then has a number of field service engineers available and a certain amount of spare parts in stock, is a different organization from one that is continuously observing the installed base and scheduling actions to prevent issues. Another pillar is knowledge management. In order to realize predictive models and to have predictive service actions, it's important to manage knowledge about failure modes and about maintenance procedures very well, to have it standardized, digitalized, and available. And last but not least, of course, the predictive models themselves. So we talked about transmitting data from the installed base, from the medical device, to an enterprise infrastructure that would analyze the data and generate predictions; the predictive models are exactly the last ingredient that is needed. So this is not something that, you know, I'm telling you for the first time; it is actually a strategic intent of Philips, where we aim for zero unplanned downtime, and we market it that way. It is also no secret that we do it by using big data. And, of course, there could be other methods to achieve the same goal, but we started using big data already quite many years ago.
And one of the reasons is that our medical devices are already wired to collect lots of data about their functioning. So they collect events, error logs, and sensor data. To give you an idea, just as an order of magnitude of the size of the data, one MRI scanner can log more than 1 million events per day, hundreds of thousands of sensor readings, and tens of thousands of many other data elements. So this is truly big data. On the other hand, this data was actually not designed for predictive maintenance. You have to think, a medical device of this type stays in the field for about 10 years, some a little bit longer, some shorter. So these devices were designed 10 years ago, and not all components were designed with predictive maintenance in mind, with IoT, and with the latest technology; at that time, you know, designs were not so forward-looking. So the actual key challenge is taking the data which is already available, which is already logged by the medical devices, integrating it, and creating predictive models. And if we dive a little bit more into the research challenges, these are some of them. How to integrate diverse data sources, and especially how to automate the costly process of data provisioning and cleaning? But also, once you have the data, how to create models that can predict failures and the degradation of performance of a single medical device? Once you have these models and alerts, another challenge is how to automatically recommend service actions based on the probabilistic information on these possible failures. And even once you have the insights and can recommend an action, recommending it should still be done with the goal of planning maintenance for generating value.
That means balancing costs and benefits, preventing unplanned downtime without, of course, scheduling unnecessary interventions, because every intervention is a disruption of the clinical schedule. And there are many more applications that can be built, such as the optimal management of spare-parts supplies. So how do you approach this problem? Our approach was to collect into one database, Vertica, a large amount of historical data: first of all, historical data coming from the medical devices, so event logs, parameter values, system configurations, sensor readings, all the data that we have at our disposal; and in the same database, together with that, records of failures, maintenance records, service work orders, part replacements, contracts, so basically the evidence of failures. Once you have data from the medical devices and data about the failures in the same database, it becomes possible to correlate event logs, errors, and sensor readings with records of failures, part replacements, and maintenance operations. And we did that with a specific approach. We created integrated teams, and every integrated team had three figures, not necessarily three people; there were actually multiple people. But there was at least one business owner from the service organization, and this business owner is the person who knows what is relevant, which use cases are worth solving for a particular type of product or a particular market, what basically generates value or is worthwhile tackling as an organization. And we have data scientists; data scientists are the ones who actually can manipulate data. They can write the queries, they can write the models and robust statistics, they can create visualizations, and they are the ones who really manipulate the data. Last but not least, very important, are the subject matter experts.
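The correlation step described above, joining device event logs with failure records in one place, can be sketched with a tiny in-memory example: for each recorded part failure, count how often a given error code appeared on that device in the preceding window. All device IDs, timestamps, and error codes below are invented for illustration.

```python
# Minimal sketch of correlating device event logs with failure records,
# as described above. The data is made up; in the real system both
# tables would live in the same Vertica database and this would be a
# windowed join in SQL.
from datetime import datetime, timedelta

event_log = [  # (device, timestamp, error_code) logged by the machines
    ("mri-01", datetime(2020, 3, 1, 8), "E042"),
    ("mri-01", datetime(2020, 3, 2, 9), "E042"),
    ("mri-01", datetime(2020, 3, 3, 7), "E042"),
    ("mri-02", datetime(2020, 3, 2, 10), "E007"),
]

failures = [  # (device, timestamp) from service work orders
    ("mri-01", datetime(2020, 3, 4, 12)),
]

def errors_before(device, when, code, window_days=7):
    """Count occurrences of `code` on `device` in the window before `when`."""
    start = when - timedelta(days=window_days)
    return sum(1 for dev, ts, c in event_log
               if dev == device and c == code and start <= ts < when)

for device, when in failures:
    n = errors_before(device, when, "E042")
    print(f"{device}: {n} E042 events in the week before failure")
```

Counts like these, computed for failed and healthy devices alike, become candidate features for a predictive model; the subject matter expert's role is to say which error codes and windows are worth counting in the first place.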
Subject matter experts are the people who know the failure modes and the functioning of the medical devices. Perhaps they even come from the design side, or from the service innovation side, or from the field: people who have been servicing the machines in real life for many, many years. So they are familiar with the failure modes, but also with the type of data that is logged, the processes, and how the systems actually behave, if you allow me, in the wild, in the field. The combination of these three figures was key, because data scientists alone, statisticians, people who can do machine learning, are not very effective here, because the data is too complicated, too complex. They would spend a huge amount of time just trying to figure out the data, or they would spend time tackling things that are useless, because a subject matter expert knows much more quickly which data points are useful and which phenomena can or cannot be found in the data. So the combination of subject matter experts and data scientists is very powerful, and together, guided by a business owner, we could tackle the most useful use cases first. So these teams set to work, and they developed three things, mainly. First of all, they developed insights into the failure modes. By looking at the data and analyzing information about what happened in the field, they found out exactly how things fail, in a very pragmatic and quantitative way. They also, of course, set out to develop the predictive models with associated alerts and service actions. And a predictive model is not just an alert, and an alert is not just a flag that turns on like a traffic light; there is much more to it than that.
Such an alert is to be interpreted and used by a highly skilled and trained engineer, for example in a call center, who needs to evaluate it and plan a service action. A service action may involve ordering a replacement for an expensive part; it may involve calling the customer hospital and scheduling a period of downtime to replace a part. So it has, or could have, an impact on the clinical practice. It is therefore important that the alert is coupled with sufficient evidence and information for such a highly skilled, trained engineer to plan the service session efficiently. So it's a lot of work in terms of preparing data, preparing visualizations, and making sure that all information is represented correctly and in a compact form. Additionally, these teams gain insight into the failure modes, so they can provide input to the R&D organization to improve the products. To summarize graphically: we took a lot of historical data coming from the medical devices, but also data from relational databases holding the service work orders, the part replacements, the contract information. We integrated it, and we set up the data analytics. At that point we don't have value yet; value only starts appearing when we use the insights of the data analytics, the model, on live data. When we process live data with the model we can generate alerts, and the alerts can be used to plan the maintenance. Planned maintenance replacing unplanned downtime is what creates value. I cannot show you the details of these predictive models, but to give you an idea, this is just a picture of some of the components of our medical devices for which we have models, for which we cover the failure modes: hard disks, clinical-grade monitors, X-ray tubes, and so forth.
For MRI machines, a lot of custom hardware and other types of amplifiers and electronics. The alerts are then displayed in a dashboard, what we call a remote monitoring dashboard. We have a team of remote monitoring engineers that surveys the install base, looks at this dashboard, and picks up these alerts. An alert, as I said before, is not just a flag; it contains a lot of information about the failure and about the medical device. The remote monitoring engineers pick up these alerts, review them, and create cases for the market organizations to handle. So they see an alert coming in and they create a case, so that the call center in a particular country can call the customer and make an appointment to schedule a service action, or can add a preventive action to the schedule of a field service engineer who is already supposed to visit that customer, for example. This is a high-level picture of the overall data processing architecture. At the bottom we have the install base, formed by all our medical devices that are connected to our Philips remote service network. Data is transmitted in a secure way to our enterprise infrastructure, where we have a so-called data lake, which is basically an archive where we store the data as it comes from the customers; it is scrubbed and protected. From there, we have ETL processes, Extract, Transform and Load, that in parallel analyze this information, parse all these files and all this data, and extract the relevant parameters. The reason is that the data coming from the medical devices is very verbose and in legacy formats, sometimes binary formats with strange legacy structures. Therefore we parse it, we structure it, and we make it easily usable by the data science teams. The results are stored in a Vertica cluster, in a data warehouse.
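The ETL step just described, verbose legacy text in, structured parameter rows out, might look like the following minimal sketch. The log format, field names, and regular expression are all invented; the real device formats are proprietary and sometimes binary.

```python
import re

# Hypothetical verbose legacy log lines, including one unparseable row.
RAW = """\
2020-01-07 03:12:44 | SENSOR | helium_level=87.2
2020-01-07 03:12:45 | EVENT  | code=E42 severity=3
garbage line that does not match the format
"""

LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \| (?P<kind>\w+)\s*\| (?P<body>.*)$"
)

def parse(raw):
    """Extract (timestamp, record_kind, {parameter: value}) tuples."""
    records = []
    for line in raw.splitlines():
        m = LINE.match(line)
        if not m:
            continue  # a real pipeline would count and report skipped lines
        fields = dict(pair.split("=", 1) for pair in m["body"].split())
        records.append((m["ts"], m["kind"], fields))
    return records

recs = parse(RAW)
print(recs[0])  # ('2020-01-07 03:12:44', 'SENSOR', {'helium_level': '87.2'})
```

The payoff of this parsing stage is exactly what the speaker notes: once the verbose formats are flattened into typed rows, data scientists can query them directly instead of reverse-engineering log formats.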
In the same data warehouse we also store information from other enterprise systems and all kinds of databases: Microsoft SQL Server, Teradata, SAP, Salesforce applications. So the enterprise IT systems are also connected to Vertica, and their data is inserted into Vertica. Then from Vertica, the data is pulled by our predictive models, which are Python and R scripts that run in our proprietary environment. From this environment we generate the alerts, which are then used by the remote monitoring application. And that is not the only application; that is the case of remote monitoring, but we also have applications for reactive remote service. Whenever we cannot predict an issue, or cannot prevent an issue from happening, and we need to react on a customer call, we can still use the data to very quickly troubleshoot the system, find the root cause, and advise on the best service session. Additionally, there are reliability dashboards, because all this data can also be used to perform reliability studies and improve the design of the medical devices, and it is used by R&D. Access is possible with all kinds of tools: Vertica gives the flexibility to connect with JDBC, to create dashboards using Power BI or QlikView, or simply to use R and Python directly to perform analytics. A little summary of the size of the data: at the moment we have integrated about 500 terabytes' worth of data tables, about 30 trillion data points, from more than eighty different data sources, for our complete connected install base, including our customer relationship management system and SAP. We have also integrated data from the factory and from repair shops. This is very useful, because having information from the factory allows us to characterize components and devices when they are new, when they have not yet been used.
So we can model degradation and predict failures much better. Also, we have many years of historical data and of course 24/7 live feeds. To get all this going, we chose very simple designs from the very beginning; the first system was developed back in 2015. At that time, we went from scratch to production in eight months, and it is also a very stable system. To achieve that, we apply what we call exhaustive error handling. Most people attending this conference probably know that when you are dealing with big data, you face all kinds of corner cases you felt would never happen. Just because of the sheer volume of the data, you find all kinds of strange things, and that is what you need to take care of if you want a stable platform, a stable data pipeline. Another characteristic is that we need to handle live data, but we also need to be able to reprocess large historical datasets, because insights into the data are generated over time by the teams using it. Very often they find not only defects, but they also have change requests for new data to be extracted, or extracted in a different way, or aggregated in a different way. So basically, the platform is continuously crunching data. Also, components have built-in monitoring capabilities. Transparency builds trust by showing how the platform behaves: people can trust that they have all the data which is available, and if they don't see the data, or if something is not functioning, they can see why and where the processing has stopped. A very important point is documentation of data sources: every data point has so-called data provenance fields. That is not only the medical device it comes from, with all its identifiers, but also from which file, from which moment in time, from which row, from which byte offset that data point comes.
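Two of the practices just described, never letting one bad record stop the pipeline and stamping every extracted data point with provenance, can be combined in one small sketch. The field names and ETL version string are illustrative, not Philips' actual schema.

```python
from dataclasses import dataclass

ETL_VERSION = "1.0-example"  # invented: "by whom" a data point was created

@dataclass
class DataPoint:
    value: float
    source_file: str
    row: int
    byte_offset: int
    etl_version: str

def extract(path, lines):
    """Parse numeric lines; bad rows are recorded, never fatal."""
    points, rejects = [], []
    offset = 0
    for row, line in enumerate(lines):
        try:
            points.append(DataPoint(float(line), path, row, offset, ETL_VERSION))
        except ValueError:
            # Exhaustive error handling: log the corner case and continue.
            rejects.append((path, row, line))
        offset += len(line) + 1  # +1 for the newline
    return points, rejects

points, rejects = extract("dev/log.txt", ["1.5", "oops", "2.25"])
print(len(points), len(rejects))  # 2 1
```

Provenance stamped this way is what enables the selective repair described next: when a parsing bug is found in ETL version X, only the data points carrying that version need reprocessing.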
Not only that, but also when the data point was created, and by whom, by whom meaning which version of the platform and of the ETL created it. This allows us to identify issues, and when an issue is identified and fixed, it is possible to fix only the subset of the data that is impacted by that issue. Again, this creates trust in the data, which is essential for this type of application. We actually have different environments in our analytics solution. One, which we call the data science environment, is more or less what I have shown so far. It is deployed in our Philips private cloud, but can also be deployed in a public cloud such as Amazon's. It contains the years of historical data and allows interactive data exploration and human queries, so it is a highly variable load. It is used for the training of machine learning algorithms, and it has been designed to allow rapid prototyping on large data volumes. The other environment is the so-called production environment, where we actually score the models with live data for the generation of the alerts. This environment does not require years of data, just months, because a model does not necessarily need years of data to make a prediction; some models may even need just a couple of weeks, or a few months, three months, six months, depending on the type of data and the failure being predicted. And this environment has highly optimized queries, because the applications are stable; they only change when we deploy new models or new versions of the models. It is designed and optimized for low latency, high throughput, and reliability: no human intervention, no human queries. And of course, there are development and staging environments. Another characteristic of all this work is what we call data-driven service innovation. In all this work, we use the data in every step of the process.
First, business case creation. Some people ask how we managed to unlock the investment to create such a platform and to work on it for years; how did we start? We started with a business case, and for that business case, again, we used data. Of course you need to start somewhere, you need to have some data, but basically you can use data to make a quantitative analysis of the current situation and also to make as accurate as possible a quantitative estimate of the value creation. If you have that, you can justify the investment and you can start building. Next to that, data is used to decide where to focus your efforts. In this case, we decided to focus on the use cases that had the maximum estimated business impact, with business impact here meaning customer value as well as value for the company. We want to reduce unplanned downtime, we want to give value to our customers, but it would not be sustainable if, to create that value, we started replacing parts without any consideration for the cost of it. So it needs to be sustainable. Then we use data to analyze the failure modes, actually digging into the data to understand how things fail, for visualization, and to do reliability analysis. And of course data is key to doing feature engineering for the development of the predictive models, for training the models, and for validation with historical data. So data is all over the place. And last but not least, this architecture generates new data about the alerts: how good the alerts are, how well they can predict failures, how much downtime is being saved, how many issues have been prevented. This is also data that needs to be analyzed; it provides insights on the performance of these models and can be used to improve the models further.
And last but not least, once you have the performance of the models, you can use data to quantify, as much as possible, the value that is created. This is where you go back to the first step: you created the first business case with estimates; can you actually show that you are creating value? The more you can close this feedback loop and quantify it, the better it is for having more and more impact. Among the key elements needed for realizing this, I want to mention one about data documentation. It is a practice that we started six years ago, and it has proven to be very valuable. We always document how data is extracted and how it is stored, in data model documents. A data model document specifies how data goes from one place to another, in this case from device logs, for example, to a table in Vertica. It includes things such as the definition of duplicates, queries to check for duplicates, and of course the logical design of the tables, the physical design of the tables, and the rationale. Next to it, there is a data dictionary that explains, for each column in the data model, what it means from a subject-matter-expert perspective: its definition and meaning; if it is a measurement, the unit of measure and the range; if it is some sort of label, the expected values; and whether the value is raw or calculated. This is essential for maximizing the value of the data and for allowing people to use it. Last but not least, there is an ETL design document. It explains how the transformation happens from the source to the destination, including, very importantly, the failure strategy. For example, when you cannot parse part of a file, should you load only what you can parse, or drop the entire file completely?
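The two failure strategies that question contrasts can be sketched in a few lines. The function and strategy names below are invented for illustration; an ETL design document would pin down which strategy applies to which source.

```python
def load_file(lines, strategy="best_effort"):
    """Parse numeric rows; 'best_effort' keeps good rows,
    'all_or_nothing' rejects the whole file on any failure."""
    parsed, failed = [], []
    for line in lines:
        try:
            parsed.append(float(line))
        except ValueError:
            failed.append(line)
    if failed and strategy == "all_or_nothing":
        return [], failed  # drop the entire file; report what broke
    return parsed, failed  # keep the good rows; report the rest

lines = ["1.0", "bad-row", "3.0"]
print(load_file(lines, "best_effort"))     # ([1.0, 3.0], ['bad-row'])
print(load_file(lines, "all_or_nothing"))  # ([], ['bad-row'])
```

Either choice is defensible; what matters, as the talk stresses, is that the choice is documented so consumers of the data know exactly what a missing row means.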
So, do you import best-effort or all-or-nothing? How do you populate records for which there is no value, what are the default values, how is the data normalized or transformed, and how do you avoid duplicates? This again is very important to give the users of the data a full picture of the data itself. And this is not informal: it is a formal process; the documents are reviewed and approved by all the stakeholders, including the subject matter experts and the data scientists, and by a function that we have introduced called the data architect. And of course the documents are available to the end users of the data. We even have links to the documents inside the data warehouse. So if you get access to the database, and you are doing your research, and you see a table or a view and you think, well, that could be interesting, it looks like something I could use for my research, the data itself has a link to the document. From the database, while you are exploring the data, you can retrieve a link to the place where the document is available. This is just a quick summary of some of the results that I am allowed to share at this moment. This is about image-guided therapy: using our remote service infrastructure, for remotely connected systems with the right contracts, we have reduced downtime by 14%. More than one out of three cases are resolved remotely, without an engineer having to go on site. 82% is the first-time-right fix rate; that means the issue is fixed either remotely or, if a visit to the site is needed, only one visit is needed, because the engineer arrives with the right part and fixes the issue straight away. This results on average in 135 hours more operational availability per year, and therefore the ability to treat more patients for the same costs.
I'd like to conclude by citing some nice testimonials from some of our customers, showing that the value we have created is really high impact. This concludes my presentation; thanks for your attention so far. >> Thank you Mauro, very interesting. And we've got a number of questions that have come in, so let's get to them. The first one: how many devices has Philips connected worldwide, and how do you determine which related sensor data workloads get analyzed with Vertica? >> Okay, so that is actually two questions. The first question: how many devices are connected worldwide? Well, I'm not allowed to tell you the precise number of connected devices worldwide, but what I can tell you is that we are in the order of tens of thousands of devices, and of all types, actually. And then, how do we determine which related sensor data gets analyzed with Vertica? A little bit as I said in the presentation, it is a combination of two approaches: a data-driven approach and a knowledge-driven approach. Knowledge-driven because we make maximum use of our knowledge of the failure modes and of the behavior of the medical devices and their components, to select what we think are promising data points and promising features. However, from that moment on, data science kicks in, and data science is used to look at the actual data and come up with quantitative information about what is really happening. It could be that an expert is convinced that a particular range of values of a sensor is indicative of a particular failure, and it turns out that maybe he was too optimistic, or, the other way around, that in practice there are many other situations he was not aware of. That can happen. So thanks to the data, we get a better understanding of the phenomenon and we get better modeling. I believe that answers the question; any other questions? >> Yeah, we have another question.
Do you have plans to perform any analytics at the edge? >> Now that's a good question. I can't disclose our plans on this right now, but edge devices are certainly one of the options we look at to help our customers towards zero unplanned downtime. Not only that, but also to facilitate the integration of our solution with existing and future hospital IT infrastructure. I mean, we're talking about advanced security and privacy, guaranteeing that the data always remains safe, that patient data and clinical data do not go outside the perimeter of the hospital, of course, while we enhance our functionality and provide more value with our services. So yes, edge is definitely a very interesting area of innovation. >> Another question: what are the most helpful Vertica features that you rely on? >> I would say the first that comes to mind at this moment is ease of integration. With Vertica, we are able to load any data source in a very easy way, and it can also be interfaced very easily with all types of applications. This, of course, is not unique to Vertica; nevertheless, the added value here is that it is coupled with incredible speed, incredible speed for loading and for querying. So it is basically a very versatile tool to innovate fast in data science. Another thing is multiple projections, with advanced encoding and compression. This allows us to perform optimizations only when we need them, and without having to touch applications or queries. If we want to achieve high performance, we basically spend a little effort on improving the projections, and we can very often achieve dramatic increases in performance. Another feature is Eon mode; this is great for cloud deployment. >> Okay, another question. What is the number one lesson learned that you can share?
>> My advice would be: document and control your entire data pipeline, end to end, and create positive feedback loops. What I hear often is that enterprises that are not digitally native, and Philips is one of them, I mean, Philips is 129 years old as a company, so you can imagine the legacy that we have; we were not born with the web, like web companies were, with everything online and everything digital. So enterprises that are not digitally native sometimes struggle to innovate in big data, or to do data-driven innovation, because the data is not available or is in silos. Data is controlled by different parts of the organization with different processes, and there is no super-strong enterprise IT system providing all the data for everybody with APIs. So my advice is, from the very beginning, to create as soon as possible an end-to-end solution, from data creation to consumption, that creates value for all the stakeholders of the data pipeline. It is important that everyone in the data pipeline, from the producers of the data to the consumers, gets a piece of the value, a piece of the cake. When the value is proven to all stakeholders, everyone will naturally contribute to keeping the data pipeline running and keeping the quality of the data high. That is the essence of it. >> Yeah, thank you. And in the area of machine learning, what types of innovations do you plan to adopt to help with your data pipeline? >> In the area of machine learning, we're looking at things like automatically detecting the deterioration of models to trigger improvement actions, as well as, connected with that, active learning, again focused on improving the accuracy of our predictive models. Active learning is when additional human intervention, the labeling of difficult cases, is triggered.
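The uncertainty-sampling idea behind active learning, send humans the cases the classifier is least sure about rather than random ones, can be sketched minimally. The case IDs and probabilities below are invented for illustration.

```python
def select_for_review(scored_cases, budget=2):
    """scored_cases: list of (case_id, predicted_failure_probability).
    Return the `budget` cases closest to the 0.5 decision boundary,
    i.e. the ones whose human labels would teach the model the most."""
    by_uncertainty = sorted(scored_cases, key=lambda cp: abs(cp[1] - 0.5))
    return [case_id for case_id, _ in by_uncertainty[:budget]]

cases = [("a", 0.98), ("b", 0.52), ("c", 0.03), ("d", 0.45)]
print(select_for_review(cases))  # ['b', 'd'] -- the borderline cases
```

Confident predictions like "a" and "c" add little training signal; the borderline cases are where an expensive expert label pays off, which is exactly the trade-off described above.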
The machine learning classifier may not be able to classify correctly all the time, and instead of just randomly picking some cases for a human to review, you want the costly humans to review only the most valuable cases from a machine learning point of view, the ones that would contribute the most to improving the classifier. Another area is deep learning, and also applications of more generic anomaly detection algorithms. The challenge with anomaly detection is that we are not only interested in finding anomalies, but also in recommending proper service actions, because without a proper service action, an alert generated by an anomaly loses most of its value. So this is where I think we are heading. >> Go ahead. >> No, that's it, thanks. >> Okay, all right. That's all the time we have today for questions. I want to thank the audience for attending Mauro's presentation, and also for your questions. If we weren't able to answer your question today, we'll respond via email. And again, our engineers will be on the Vertica forums awaiting your other questions. It would help us greatly if you could give us some feedback and rate the session before you sign off; your rating will help guide us when we're looking at content to provide for the next Vertica BDC. Also note that a replay of today's event and a PDF copy of the slides will be available on demand; we'll let you know when that will be by email, hopefully later this week. And of course, we invite you to share the content with your colleagues. Again, thank you for your participation today. This concludes this breakout session; hope you have a wonderful day. Thank you. >> Thank you
Breaking Analysis: Gearing up for Cloud 2020
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hello everyone and welcome to this week's episode of Wikibon's Cube Insights, powered by ETR. In this breaking analysis, I plan to look deeper into the cloud market, and specifically the business results and the momentum of the big three U.S. cloud players. Now, Google last week opened up a bit: they not only broke out YouTube's revenues but also their cloud business, in quite a bit more detail. Like Microsoft's, the numbers are still somewhat opaque and hard to compare with AWS numbers, which I find much cleaner. Nonetheless, by squinting through the data, we're able to better understand the momentum that these three companies have in cloud, and of course the ETR spending data gives us an added data-driven dimension that is really insightful and helpful. Today we're focusing on the big three in cloud: Amazon's AWS, Google's cloud platform GCP, and Microsoft Azure. The other U.S. players are not hyperscalers, and they're really not even in the discussion other than as an extension of their existing business. As an example, it would take IBM and Oracle between four and six years to spend as much on capex as Google spends in four months. Now coming back to the big three: each of these companies is coming at the opportunity with a different perspective. Amazon and Microsoft have been on a collision course for quite some time now, and Google of course aspires to get into that conversation. Amazon in my opinion is the gold standard in cloud, and I specifically refer to infrastructure as a service. They created the market and have earned the right to define the sector. Competitors like Microsoft are smart to differentiate, and I'm going to discuss that. But first, let's take a listen to how Amazon Web Services CEO Andy Jassy thinks about the goals of the AWS business. Roll the clip please.
>> At a high level, our top-down aggressive goals are that we want every single customer who uses our platform to have an outstanding customer experience. And we want that outstanding customer experience, in part, to be that their operational performance and their security are outstanding, but also that it allows them to build projects and initiatives that change their customer experience and allow them to be a sustainable, successful business over a long period of time. And then we also really want to be the technology infrastructure platform under all the applications that people build. >> So what's interesting to me here is how Jassy thinks about the AWS platform. It's a platform to build applications. It's not SaaS; it's not a platform which AWS uses to sell its software packages; it's a place to build apps: any application, any workload, any place in the world. So when I say AWS has clean numbers, it's because they have a clean business. Infrastructure is what they do, period. That's what they report in their numbers, and it's clean. Now compare that with Microsoft. Microsoft is doing incredibly well in the cloud, and we'll come back to that, but Microsoft is taking a much different approach to the market. They report cloud revenue, but it comprises public, private and hybrid. It includes SQL Server, Windows Server, Visual Studio, System Center, GitHub and Azure, and also support services and consulting. The key here is that they define cloud to their advantage, which is smart: trying to differentiate with a multi-cloud, any-cloud, any-edge story. Think Microsoft Azure Stack slash Azure Arc, etc. Now Google, as we know, is coming at this as a latecomer. They admit they're a challenger. Their starting point is G Suite; their cloud focus is infrastructure and analytics. So with that as some background, let's take a look at the Wikibon estimates for IaaS revenue in 2019. What we have here are our estimates of AWS, Azure and GCP's IaaS and PaaS revenue for 2018 and 2019.
We've tried to strip out everything else so we can make an apples-to-apples comparison with Amazon. So let's start with Amazon. The street is concerned about the growth rate of AWS. It grew 35% last quarter, which admittedly is slowing down. But it did just under 10 billion. Think about that. AWS will probably hit a 50 billion dollar run rate this year, 50 billion, and it's growing in the double digits. AWS is going to be larger than Oracle this year, and Cisco is next in its sights. It's like Drew Brees knocking down records in the NFL. Microsoft is very strong, but remember, these are estimates. They report Azure growth, but they don't really give us a dollar figure. We have to infer that from other data. So the narrative on Microsoft is they're catching up to AWS, and in one dimension that's true, because they're growing faster than AWS. But AWS in 2019 grew by an amount almost equal to Azure's entire business in 2018. Now Google is hard to peg. The only thing we know is Google said its cloud business was 9 billion in 2019, up from 5.8 billion in '18 and 4 billion in '17. So we're seeing an accelerating growth rate, which they said is largely attributable to GCP, and they told us that GCP is growing significantly faster than their overall business. Which, remember, includes G Suite, their cloud business that is. Okay. So that's the picture. Now, I want to take a minute to talk about the profitability of the cloud. On the Microsoft earnings call, Heather Bellini of Goldman Sachs, she's an analyst, was effusive, exclaiming how impressed she was with the fact that Microsoft has been consistently increasing its cloud gross margins each quarter. I think it was up five points in the last quarter. And on the Google call, Heather again was pressing Google CEO Sundar Pichai on gross margin guidance for GCP, which Sundar didn't answer. As well, Andy Jassy said in the Q&A at last re:Invent that the cloud was higher margin than retail, but at scale, it's a relatively low margin business.
As compared to software. I would like to comment on all this. First, I think Jassy is sandbagging. AWS is a great margin business in my opinion. AWS has operating margins consistently in the mid 20s, like 26% last quarter. Now, Bellini on the earnings call was pressing on gross margins, which in my opinion are even more impressive. Here's why. This is a chart I drew a long, long time ago. It's a very basic view of the economics of the different sectors of the technology business, namely hardware, software and services. Now, they each have a different margin profile, as we're showing here. On the vertical axis is marginal cost, that is, the incremental cost of producing one additional unit of a product or service. On the horizontal axis is volume. And we're showing the Pre-Cloud Era on the left and the Post-Cloud Era on the right-hand side of the chart. And you can see each segment has a different cost and hence a different margin profile. In hardware, you have economies at volume, but you have to purchase and assemble components, and so at some point your marginal cost hits a floor. Professional services have diseconomies of scale, meaning at higher volume, things get more complex and you have more overhead. Now that red line is software, and everybody loves software because the marginal costs go to zero, and your gross margin approaches the cost of distributing the software. Back in the old days, it really came down to the cost of what you distributed, a disk or a CD. So software gross margins are absolutely huge. Now let me call your attention to the green line that we've labeled outsourcing. In the pre-cloud era, outsourcing companies could get some economies, but it really wasn't game changing. But in the post-cloud world, the hyperscalers are driving automation. Now I'm exaggerating the margin impact, because the cloud players still have to buy hardware and they have other costs.
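To make the marginal-cost picture concrete, here is a toy model of the three curves being described. This is a minimal sketch: the curve shapes (software approaching zero, hardware hitting a floor, services rising with volume) come from the chart discussion above, but every constant is an illustrative assumption, not real cost data.

```python
# Toy model of the marginal-cost curves from the chart discussion.
# All constants are illustrative assumptions.

def software_marginal_cost(volume):
    # Fixed distribution cost spread over units; approaches zero at scale.
    return 100.0 / volume

def hardware_marginal_cost(volume):
    # Economies of scale, but component purchase/assembly sets a floor.
    floor = 40.0
    return floor + 100.0 / volume

def services_marginal_cost(volume):
    # Diseconomies of scale: complexity and overhead grow with volume.
    return 20.0 + 0.05 * volume

for v in (10, 100, 1000):
    print(v,
          round(software_marginal_cost(v), 2),
          round(hardware_marginal_cost(v), 2),
          round(services_marginal_cost(v), 2))
```

Running it shows the three behaviors the analysis hinges on: the software curve collapses toward zero, the hardware curve flattens against its floor, and the services curve keeps climbing.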
But the point is, the gross margin in outsourcing IT to a cloud player is far more attractive to the vendor at scale. So Heather Bellini was essentially asking Satya Nadella, how is it that you can keep expanding your gross margins each quarter? And she was trying to understand if GCP gross margins were tracking similar to where AWS and Azure were back when they were smaller. And I think these curves at least give us some guidance. All right, so now let's pivot into the ETR data. This chart shows net score, which, remember, refers to spending velocity, for each of the big three cloud players, over the past nine surveys, for the cloud computing sector. Now three things stand out. First is that AWS remains very strong, with net scores solidly in the 60% plus range. Second is that Azure has sustained a clear momentum lead over AWS since the July '18 survey. And the third is, look at GCP's uptick. It's very notable and quite encouraging for Google. Now, let's take another cut on this data and drill into the larger companies in the ETR data set. Look what happens when you isolate on the Fortune 500. Two points here. AWS actually retakes the lead over Azure in net score, or spending velocity, even though Azure remains very strong. Amazon's showing in large accounts is very, very impressive, nearly back to early 2018 peak levels at 76%. So really strong net scores. The second point is GCP's uptrend holds firm and actually increases slightly in these larger accounts. So it appears that the big brands, which perhaps used to shy away from cloud, are now increasingly adopting. Now, one of the things ETR does that I love is these drill downs, where they'll ask specific questions that are both timely and relevant. So we want to know what every salesperson wants to know: why do they buy? And that's what this chart shows. It shows data from the ETR drill downs, and on the left-hand side, in the green, are the why-they-buy responses for Microsoft, AWS and Google Cloud.
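For readers unfamiliar with the net score metric used above, here is a minimal sketch of how it is computed. The exact bucket names are assumptions based on public descriptions of the ETR methodology (net score is generally described as the share of respondents increasing spend minus the share decreasing it); the survey numbers below are purely hypothetical.

```python
# Sketch of ETR-style "net score" (spending velocity), under the
# assumption that the survey buckets are: adopting new, increasing,
# flat, decreasing, and replacing/churning.

def net_score(adopting, increasing, flat, decreasing, replacing):
    """All arguments are respondent counts for one vendor in one survey."""
    total = adopting + increasing + flat + decreasing + replacing
    positive = adopting + increasing      # spending more
    negative = decreasing + replacing     # spending less or leaving
    return 100.0 * (positive - negative) / total

# Hypothetical survey of 100 respondents: 20 adopting, 50 increasing,
# 20 flat, 7 decreasing, 3 replacing.
print(net_score(20, 50, 20, 7, 3))  # -> 60.0, i.e. a 60% net score
```

A score in the 60s, like AWS's in the chart, therefore means the overwhelming majority of respondents are spending more, not just holding steady.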
For Microsoft, CIOs cite compatibility with existing skills and the organization's IT footprint, then its feature set, etc. Look, here's the deal: this is Mr. Softee's huge advantage. It's just simpler to migrate work to Azure if you're already running Microsoft apps. And if Microsoft continues to deliver adequate features, it's a no-brainer for many customers. For AWS, the pluses are ROI, near-term and long-term, and I've said many times, best cloud in terms of reliability, uptime, security; AWS has the best cloud for infrastructure. And if you're not incurring huge migration costs, or if you're not Walmart, why wouldn't you go with the best cloud? Now GCP comes down to the tech. Google has good tech, and IT guys, they're geeks. And geeks love tech. And when it comes to analytics, Google is very, very strong as well. Now the right-hand side of this chart shows why this is not, in my opinion, a winner-take-all game. The chart shows the percent of workloads in the cloud today, in two years, and in three years, across different survey dates. Today it's between 25% and 35%, and it's headed upwards to 50%. This is a huge growth opportunity for these companies. You know, sometimes people say to me that Google doesn't care about the cloud because it's such a small piece of their business, or, well, they can't be number one or number two so they'll exit it. I don't buy this for a second. This is a trillion dollar business. Google is in it for the long game, and in my opinion, is going to slowly gain share over time. All right, let's wrap up by looking forward to 2020 and beyond. The first thing I want to say is, I feel good for Google for reporting its cloud revenues, but I think Google has to show more in cloud. I understand it's a good first step, but IT buyers are still going to want to see more transparency. The other point I want to make is we are entering a new era; the story of the past isn't going to be the same as this decade. Buyers aren't afraid of cloud anymore. It has become a mandate.
The dominant services of the past, in compute, storage and networking, are still going to be there, but they're evolving to support analytics, with AI and new types of database services. And these are becoming platforms for business transformation. Competition is, as we've seen, much more real today. Buyers have optionality. And that's going to create more innovation. SaaS continues to be a huge factor, but more so than ever. And hybrid and multi-cloud is increasingly real, and it's become a challenge for IT buyers, so I expect AWS is going to enter the ring in a bigger way to expand its TAM. Finally, developers are no longer tinkerers, they are product creators. Now, as I said, there's a huge market, and the big three can all participate, as well as overseas players like Alibaba. As a customer, it's becoming a more and more complicated situation. Cloud is not just about experimentation or startups; it's increasingly about something that you really need to get right. Where to bet, migration, and managing risks all become much more critical. On one hand, optionality is a good thing, but if you make the wrong bet, it could be costly if you don't have a good exit strategy. Now as always, I really appreciate the comments that I get on my LinkedIn posts, and on Twitter I'm @DVellante. So thanks for watching, and thanks for your comments and your feedback. This is Dave Vellante for theCube Insights powered by ETR. We'll see you next time. (upbeat music)
theCUBE Insights | Microsoft Ignite 2019
>> Narrator: Live from Orlando, Florida, it's theCUBE, covering Microsoft Ignite. Brought to you by Cohesity. >> Good morning everyone, and welcome back to theCUBE's live coverage of Microsoft Ignite. We are here in the Orange County Convention Center. I'm your host Rebecca Knight, along with Stu Miniman. Stu, this is Microsoft's big show. 26,000 people from around the globe, all descending on Orlando. This is the big infrastructure show. Thoughts, impressions, now that we're on day two of a three-day show? >> Yeah, Rebecca. Last year I had this feeling that it was a little bit too much talking about the Windows 10 transition and the latest updates to Office 365. I certainly wanted to make sure that we really dug in more to what's going on with Azure, what's happening in the developer space. Even though they do have a separate show for developers, it's Microsoft Build. They actually have a huge partner show. And so, Microsoft has a lot of shows. So, what is this show that is decades old? And really it is the combination of Microsoft as a platform today. Satya Nadella yesterday talked about empowering the world. This morning, Scott Hanselman was in a smaller theater, talking about app devs. And he came out and he's like, "Hey, developers, isn't it a little bit early for you this morning?" Everybody's laughing. He said, "Even though we're kicking off at 9:00 a.m. Eastern." He said, "That's really early, especially for anybody coming from the West Coast." He was wearing his Will Code For Tacos shirt. And we're going to have Scott on later today, so we'll talk about that. But, where does Microsoft sit in this landscape? That's a question we've been asking. I spent a lot of time looking at the cloud marketplace. Microsoft has put themselves as the clear number two behind AWS. But you're trying to figure out, because SaaS is a big piece of what Microsoft does, and they have their software estate and their customer relationships.
So how many of those are what we used to call Windows shops? And you had Windows, and people are going to ask: Will it be .NET? Will it be other operating systems? Will it come into Azure? Where do they play? And the answer is, Microsoft's going to play in a lot of places. And what was really driven home yesterday is, it's not just about the Microsoft solutions, it is about the ecosystem. They really have embraced their role, very supportive of open source. And trust is something that I know both you and I have been honing in on, because, in the big tech market, Microsoft wants to stand up and say, "We are the most trusted out there. And therefore, turn to us and we will help you through all of these journeys." >> So you're bringing up so many great points, and I want to now go through each and every one of them. So, absolutely, we are hearing that this is the kinder, gentler Microsoft. We had Dave Totten on yesterday, and he was, as you just described, talking about how much Microsoft is embracing and supporting customers who are using a little bit of Microsoft here, a little bit of other companies there. I'm not going to name names, but they're seemingly demanding: I just want best of breed, and this is what I'm going to do. And Microsoft is supporting that, championing that. And, of course, we're seeing this as a trend in the broader technology industry. However, it feels different, because it's Microsoft doing this. And they've been so proprietary in the past. >> Yeah, well, and Rebecca, it's our job on theCUBE; actually, I'm going to name names. (laughs) And actually Microsoft is-- >> Okay. >> Embracing of this. So, the thing I'm most interested in at the show was Azure Arc. And I was trying to figure out, is this a management platform? And at the end of the day really, it is; there's Kubernetes in there, and it's specifically tied to applications. So they're going to start with databases specifically.
My understanding is SQL is the first piece, and it sounds almost like the next incarnation of platform as a service, or PaaS. And say, I can take this, I can put it on premises, in Azure, or on AWS. Any of those environments, and manage all of them the same. Reminds me of what I hear from VMware with Tanzu. VMworld Europe is going on right now in Barcelona. The big announcement is the relationship with VMware on Azure. If I got it right, it's actually in beta now. So, Arc being announced, and the next step of where Microsoft and VMware are going together, it is not a coincidence. They are not severing the ties with VMware. VMware, of course, partners with all the cloud providers, most notably AWS. Dave Totten yesterday talked about Red Hat. You want Kubernetes? If you want OpenShift, if you are a Red Hat customer and you've decided that the way I'm going to leverage and use and have my applications run is through OpenShift, Microsoft says great. And the best, most secure place to run that environment is on Azure. So, that's great. So Microsoft, when you talk about choice, when you talk about flexibility, and you talk about agility, because it is kinder and gentler, but Satya said they have that tech intensity. So all the latest and greatest, the new things that you want, you can get from Microsoft, but they are also going to meet you where you are. That was Jeremiah Dooley, the Azure advocate, who said, "There's lots of bridges we need to make." Microsoft has lots of teams. It's not just the DevOps, it's not just letting the old people do their own thing; from your virtualization through your containerization and everything in between, microservices, serverless, and the like, Microsoft has teams, they have partners. Sure, you could buy everything from Microsoft, but they know that there are lots of partners and pieces.
And between their partners, their ecosystem, their channel, and their go-to-market, they're going to pull this together to help you leverage what you need to move your business forward. >> So, next I want to talk about Scott Hanselman, who was up on the main stage; we're going to have him on the show, and he was, as you said, adorned in coder-dude attire with a cool t-shirt and snappy kicks. But his talk was app development for everyone. And this is really Microsoft's big push, democratizing computing: hey, anyone can do this. And Satya Nadella, as we've talked about on the show, says 61% of technologists' jobs are not in the technology industry. So this is something that Microsoft sees as a trend that's happening in the employment market. So they're saying, "Hey, we're going to help you out here." But Microsoft is not a hardware company. So how does this really change things for Microsoft in terms of the products and services-- >> Well right, >> It offers. >> So really what we're talking about here, we're talking about developers, right? 61% of job openings for developers are outside the tech sector. And the high-level message that Scott had is: your tools, your language, your apps. And what we have is, just as we were talking about choice of clouds, it's choice of languages. Sure, they'd love to say .NET is wonderful, but you want your Java, your PHP, all of these options. And chances are, not only are you going to use many of them, but even if you're working on a total solution, different groups inside your company might be using them, and therefore you need tools that can span them. The interesting example they used was Chipotle. There's a difference between when you're ordering, going through the delivery service, and some of the back-end pieces, and data needs to flow between them, and it can't be, "Oh wait, I've got silos of my data, I've got silos of all these other environments."
So, developer tools are all about having the company just work faster and work across environments. I was at the AnsibleFest show earlier this year. And Ansible is one of those tools that actually supports different roles, where you have the product owner, the developer, or the operations person. They all have their way into that tool. And so, Microsoft's showing some very similar things, as in, when I build something, it's not, "Oh, wait, we all chose this language." And so many of the tools were, "Okay, well, I had to standardize on something." But that didn't fit into what the organization needed. So I need to be able to get to what they all had. Just like eventually, when I'm picking my own taco, I can roll it, bowl it, soft or hard shell-- >> It was a cool analogy. >> And choose all my toppings in there. So it is Taco Tuesday here-- >> Yes. >> At Microsoft Ignite, and the developers like their choices of tools, just like they like their tacos. >> And they like their extra guac. So going back to one of the other points you made at the very opening, and this is the competitive dynamic that we have here. We had David Davis and Scott Lowe on yesterday from ActualTech Media. Scott was incredibly bullish about Microsoft, saying it could really overtake AWS, not tomorrow, but within the next decade. Of course, the choice for JEDI certainly could accelerate that. What do you make of it? I mean, do you think that's still pie in the sky here? AWS is so far ahead. >> So look, first of all, when you look at the growth rates, just to take the actual numbers, we know what AWS's revenue is. Last quarter, AWS did $9 billion, and they're still growing at about a 35% clip. When I look at Microsoft, they have their Intelligent Cloud bucket, which is Azure, Windows Server, SQL Server and GitHub. And that was 10.8 billion. And you say, "Oh, okay, that's really big." But last year, Azure did about $12 billion.
So, AWS is still two to three times larger when you look at infrastructure as a service. But SaaS is a hugely important piece of what's going on in the cloud opportunity. AWS really is more of the platform and infrastructure service; they absolutely have some of the PaaS pieces. Azure started out as PaaS and still has this. So you're trying to count these buckets, and Azure is still growing fast; last quarter it was 64%. So if you look at the projection, is it possible for Azure to catch up in the next three years? Well, Azure's growth rate is also slowing down, so I don't think it matters that much. There is a number one and a number two, and they're both clear, valid choices for a customer. And this morning at breakfast, I was talking to a customer, and they are a very heavily Microsoft shop. But absolutely, they've got some AWS on the side. They're doing Azure, they've got a lot of Azure, being here at a Microsoft show. And when I go to AWS, even when I talk to the companies that are all in on AWS: "Oh, you've got O365?" "Of course we do." "Oh, if you're starting to do O365, are there any other services that you might be using out of Azure?" "Yeah, that's possible." I know Google is in the mix. Alibaba's in the mix. Oracle, well, we're not going to talk about Oracle Cloud, but we talked about Oracle, because they will allow their services to run on Azure specifically. We talked about that a lot yesterday, especially how that ties into JEDI. So, look, I think it is great when we have a healthy, competitive marketplace. Today really, it is a two horse race. AWS and Azure are the main choices for customers. Everyone else is really a niche player. Even a company like IBM, there are good solutions that they have, but they play in a multi-cloud world. Google has some great data services, and is absolutely an important player when you talk about multi-cloud, for all they've done with Kubernetes and Istio. I'm going to be at KubeCon in a couple of weeks, and Google is front and center there. But if you talk about the general marketplace, Microsoft has a lot of customers, they have a lot of applications, and therefore, can they continue to mature that market and grow their environment? Absolutely. AWS has so many customers, and their marketplace is stronger. It's an area that I want to dig into a little bit more at this show, the Azure Marketplace, how much we've talked about the ecosystem. But can I just procure through the cloud and make it simpler? A big theme we've talked about is that cloud in the early days was supposed to be cheap and simple, and it is neither of those things. So, how do we make it easier, so that we can go from the 20% of applications in the public cloud up to 50% or more? Because it is not about everything going to the public cloud, but about customers putting their applications and their data in the right place at the right time with the right services. And then we haven't even talked about edge computing, which Microsoft has a big push on, especially with their partners. We talked to HP a little bit about that yesterday. But really, the surface area that this show and Microsoft covers is immense and global.
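The "can Azure catch up?" question above can be sketched with the figures quoted in this discussion: annualizing the quoted $9 billion AWS quarter gives roughly $36B/year growing ~35%, against Azure at about $12B/year growing ~64%. Holding the growth rates constant is a deliberate simplification, since, as noted, both rates are slowing; the point is only to show the shape of the math.

```python
# Back-of-the-envelope catch-up projection using only figures quoted
# in the discussion. Constant growth rates are an assumption.

def years_to_catch_up(leader, leader_growth, chaser, chaser_growth,
                      max_years=20):
    """Compound both revenues yearly; return years until the chaser
    meets or passes the leader (capped at max_years)."""
    years = 0
    while chaser < leader and years < max_years:
        leader *= 1 + leader_growth
        chaser *= 1 + chaser_growth
        years += 1
    return years

# AWS ~$36B at 35% vs Azure ~$12B at 64%.
print(years_to_catch_up(36e9, 0.35, 12e9, 0.64))  # -> 6 at constant rates
```

So even with a 3x starting gap and these optimistic constant rates, the crossover is about six years out, not three, which supports the "I don't think it matters that much" framing.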
I'm going to be at Kube Con in a couple of weeks and Google is front and center there. But if you talk about the general marketplace, Microsoft has a lot of customers, they had a lot of applications and therefore, can they continue to mature that market and grow their environment? Absolutely. AWS has so many customers, they have the marketplace is stronger. It's an area that I want to dig in a little bit more at this show is the Azure Marketplace, how much we talked about the ecosystem. But, can I just procure through the cloud and make it simpler? Big theme we've talked about is, cloud in the early days was supposed to be cheap and simple. And it is neither of those things. So, how do we make it easier, so that we can go from the 20% of applications in the public cloud, up to 50% or more? Because it is not about all everything goes to the public cloud, but making customers put the applications and their data in the right place at the right time with the right services. And then we haven't even talked about edge computing which Microsoft has a big push on, especially with their partners. We talked to HP, a little bit about that yesterday. But really the surface area that this show and Microsoft covers is immense and global. >> It is indeed, and we are going, this is our second day of three days of coverage and we're going to be getting into all of those things. We've got a lot of great guests. We have Cute Host, Keith Townsend, Dave Cahill, a former Wikibon guy, a lot of other fantastic people. So I'm excited to get it on with you today, Stu. >> Thank you, Rebecca. Great stuff. >> I'm Rebecca Knight, for Stu Miniman. Stay tuned for more of theCUBE's live coverage of Microsoft Ignite. (upbeat music`)
Denise Dumas, Red Hat | Red Hat Summit 2019
(upbeat music) >> Narrator: Live from Boston, Massachusetts, it's theCube! Covering Red Hat Summit 2019. Brought to you by Red Hat. >> Welcome back, live here on theCube, as we continue our coverage of Red Hat Summit, along with Stu Miniman, I'm John Walls. It's great to have you here, in one of America's great cities! We're in Boston, Massachusetts, for day one of the three-day conference. And we're now joined by Denise Dumas, who is with Red Hat, and working on the RHEL 8 release that just became, I guess, available today, right? >> Today! >> Huge news! >> Yes! >> I have to first off compliment you on rocking these Red Hat red earrings. And then I look down below, you've got the Red Hat sneakers on too, so you are company-branded >> Absolutely. >> up and down, literally, from head to toe. >> I'm very proud of the earrings, because some of the support guys made them up on their 3D printer back at the office. >> John: How cool is that? >> I love it. >> Now, we had Stefanie Chiras on a little bit earlier, and we were talking about RHEL 8 and all that came with that, and we talked about the deeper dive we're gonna take with you a little bit later on; now we're at that moment. Just first off, in general, how do you feel when something like this finally gets out of the beta stage, gets moved into a much more active space, and is now available to the marketplace? >> It's like fresh air, right? >> Thrilled. >> Oh, thrilled. Well, you know, in a way, it's almost an anticlimax, because we're working on 8.1 already, and we're talking about RHEL 9, but this is just such an opportunity to take a moment, especially for so many of the RHEL engineering and QE team who are wandering around the summit, and for us all to just kind of say, (sighs) it's out. It's out, let's see if they like it, I hope they do.
But you know, we've been working with so many of the customers and partners through the High Touch Beta Program, 40,000 downloads of the beta, and there has been tremendous feedback. We've been really pleased to see how many people are willing to pick it up and experiment with it, and tell us what they like and what they don't like. >> So Denise, it's always great to hear from the customers, but take a second and celebrate that internal work, 'cause so much code, so many engineers, years' worth of planning and coding go into this. So give us a little bit of a look behind the curtain, if you would. >> Well, you know, so much community as well, right? Because, like everything else that Red Hat does, it's totally open source. So, many communities feed into Fedora, and Fedora feeds into RHEL. So we took Fedora 28, pulled it in, and then did a lot more work on it to move it into RHEL. And this year, we've done the distro differently. There's a core kernel, the noodles, you know, and then there are the application streams. So we've done a lot of work to separate out the two types of package that make up RHEL, so that we can spin the application streams faster. That's where things like developer tools, language runtimes, databases live, the things that are more aimed at developers, where a ten-year life cycle is not natural for those, right? And yet the core of RHEL, the kernel, you rely on that, and we're gonna support it for ten years, but you need your application streams to keep the developers happy. So we tried to make the admin side happy, and the developer side happy. >> All right so, as Vice President of Software Engineering, your team had, certainly, its focuses along the way.
And dealing with, I guess, the complexities that there were, was there maybe a point in the process where you had an uh-oh moment? I'm just curious, because it's not always smooth sailing, right? You run into speed bumps, and sometimes there are barriers, not just bumps. But in terms of what you were trying to enable, and what your vision was to get there, talk about that journey from the engineering side of the equation, and maybe the hiccups you had to deal with along the way. >> So, RHEL 8 has been interesting, because in the course of putting the product together, the RHEL organization went through our own digital transformation. So just like our customers have been moving to become more agile, the RHEL engineering team, and our partners in QE, and our partners in support, have worked together to deliver the operating system in a much more agile way. I mean, did you ever think you would hear agile and operating system in the same breath? Right, it's like, wow. So that has been an interesting process, and a real set of challenges, because it's meant that people have had to change work habits that have served them well for many, many years. It's a different world. So we've been very fortunate in taking people through a lot of changes; they've been very flexible. But there have been some times when it's just been too much too fast, like (gasps), and so it's like, everybody take a deep breath, okay, we'll do. You know, a couple of weeks, we'll consolidate. It's been a really interesting process. Clearly the kernel, so we've got the 4.18 kernel, and the kernel comes in and we have to understand what the kernel configuration is gonna be. And that can be a lengthy process, because it means you have to understand, when you pull a kernel out of the upstream, some of the features are pretty solid, some are maybe less solid. We have to make an educated call about what's ready to go and what's not. So figuring out the kernel configuration can take a while.
We do that with our friends in the performance team. And so every inch of the way, we build it, we see how the performance looks, maybe we do some tweaking, change that lock, and everything we do goes back upstream, to make the upstream kernel better. So that, as well, has been an interesting process, because there's a lot of change. We're really proud of the performance in RHEL 8; we think that it's a significant improvement in many different areas. We've got the Shack and Larry Show tomorrow; we'll talk all the way through performance, but that's been a big differentiator, I think. >> All right so, Denise, security, absolutely, is top of mind always? >> Denise: Always. >> Some updates in RHEL 8? Maybe you walk us through security and some of the policy changes. >> Yeah, we bake security in, right? We have a secure supply chain, and, talk about difficult things for RHEL 8, every package that comes in, we totally refresh everything from upstream. But when they come in, we have to inspect all the crypto, we have to run them through security scans, vulnerability scanners; we've got three different vulnerability scanners that we're using. We run them through penetration testing, so there's a huge amount of work that comes just with inheriting all that from the upstream. But in addition to that, we put a lot of work into making sure that, well, our crypto has to be FIPS certified, right, which means you've got to meet standards. We also have work that's gone in to make sure that you can enable a security policy consistently across the system, so that no application that you load on can violate your security policy. We've got nftables in there, new firewalling, network-bound disk encryption; that actually kind of ties in with a lot of the system management work that we've done. So a thing that I think differentiates RHEL 8 is we put a lot of focus on making it easy to use on day one, and easy to manage day two.
It's always been interesting, you know, our customers have been very very technical. They understand how to build their golden images, they understand how to fine-tweak everything. But it's becoming harder and harder to find that level of Linux expertise. >> I'll vouch for that. >> And also, once you have those guys, you don't want to waste their time on things that could be automated. And so we've done a lot of work with the management tooling, to make sure that the daily tasks are much easier, that we're integrated better with Satellite, we've got Ansible system roles, so if you use Ansible system roles we wanted to make it easy, we wanted to make the operating system easy to configure. So the same work that we do for RHEL 8 itself also goes into Red Hat Enterprise Linux CoreOS, which will be shipping with OpenShift. So it's a subset of the package set, same kernel. But there it's a very, very focused workload that they're gonna run. So we've been able to do a really opinionated build for RHEL CoreOS. But for RHEL 8 itself, it's got to be much more general purpose, we've focused on some of our traditional workloads, things like SAP, SAP HANA, SQL Server, so we've done a lot to make sure that those deploy really easily, we've got tuning profiles that help you make sure you've got your system set up to get the right kind of performance. But at the same time, there are lots of other applications out there and we have to do a really good general-purpose operating system. We can be opinionated to some extent, but we have to support a much, much wider range. >> Yeah, I mean, Denise, I think back, it's been five years since the last major release. >> Yeah. 
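The Ansible system roles mentioned above can be driven from a short playbook. This sketch follows the shape of the timesync system role; the host group and NTP servers are placeholders, not anything from the interview:

```yaml
# Example playbook using a RHEL system role; host group and servers
# are illustrative placeholders.
- hosts: rhel8_servers
  become: yes
  vars:
    timesync_ntp_servers:
      - hostname: 0.pool.ntp.org
        iburst: yes
  roles:
    - rhel-system-roles.timesync
```

The point of the roles is exactly what Denise describes: the same playbook configures the service consistently across fleet-sized environments, without hand-editing each host.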
>> And in the last five years, you know, Red Hat lived a lot of places, but, oh, the diversity of location in today's multi-cloud world, with containerization and everything happening there, and from an application standpoint, the machine learning and new modern apps, there's such breadth and depth, seems like an order of magnitude more effort must be needed to support the ecosystem today than it was five years ago. >> Well, it's interesting that you say ecosystem, because you don't play in those places without a tight network of partnerships. So we have lots, of course, hardware partnerships, that's the thing that you think about when you think about the operating system, but we also have lots of partnerships with the software vendors. We've done a lot of work this year with Nvidia, we've supported their one and two systems, right, and we've done a lot to make sure that the workloads are happy. But, increasingly, as ISVs move to containerize their applications, when you containerize you need a user space that you bring along with you, you need your libraries, you need your container runtime. So we've taken a lot of the RHEL user space content, and put it into something that we're calling the Universal Base Image. So, you can rely on that layer of RHEL content when you build your container, put your application into a container. You can rely on that, you can get a stream of updates associated with that, so you can maintain your security, and when you deploy it on top of RHEL or with OpenShift, we can actually support it well for you. >> Walk me through the migration process, a little bit, if I'm running 7, and I'm shifting over, and I'm gonna make the move, how does that work? >> Denise: Carefully (laughs). >> Yeah sure, right. (laughs) 'Cause I've got my own concerns, right, I've got-- >> Of course! 
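Returning to the Universal Base Image for a moment: an ISV containerizing on UBI ends up with a build file along these lines (a hedged sketch; the application and packages are invented, only the base image reference follows the published naming):

```dockerfile
# Illustrative build on the Universal Base Image; the app and the
# installed packages are placeholders.
FROM registry.access.redhat.com/ubi8/ubi-minimal

RUN microdnf install -y python3 && microdnf clean all
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

This is the "user space you bring along with you" from the interview: the RHEL libraries live inside the image, get a stream of updates, and are supportable when the container runs on RHEL or OpenShift.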
>> Sure, I've got to think, daily operation, or moment-to-moment operation, I can't afford to have downtime, I've got to make sure it's done in a secure way, I've got to make sure that files aren't corrupted, and things aren't lost, and, so that in itself is a teeth-gnashing moment, I would think, a bit, how do you make that easier for me? >> Yeah, well, especially when you've got 10,000 servers that you need to manage, and you want to start migrating them. You absolutely have to come to tomorrow morning's demo, we're gonna do, it's live! >> It's always tricky, right, live is always, yeah. >> Yeah, but migration, so we've put a lot of effort into migration. We're looking at, it's no good if the applications can't come along, why would you migrate the operating system, you wanna migrate the application. So we've got tooling that examines your environment, and tries to automate as much of it as we can. It looks at your existing environment, it looks at what you're gonna move through, it'll ask a few questions, it's totally driven by plug-in equivalents, we call them actors, and they understand the various, like one understands how to do network configuration, one understands how to replicate your disk configuration. It's integrated with automated backup and rollback, which is a thing that people have wanted for a long time, so we've got a much tighter level of safety there. We won't be able to migrate everything, I'm sure, but, as time goes along we add more and more and more into that utility as we learn more about what matters to customers. >> So, tomorrow morning, live demo. >> Denise: Live demo! >> Get a good night's sleep tonight! >> Denise: Put on your crash helmets! >> Fingers crossed. But thanks for joining us here and talking about the RHEL 8, about the rollout, and we wish you well with that, off to a great start for sure. >> Thank you so much, >> Thank you, Denise. >> the RHEL teams are amazing, I love my guys. >> Great, thanks for being with us. 
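The plug-in "actors" Denise describes, each one understanding a single migration concern, can be pictured with a toy registry. Everything here (actor names, checks, messages) is invented to show the shape of the pattern, not Red Hat's actual tooling:

```python
# Toy sketch of a plug-in "actor" migration planner: each actor inspects
# one part of the environment and proposes a step. All names are invented.
from typing import Callable, Dict, List

ACTORS: Dict[str, Callable[[dict], str]] = {}

def actor(name: str):
    """Register a migration actor under a short name."""
    def register(fn):
        ACTORS[name] = fn
        return fn
    return register

@actor("network")
def check_network(env: dict) -> str:
    return "migrate bridge config" if env.get("bridges") else "network ok"

@actor("disk")
def check_disk(env: dict) -> str:
    return "replicate LVM layout" if env.get("lvm") else "disk ok"

def plan_migration(env: dict) -> List[str]:
    """Run every registered actor and collect the steps it proposes."""
    return [f"{name}: {fn(env)}" for name, fn in sorted(ACTORS.items())]

print(plan_migration({"bridges": True, "lvm": True}))
# → ['disk: replicate LVM layout', 'network: migrate bridge config']
```

Adding support for a new subsystem then means registering one more actor, which matches the "we add more and more into that utility" evolution described above.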
>> Denise: Thank you so much. >> We'll continue here at the Red Hat Summit. You're watching theCUBE, live from Boston. (upbeat music)
Bob Ward & Jeff Woolsey, Microsoft | Dell Technologies World 2019
(energetic music) >> Live from Las Vegas. It's theCUBE. Covering Dell Technologies World 2019. Brought to you by Dell Technologies and its Ecosystem Partners. >> Welcome back to theCUBE, the ESPN of tech. I'm your host, Rebecca Knight, along with my co-host Stu Miniman. We are here live in Las Vegas at Dell Technologies World, the 10th anniversary of theCUBE being here at this conference. We have two guests for this segment. We have Jeff Woolsey, the Principal Program Manager Windows Server/Hybrid Cloud, Microsoft. Welcome, Jeff. >> Thank you very much. >> And Bob Ward, the principal architect at Microsoft. Thank you both so much for coming on theCUBE. >> Thanks, glad to be here. >> It's a pleasure. Honor to be here on the 10th anniversary, by the way. >> Oh is that right? >> Well, it's a big milestone. >> Congratulations. >> Thank you very much. >> I've never been to theCUBE. I didn't even know what it was. >> (laughs) >> Like what is this thing? >> So it has now been a couple of days since Satya Nadella stood up on that stage and talked about the partnership. Now that we're sort of a few days past that announcement, what are you hearing? What's the feedback you're getting from customers? Give us some flavor there. >> Well, I've been spending some time in the Microsoft booth and, in fact, I was just chatting with a bunch of the guys that have been talking with a lot of customers as well and we all came to the consensus that everyone's telling us the same thing. They're very excited to be able to use Azure, to be able to use VMware, to be able to use these in the Azure Cloud together. They feel like it's the best of both worlds. I already have my VMware, I'm using my Office 365, I'm interested in doing more and now they're both collocated and I can do everything I need together. >> Yeah it was pretty interesting for me 'cause VMware and Microsoft have had an interesting relationship. I mean, the number one application that always lived on a VM was Microsoft stuff. 
The operating system standpoint and everything, but especially in the end-user computing space Microsoft and VMware weren't necessarily on the same page, so to see both CEOs, also both CUBE alums, up there talking about that really had most of us sit up and take notice. Congratulations on the progress. >> For me, being in the SQL Server space, it's a huge popular workload on VMware, as you know, and virtualization, so everybody's coming up to me saying when can I start running SQL Server in this environment? So we're excited to kind of see the possibilities there. >> Customers, they live in a heterogeneous environment. Multicloud has only amplified that. It's like, I want to be able to choose my infrastructure, my Cloud, and my application of choice and know that my vendors are going to rally around me and make this easy to use. >> This is about meeting our customers where they are, giving them the ability to do everything they need to do, and make our customers just super productive. >> Yeah, absolutely. >> So, Jeff, there's some of the new specifics, give us the update as to the pieces of the puzzle and the various options that Microsoft has in this ecosystem. >> Well, a lot of these things are still coming to light and I would tell people definitely take a look at the blog. The blog really goes in depth. But key part of this is, for customers that want to use their VMware, you get to provision your resources using, for example, the well known, well easy to use Azure Infrastructure and Azure Portal, but when it's time to actually do your VMs or configure your network, you get to use all of the same tools that you're using. So your vCenter, your vSphere, all of the things that a VMware administrator knows how to do, you continue to use those. So, it feels familiar. You don't feel like there's a massive change going on. And then when you want to hook this up to your Azure resources, we're making that super easy, as well, through integration in the portal. 
And you're going to see a lot more. I think really this is just the beginning of a long road map together. >> I want to ask you about SQL 19. I know that's your value, so-- >> That's what I do, I'm the SQL guy. >> Yeah, so tell us what's new. >> Well, you know, we launched SQL 19 last year at Ignite with our preview of SQL 19. And it'll be, by the way, it'll be generally available in the second half of this calendar year. We did something really radical with SQL 19. We did something called data virtualization, PolyBase. Imagine as a SQL customer connecting with SQL and then getting access to Oracle, MongoDB, Hadoop data sources, all sorts of different data in your environment, but you don't move the data. You just connect to SQL Server and get access to everything in your corporate environment now. We realize you're not just going to have SQL Server now in your environment. You're going to have everything. But we think SQL can become like your new data hub to put that together. And then we built something called big data clusters where we just deploy all that for you automatically. We even actually built a Hadoop cluster for you with SQL. It's kind of radical stuff for the normal database people, right? >> Bob, it's fascinating times. We know it used to be like you know I have one database and now when I talk to customers no, I have a dozen databases and my sources of data are everywhere and it's an opportunity of leveraging the data, but boy are there some challenges. How are customers getting their arms around this? >> I mean, it's really difficult. We have a lot of people that are SQL Server customers that realize they have those other data sources in their environment, but they have skills called TSQL, it's a programming language. And they don't want to lose it, they don't want to learn, like, 10 other languages, but they have to access that data source. Let me give you an example. 
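The data virtualization flow Bob describes maps onto PolyBase external tables in SQL Server 2019. This is only a sketch: the data source, credential, and table names below are invented for illustration, not taken from the interview:

```sql
-- Hedged sketch of SQL Server 2019 PolyBase data virtualization.
-- Connection details and object names are invented.
CREATE EXTERNAL DATA SOURCE OracleAccounting
WITH (LOCATION = 'oracle://oracle-host:1521', CREDENTIAL = OracleCred);

CREATE EXTERNAL TABLE dbo.AcctLedger (
    LedgerId INT,
    Amount   DECIMAL(18, 2)
)
WITH (LOCATION = '[ORCL].[ACCT].[LEDGER]', DATA_SOURCE = OracleAccounting);

-- Join remote Oracle rows with local SQL Server data; nothing is copied.
SELECT c.CustomerName, l.Amount
FROM dbo.Customers AS c
JOIN dbo.AcctLedger AS l ON l.LedgerId = c.LedgerId;
```

The TSQL skills point from the interview is visible here: once the external table exists, querying Oracle looks exactly like querying any local table.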
You got Oracle in a Linux environment as your accounting system and you can't move it to SQL Server. No problem. Just use SQL with your TSQL language to query that data, get the results, and join it with your structured data in SQL Server itself. So that's a radical new thing for us to do and it's all coming in SQL 19. >> And what it helps-- what really helps break down is when you have all of these disparate sources and disparate databases, everything gets siloed. And one of the things I have to remind people is when I talk to people about their data center modernization and very often they'll talk about you know, I've had servers and data that's 20, 30, even, you know, decades old and they talk about it almost like it's baggage, it's luggage. I'm like, no, that's your company, that's your history. That data is all those customer interactions. Wouldn't it be great if you could actually take better advantage of it? With this new version of SQL, you can bring all of these together and then start to leverage things like ML and AI to actually better harvest and data mine that, rather than keeping those in disparate silos that you can't access. >> How ready would you say are your customers to take advantage of AI and ML and all the other-- >> It's interesting you say that because we actually launched the ability to run R and Python with SQL Server even two years ago. And so we've got a whole new class of customers, like data scientists now, that are working together with DBAs to start to put those workloads together with SQL Server so it's actually starting to become a really big deal for a lot of our community. >> Alright, so, Jeff, we had theCUBE at Microsoft Ignite last year, first time we'd done a Microsoft show. As you mentioned, our 10th year here, at what used to be EMC World. It was interesting for me to dig in. There's so many different stack options, like we heard this week with Dell Technologies. 
Azure, I understood things a lot from the infrastructure side. I talked to a lot of your partners, talked to me about how many nodes and how many cores and all that stuff. But very clearly at the show, Azure Stack is an extension of Azure and therefore the applications that live on it, how I manage that, I should think Azure first, not infrastructure first. There's other solutions that extend the infrastructure side, things like WSSD I heard a lot about. But give us the update on Azure Stack, always interested in the Cloud, watching where that fits and some of the other adjacent pieces of the portfolio. >> So the Azure Stack is really becoming a rich portfolio now. So we launched with Azure Stack, which is, again, to give you that Cloud consistency. So you can literally write applications that you can run on premises, you can move to the Cloud. And you can do this without any code change. At the same time, a bunch of customers came to us and they said this is really awesome, but we have other environments where we just simply need to run traditional workloads. We want to run traditional VMs and containers and stuff like that. But we really want to make it easy to connect to the Cloud. And so what we have actually launched is Azure Stack HCI. It's been out about a month, month and a half. And, in fact, here at Dell Technologies World, we actually have Azure Stack HCI Solutions that are shipping, that are on the marketplace right now here at the show as well and I was just demoing one to someone who was blown away at just how easy it is with our Admin Center integration to actually manage the hyperconverged cluster and very quickly and easily configure it to Azure so that I can replicate a virtual machine to Azure with one click. So I can back up to Azure in just a couple clicks. I can set up easy network connectivity in all of these things. 
And best yet, Dell just announced their integration for their servers into Admin Center here at Dell Technologies World. So there's a lot that we're doing together on premises as well. >> Okay, so if I understand right, is Dell is that one of their, what they call Ready Nodes, or something in the VxFlex family? >> Yes. >> That standpoint. The HCI market is something that when we wrote about it when it was first coming out, it made sense that, really, the operating system and hypervisor companies take a lead in that space. We saw VMware do it aggressively and Microsoft had a number of different offerings, but maybe explain why this offering today versus where we were five years ago with HCI. >> Well, one of the things that we've been seeing, so as people move to the Cloud and they start to modernize their applications and their portfolio, we see two things happen. Generally, there are some apps that people say hey, I'm obviously going to move that stuff to Azure. For example, Exchange. Office 365, Microsoft, you manage my mail for me. But then there are a bunch of apps that people say that are going to stay on Prem. So, for example, in the case of SQL, SQL is actually an example of one I see happening going in both places. Some people want to run SQL up in the Cloud, 'cause they want to take advantage of some of the services there. And then there are people who say I have SQL that is never, ever, ever, ever, ever going to the Cloud because of latency or for governance and compliance. So I want to run that on modern hardware that's super fast. So these new Dell solutions have Intel Optane DC Persistent Memory, have lots of cores. >> I'm excited about that stuff, man. >> Oh my gosh, yes. Optane Persistent Memory and lots of cores, lots of fast networking. So it's modern, but it's also secure. Because a lot of servers are still very old, five, seven, ten years old, those don't have things like TPM, Secure Boot, UEFI. 
And so you're running on a very insecure platform. So we want people to modernize on new hardware with a new OS and platform that's secure and take advantage of the latest and greatest and then make it easy to connect up to Azure for hybrid cloud. >> Persistent Memory's pretty exciting stuff. >> Yes. >> Actually, Dell EMC and Intel just published a paper using SQL Server to take advantage of that technology. SQL can be an I/O bound application. You got to have data and storage, right? So now Dell EMC partnered together with SQL 19 to access Persistent Memory, bypass the I/O part of the kernel itself. And I think they achieved something like 170% faster performance versus even a fast NVMe. It's a great example of just using a new technology, but putting the code in SQL to have that intelligence to figure out how fast can Persistent Memory be for your application. >> I want to ask about the cultural implications of the Dell and Microsoft partnership because, you know, these two companies are tech giants and really of the same generation. They're sort of the Gen Xers, in their 30s and 40s, they're not the startups, been around the block. So can you talk a little bit about what it's like to work so closely with Dell and sort of the similarities and maybe the differences? >> Sure. >> Well, first of all, we've been doing it for, like you said, we've been doing this for a while. So it's not like we're strangers to this. And we've always had very close collaboration in a lot of different ways. Whether it was in the client, whether it's tablets, whether it's devices, whether it's servers, whether it's networking. Now, what we're doing is upping our cloud game. Essentially what we're doing is, we're saying there is an area here in Cloud where we can both work a lot closer together and take advantage of the work that we've done traditionally at the hardware level. 
Let's take that engineering investment and let's do that in the Cloud together to benefit our mutual customers. >> Well, SQL Server is just a primary application that people like to run on Dell servers. And I've been here for 26 years at Microsoft and I've seen a lot of folks run SQL Server on Dell, but lately I've been talking to Dell, it's not just about running SQL on hardware, it's about solutions. I was even having discussions yesterday with Dell about taking our ML and AI services with SQL and how could Dell even package ready solutions with their offerings using our software stack, but, in addition, how would you bring machine learning and SQL and AI together with a whole Dell comp-- So it's not just about talking about the servers anymore as much, even though it's great, it's all about solutions and I'm starting to see that conversation happen a lot lately. >> And it's generally not a server conversation. That's one of the reasons why Azure Stack HCI is important. Because its customers-- customers don't come to me and say Jeff, I want to buy a server. No, I want to buy a solution. I want something that's pre configured, pre validated, pre certified. That's why when I talk about Azure Stack HCI, invariably, I'm going to get the question: Can I build my own? Yes, you can build your own. Do I recommend it? No, I would actually recommend you take a look at our Azure Stack HCI catalog. Like I said, we've got Dell EMC solutions here because not only is the hardware certified for Windows Server, but then we go above and beyond, we actually run a whole bunch of burn-in tests, a bunch of stress tests. We actually configure, tune, and test these things for the best possible performance and security so it's ready to go. Dell EMC can ship it to you and you're up and running versus hey, I'm trying to configure and make all this work and then test it for the next few months. 
No, you're able to consume Cloud very quickly, connect right up, and, boom, you got hybrid in the house. >> Exactly. >> Jeff and Bob, thank you both so much for coming on theCUBE. It was great to have you. >> Our pleasure. Thanks for having us. Enjoyed it, thank you. >> I'm Rebecca Knight for Stu Miniman. We will have more of theCUBE's live coverage of Dell Technologies World coming up in just a little bit.
Dominic Preuss, Google | Google Cloud Next 2019
>> Announcer: Live from San Francisco, it's theCUBE. Covering Google Cloud Next '19. Brought to you by Google Cloud and its ecosystem partners. >> Welcome back to the Moscone Center in San Francisco everybody. This is theCUBE, the leader in live tech coverage. This is day two of our coverage of Google Cloud Next #GoogleNext19. I'm here with my co-host Stuart Miniman and I'm Dave Vellante, John Furrier is also here. Dominic Preuss is here, he's the Director of Product Management, Storage and Databases at Google. Dominic, good to see you. Thanks for coming on. >> Great, thanks to be here. >> Gosh, 15, 20 years ago there were like three databases and now there's like, I feel like there's 300. It's exploding, all this innovation. You guys made some announcements yesterday, we're gonna get into, but let's start with, I mean, data, we were just talking at the open, is the critical part of any IT transformation, business value, it's at the heart of it. Your job is at the heart of it and it's important to Google. >> Yes. Yeah, you know, Google has a long history of building businesses based on data. We understand the importance of it, we understand how critical it is. And so, really, that ethos is carried over into Google Cloud Platform. We think about it very much as a data platform and we have a very strong responsibility to our customers to make sure that we provide the most secure, the most reliable, the most available data platform for their data. And it's a key part of any decision when a customer chooses a hyper cloud vendor. >> So summarize your strategy. You guys had some announcements yesterday really embracing open source. There's certainly been a lot of discussion in the software industry about other cloud service providers who were sort of bogarting open source and not giving back, et cetera, et cetera, et cetera. 
How would you characterize Google's strategy with regard to open source, data storage, data management and how do you differentiate from other cloud service providers? >> Yeah, Google has always been the open cloud. We have a long history in our commitment to open source. Whether it be Kubernetes, TensorFlow, Angular, Golang. Pick any one of these that we've been contributing heavily back to open source. Google's entire history is built on the success of open source. So we believe very strongly that it's an important part of the success. We also believe that we can take a different approach to open source. We're at a very pivotal point in the open source industry, as these companies are understanding and deciding how to monetize in a hyper cloud world. So we think we can take a fundamentally different approach and be very collaborative and support the open source community without taking advantage or not giving back. >> So, somebody might say, okay, but Google's got its own operational databases, you got analytic databases, relational, non-relational. I guess Google Spanner kind of fits in between those. It was an amazing product. I remember when that first came out, it was making my eyes bleed reading the white paper on it but awesome tech. You certainly own a lot of your own database technology and do a lot of innovation there. So, square that circle with regard to partnerships with open source vendors. >> Yeah, I think you alluded to a little bit earlier there are hundreds of database technologies out there today. And there's really been a proliferation of new technology, specifically databases, for very specific use cases. Whether it be graph or time series, all these other things. As a hyper cloud vendor, we're gonna try to do the most common things that people need. We're gonna do managed MySQL, and Postgres and SQL Server. But for other databases that people wanna run we want to make sure that those solutions are first class opportunities on the platform. 
So we've engaged with seven of the top and leading open source companies to make sure that they can provide a managed service on Google Cloud Platform that is first class. What that means is that as a GCP customer I can choose a Google offered service or a third-party offered service and I'm gonna have the same, seamless, frictionless, integrated experience. So I'm gonna get unified billing, I'm gonna get one bill at the end of the day. I'm gonna have unified support, I'm gonna reach out to Google support and they're going to figure out what the problem is, without blaming the third-party or saying that isn't our problem. We take ownership of the issue and we'll go and figure out what's happening to make sure you get an answer. Then thirdly, a unified experience so that the GCP customer can manage that experience, inside a cloud console, just like they would their Google offered serves. >> A fully-managed database as a service essentially. >> Yes, so of the seven vendors, a number of them are databases. But also for Kafka, to manage Kafka or any other solutions that are out there as well. >> All right, so we could spend the whole time talking about databases. I wanna spend a couple minutes talking about the other piece of your business, which is storage. >> Dominic: Absolutely. >> Dave and I have a long history in what we'd call traditional storage. And the dialog over the last few years has been we're actually talking about data more than the storing of information. A few years back, I called cloud the silent killer of the old storage market. Because, you know, I'm not looking at buying a storage array or building something in the cloud. I use storage as one of the many services that I leverage. Can you just give us some of the latest updates as to what's new and interesting in your world, as well as, when customers come to Google, where does storage fit in that overall discussion? 
>> I think that the amazing opportunity that we see for large enterprises right now is today, a lot of that data that they have in their company is in silos. It's not properly documented, they don't necessarily know where it is or who owns it or the data lineage. When we pick all that data up across the enterprise and bring it into Google Cloud Platform, what's so great about it is regardless of what storage solution you choose to put your data in, it's in a centralized place. It's all integrated, then you can really start to understand what data you have, how do I do connections across it? How do I try to drive value by correlating it? For us, we're trying to make sure that whatever data comes across, customers can choose whatever storage solution they want. Whichever is most appropriate for their workload. Then once the data's in the platform we help them take advantage of it. We are very proud of the fact that when you bring data into object storage, we have a single unified API. There's only one product to use. Whether you have really cold data, or really fast data, you don't have to wait hours to get the data, it's all available within milliseconds. Now, what we're really excited about, and announced today, is a new storage class. So, in Google Cloud Storage, which is our object storage product, we're now gonna have a very cold, archival storage option, that's going to start at $0.12 per gigabyte, per month. We think that that's really going to change the game in terms of customers that are trying to retire their old tape backup systems or are really looking for the most cost efficient, long term storage option for their data. >> The other thing that we've heard a lot about this week is the hybrid and multi-cloud environment. Google laid out a lot of the partnerships. I think you had VMware up on stage. You had Cisco up on stage, I see Nutanix is here. How does that storage, the hybrid multi-cloud, fit together for your world?
>> I think the way that we view hybrid is that every customer, at some point, is hybrid. Like, no one ever picks up all their data on day one and on day two, it's on the cloud. It's gonna be a journey of bringing that data across. So, it's always going to be hybrid for that period of time. So for us, it's making sure that all of our storage solutions, we support open standards. So if you're using an S3-compliant storage solution on-premise, you can use Google Cloud Storage with our S3 compatible API. If you are doing block, we work with all the large vendors, whether it be NetApp or EMC or any of the other vendors you're used to having on-premise, making sure we can support those. I'm personally very excited about the work that we've done with NetApp around NetApp Cloud Volumes for Google Cloud Platform. If you're a NetApp shop and you've been leveraging that technology and you're really comfortable and really like it on-premise, we make it really easy to bring that data to the cloud and have the same exact experience. You get all the wonderful features that NetApp offers you on-premise in a cloud native service where you're paying on a consumption basis. So, it really takes, kind of, the decision away for the customers. You like NetApp on-premise but you want cloud native features and pricing? Great, we'll give you NetApp in the cloud. It really makes it an easy transition. So, for us it's making sure that we're engaged and that we have a story with all the storage vendors that you're used to using on-premise today. >> Let me ask you a question, going back to the very cold, ice cold storage. You said $0.12 per gigabyte per month, which is kinda in between your other two major competitors. What was your thinking on the pricing strategy there? >> Yeah, basically everything we do is based on customer demand.
So after talking to a bunch of customers, understanding the workloads, understanding the cost structure that they need, we think that that's the right price to meet all of those needs and allow us to basically compete for all the deals. We think that that's a really great price-point for our customers. And it really unlocks all those workloads for the cloud. >> It's dirt cheap, it's easy to store and then it takes a while to get it back, right, that's the concept? >> No, it is not at all. We are very different than other storage vendors or other public cloud offerings. When you drop your data into our system, basically, the trade-off that you're making is saying, I will give you a cheaper price in exchange for agreeing to leave the data in the platform, for a longer time. So, basically you're making a time-based commitment to us, at which point we're giving you a cheaper price. But, what's fundamentally different about Google Cloud Storage, is that regardless of which storage class you use, everything is available within milliseconds. You don't have to wait hours or any amount of time to be able to get that data. It's all available to you. So, this is really important, if you have long-term archival data and then, let's say, that you got a compliance request or regulatory request and you need to analyze all the data and get to all your data, you're not waiting hours to get access to that data. We're actually giving you access to that data within milliseconds, so that you can get the answers you need. >> And the quid pro quo is I commit to storing it there for some period of time, is that what you said? >> Correct. So, we have four storage classes. We have our Standard, our Nearline, our Coldline and this new Archival. Each of them has a lower price point, in exchange for a longer committed time that you'll leave the data in the product. >> That's cool. I think that adds real business value there. So, obviously, it's not sitting on tape somewhere.
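The class-versus-commitment trade-off described above can be sketched with a toy cost model: each colder class charges less per gigabyte but bills a minimum committed duration even if the data is deleted early. The prices and class names below are placeholder assumptions for illustration, not Google's published rates.

```python
# Illustrative model of tiered object storage: each colder class trades a
# lower per-GB monthly price for a longer minimum storage commitment.
CLASSES = {
    # name: ($ per GB per 30-day month, minimum committed days)
    "standard": (0.020, 0),
    "nearline": (0.010, 30),
    "coldline": (0.004, 90),
    "archive":  (0.0012, 365),
}

def storage_cost(cls, gb, days):
    """Cost of keeping `gb` gigabytes for `days` days in class `cls`."""
    price, min_days = CLASSES[cls]
    billed_days = max(days, min_days)  # deleting early still bills the commitment
    return price * gb * billed_days / 30

def cheapest_class(gb, days):
    return min(CLASSES, key=lambda c: storage_cost(c, gb, days))

# Data retained for a full year: the archival class wins despite the commitment.
print(cheapest_class(1000, 365))  # archive
# Data deleted after a week: the commitment makes the cold tiers more expensive.
print(cheapest_class(1000, 7))    # standard
```

Under this model the "quid pro quo" in the conversation falls out directly: the archival class is the cheapest choice only when the data actually stays for its committed duration.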
>> We have a number of solutions for how we store the data. For us, it's indifferent how we store the data. It's all about how long you're willing to tell us it'll be there and that allows us to plan for those resources long term. >> That's a great story. Now, you also have these pay-as-you-go pricing tiers, can you talk about that a little bit? >> For which, for Google Cloud Storage? >> Dave: Yes. >> Yeah, everything is pay-as-you-go and so basically you write data to us and there's a charge for the operations you do and then you're charged for however long you leave the data in the system. So, if you're using our Standard class, you're just paying our standard price. You can either use Regional or Multi-Regional, depending on the disaster recovery and the durability and availability requirements that you have. Then you're just paying us for that for however long you leave the data in the system. Once you delete it, you stop paying. >> So it must be, I'm not sure what kind of customer discussions are going on in terms of storage optionality. It used to be just, okay, I got block and I got file, but now you've got all different kinds. You just mentioned several different tiers of performance. What's the customer conversation like, specifically in terms of optionality and what are they asking you to deliver? >> I think within the storage space, there's really three things, there's object, block and file. So, on the block side we have our Persistent Disk product. Customers are asking for better price performance, more performance, more IOPS, more throughput. We're continuing to deliver a higher-performance block device for them and that's going very, very well. For those that need file, we have our first-party service, which is Cloud Filestore, which is our managed NFS. So if you need managed NFS, we can provide that for you at a really low price point. We also partner with, you mentioned Elastifile earlier.
We partner with NetApp, we're partnering with EMC. So all those options are also available for file. Then on the object side, if you can accept the object API, which is not POSIX-compliant, it's a very different model. If your workloads can support that model then we give you a bunch of options with the Object Model API. >> So, data management is another hot topic and it means a lot of things to a lot of people. You hear the backup guys talking about data management. The database guys talk about data management. What is data management to Google and what's your philosophy and strategy there? >> I think for us, again, I spend a lot of time making sure that the solutions are unified and consistent across. So, for us, the idea is that if you bring data into the platform, you're gonna get a consistent experience. So you're gonna have consistent backup options, you're gonna have consistent pricing models. Everything should be very similar across the various products. So, number one, we're just making sure that it's not confusing by making everything very simple and very consistent. Then over time, we're providing additional features that help you manage that. I'm really excited about all the work we're doing on the security side. So, you heard Orr's talk about access transparency and access approvals, right. So basically, we can have a unified way to know whether or not anyone, either Google or a third party, if a third-party request has come in, has had to access the data for any reason. So we're giving you full transparency as to what's going on with your data. And that's across the data platform. That's not on a per-product basis. We can basically layer in all these amazing security features on top of your data. The way that we view our business is that we are stewards of your data. You've given us your data and asked us to take care of it, right, don't lose it. Give it back to me when I want it and let me know when anything's happening to it.
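The access-transparency idea described above, where every access to stored data is recorded and made visible to the data owner, can be sketched minimally as follows. The class and field names are illustrative assumptions, not an actual Google Cloud API.

```python
# Minimal sketch of access transparency: every read of stored data is logged
# with who accessed it and why, and that log is visible to the data owner.
import time

class AuditedStore:
    def __init__(self):
        self._objects = {}
        self.access_log = []  # owner-visible access records

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key, accessor, reason):
        # Record the access before returning the data.
        self.access_log.append(
            {"key": key, "accessor": accessor, "reason": reason, "ts": time.time()}
        )
        return self._objects[key]

store = AuditedStore()
store.put("report.pdf", b"\x00\x01")
store.get("report.pdf", accessor="provider-support", reason="customer ticket")
for entry in store.access_log:
    print(entry["accessor"], "accessed", entry["key"], "because:", entry["reason"])
```

The key design point is that logging happens inside the storage layer itself, so the owner sees provider-initiated accesses too, not just their own, which is the "near real time access logs" point made in the conversation.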
We take that very seriously and we see all the things we're able to bring to bear on the security side, to really help us be good stewards of that data. >> The other thing you said is I get those access logs in near real time, which is, again, nuanced but it's very important. Dominic, great story, really. I think clear thinking and you, obviously, delivered some value for the customers there. So thanks very much for coming on theCUBE and sharing that with us. >> Absolutely, happy to be here. >> All right, keep it right there everybody, we'll be back with our next guest right after this. You're watching theCUBE live from Google Cloud Next from Moscone. Dave Vellante, Stu Miniman, John Furrier. We'll be right back. (upbeat music)
SUMMARY :
Brought to you by Google Cloud and it's ecosystem partners. Dominic Preuss is here, he's the Director Your job is at the heart of it and it's important to Google. to make sure that we provide the most secure, and how do you differentiate from We have a long history in our commitment to open source. So, square that circle with regard to partnerships and I'm gonna have the same, seamless, But also for Kafka, to manage Kafka the other piece of your business, which is storage. of the old storage market. to understand what data you have, How does that storage, the hybrid multi-cloud, and that we have a story with all the storage vendors to the very cold, ice cold storage. that that's the right price to meet all of those needs can get the answers you need. the you'll leave the product. I think that adds real business value there. We have a number of solutions for how we store the data. can you talk about that a little bit? for the operations you do and then you charge and what are they asking you to deliver? Then on the object side, if you can accept and it means a lot of things to a lot of people. on the security side, to really help us be good stewards and sharing that with us. we'll be back with our next guest right after this.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Stuart Miniman | PERSON | 0.99+ |
Dominic Preuss | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Dominic | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Cisco | ORGANIZATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Each | QUANTITY | 0.99+ |
San Francisco | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
seven vendors | QUANTITY | 0.99+ |
Coldline | ORGANIZATION | 0.99+ |
MySQL | TITLE | 0.99+ |
first | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
seven | QUANTITY | 0.98+ |
Kafka | TITLE | 0.98+ |
one product | QUANTITY | 0.98+ |
NetApp | TITLE | 0.98+ |
two major competitors | QUANTITY | 0.97+ |
PostgreS | TITLE | 0.97+ |
NetApp | ORGANIZATION | 0.97+ |
Google Cloud Next | TITLE | 0.97+ |
day two | QUANTITY | 0.97+ |
one bill | QUANTITY | 0.96+ |
S3 | TITLE | 0.96+ |
three things | QUANTITY | 0.96+ |
300 | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
Cloud Filestore | TITLE | 0.95+ |
hundreds of database technologies | QUANTITY | 0.94+ |
three databases | QUANTITY | 0.94+ |
day one | QUANTITY | 0.94+ |
first class | QUANTITY | 0.94+ |
20 years ago | DATE | 0.94+ |
this week | DATE | 0.93+ |
SQL Server | TITLE | 0.93+ |
$0.12 per gigabyte | QUANTITY | 0.93+ |
Elastifile | ORGANIZATION | 0.92+ |
2019 | DATE | 0.91+ |
Google Cloud Platform | TITLE | 0.9+ |
Gosh | PERSON | 0.89+ |
Moscone Center | LOCATION | 0.87+ |
Google Cloud Storage | TITLE | 0.82+ |
Moscone | LOCATION | 0.8+ |
theCUBE | ORGANIZATION | 0.75+ |
15 | DATE | 0.73+ |
Object Model | OTHER | 0.73+ |
A few years back | DATE | 0.73+ |
Orr | ORGANIZATION | 0.68+ |
Google Spanner | TITLE | 0.66+ |
Jozef de Vries, IBM | IBM Think 2019
(dramatic music) >> Live from San Francisco. It's theCUBE, covering IBM Think 2019. Brought to you by IBM. >> Welcome back to theCUBE. We are live at IBM Think 2019. I'm Lisa Martin with Dave Vellante. We're in San Francisco this year at the newly rejuved Moscone Center. Welcoming to theCUBE for the first time, Jozef de Vries, Director of IBM Cloud Databases. Jozef, it's great to have you on the program. >> Thank you very much, great to be here, great to be here. >> So as we were talking before we went live, this is, I was asking what you're excited about for this year's IBM Think. >> Yeah. >> Only the second annual IBM Think. >> Right. >> This big merger of a number of shows. >> Sure, you're right. >> Day minus one, T-minus one, >> Yeah. >> everything really kicks off tomorrow. Talk to us about some of the things that you're working on. You've been at IBM for a long time. >> Mmm hmm. >> But cloud managed databases, let's talk value there for the customers. >> Yeah, definitely. Cloud managed databases really, at its core, it's about simplifying adoption of cloud provided services and reducing the capital expense that comes along with developing applications. Fundamentally what we're trying to do is abstract the overhead that is associated with running your own systems. Whether it's the infrastructure management, whether it's the network management, whether it's the configuration and deployment of your databases. Our collection of services really is about streamlining time to value of accessing and building against your databases. So what we are really focused on is allowing the developer to focus on their business critical applications, their objectives, and really what they're paid for. They're paid to build applications, not paid to maintain systems. When we talk about the CIO office, the CTO office, they are looking at cost, they're looking at ways to reduce overall expenditures.
And what we're able to provide with cloud managed databases is the ability not to have to staff an IT team, not to have to maintain and pay for infrastructure, not have to procure licenses, what have you, everything that goes into standing up and managing those systems yourself, we provide that and we provide the consumption based methods. So you basically pay for what you use, and we have various ways in which you can interact with your databases and the charges that are associated with that. But it really is again about alleviating all of that overhead and that expense that is associated with running systems yourself. >> 15 years ago, you're back to, before you started with IBM, >> Yeah. >> There was obviously IBM DB2, Oracle, SQL Server, >> SQL Server. >> I guess MySQL is around >> Mm hmm. >> back then, the LAMP stack was building out the internet. But databases were pretty boring >> Yeah. >> back then. And then all of a sudden, it exploded. >> Right. >> And the NoSQL movement happened in a huge way. >> Mm hmm. >> Coincided with the big data movement. What happened? >> Yeah, I think as we saw the space of this technology evolve, and a variety of different kinds of use cases cropping up. The development community kind of responded to that. And really what we try to do with our portfolio is provide that variety of database technology solutions. To meet any number of different use cases. And we like to think about it broken down into two categories. Your primary data stores. This is where your applications are writing and reading the data that has been stored. And then, particularly to your point, there's what we call the auxiliary data services, for example. These are your in memory caches, your message brokers, your search index, what have you. There is a plethora of different database technologies out there today that plug into any number of different use cases that application developers are attempting to fill.
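The split just described, a primary data store as the source of truth plus auxiliary services such as in-memory caches in front of it, is commonly wired together with a cache-aside pattern. The sketch below uses plain dicts standing in for, say, a SQL database and a Redis-style cache; the names are illustrative.

```python
# Minimal cache-aside sketch: the primary data store holds the source of
# truth, while an auxiliary in-memory cache absorbs repeated reads.
primary_store = {"user:1": {"name": "Ada"}}  # stands in for the primary database
cache = {}                                   # stands in for an in-memory cache
misses = 0

def get_user(key):
    global misses
    if key in cache:              # fast path: served by the auxiliary service
        return cache[key]
    misses += 1
    value = primary_store[key]    # slow path: read from the primary store
    cache[key] = value            # populate the cache for the next read
    return value

get_user("user:1")
get_user("user:1")
print("primary-store reads:", misses)  # 1 -- the second read came from cache
```

Even this toy version uses two data services for one lookup, which is exactly the "more than one database at a time" point the answer goes on to make.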
And more often than not, they're using more than one database at a time. And really what we're trying to do at IBM with our cloud managed database offering is provide a variety of those data services and database technologies to meet a variety of those use cases, whether they're mixing and matching, or different kinds of application workloads or what have you. We'd like to provide our customers with the choices that are out there today in the community at large. >> So many choices. >> Yeah. >> Am I hearing that it's kind of horses for courses? I mean, you get things like, even niches like Accumulo with fine grain security. >> Yeah. >> Or Couchbase, obviously. >> Mm hmm. This one scales. And then this one is easy to use. You take Mongo, for text, really easy to use >> Yeah exactly. >> Sort of different specialized use cases. How do you squint through, and how does IBM match the right characteristics with the right technology? >> It's really, it's two-pronged. It's about understanding the user base. Understanding and listening to your customers. And really internalizing what are the use cases that they are looking to fulfill? It's also being in tune with the database technology in the market today. It's understanding where there are trends. Understanding where there are new use cases cropping up. And it's about building a deep enough engineering operations team where we can quickly spin up these new offerings. And again provide that technology to our end customers. And it's about working with our customers as well. And understanding the use cases and then sometimes making recommendations on what database technology or combination of databases would be best suited for their objectives. >> I'm curious. One of the things that you mentioned in terms of what the developer's day-to-day job should be, is this almost IBM's approach to aligning with the developer role and enabling it in new ways?
>> It is really about, I think, having sympathy for, and delivering on solutions for, the pains that they had otherwise endured 10, 15 years ago. When the notion of cloud managed anything really wasn't a thing yet. Or was just starting to emerge. IBM in-house runs its own systems for years and years obviously, and the folks on my team, they have come from other companies, they know what pain is involved in trying to run services. So like I said it's a little bit out of sympathy, it's a bit out of knowing what your users need in a cloud managed service. Whether again it's security, or availability, or redundancy, you name it. It's about coming around to the other side of the table, and I sat where you once sat. And we know what you need out of your data services. So trusting us to provide that for you. >> How are the requirements different? Things like recovery and resiliency. Do I need ACID compliance in this new world? Maybe you could. >> Yeah. It's funny, that's a good question in that we don't necessarily deal so much with database specific requirements. Again as I mention we try to provide a variety of different database technologies. And by and large the users are going to know what they need, what combinations that they will need. And we'll work with them if they're navigating their way through it. Really what we see more of these days are requirements around the management characteristics. As you cited, are they highly available? Are they backed up? What's your disaster recovery policy? What security policies do you have in place? What compliance, so on and so forth. It's really about presenting the overall package of that managed solution. Not so much whether the database is going to be highly available versus consistent replication or what have you.
I mean that's in there, and it's part of what we engage with our customers about, but we'd also like to put a lot of emphasis on providing those recognized database technologies so that there is a community behind them and there's opportunity for the users to understand what it is that they need beyond just what we can sell them. It's really about selling the value proposition of, again, the management characteristics of the services. >> So who do you see as the competition? Obviously the other big, the two big cloud providers, AWS and Azure. >> Yep. >> You're competing with them. >> Definitely. >> Quality of offerings. Maybe talk about how you fit. >> And Google's another one. Or Oracle is another emerging one. Even Alibaba is catching up quite a bit. It really feels like a neck-and-neck race day after day. The way we try to approach our portfolio is focusing on deep, broad and secure. Deep being that there's a core set of database technologies. We're building the database itself. Db2, Cloudant which is based off of Couchbase. Excuse me, CouchDB. And then broad. Again as I've been mentioning, having a variety of different database technologies. And they're secure across the board. Whether it's secure in how we run the systems, secure in how we certify them through external compliance certifications. Or secure in how we integrate with security based tooling that our users can take advantage of. Regarding our competitors, it really is one week it may be a new big data at scale type of database technology. Another day it may be, or another week it might be deeper integrations into the platform. It might be new open source database technologies. It might be a new proprietary database technology. But we're, it's a constant, like I say, race to who's got the most robust portfolio. >> Developers are like teenagers. They're fickle. >> Yeah, that too, that too. We've got to be quick in order to respond to those demands.
>> In this age of hybrid multi-cloud, where the average company has five plus private cloud, public cloud, through inertia, through acquisition, et cetera. Where's IBM's advantage there as companies are, I think we heard a stat the other day, Dave, that in 2018, 80% of the companies migrated data and apps from public cloud. In terms of this reality that companies live in this multi-cloud, where is IBM's advantage there? And where does your approach to cloud managed services really differentiate IBM's capabilities? >> Really there's, for the last couple of years, a tremendous amount of investment on building on the Kubernetes open source platform. And even in particular to our cloud managed database services, we have been developing and have been recently releasing a number of different databases that run on a platform that we've developed against Kubernetes. It's a platform that allows us to orchestrate deployments, deletions of databases, backups, high availability, platform level integrations, all, a number of different things. What that has allowed us to do when concerning a hybrid type of strategy is it makes our platform more portable. So Kubernetes is something that can run on the cloud. It can run in a private cloud. It can run on premise. And this platform we're developing is something that can be deployed, which we do today for private, public cloud consumption, which can also be packaged up and deploy into a private cloud type environment. And ultimately it's portable and it's leveraging of that Kubernetes technology itself. So we're not hamstringing ourselves to purely public cloud type services, or only private cloud type services. We want to have something that is abstracted enough that again it can move around to these different kind of environments. >> How important is open source and how important is it for you to commit to the different open source projects? There are so many, >> Yeah. >> And you have limited resources. So how do you manage that? 
Open source is really critical both in what we're building and what we're also offering. As we've talked about, our users out there, they know what they often want, or sometimes we nudge them to the right or to the left, but generally speaking it's around all the open source technologies, and whatever may be trending for that current month is often times what we're getting requested for. It could be a Postgres. It could be a RabbitMQ. It could be ElasticSearch. What have you. And really we put a lot of emphasis on embracing the open source community, providing those database technologies to our customers. And then it allows our customers to benefit from the community at large too. We don't become again the sole provider of education and information about that technology. We're able to expose the whole community to our customers and they're able to take advantage of that. >> I hear a lot of complaints sometimes, particularly from folks that might list themselves in a marketplace for one cloud or another, that they feel like the primary cloud vendor might be nudging the customer into their proprietary database. What's IBM's position on that? Is that fair? Is that overblown? >> We obviously have proprietary tech, particularly Db2. And that's something we're continuing to invest in. It's what we view as one of our strategic top priority database technologies. We are very active developers in the Couch community as well. I wouldn't consider that proprietary, but again back to the point of-- >> CouchDB. You're the steward of CouchDB. >> Exactly. >> Right. >> Right, exactly. But again, firm believers in open source. We want to give those opportunities to our customers to avoid those vendor lock-in type situations. We actually have quite a lot of interest from our EU customer base. And by and large EU policies are around antitrust and what have you. They tend to gravitate towards open source technology because they know it's again portable.
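That portability rests on the Kubernetes-based platform described earlier: database deployments, backups, and high availability are expressed as Kubernetes resources that look the same on public cloud, private cloud, or on premise. The sketch below shows the flavor of that operator pattern by rendering a StatefulSet-style manifest from a high-level request. Field names beyond the core Kubernetes ones are hypothetical, and this is not IBM's actual platform.

```python
# Illustrative sketch of the operator pattern: a small control function turns
# a high-level database request into a Kubernetes StatefulSet-style manifest.
def render_statefulset(name, engine, replicas, backup_schedule):
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {
            "name": name,
            # hypothetical annotation a backup controller might watch
            "annotations": {"backups.example.com/schedule": backup_schedule},
        },
        "spec": {
            "serviceName": name,
            "replicas": replicas,  # more than one replica for high availability
            "template": {
                "spec": {"containers": [{"name": engine, "image": f"{engine}:11"}]}
            },
        },
    }

manifest = render_statefulset("orders-db", "postgres", replicas=3,
                              backup_schedule="0 2 * * *")
print(manifest["kind"], manifest["spec"]["replicas"])
```

Because the same manifest can be applied to any conformant Kubernetes cluster, the database workload is not tied to a single provider, which is the lock-in-avoidance argument made in the conversation.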
They can be used in Postgres by IBM one month and if they no longer are satisfied with that, they can take their Postgres workloads and move them into another cloud provider. Ideally they're coming from the other cloud providers onto IBM. >> Well I should be actually more specific, in fairness, Dynamo's often cited. I suppose Google's Spanner, although that's sort of more of a niche, >> Mm hmm. >> specialized database. If I understand it correctly, Db2, that's a hard core transaction >> Sure. >> system. You're not going to confuse that with, I don't think, anyway, CouchDB. Although, who knows? Maybe there are some use cases there. But it sounds like you're not nudging them to your proprietary, certainly Db2 is proprietary. CouchDB is one of many options that you offer. >> Certainly Db2 is one of our core products for our database portfolio. And we do want to push our customers to Db2 where-- >> If it makes sense. >> Exactly, where it makes sense. And where there's demand for it. If it doesn't make sense, or there's not demand, we will offer up any number of the other databases that we also offer. >> Excellent, here's our last question. >> Sure. >> As IBM Think, the second annual, kicks off really tomorrow. For this developer audience that you were talking about a lot in our conversation, what are some of the exciting things that they're going to hear from you? Any sort of, obviously not breaking news, but >> Mmm hmm. >> Where would you advise the developer community, who's attending IBM Think, to go to learn more about cloud managed databases? And how they can really become far more efficient and do their jobs better. >> Sure. Databases are hard, plain and simple. They are particularly hard to run, and developers who are not necessarily database admins, they're not database operators, who want to focus on building the applications, are going to want to find solutions that alleviate that overhead of running those systems themselves.
So to your question, we've got sessions all throughout the week where we're talking about our Cloudant offerings and the future of where we're going with that. We've got a couple of different sessions around our IBM cloud database portfolio. This is a lot of the open source database technology we're running. We have demos in the solution center, and Db2 is spread all around the conference as well. So there's lots of different sessions focused on talking about the value proposition of IBM's cloud managed database portfolio across the board. >> A lot of opportunities for learning. Well, Jozef de Vries, thank you so much for joining Dave and me on theCUBE this afternoon. >> Thank you very much, it was great. >> And for Dave Vellante, I am Lisa Martin. You're watching theCUBE, live from IBM Think 2019. Day 1, stick around. We'll be right back with our next guest. (upbeat music)
SUMMARY :
Brought to you by IBM. Jozef, it's great to have you on the program. this is, I was asking what you're excited about a number of shows. Talk to us about some of the things that you're working on. But cloud managed databases, is the ability not to have to staff an IT team, back then, LabStack was building out the internet. And then all of a sudden, it exploded. Coincided with the big data movement. And really what we try to do with our portfolio Am I hearing that its kind of horses for courses? And then this one is easy to use. the right characteristics with the right technology? And again provide that technology to our end customers. One of the things that you mentioned in terms of And we know what you need out of your data services. How are the requirements different? And by and large the users are going to know what they need, the two big cloud providers, AWS and Azure. May be talk about how you fit. Or secure in how we integrate with security based Developers are like teenagers. We got to be quick in order to respond to those demands. in 2018, 80% of the companies migrated data and apps So Kubernetes is something that can run on the cloud. And you have limited resources. And then it allows our customers to benefit from the or another, that they feel like the primary cloud vendor We obviously have proprietary tech, particularly the Db2. You're as the steward of CouchDB. and what have you. of a niche, that's a hard core transaction CouchDB is one of many options that you offer. And we do want to push our customers to Db2 that we also offer. Excellent, here's our last question that they're going to you? And how they can really become far more efficient and the future of where we're going with that. Thank you so much And for Dave Vallente, I am Lisa Martin.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dave Vallente | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Jozef de Vries | PERSON | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
2018 | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Jozef | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
this year | DATE | 0.99+ |
one week | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
Kubernetes | TITLE | 0.99+ |
MySQL | TITLE | 0.98+ |
one month | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
IBM Cloud Databases | ORGANIZATION | 0.98+ |
two categories | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Dynamo | ORGANIZATION | 0.97+ |
CouchDB | TITLE | 0.96+ |
15 years ago | DATE | 0.96+ |
EU | ORGANIZATION | 0.96+ |
IBM Think | ORGANIZATION | 0.96+ |
LabStack | ORGANIZATION | 0.96+ |
IBM Think 2019 | EVENT | 0.96+ |
more than one database | QUANTITY | 0.96+ |
10, 15 years ago | DATE | 0.95+ |
One | QUANTITY | 0.95+ |
five plus | QUANTITY | 0.95+ |
one | QUANTITY | 0.94+ |
Postgres | ORGANIZATION | 0.94+ |
SQL Server | TITLE | 0.93+ |
Day 1 | QUANTITY | 0.92+ |
Moscone Center | LOCATION | 0.92+ |
second annual | QUANTITY | 0.91+ |
Db2 | TITLE | 0.9+ |
this afternoon | DATE | 0.9+ |
two big cloud | QUANTITY | 0.89+ |
Couch | TITLE | 0.89+ |
one cloud | QUANTITY | 0.88+ |
last couple of years | DATE | 0.87+ |
Azure | ORGANIZATION | 0.84+ |
Cloudant | ORGANIZATION | 0.82+ |
NoSQL | TITLE | 0.81+ |
2019 | DATE | 0.8+ |
Think 2019 | EVENT | 0.8+ |
Day minus one | QUANTITY | 0.79+ |
Markus Strauss, McAfee | AWS re:Invent 2018
>> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Hi everybody, welcome back to Las Vegas. I'm Dave Vellante with theCUBE, the leader in live tech coverage. This is day three from AWS re:Invent, #reInvent18, amazing. We have four sets here this week, two sets on the main stage. This is day three for us, our sixth year at AWS re:Invent, covering all the innovations. Markus Strauss is here as a Product Manager for database security at McAfee. Markus, welcome. >> Hi Dave, thanks very much for having me. >> You're very welcome. Topic near and dear to my heart, just generally, database security, privacy, compliance, governance, super important topics. But I wonder if we can start with some of the things that you see as an organization, just general challenges in securing databases. Why is it important, why is it hard, what are some of the critical factors? >> Most of our customers, one of the biggest challenges they have is the fact that whenever you start migrating databases into the cloud, you inadvertently lose some of the controls that you might have on premise. Things like monitoring the data, things like being able to do real time access monitoring and real time data monitoring, which is very, very important, regardless of where you are, whether you are in the cloud or on premise. So these are probably the biggest challenges that we see for customers, and also a point that holds them back a little in terms of being able to move database workloads into the cloud. >> I want to make sure I understand that. So you're saying, if I can rephrase or reinterpret, and tell me if I'm wrong. You're saying you've got great visibility on prem and you're trying to replicate that degree of visibility in the cloud. >> Correct. >> It's almost the opposite of what you hear oftentimes, how people want to bring the cloud model to on premises. >> Exactly. >> It's the opposite here.
>> It's the opposite, yeah. 'Cause traditionally, we're very used to monitoring databases on prem, whether that's native auditing, whether that is in memory monitoring, network monitoring, all of these things. But once you take that database workload and push it into the cloud, all of those monitoring capabilities essentially disappear, 'cause none of that technology was essentially moved over into the cloud, which is a really, really big point for customers, 'cause they cannot take that and just have a gap in their compliance. >> So database discovery is obviously a key step in that process. >> Correct, correct. >> What is database discovery? Why is it important and where does it fit? >> One of the main challenges most customers have is the ability to know where the data sits, and that begins with knowing where the databases are and how many databases customers have. Whenever we talk to customers and we ask how many databases are within an organization, generally speaking, the answer is 100, 200, 500, and when the actual scanning happens, very often the surprise is it's a lot more than what the customer initially thought, and that's because it's so easy to just spin up a database, work with it, and then forget about it. But from a compliance point of view, that means you're now sitting there, having data, and you're not monitoring it, you're not compliant. You don't even know it exists. So data discovery in terms of database discovery means you've got to be able to find where your database workload is and be able to start monitoring that. >> You know, it's interesting. 10 years ago, database was kind of boring. I mean it was like Oracle, SQL Server, maybe DB2, maybe a couple of others, then all of a sudden, the NoSQL explosion occurred. So when we talk about moving databases into the cloud, what are you seeing there? Obviously Oracle is the commercial database market share leader. Maybe there's some smaller players. Well, Microsoft SQL Server obviously a very big...
Those are the two big ones. Are we talking about moving those into the cloud? Kind of a lift and shift. Are we talking about conversion? Maybe you could give us some color on that. >> I think there's a bit of both, right? A lot of organizations have proprietary applications that have run for many, many years, so there's a certain amount of lift and shift, right, because they don't want to rewrite the applications that run on these databases. But wherever there is a chance for organizations to move onto some of the, let's say, newer database systems, most organizations would take that opportunity, because it's easier to scale, it's quicker, it's faster, they get a lot more out of it, and it's obviously commercially more valuable as well, right? So, we see quite a big shift around NoSQL, but also some of the open source engines, like MySQL, PostgreSQL, Percona, MariaDB, a lot of the other databases that, traditionally within the enterprise space, we probably wouldn't have seen that much in the past, right? >> And are you seeing that in a lot of those sort of emerging databases, that the attention to security detail is perhaps not as great as it has been in the traditional transaction environment, whether it's Oracle, DB2, even certainly, SQL Server. So, talk about that potential issue and how you guys are helping solve that. >> Yeah, I mean, one of the big things, I think it was two years ago, was when one of the open source databases, and I'm not going to name names, got discovered essentially exposed online, because the initial default installation had admin as username and no password, right? And it's very easy to install it that way, but unfortunately it means you potentially leave a very, very big gaping hole open, right? And that's one of the challenges with having open source and easily deployable solutions, because Oracle, SQL Server, they don't let you do that quite so easily, right? But it might happen with other not as large database instances.
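A configuration scan of the kind Strauss goes on to describe, catching default admin accounts with no password, weak passwords, and stale patch levels, reduces to checking an instance's settings against a policy. A minimal illustrative sketch, where the config shape, account names, and policy values are all assumptions for the example, not McAfee's actual checks:

```python
# Illustrative weak-password list; real scanners use much larger dictionaries.
WEAK_PASSWORDS = {"", "admin", "password", "123456", "changeme"}

def audit_config(config: dict) -> list:
    """Return a list of human-readable findings for one database instance."""
    findings = []
    for user, password in config.get("accounts", {}).items():
        if user == "admin" and password == "":
            findings.append("default admin account with no password")
        elif password in WEAK_PASSWORDS:
            findings.append(f"weak password on account '{user}'")
    if config.get("patch_level", 0) < config.get("required_patch_level", 0):
        findings.append("instance is below required patch level")
    return findings

# A hypothetical instance that trips all three checks except the strong account.
instance = {
    "accounts": {"admin": "", "reporting": "123456", "app": "S7r0ng!pass"},
    "patch_level": 3,
    "required_patch_level": 5,
}
for finding in audit_config(instance):
    print("FINDING:", finding)
```

Running the same audit across every discovered instance is what turns a one-off check into the ongoing compliance posture the interview describes.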
One of the things that McAfee for instance does is help customers make sure that configuration scans are done, so that once you have set up a database instance, as an organization you can go in and say, okay, I need to know whether it's up to patch level, whether we have any standard users with standard passwords, whether we have any very weak passwords within the database environment, just to make sure that you cover all of those points, because it's also important from a compliance point of view, right? It always brings me back to the compliance point of view of the organization being the data steward, the owner of the data, and it has to be our, I suppose, biggest point to protect the data that sits on those databases, right? >> Yeah, well there's kind of two sides of the same coin. The security and then compliance, governance, privacy, it flips. For those edicts, those compliance and governance edicts, I presume your objective is to make sure that those carry over when you move to the cloud. How do you ensure that? >> So, I suppose the biggest point to make that happen is to ensure that you have one set of controls that applies to both environments. It brings us back to the hybrid point, right? Because you've got to be able to reuse the same policies, and measures, and controls that you have on prem and be able to shift these into the cloud and apply them with the same rigor to the cloud databases as you were used to on prem, right? So that means being able to use the same set of policies, the same set of access control, whether you're on prem or in the cloud. >> Yeah, so I don't know if the folks in our audience saw it today, but Werner Vogels gave a really, really detailed overview of Aurora. He went back to 2004, when their Oracle database went down because they were trying to do things that were unnatural. They were scaling up, and globally distributing.
But anyway, he talked about how they re-architected their systems and gave inside baseball on Aurora. Huge emphasis on recovery. So, you know, data accessibility being very important to them, and obviously security is a big piece of that. You're working with AWS on Aurora, and RDS as well. Can you talk specifically about what you're doing there as a partnership? >> So, AWS has, I think it was two days ago, essentially put the Aurora database activity stream into private preview, which is essentially a way for third party vendors to be able to read an activity stream off Aurora, enabling McAfee, for instance, to consume that data and bring customers the same level of real-time monitoring to database as a service as we're used to on prem or even in an EC2 environment, where it's a lot easier because customers have access to the infrastructure to install things. That's always been a challenge within database as a service, because that access is not there, right? So, customers need to have an ability to get the same level of detail, and with the database activity stream and the ability for McAfee to read that, we give customers the same ability with Aurora PostgreSQL at the moment as customers have on premise with any of the other databases that we support. >> So you're bringing your expertise, some of which is really being able to identify anomalies, sifting through all this noise, and identifying the signal that's dangerous, and then obviously helping people respond to that. That's what you're enabling through that connection point. >> Correct, 'cause for organizations, using something like Aurora is a big saving, and the scalability that comes with it is fantastic. But if I can't have the same level of data control that I have on premise, it's going to stop me as an organization from moving critical data into it, 'cause I can't protect it, and I have to be able to.
So this is a great first step toward being able to provide that same level of activity monitoring in real time as we're used to on prem. >> Same for RDS, is that pretty much what you're doing there? >> It's the same for RDS, yes. There's obviously a certain level of vetting we go through before things go into GA, but RDS is part of that program as well, yes. >> So, I wonder if we can step back a little bit and talk about some of the big picture trends in security. You know, we've gone from a world of hacktivists to organized crime, which is very lucrative. There's even state-sponsored terrorism. I think Stuxnet is interesting. You probably can't talk about Stuxnet. Anyway-- >> No, not really. >> But, conceptually, now the bar is raised and the sophistication goes up. It's an arms race. How are you keeping pace? What role does data have? What's the state of security technology? >> It's very interesting, because traditionally, databases, nobody wanted to touch that area. We were all very, very good at building walls around things and being very perimeter-oriented when it comes to the data center and all of that. I think that has changed a little bit with, I suppose, the increased focus on the actual data. A lot of the legislation has changed since things like GDPR came in, so a lot of companies have had to rethink their take on protecting data at source. 'Cause when we start looking at the exfiltration path of data breaches, almost all the exfiltration happens essentially out of the database. Of course, it makes sense, right? I mean I get into the environment through various different other ways, but essentially, my main goal is not the network traffic. My main goal as any sort of hacker is essentially to get to the data and get it out, 'cause that's where the money sits. That's what essentially brings the most money in the open market.
So being able to protect that data at source is going to help a lot of companies make sure that that doesn't happen, right? >> Now, the other big topic I want to touch on in the minute we have remaining is ransomware. It's a hot topic. People are talking about creating air gaps, but even air gaps, you can get through an air gap with a stick. Yeah, people get through. Your thoughts on ransomware, how are you guys combating that? >> There are very specific strains, actually, developed for databases. It's a hugely interesting topic. But essentially what it does is it doesn't encrypt the whole database, it encrypts very specific key fields, and leaves the public key present for a longer period of time than what we're used to seeing in the endpoint world, where it's a lot more of a shotgun approach and you know somebody is going to pick it up and pay the $200, $300, $400, whatever it is. On the database side, it's a lot more targeted, but generally it's a lot more expensive, right? So, that essentially runs for six months, eight months, making sure that all of the backups are encrypted as well, and then the public key gets removed, and essentially, you have lost access to all of your data, 'cause even the applications that access the data can't talk to the database anymore. So, we have put specific controls in place that monitor for changes in the encryption level, so even if only one or two key fields start to get encrypted with a different encryption key, we're able to pick that up and alert you on it, and say hey, hang on, there is something different from what you usually do in terms of your encryption. And that's a first step to stopping that, and being able to roll back, bring in a backup, and start looking at where the attacker essentially gained access into the environment. >> Markus, are organizations at the point where they are automating that process, or is it still too dangerous?
>> A lot of it is still too dangerous, although, having said that, we would like to go more into the automation space, and I think it's something as an industry we have to do, because there is so much pressure on security personnel to follow through and do all of the rules, and sift through, and find the needle in the haystack. But especially on a database, the risk of automating some of those points is very great, because if you make a mistake, you might break a connection, or you might break something that's essentially very, very valuable, and that's the crown jewels, the data within the company. >> Right. All right, we got to go. Thanks so much. This is a really super important topic. >> Appreciate all the good work you're doing. >> Thanks for having me. >> You're very welcome. All right, keep it right there, everybody. You're watching theCUBE. We'll be right back, right after this short break from AWS re:Invent 2018, from Las Vegas. (techno music)
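The field-level ransomware detection Strauss describes, noticing when individual key fields suddenly look encrypted, can be approximated with a Shannon entropy check: ciphertext is close to uniformly random, so its byte entropy runs much higher than typical plaintext. This is a toy sketch, not McAfee's detection logic; the threshold, field names, and row shapes are assumptions, and the heuristic only works on samples of a few dozen bytes or more.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of the sample (0.0 for empty input)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(value: bytes, threshold: float = 5.0) -> bool:
    """Heuristic: high byte entropy suggests ciphertext rather than plaintext."""
    return shannon_entropy(value) >= threshold

def scan_rows(rows):
    """Yield (row_id, field) for every field value that looks encrypted."""
    for row_id, fields in rows.items():
        for field, value in fields.items():
            if looks_encrypted(value):
                yield row_id, field
```

A production monitor would track per-field entropy baselines over time and alert on jumps, which maps to the "changes in the encryption level" monitoring described in the interview.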
SUMMARY :
Dave Vellante interviews Markus Strauss, Product Manager for database security at McAfee, at AWS re:Invent 2018, brought to you by Amazon Web Services. They discuss why customers lose monitoring controls when migrating databases to the cloud and want to replicate their on-prem visibility there; database discovery as the first step toward compliance; lift-and-shift versus conversion of workloads to open source engines like MySQL and PostgreSQL; configuration scanning for default users and weak passwords; applying one set of controls with the same rigor across both environments; McAfee's partnership with AWS on the Aurora database activity stream and RDS to restore real-time monitoring for database as a service; protecting data at source as legislation like GDPR takes hold; and database-specific ransomware that encrypts key fields, which McAfee detects by monitoring for changes in encryption level.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Amazon Web Services | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
six months | QUANTITY | 0.99+ |
eight months | QUANTITY | 0.99+ |
Markus Strauss | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Markus | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
$200 | QUANTITY | 0.99+ |
2004 | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
McAfee | ORGANIZATION | 0.99+ |
MySQL | TITLE | 0.99+ |
$300 | QUANTITY | 0.99+ |
$400 | QUANTITY | 0.99+ |
100 | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
sixth year | QUANTITY | 0.99+ |
NoSQL | TITLE | 0.99+ |
two sides | QUANTITY | 0.99+ |
two years ago | DATE | 0.98+ |
both environments | QUANTITY | 0.98+ |
first step | QUANTITY | 0.98+ |
Werner Vogels | PERSON | 0.98+ |
two days ago | DATE | 0.98+ |
ProsgreSQL | TITLE | 0.98+ |
two sets | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
today | DATE | 0.98+ |
MariaDB | TITLE | 0.98+ |
SQL Server | TITLE | 0.97+ |
Aurora | TITLE | 0.97+ |
#reInvent18 | EVENT | 0.96+ |
GDPR | TITLE | 0.96+ |
One | QUANTITY | 0.96+ |
500 | QUANTITY | 0.96+ |
four sets | QUANTITY | 0.95+ |
200 | QUANTITY | 0.95+ |
DB2 | TITLE | 0.95+ |
SQL | TITLE | 0.94+ |
day three | QUANTITY | 0.94+ |
this week | DATE | 0.93+ |
Aurora PostgreSQL | TITLE | 0.89+ |
two key fields | QUANTITY | 0.89+ |
Percona | TITLE | 0.88+ |
one set | QUANTITY | 0.87+ |
re:Invent | EVENT | 0.86+ |
prem | ORGANIZATION | 0.84+ |
AWS re:Invent | EVENT | 0.83+ |
two big ones | QUANTITY | 0.79+ |
AWS re:Invent 2018 | EVENT | 0.77+ |
RDS | TITLE | 0.76+ |
EC2 | TITLE | 0.73+ |
Invent 2018 | TITLE | 0.7+ |
Invent 2018 | EVENT | 0.68+ |
Stuxnet | ORGANIZATION | 0.63+ |
theCUBE | ORGANIZATION | 0.59+ |
Stuxnet | PERSON | 0.57+ |
ttacker | TITLE | 0.52+ |
SQLServer | ORGANIZATION | 0.5+ |
challenges | QUANTITY | 0.49+ |
Jon Rooney, Splunk | Splunk .conf18
>> Announcer: Live from Orlando, Florida. It's theCube. Covering .conf18, brought to you by Splunk. >> We're back in Orlando, Dave Vellante with Stu Miniman. Jon Rooney is here. He's the vice president of product marketing at Splunk. Lots to talk about, Jon. Welcome back. >> Thank you, thanks so much for having me back. Yeah we've had a busy couple of days. We've announced a few things, quite a few things, and we're excited about what we're bringing to market. >> Okay well let's start with yesterday's announcements. Splunk 7.2. >> Yup. >> What are the critical aspects of 7.2? What do we need to know? >> Yeah I think first, Splunk Enterprise 7.2, a lot of what we wanted to work on was manageability and scale. And so if you think about the core key features, the smart storage, which is the ability to separate the compute and storage, and move some of that cool and cold storage off to blob. Sort of API level blob storage. A lot of our large customers were asking for it. We think it's going to enable a ton of growth and enable a ton of use cases for customers and that's just sort of smart design on our side. So we've been real excited about that. >> So that's simplicity and it's less costly, right? Free storage. >> Yeah and you free up the resources to just focus on what you're asking out of Splunk. You know, running the searches and the saved searches. Move the storage off somewhere else and pull it back when you need it. >> And when I add an indexer, I don't have to add both compute and storage, I can add whatever I need in granular increments, right? >> Absolutely. It just enables more graceful and elastic expansiveness. >> Okay that's huge, what else should we know about? >> So workload management, which again is another manageability and scale feature. It's just the ability to say, the great thing about Splunk is you put your data in there and multiple people can ask questions of that data. It's just like an apartment building that has ...
You know if you only have one hot water heater and a bunch of people are taking a shower at the same time, maybe you want to give some privileges to say, you know, the penthouse is going to get the hot water first. Other people not so much. And that's really the underlying principle behind workload management. So there are certain groups and certain people that are running business critical, or mission critical, searches. We want to make sure they get the resources first, and then maybe people that are experimenting or kind of kicking the tires. We have a little bit of a gradation of resources. >> So that's essentially programmatic SLAs. I can set those policies, I can change them. >> Absolutely, it's the same level of granular control that you have on, say, access control. It's the same underlying principle. >> Other things? Go ahead. >> Yeah, Jon, you guys always have some cool, pithy statements. One of the things that jumped out to me in the keynotes, because it made me laugh, was the end of metrics. >> Jon: Yes. >> You've been talking about data. Data's at the ... the line I heard today was that Splunk users are at the crossroads of data. So give us a little insight into what you're doing that's different in managing data, 'cause every company can interact with the same data. What does the Splunk user do differently, and how is your product different? >> Yeah I mean absolutely. I think the core of what we've always done, and Doug talked about it in the keynote yesterday, is this idea of this expansive, investigative search. The idea that you're not exactly sure what the right question is, so you want to go in, ask a question of the data, which is going to lead you to another question, which is going to lead you to another question, and that's that finding a needle in a pile of needles that Splunk's always great at. And we think of that as more the investigative expansive search.
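The workload-management idea above, the penthouse gets the hot water first, is at bottom weighted allocation of search resources across pools. A toy sketch of proportional allocation, where the pool names, weights, and slot model are invented for illustration and this is not Splunk's actual workload-management configuration or algorithm:

```python
def allocate(total_slots: int, pool_weights: dict) -> dict:
    """Split search slots across pools in proportion to their weights."""
    total_weight = sum(pool_weights.values())
    shares = {
        pool: total_slots * weight // total_weight
        for pool, weight in pool_weights.items()
    }
    # Hand any leftover slots to the highest-weighted pools first.
    leftover = total_slots - sum(shares.values())
    for pool, _ in sorted(pool_weights.items(), key=lambda kv: -kv[1]):
        if leftover == 0:
            break
        shares[pool] += 1
        leftover -= 1
    return shares

# Hypothetical pools: mission-critical searches get the biggest share.
pools = {"mission_critical": 60, "ad_hoc": 30, "experimental": 10}
print(allocate(10, pools))
```

The same shape of policy, weights per group plus a tie-break order, is what lets an administrator change who gets resources first without touching the searches themselves.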
>> Yeah so when I think back, I remember talking with companies five years ago when they'd say, okay, I've got my data scientists, and finding which is the right question to ask once I'm swimming in the data can be really tough. Sounds like you're getting answers much faster. It's not necessarily a data scientist, maybe it is. We saw BMW on stage. >> Yeah. >> But help us understand why this is just so much simpler and faster. >> Yeah I mean again it's the idea for the IT and security professionals to not necessarily have to know what the right question is or even anticipate the answer, but to find that in an evolving, iterative process. And the idea that there's flexibility, you're in no way penalized, you don't have to go back and re-ingest the data or do anything, say, when you're changing exactly what your query is. You're just asking the question which leads to another question. And that's how we think about the investigative side. From a metric standpoint, we do have additional ... The third big feature that we have in Splunk Enterprise 7.2 is an improved metrics visualization experience. Our investigative search, which we think is the best in the industry, is for when you're not exactly sure what you're looking for and you're doing a deep dive; but if you know what you're looking for, from a monitoring standpoint, you're asking the same question again and again and again, over and again. You want to be able to have an efficient and easy way to track that if you're just saying I'm looking for CPU utilization or some other metric. >> Just one last follow up on that. I look ... the name of the show is .conf >> Yes. >> Because it talks about the config file. You look everywhere, people are in the code versus GUI and graphical and visualization. What are you hearing from your user base? How do you balance between the people that want to get in there versus being able to point and click? Or ask a question?
>> Yeah this company was built off of the strength of our practitioners and our community, so we always want to make sure that we create a great and powerful experience for those technical users and the people that are in the code and in the configuration files. But you know, one of the underlying principles behind Splunk Next, which was a big announcement on day one, is to bring that power of Splunk to more people. So create the right interface for the right persona and the right people. So the traditional Linux sys admin person who's working in IT or security, they have a certain skill set. So the SPL and those things are native to them. But if you are a business user and you're used to maybe working in Excel or doing pivot tables, you need a visual experience that is more native to the way you work. And the information that's sitting in Splunk is valuable to you; we just want to get it to you in the right way. And similar to what we talked about today in the keynote with application developers. The idea of saying, well, everything that you need is going to be delivered in a payload of JSON objects makes a lot of sense if you're a modern application developer. If you're a business analyst somewhere, that may not make a lot of sense, so we want to be able to service all of those personas equally. >> So you've made metrics a first class citizen. >> Jon: Absolutely. >> Opening it up to more people. I also wanted to ask you about the performance gains. I was talking to somebody and I want to make sure I got these numbers right. It was literally like three orders of magnitude faster. I think the number was 2000 times faster. I don't know if I got that number right, it just sounds ... Implausible. >> That's specifically what we're doing around the data fabric search which we announced in beta on day one. Simply because of the approach to the architecture and the approach to the data ...
I mean Splunk is already amazingly fast, amazingly best in class in terms of scale and speed. But you realize that what's fast today, because of the pace and growth of data, isn't quite so fast two, three, four years down the road. So we're really focused, looking well into the future, on enabling those types of orders of magnitude growth by completely reimagining and rethinking what the architecture looks like. >> So talk about that a little bit more. Is that ... I was going to say is that the source of the performance gain? Is it sort of the architecture, is it tighter code, was it a platform do over? >> No I mean it wasn't a platform do over, it's the idea that in some cases, rather than thinking of it like I'm federating a search between one index here and one index there, you have a virtualization layer that also taps into compute, let's say living in Apache Kafka, taking advantage of those sorts of open source projects and open source technologies to further enable and power the experiences that our customers ultimately want. So we're always looking at what problems our customers are trying to solve. How do we deliver that to them through the product? That constant iteration, that constant self evaluation, is what drives what we're doing. >> Okay now today was all about the line of business. We've been talking about, I've used the term land and expand about a hundred times today. It's not your term but others have used it in the industry and it's really the template that you're following. You're in deep in SecOps, in IT operations management, and now we're seeing just big data permeate throughout the organization. Splunk is a tool for business users and you're making it easier for them. Talk about Splunk business flow. >> Absolutely, so business flow is the idea that we had ... Again we learned from our customers.
We had a couple of customers that were essentially tip of the spear, doing some really interesting things where, as you described, let's say the IT department said, well, we need to pull in this data to check out application performance and those types of things. The same data that's flowing through is going to give you insight into customer behavior. It's going to give you insight into coupons and promotions and all the things that the business cares about. If you're a product manager, if you're sitting in marketing, if you're sitting in promotions, that's what you want to access, and you want to be able to access that in real time. So the challenge we're now solving with things like business flow is: how do you create an interface? How do you create an experience that again matches those folks and how they think about the world? The magic, the value, is sitting in the data; we just have to surface it in the right way for the right people. >> Now the demo, Stu knows I hate demos, but the demo today was awesome. And I really do, I hate demos because most of them are just so boring, but this demo was amazing. You took a bunch of log data and a business user ingested it and looked at it and it was just a bunch of data. >> Yeah. >> Like you'd expect, and go, eh, what am I supposed to do with this? And then he pushed a button and all of a sudden there was a flow chart and it showed the flow of the customer through the buying pattern. Now maybe that's a simpler use case but it was still very powerful. And then he isolated on where the customer actually made a phone call to the call center, because you want to avoid that if possible, and then he looked at the percentage of drop outs, which was like 90% in that case, versus the percentage of drop outs in a normal flow, which was 10%. Oop, something's wrong. He drilled in, fixed the problem, and showed how he fixed it, all graphically. Beautiful. Is it really that easy?
>> Yeah I mean I think if you think about what we've done in computing over the last 40 years, even the most basic word processing, the most basic spreadsheet work, was done by trained technicians 30-40 years ago. But the democratization of data created this notion of the information worker, and we're now a decade or so plus into big data and the idea that, oh, that's only for highly trained professionals and scientists and people that have PhDs. There's always going to be an aspect of the market or an aspect of the use cases that is of course going to be that level of sophistication, but ultimately this is all work for an information worker. If you're an information worker, if you're responsible for driving business results and looking at things, it should be the same level of ease as your traditional sort of office suite. >> So I want to push on that a little if I can, and just test this, because it looked so amazingly simple. Doug Merritt made the point yesterday that business processes used to be codified. Codifying business processes is a waste of time because business processes are changing so fast. The business process that you used in the example was a very linear process, admittedly. I'm going to search for a product, maybe read a review, I'm going to put it in my cart, I'm going to buy it. You know, very straightforward. But business processes as we know are unpredictable now. Can that level of simplicity work when the data feeds some kind of unpredictable business process? >> Yeah and again that's our fundamental difference, how we've done it differently than everyone in the market. It's the same thing we did with IT Service Intelligence when we launched that back in 2015, because it's not a top-down approach. We're not dictating, taking sort of a central planning approach to say this is what it needs to look like. The data needs to adhere to this structure. The structure comes out of the data, and that's what we think.
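That structure-out-of-the-data point is easy to see with a toy version of the flow demo described above: from raw, time-ordered (session, step) events, the transition graph and drop-off rates fall out directly, with no predefined process model. The event data, step names, and percentages here are invented for illustration and have nothing to do with Splunk's actual implementation:

```python
from collections import defaultdict

def build_flow(events):
    """Group time-ordered (session_id, step) events into per-session paths,
    then count transitions between consecutive steps."""
    paths = defaultdict(list)
    for session_id, step in events:
        paths[session_id].append(step)
    transitions = defaultdict(int)
    for steps in paths.values():
        for a, b in zip(steps, steps[1:]):
            transitions[(a, b)] += 1
    return paths, dict(transitions)

def drop_off_rate(paths, step):
    """Share of sessions that reached `step` but went no further."""
    reached = [p for p in paths.values() if step in p]
    if not reached:
        return 0.0
    stopped = sum(1 for p in reached if p[-1] == step)
    return stopped / len(reached)

# Invented sessions: two complete purchases, one abandonment at the cart.
events = [
    ("s1", "search"), ("s1", "cart"), ("s1", "buy"),
    ("s2", "search"), ("s2", "cart"), ("s2", "buy"),
    ("s3", "search"), ("s3", "cart"),
]
paths, transitions = build_flow(events)
print(transitions)
print("drop-off at cart:", drop_off_rate(paths, "cart"))
```

Because nothing about the process is codified up front, adding a new step or a non-linear path just shows up as new transitions in the counts, which is the point being made about unpredictable business processes.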
It's a bit of a simplification, but I'm a marketing guy and I can get away with it. But that's where we think we do it differently, in a way that allows us to reach all these different users and all these different personas. So it doesn't matter; again, that business process emerges from the data. >> And Stu, that's going to be important when we talk about IoT, but jump in here. >> Yeah, so I wanted to have you give us a bit of insight on the natural language processing. >> John: Yeah, natural language processing. >> You've been playing with things like the Alexa. I've got a Google Home at home, I've got Alexa at home, my family plays with it. Certain things it's okay for, but I think about the business environment. The requirements in what you might ask Alexa to ask Splunk, it seems like that would be challenging. You've got a global audience. You know, languages are tough, accents are tough, syntax is really, really challenging. So give us the why and where we are. Is this a nascent thing? Do you expect customers to really be strongly using this in the near future? >> Absolutely. The notion of natural language search or natural language computing has made huge strides over the last five or six years, and again, we're leveraging work that's done elsewhere. To Dave's point about demos, Alexa looks good on stage. What do we think? If you were to ask me, we'll see. We'll always learn from the customers, and the good thing is, I like to be wrong all the time. These are my hypotheses, but my hypothesis is the most relevant actual use of that technology is not going to be speech, it's going to be text. It's going to be in Slack or HipChat, where you have a team collaborating on an issue or project and they say, I'm looking for this information, and they're going to pass that search via text into Splunk and get it back via Slack in a way that's very transparent.
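The text-in-chat flow he describes can be sketched as a tiny translation step between a chat message and a search. This is purely an assumed illustration: the keyword-to-search mapping, the SPL strings, and the `text_to_search` helper are hypothetical, not Splunk's natural language API.

```python
# Hypothetical sketch of the chat-to-search flow described above: a bot
# watches team chat, recognizes a known phrase, and returns the search
# it would pass into the analytics backend on the team's behalf.
def text_to_search(message):
    """Very naive phrase-to-search translation, for illustration only."""
    mapping = {
        "failed logins": "search index=auth action=failure | stats count by user",
        "slow pages": "search index=web | stats avg(response_ms) by uri",
    }
    for phrase, search in mapping.items():
        if phrase in message.lower():
            return search
    return None  # no recognized phrase; a real system would fall back to NLP

print(text_to_search("Hey bot, show me failed logins from today"))
```

A production system would replace the hardcoded mapping with a trained language model, but the round trip, text in via chat, search out, results back in the same channel, is the transparency being described.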
That's where I think the business cases are going to come through, and if you were to ask me, again, we're starting the betas and we're going to learn from our customers. But my assumption is that's going to be much more prevalent within our customer base. >> That's interesting, because the quality of that text presumably is going to be much, much better, at least today, than what you get with speech. We know that well with the transcriptions we do of theCUBE interviews. Okay, so that's it. The ML toolkit, I thought I heard 4.0, right? >> Yeah, so we've been pushing really hard on the machine learning toolkit for multiple versions. That team is heavily invested in working with customers to figure out what exactly they want to do. And as we think about the highly skilled users, our customers that do have data scientists, that do have people that understand the math to go in and say, no, we need to customize or tweak the algorithm to better fit our business, how do we allow them essentially bare-metal access to the technology? >> We're going to leave dev cloud for Skip, if that's okay. I want to talk about industrial IoT. You said something just now that was really important, and I want to take a moment to explain it to the audience. What we've seen from IoT, particularly from IT suppliers, is a top-down approach: we're going to take our IT framework and put it at the edge. >> Yes. >> And that's not going to work. IoT, industrial IoT, these process engineers, it's going to be a bottom-up approach, and it's going to be standards set by OT, not IT. >> John: Yes. >> Splunk's advantage is you've got the data. You're sort of agnostic to everything else. Wherever the data is, we're going to have that data, so to me your advantage with industrial IoT is you're coming at it from a bottom-up approach, as you just described, and you should be able to plug into the IoT standards. Now having said that, a lot of data is still analog, but that's okay, you're pulling machine data.
You don't really have tight relationships with the IoT guys, but that's okay, you've got a growing ecosystem. >> We're working on it. >> But talk about industrial IoT, and we'll get into some of the challenges. >> Yeah, so interestingly, we first announced the Industrial Asset Intelligence product at the Hannover Messe show in Germany, which is this massive, like 300,000 people, it's a city, it's amazing. >> I've been, Hannover. One hotel, huge show, 400,000 people. >> Lots of schnitzel. (laughs) I was just there. And the interesting thing is, it's the first time I'd been at a show in years where people ... You know, if you go to an IT or security show, they're like, oh, we know Splunk, we love Splunk, what's in the next version? It was the first time we were having a lot of people come up to us saying, yeah, I'm a process engineer in an industrial plant, what's Splunk? Which is a great opportunity. And as you explain the technology to them, their mindset is very different, in the sense that they think of very custom connectors for each piece. They have a very, almost bespoke or matched-up notion of a sensor to a piece of equipment. So for example they'll say, oh, do you have a connector for, and again, I don't have the machine numbers, but like the Siemens 123 machine? And I'll be like, well, as long as it's textual, structured to semi-structured data, ideally with a timestamp, we can ingest and correlate that. Okay, but then what about the Siemens ABC machine? Well, the idea, the notion is that we don't care where the source is, as long as there's a sensor sending the data in a format that we can consume. And if you think back to the beginning of the data stream processor demo that Devani and Eric gave yesterday, that showed the history over time, the purple boxes that were built, like we can now ingest data via multiple inputs and multiple ways into Splunk.
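The ingestion criterion he states, textual, structured to semi-structured data with a timestamp, can be sketched generically. The line format and field names below are assumptions for illustration; any machine's output meeting that shape would parse the same way.

```python
# Sketch of the "any timestamped text" criterion described above: a single
# parser handles output from any machine, because correlation only needs
# a timestamp plus key=value fields, not a per-machine connector.
from datetime import datetime

def parse_event(line):
    """Turn an 'ISO-timestamp key=value ...' line into a field dictionary."""
    parts = line.split()
    event = {"_time": datetime.fromisoformat(parts[0])}
    for token in parts[1:]:
        if "=" in token:
            key, value = token.split("=", 1)
            event[key] = value
    return event

# Two different (made-up) machines, one parser: no bespoke connector needed.
press = parse_event("2018-10-02T09:15:00 machine=press_7 temp=81.4 status=ok")
lathe = parse_event("2018-10-02T09:15:03 machine=lathe_2 rpm=1200 status=warn")
print(press["machine"], lathe["machine"])
```

Once events from both machines share a `_time` field, they can be correlated on a common timeline, which is the point being made about not caring which Siemens model emitted the data.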
And that hopefully enables the IoT ecosystems and the machine manufacturers, but more importantly, the sensor manufacturers, because it feels like, in my understanding of the market, we're still at a point where a lot of folks are getting those sensors instrumented. But once it's there and essentially the faucet's turned on, we can pull it all in and we can treat it and ingest it just as easily as we can data from AWS Kinesis or Apache access logs or MySQL logs. >> Yeah, and so instrumenting the windmill, to use the metaphor, is not your job. Connectivity to the windmill is not your job, but once those steps have been taken, and the business takes those steps because there's a business case, once that's done, then the data starts flowing, and that's where you come in. >> And there's a tremendous amount of incentive in the industry right now to do that level of instrumentation and connectivity. So it feels like, in that notion of instrument, connect, then do the analytics, we're sitting there well positioned, once all those things are in place, to be one of the top providers for those analytics. >> John, I want to ask you something. Stu and I were talking about this at our kickoff and I just want to clarify it. >> Doug Merritt said that he didn't like the term unstructured data. I think that's what he said yesterday, it's just data. My question is, how do you guys deal with structured data? Because there is structured data. Bringing transaction processing data and analytics data together for whatever reason, whether it's fraud detection, to give the buyer an offer before you lose them, better customer service. How do you handle that kind of structured data that lives in IBM mainframes or whatever, USS mainframes in the case of Carnival? >> Again, we want to be able to access data that lives everywhere. And so we've been working with partners for years to pull data off mainframes. Again, the traditional ins and outs aren't necessarily there, but there are incentives in the market.
We work with our ecosystem to pull that data, to give it to us in a format that makes sense. We've long been able to connect to traditional relational databases, so I think when people think of structured data, they think, oh, it's sitting in a relational database somewhere, in Oracle or MySQL or SQL Server. Again, we can connect to that data, and that data is important to enhance things, particularly for the business user. Because the log says, okay, whatever, product ID 12345, but the business user needs to know what product ID 12345 is and has a lookup table. Pull it in, and now all of a sudden you're creating information that's meaningful to you. But structure, again, there's fluidity there. Coming from my background, a JSON object is structured, the same way Theresa Vu in the demo today unfurled in the dev cloud what a JSON object looks like. There's structure there. You have key-value pairs, and there's structure to key-value pairs. So all of those things, that's why I think, to Doug's point, there's fluidity there. It is definitely a continuum, and we want to be able to add value and play at all ends of that continuum. >> And the key is, you guys, your philosophy is to curate that data in the moment when you need it and then put whatever schema you want at that time. >> Absolutely. Going back to this bottom-up approach and how we approach it differently from basically everyone else in the industry. You pull it in, we take the data as is, we're not transforming or changing or breaking the data or trying to put it into a structure anywhere. But when you ask it a question, we will apply a structure to give you the answer. If that data changes, when you ask that question again, it's okay, it doesn't break the question. That's the magic. >> Sounds like magic. 16,000 customers will tell you that it actually works. So, John, thanks so much for coming to theCUBE, it was great to see you again. >> Thanks so much for having me. >> You're welcome.
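The schema-on-read idea he closes on, store raw events untouched, apply structure plus the product lookup only when a question is asked, can be sketched as follows. The raw event format, field names, and lookup table are all invented for the example, not an actual Splunk dataset or API.

```python
# Sketch of schema-on-read as described above: raw events are kept as-is,
# and field extraction plus lookup enrichment happen at question time.
import re

RAW_EVENTS = [
    "2018-10-02T10:01:44 sale product_id=12345 qty=2",
    "2018-10-02T10:02:10 sale product_id=67890 qty=1",
]

# The business user's lookup table, mapping opaque IDs to meaningful names.
PRODUCT_LOOKUP = {"12345": "Trail Running Shoe", "67890": "Rain Jacket"}

def query(field):
    """Extract `field` from each raw event at read time, enriched via lookup.
    The stored events are never transformed; structure is applied here."""
    pattern = re.compile(rf"{field}=(\S+)")
    results = []
    for raw in RAW_EVENTS:
        match = pattern.search(raw)
        if match:
            value = match.group(1)
            results.append(PRODUCT_LOOKUP.get(value, value))
    return results

print(query("product_id"))  # -> ['Trail Running Shoe', 'Rain Jacket']
```

Because extraction happens per question, a new field appearing in later events doesn't break old questions, which is the "it doesn't break the question" property being claimed.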
Alright, keep it right there, everybody. Stu and I will be back. You're watching theCUBE from Splunk .conf18. #splunkconf18. We'll be right back. (electronic drums)