Rahul Pathak, AWS | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel and AWS. Welcome back to theCUBE's ongoing coverage of AWS re:Invent. The Cube has gone virtual, along with most events these days, and continues to bring you our digital coverage of re:Invent. With me is Rahul Pathak, who is the Vice President of Analytics at AWS. Rahul, it's great to see you again. Welcome, and thanks for joining the program.
>> Hey Dave, great to see you too, and always a pleasure. Thanks for having me on.
>> You're very welcome. Before we get into your leadership discussion, I want to talk about some of the things that AWS has announced in the early parts of re:Invent. I want to start with Glue Elastic Views, a very notable announcement allowing people to essentially share data across different data stores. Maybe tell us a little bit more about Glue Elastic Views, kind of where the name came from, and what the implications are.
>> Sure. So yeah, we're really excited about Glue Elastic Views, and as you mentioned, the idea is to make it easy for customers to combine and use data from a variety of different sources and pull them together into one or many targets. And the reason for it is that we're really seeing customers adopt what we're calling a lake house architecture, which is, at its core, a data lake for making sense of data and integrating it across different silos, typically integrated with a data warehouse, and not just that, but also a range of other purpose-built stores, like Aurora for relational workloads or DynamoDB for non-relational ones. And while customers typically get a lot of benefit from using purpose-built stores, because you get the best possible functionality, performance, and scale for a given use case, you often want to combine data across them to get a holistic view of what's happening in your business or with your customers.
And before Glue Elastic Views, customers would have to either use ETL or data integration software, or they'd have to write custom code that could be complex to manage, error prone, and tough to change. And so, with Elastic Views, you can now use SQL to define a view across multiple data sources, pick one or many targets, and then the system will actually monitor the sources for changes and propagate them into the targets in near real time. And it manages the end-to-end pipeline and can notify operators if anything changes. And so the components of the name are pretty straightforward: Glue is our serverless ETL and data integration service, so Glue Elastic Views is about data integration; they're views because you can define these virtual tables using SQL; and elastic because it's serverless and will scale up and down to deal with the propagation of changes. So we're really excited about it, and customers are as well.
>> Okay, great. So my understanding is I'm going to be able to take what's called, in the parlance, materialized views, which in my layperson's terms means I'm going to run a query on the database and take that subset. And then I'm going to be able to copy that and move it to another data store. And then you're going to automatically keep track of the changes and keep everything up to date. Is that right?
>> Yes, that's exactly right. So you can imagine you had a product catalog, for example, that's being updated in DynamoDB, and you can create a view that will move that to Amazon Elasticsearch Service. You could search through a current version of your catalog, and we will monitor your DynamoDB tables for any changes and make sure those are all propagated in near real time. And all of that is taken care of for our customers as soon as they define the view, and the data is kept in sync as long as the view is in effect.
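The flow Rahul describes, define a view in SQL over a source and let the system keep a target in sync, can be pictured with a small self-contained sketch. To be clear, this is not the Glue Elastic Views API; the table names and the `refresh_target` function are hypothetical stand-ins, with SQLite standing in for both the source store and the target store.

```python
import sqlite3

# In-memory "source" and "target" stores; in the managed service these would
# be separate systems (e.g. DynamoDB as the source, Elasticsearch as the target).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE catalog (sku TEXT PRIMARY KEY, name TEXT, price REAL)")
db.execute("CREATE TABLE search_target (sku TEXT PRIMARY KEY, name TEXT)")

# The "view" is just SQL: the subset of source data we want in the target.
VIEW_SQL = "SELECT sku, name FROM catalog WHERE price > 0"

def refresh_target():
    """Re-materialize the view into the target: a hypothetical stand-in for
    the continuous change propagation the managed service would perform."""
    db.execute("DELETE FROM search_target")
    db.execute(f"INSERT INTO search_target {VIEW_SQL}")

db.execute("INSERT INTO catalog VALUES ('A1', 'widget', 9.99)")
refresh_target()
db.execute("INSERT INTO catalog VALUES ('B2', 'gadget', 19.99)")
refresh_target()  # a change in the source shows up in the target

rows = sorted(db.execute("SELECT sku FROM search_target").fetchall())
print(rows)  # [('A1',), ('B2',)]
```

The key design point the service automates is the refresh step: here it is manual and full, whereas the real system propagates only changes, continuously.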
>> I can see this being really valuable for a person who's building, I like to think in terms of, data services or data products that are going to help me monetize my business. Maybe it's as simple as a dashboard, but maybe it's actually a product. It might be some content that I want to develop, and I've got transaction systems, I've got unstructured data, maybe in a NoSQL database, and I want to actually combine those, build new products, and I want to do that quickly. So take me through what I would have to do. You sort of alluded to it with, you know, a lot of ETL, but take me through in a little bit more detail how I would do that before this innovation. And maybe you could give us a sense as to what the possibilities are with Glue Elastic Views.
>> Sure. So before we announced Elastic Views, a customer would typically have to think about using ETL software, so they'd have to write an ETL pipeline that would extract data periodically from a range of sources. They'd then have to write transformation code that would do things like match up types and make sure you didn't have any invalid values, and then you would combine it and periodically write that into a target. And once you've got that pipeline set up, you've got to monitor it. If you see an unusual spike in data volume, you might have to add more resources to the pipeline to make it complete on time. And then, if anything changed in either the source or the destination that prevented that data from flowing in the way you would expect, you'd have to manually figure that out, and have data quality checks and all of that in place to make sure everything kept working. But with Elastic Views, it just gets much simpler. So instead of having to write custom transformation code, you write a view using SQL, and SQL is, as you well know, widely popular with data analysts and folks that work with data.
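To make that "before" picture concrete, here is a minimal sketch of the kind of hand-rolled pipeline being described: poll the source for changes, transform and validate, and upsert into the target, with every failure mode left for the operator. The row shape, the checkpoint, and the validation rule are all invented for illustration; this is not any particular AWS API.

```python
# Hypothetical source: rows with an incrementing version acting as a change log.
source = [
    {"version": 1, "sku": "A1", "price": "9.99"},
    {"version": 2, "sku": "B2", "price": "19.99"},
    {"version": 3, "sku": "A1", "price": "n/a"},  # bad value the pipeline must catch
]
target = {}           # target store keyed by sku
checkpoint = 0        # last source version we've propagated
dead_letter = []      # rows that failed validation, for manual inspection

def poll_once():
    """One iteration of the extract-transform-load loop we'd otherwise
    have to schedule, monitor, and scale ourselves."""
    global checkpoint
    for row in source:
        if row["version"] <= checkpoint:
            continue  # already propagated
        try:
            price = float(row["price"])   # "transformation code": type matching
        except ValueError:
            dead_letter.append(row)       # data-quality check; needs a human
        else:
            target[row["sku"]] = price    # upsert into the target
        checkpoint = row["version"]

poll_once()
print(target)       # {'A1': 9.99, 'B2': 19.99}
print(dead_letter)  # the 'n/a' row, waiting for an operator
```

Everything here is the customer's problem to run and scale; the point of the managed service is to take the scheduling, monitoring, and error handling off their plate.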
And so you can define that view in SQL. The view will look across multiple sources, and then you pick your destination, and then Glue Elastic Views essentially monitors both the source for changes as well as the source and the destination for any issues, like, for example, did the schema change, did the shape of the data change, was something briefly unavailable. And it can monitor all of that and handle any errors it can recover from automatically. Or if it can't, say someone dropped an important table in the source that was part of your view, you can actually get alerted and notified to take some action, to prevent bad data from getting through your system or to prevent your pipeline from breaking without your knowledge. And then the final piece is the elasticity of it. It will automatically deal with adding more resources if, for example, you had a spiky day in the markets. Maybe you're building a financial services application and you needed to add more resources to process those changes into your targets more quickly; the system would handle that for you. And then, if you're monetizing data services on the back end, you've got a range of options for folks subscribing to those targets. So we've got capabilities like AWS Data Exchange, where people can exchange and monetize data sets. So it allows this end-to-end flow in a much more straightforward way than was possible before.
>> Awesome. So a lot of automation, especially if something goes wrong. So something goes wrong, you can automatically recover. And if for whatever reason you can't, what happens? You flag it in the system and let the operator know, hey, there's an issue, you've got to go fix it. How does that work?
>> Yes, exactly right. So if we can recover, say, for example, if for a short period of time you can't reach the target database, the system will keep trying until it can get through. But say someone dropped a column from your source.
That was a key part of your ultimate view and destination, and you just can't proceed at that point. So the pipeline stops, and then we notify using APIs or an SNS alert, so that programmatic action can be taken. So this effectively provides a really great way to enforce the integrity of data that's moving between the sources and the targets.
>> All right, making it kindergarten-proof. So let's talk about another innovation. You guys announced QuickSight Q, kind of speaking to the machine in natural language. Give us some more detail there. What is QuickSight Q, and how do I interact with it? What kind of questions can I ask it?
>> So QuickSight Q is essentially a deep-learning-based semantic model of your data that allows you to ask natural language questions in your dashboard. So you'll get a search bar in your QuickSight dashboard, and QuickSight is our serverless BI service that makes it really easy to provide rich dashboards to whoever needs them in the organization. And what Q does is it automatically develops relationships between the entities in your data, and it's able to actually reason about the questions you ask. So unlike earlier natural language systems, where you had to pre-define your models and pre-define all the calculations that you might ask the system to do on your behalf, Q can actually figure it out. So you can say, show me the top five categories for sales in California, and it'll look in your data and figure out what that is. It will present you with how it parsed that question, and then, inline, in seconds, pop up a dashboard of what you asked, and actually automatically pick a chart or visualization for that data that makes sense. And you can then start to refine it further and say, how does this compare to what happened in New York? And it'll be able to figure out that you're trying to overlay those two data sets, and it'll add them.
And unlike other systems, it doesn't need to have all of those things pre-defined. It's able to reason about it, because it's building a model of what your data means on the fly, and we've pre-trained it across a variety of different domains, so you can ask a question about sales or HR or any of that. Another great part of Q is that when it presents to you what it's parsed, you're actually able to correct it if needed and provide feedback to the system. So, for example, if it got something slightly off, you can actually select from a drop-down, and it will remember your selection for the next time, and it will get better as you use it.
>> I saw a demo in Swami's keynote on December 8. You were basically able to ask QuickSight Q the same question, but in different ways, you know, like compare California and New York, and then the data comes up, or give me the top five, and then for California and New York the same exact data comes back. So is that how I can check and see if the answer that I'm getting back is correct, by asking different questions? I don't have to know the schema, is what you're saying; I don't have to have knowledge of that as the user. I can triangulate from different angles and then look and see if that's correct. Is that how you verify, or are there other ways?
>> So that's one way to verify. You could definitely ask the same question a couple of different ways and ensure you're seeing the same results. Another option would be to potentially click and drill and filter down into that data through the dashboard. And then the other step would be at data ingestion time: typically, data pipelines will have some quality controls. But when you're interacting with Q, I think the ability to ask the question multiple ways and make sure that you're getting the same result is a perfectly reasonable way to validate.
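That ask-it-multiple-ways check can be pictured with a toy example: two phrasings that resolve to the same underlying aggregation should return identical results. This is purely illustrative; Q's actual parsing is a deep-learning model, and the tiny `parse` function below is an invented stand-in.

```python
# Toy dataset: sales by category and state.
SALES = [
    ("books", "CA", 120), ("games", "CA", 200), ("music", "CA", 80),
    ("books", "NY", 150), ("games", "NY", 90),
]

def top_categories(state, n):
    """The underlying aggregation both phrasings should resolve to."""
    totals = {}
    for category, st, amount in SALES:
        if st == state:
            totals[category] = totals.get(category, 0) + amount
    return sorted(totals, key=totals.get, reverse=True)[:n]

def parse(question):
    """Invented stand-in for natural-language parsing: different phrasings
    of the same question map to the same (state, n) parameters."""
    state = "CA" if "California" in question else "NY"
    return state, 2

q1 = top_categories(*parse("top 2 categories for sales in California"))
q2 = top_categories(*parse("in California, which 2 categories sold best?"))
print(q1 == q2)  # True: same parse, same answer
```

The triangulation idea is exactly this: if two independent phrasings land on the same parameters, the results agree, and a disagreement would flag a parsing problem.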
>> You know what I like about that answer you just gave, and I wonder if I could get your opinion on this, because you've been in this business for a while and you work with a lot of customers. If you think about our operational systems, things like sales or ERP systems, we've contextualized them. In other words, the business lines have injected context into the system. I mean, they kind of own it, if you will. They "own" the data, and I put that in quotes, but they do; they feel like they're responsible for it. There's not this constant argument, because it's their data. It seems to me that if you look back over the last 10 years, a lot of the data architecture has been sort of genericized. In other words, the experts, whether it's the data engineer or the quality engineer, don't really have the business context. But the example that you just gave, the drill-down to verify that the answer is correct, it seems to me, just in listening again to Swami's keynote the other day, that you're really trying to put data in the hands of business users who have the context and the domain knowledge. And that seems to me to be a change in mindset that we're going to see evolve over the next decade. I wonder if you could give me your thoughts on that change in the data architecture, the data mindset.
>> Dave, I think you're absolutely right. I mean, we see this across all the customers that we speak with. There's an increasing desire to get data broadly distributed into the hands of the organization in a well-governed and controlled way. Customers want to give data to the folks that know what it means and know how they can take action on it to do something for the business, whether that's finding a new opportunity or looking for efficiencies.
And I think we're seeing that increasingly, especially given the unpredictability that we've all gone through in 2020. Customers are realizing that they need to get a lot more agile, and they need a lot more data about their business and their customers, because you've got to find ways to adapt quickly. And that's not going to change anytime in the future.
>> And I've said many times on theCUBE, the technology industry used to be all about the products, and in the last decade it was really platforms, whether it's SaaS platforms or AWS cloud platforms. And it seems like innovation in the coming years, in many respects, is going to come from the ecosystem and the ability to share data. We've had some examples today. But you hit on one of the key challenges, of course: security and governance. Can you automate that, if you will, and protect the users from doing things that violate data access rules or corporate edicts for governance and compliance? How are you handling that challenge?
>> That's a great question, and it's something that I really emphasized in my leadership session. The notion of what customers are doing, and what we're seeing, is the lake house architectural concept. So you've got a data lake and purpose-built stores, and customers are looking for easy data movement across those, and so we have things like Glue Elastic Views, or some of the other Glue features we announced. But they're also looking for unified governance, and that's why we built AWS Lake Formation. And the idea here is that it can quickly discover and catalog customer data assets and then allow customers to define granular access policies centrally around that data. And once you've defined that, it sets customers free to give broader access to the data, because they've put the guardrails in place, they've put the protections in place.
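The guardrails idea, define access policies centrally and let the platform enforce them on every query, can be sketched in miniature. This is only a conceptual illustration; the policy format and the roles below are invented and are not Lake Formation's actual API.

```python
# Hypothetical central policy: each role maps to a row-level predicate.
POLICIES = {
    "emea_analyst": lambda row: row["region"] == "EMEA",
    "us_analyst":   lambda row: row["region"] == "US",
    "admin":        lambda row: True,
}

TABLE = [
    {"region": "EMEA", "sales": 100},
    {"region": "US",   "sales": 250},
    {"region": "EMEA", "sales": 75},
]

def query(role, table):
    """Return only the rows this role's policy allows. Because governance is
    enforced centrally, analysts can safely be given broad query access."""
    allow = POLICIES[role]
    return [row for row in table if allow(row)]

print(len(query("emea_analyst", TABLE)))  # 2
print(len(query("admin", TABLE)))         # 3
```

The design point is that the policy lives in one place, outside the analysts' queries, so widening access never means weakening control.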
So you can tag columns as being private so nobody can see them, and we announced a couple of new capabilities where you can provide row-based control. So only a certain set of users can see certain rows in the data, whereas a different set of users might only be able to see a different set. And so, by creating this fine-grained but unified governance model, this actually sets customers free to give broader access to the data, because they know that their policies and compliance requirements are being met, and it gets them out of the way of the analyst, the person who can actually use the data to drive some value for the business.
>> Right, they can really focus on driving value. And I always talk about monetization; however, monetization can be a generic term, and it could be saving lives, the mission of the business or the organization. I meant to ask you about Q: customers can embed QuickSight into their own apps?
>> Yes, absolutely. So one of QuickSight's key strengths is its embeddability, and it's also serverless, so you can embed it at really massive scale. And so we see customers, for example, like Blackboard, embedding QuickSight dashboards into information they're providing to thousands of educators, to give data on the effectiveness of online learning, for example. And you can embed Q into that capability. So it's a really cool way to give a broad set of people the ability to ask questions of data without requiring them to be fluent in things like SQL.
>> If I can ask you a question, we've talked a little bit about data movement. I think at last year's re:Invent you guys announced RA3, and I think it made general availability this year. And I remember Andy speaking about it, talking about the importance of having big enough pipes when you're moving data around, and of course you're doing tiering.
You also announced AQUA, the Advanced Query Accelerator, which kind of reduces that movement by bringing the compute to the data, I guess, is how I would think about it. But then we're talking about Glue Elastic Views, where you're copying and moving data. How are you ensuring, maintaining, that maximum performance for your customers? I mean, I know it's an architectural question, but as an analytics professional, you have to be comfortable that that infrastructure is there. So what's AWS's general philosophy in that regard?
>> So there are a few ways that we think about this, and you're absolutely right. I think data volumes are going up, and we're seeing customers going from terabytes to petabytes, and even people heading into the exabyte range. There's really a need to deliver performance at scale. And the reality of customer architectures is that customers will use purpose-built systems for different best-in-class use cases. And if you're trying to do a one-size-fits-all thing, you're inevitably going to end up compromising somewhere. And so the reality is that customers will have more data, they're going to want to get it to more people, and they're going to want their analytics to be fast and cost effective. And so we look at strategies to enable all of this. So, for example, Glue Elastic Views is about moving data, but it's about moving data efficiently. What we do is allow customers to define a view that represents the subset of their data they care about, and then we only look to move changes, as efficiently as possible. So you're reducing the amount of data that needs to get moved and making sure it's focused on the essential. Similarly, with AQUA, what we've done, as you mentioned, is we've taken the compute down to the storage layer, and we're using our Nitro chips to help with things like compression and encryption. And then we have FPGAs inline to allow filtering and aggregation operations. So again, you're trying to quickly and effectively get through as much data as you can, so that you're only sending back what's relevant to the query that's being processed. And that again leads to more performance: if you can avoid reading a byte, you're going to speed up your queries. And that's what AQUA is trying to do. It's trying to push those operations down, so that you're really reducing data as close to its origin as possible and focusing on what's essential. And that's what we're applying across our analytics portfolio. I would say one other piece we're focused on with performance is really about innovating across the stack. So you mentioned network performance: we've got 100 gigabits per second throughput now with the newest instances, and then with things like Graviton2 we're able to drive better price performance for customers for general-purpose workloads. So it's really innovating at all layers.
>> It's amazing to watch. I mean, it's an incredible engineering challenge as you build this hyper-distributed system that's now, of course, going to the edge. I want to come back to something you mentioned, and I do want to hit on your leadership session as well. But you mentioned the one-size-fits-all system, and I've asked Andy Jassy about this. I've had a discussion with many folks about it, because, as you mentioned, you're going to have to make trade-offs if it's one size fits all. The flip side of that is, okay, it's simple, the Swiss Army knife of databases, for example. But your philosophy at Amazon is you want to have fine-grained access to the primitives, in case the market changes, so you're able to move quickly. So that puts more pressure on you to then simplify. You're not going to build this big hairball abstraction layer. That's not what you're going to do.
I think about layers and layers of paint; I live in a very old house, and that's not your approach. So it puts greater pressure on you to constantly listen to your customers, and they're always saying, hey, I want to simplify, simplify, simplify. We certainly heard that again in Swami's presentation the other day, all about minimizing complexity. So that really is your trade-off. It puts pressure on Amazon engineering to continue to raise the bar on simplification. Is that a fair statement?
>> Yeah, I think so. I mean, I think any time we can do work so our customers don't have to, that's a win for both of us, because I think we're delivering more value, and it makes it easier for our customers to get value from their data. We absolutely believe in using the right tool for the right job. And you talked about an old house; you're not going to build or renovate a house with a Swiss Army knife. It's just the wrong tool. It might work for small projects, but you're going to need something more specialized to handle the things that matter. And that is really what we see with that set of capabilities. So we want to provide customers with the best of both worlds. We want to give them purpose-built tools, so they don't have to compromise on performance or scale or functionality. And then we want to make it easy to use these together, whether it's about data movement or things like federated queries, where you can reach into each of them through a single query and through a unified governance model. So it's all about stitching those together.
>> Yeah, so far you've been on the right side of history. I think it serves you and your customers well. I want to come back to your leadership discussion, your leadership session. What else can you tell us about what you covered there?
>> So we've actually had a bunch of innovations across the analytics stack. Some of the highlights are in EMR, which is our managed Spark and Hadoop service. We've been able to achieve 1.7x better performance than open source with our Spark runtime, so we've invested heavily in performance. And now EMR is also available for customers who are running in a containerized environment, so we announced EMR on EKS, and then an integrated development environment, a studio for EMR, called EMR Studio. So we're making it easier both for people at the infrastructure layer to run EMR on their EKS environments and make it available within their organizations, but also simplifying life for data analysts and folks working with data, so they can operate in that studio and not have to mess with the details of the clusters underneath. And then a bunch of innovation in Redshift. We talked about AQUA already, but we also announced data sharing for Redshift. So this makes it easy for Redshift clusters to share data with other clusters without putting any load on the central producer cluster. And this also speaks to the theme of simplifying getting data from point A to point B. So you could have central producer environments publishing data, which represents the source of truth, into other departments within the organization, and they can query the data and use it. It's always up to date, but it doesn't put any load on the producers. That enables these really powerful data sharing and downstream data monetization capabilities, like you've mentioned. In addition, as Swami mentioned in his keynote, there's Redshift ML, so you can now essentially train and run models that were built in SageMaker, and optimized, from within your Redshift clusters. And then we've also automated all of the performance tuning that's possible in Redshift.
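The pattern behind Redshift ML, exposing a trained model as a function you can call from ordinary SQL, can be illustrated generically with SQLite's user-defined functions. The `predict_churn` rule below is a trivial stand-in for a real SageMaker-trained model, and this is not Redshift's syntax, just the in-query-inference idea in miniature.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, monthly_spend REAL)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, 120.0), (2, 15.0), (3, 300.0)])

def predict_churn(monthly_spend):
    """Stand-in for a trained model; a real deployment would call a model
    trained elsewhere rather than this hard-coded threshold rule."""
    return 1 if monthly_spend < 50 else 0

# Register the "model" as a SQL function, so analysts can call it in-query
# without leaving SQL.
db.create_function("predict_churn", 1, predict_churn)

rows = db.execute(
    "SELECT id FROM customers WHERE predict_churn(monthly_spend) = 1"
).fetchall()
print(rows)  # [(2,)]
```

The appeal of the pattern is that the person writing the query never touches the ML tooling; inference is just another SQL function.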
So we've really invested heavily in price performance, and now we've automated all of the things that make Redshift the best-in-class data warehouse service from a price performance perspective, up to 3x better than others. Customers can just set Redshift to auto, and it'll handle workload management, data compression, and data distribution, making it easier to access all of that performance. And then the other big one was in Lake Formation. We announced three new capabilities. One is transactions, enabling consistent ACID transactions on data lakes, so you can do things like inserts, updates, and deletes. We announced row-based filtering for fine-grained access control in that unified governance model, and then automated storage optimization for data lakes. So if customers are dealing with unoptimized small files that are coming off streaming systems, for example, Lake Formation can auto-compact those under the covers, and you can get a 78x performance boost. It's been a busy year for analytics.
>> I'll say that it has. Great job, and thanks so much for coming back on theCUBE and sharing the innovations. Great to see you again, and good luck in the coming year.
>> Well, thank you very much. Great to be here, great to see you, and I hope we get to see each other in person again soon.
>> I hope so. All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE. We'll be right back after this short break.
Bala Kuchibhotla, Nutanix | Nutanix .NEXT EU 2019
>> Live from Copenhagen, Denmark, it's theCUBE, covering Nutanix .NEXT 2019. Brought to you by Nutanix.
>> Welcome back, everyone, to theCUBE's live coverage of Nutanix .NEXT here at the Bella Center in Copenhagen. I'm your host, Rebecca Knight, co-hosting alongside Stu Miniman. We're joined by Bala Kuchibhotla. He is the VP and GM of Nutanix Era and business critical apps at Nutanix. Thanks so much for coming on the show.
>> It's an honor to come here and talk to you guys.
>> So you were up on the main stage this morning. You did a fantastic job doing some demos for us. But up there you talked about your data: your data is gold. And you said there are four P's, the challenges of mining and refining it. Do you want to go through those for our viewers?
>> Definitely. So for every business critical app, data is gold; it's literally money for a lot of people. Now, similar to how gold gets processed, there's a lot of hazardous mining that happens before you finally get the processed gold. To me, it's very similar for business critical apps: the database systems have to be managed in a way that gives you the most efficient, elegant way of getting the data back. Now, the four pains that I see for managing data. Businesses start with provisioning. Even today, some of the biggest companies that I talk to take about 3 to 5 weeks to provision a database. The ticket passes from the infrastructure team, compute, networking, storage, to the database team and the database administration team. That's silo number one. Number two is proliferation, and it's very consistent: pretty much every big company I talk to has about 8 to 10 copies of the data, for analytics, QA, dev, staging, whatever it is. It's like when you take a photo and put it on WhatsApp and your friends download it; they're basically doing a copy of the data.
Essentially, that 4 MB photo becomes 40 MB in no time on WhatsApp. It's the same thing that happens for databases: databases get cloned and refreshed all the time. But this seemingly simple copy-paste operation — clone, copy — becomes the most dreaded, complex, long-running, error-prone process. And I see dedicated DBAs just doing cloning. That's another thing. And then the lineage problem: someone is cloning the data to somewhere, and I don't know where the data is coming from, that kind of stuff. The third pain that we talk about is protection. Actually, to me it's the number one and number two problem, but I'm just putting it third. If you're running databases — and if you're running mission-critical databases — your ability to restore the database to any point in time is an absolute must. Otherwise you can't even call it a database. The question is not whether the technologies have this kind of protection — they already do — but, on our Nutanix platform, on our cloud platform, can they be efficient and elegant? Can we take out some of the pain in this whole process? That's what we're talking about. And the last one is a big-company problem. Anyone who has dozens of databases can empathize with me: how painful it is to patch, how painful it is to get your compliance going, to holistically manage an intent-driven database service, that kind of stuff. So these are the four things: if you solve them, your databases are one step — actually a lot of steps — closer to database-as-a-service. That's what I see. >>Bala, it's interesting. You know, you spent a lot of time working for, you know, the big database company out there. There is no shortage of options out there for databases. When I talk to most enterprises, it's not one database they have — they now have, you know, often dozens of databases. So explain to us:
why, now, there's still an unmet need in the marketplace that Nutanix is looking to help fill. >>So you're absolutely right on the dot that there are lots of database technologies — and that actually compounds the problem, because all these big enterprise companies have specialist administrators for Oracle, for Postgres, for MariaDB, for MySQL, for SQL Server. Now there's a new breed of databases in NoSQL — MongoDB, you know — so somebody manages MongoDB, somebody manages MarkLogic, and stuff like that. So, I personally think databases need to become something like electricity, right? Most of these banks and telcos, all the companies that we talk to — the database is just a means to an end for them. They should focus on the business logic, creating those business-value applications, and databases should be more like: OK, I can just manage them with almost no touch, hands-off. But these technologies were created around 20 years back, and there it kind of stopped. So that is what we're trying to talk about: when you have a powerful platform like Nutanix that abstracts the storage and solves some of the fundamental problems, the database technologies upstream can take advantage of it. We combine the database APIs — the vendor APIs — with the strength of the Nutanix platform to give that simplicity, essentially. So that's what I see. We're not inventing new databases; we're trying to simplify the databases, if that's the way to put it. >>And help make sure we understand that, you know, Nutanix isn't just building the next great lock-in, you know, from top to bottom. You know, Nutanix can provide it, but optionality is a word that Nutanix uses. >>We live and die by choice and freedom for the customers. In fact, I made this one of the fundamental design principles, even for Era: we use the APIs provided by the database vendors. For example, for Oracle, we just use RMAN APIs.
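The vendor-API approach Bala mentions typically means quiescing the database with its own commands around a storage-level snapshot. A hedged sketch of that general pattern — the Oracle `ALTER DATABASE BEGIN/END BACKUP` statements are real hot-backup SQL, but `issue_sql()` and `take_storage_snapshot()` are hypothetical stand-ins for a SQL driver and the platform's snapshot API, not actual Era calls:

```python
# Quiesce with the database's own hot-backup commands, snapshot at the
# storage layer, then resume — so the snapshot is application-consistent.

statements = []  # stands in for a real SQL session (e.g. sqlplus or a driver)

def issue_sql(stmt: str) -> None:
    statements.append(stmt)

def take_storage_snapshot(volume: str) -> str:
    return f"snap:{volume}"     # hypothetical platform hook; returns a snapshot id

def app_consistent_snapshot(volume: str) -> str:
    issue_sql("ALTER DATABASE BEGIN BACKUP")    # datafiles safe to copy from here
    try:
        return take_storage_snapshot(volume)
    finally:
        issue_sql("ALTER DATABASE END BACKUP")  # always resume normal writes

snap = app_consistent_snapshot("oradata")
print(snap)  # snap:oradata
```

The `try/finally` matters: the database must leave backup mode even if the snapshot fails, which is why orchestration tools wrap the vendor commands rather than replace them.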
We put the database in backup mode using RMAN APIs, and then we take the snapshot on the platform. Once the database is in backup mode, we take snapshots, so the data is pretty much like an RMAN-taken backup — an image-based backup, essentially — so the customer is not locked in. The second one is: if the customer wants to go to other clouds, or even other technologies, that kind of stuff, we will provide a way to just migrate. So that's one of the things that I want to emphasize: we're not here to lock in any customer. In fact, the choice is yours. In fact, I emphasize: if the customer has their compute environment on ESXi, we're more than happy — we can support ESXi. ESXi or AHV, both are equal for us. All we need is AOS underneath Era, because AOS is where we leverage a lot of the platform — the patented Nutanix technology — and we pass those benefits on. Down the road, where we're trying to get to is: we'll have Nutanix clusters on AWS and GCP, and as you see, customers can move databases from an on-premises private cloud platform, through hybrid cloud, to other clusters, and then they can bring the databases back. That's how we protect the customers' investment. >>Yeah. I mean, I'm curious about your commentary. When you go listen to the big cloud player out there, you know, they tell you how many hundreds of thousands of databases they've migrated. When I talk to customers and they think about their workloads, migrations are going to happen even more often, and it's not a one-way thing — it's often moving around, and things change. So can we get there for the database? Because usually it's like, well, isn't it easier for me to move my compute to my data? You know, data has gravity — there's a lot of, you know, physics to deal with there. >>See, what is happening with the hyperscalers is, they're asking the applications
to be rewritten against cloud-native databases. Obviously, if you are writing an application against cloud-native databases — say Aurora, or even GCP Bigtable — you're pretty much locked in, because for that application to come back down from there: there's no Aurora, there's no Bigtable on-premises. Whereas what we're trying to say is: the more common apps — the Oracles, the SQL Servers — we're trying to simplify. We're trying to bring the simplicity to them, so if they can run in the cloud, they can run on-prem. So that's how we protect the investment: there is not much new engineering that needs to be done for your apps — as-is, we can move them. The only thing is, we're taking out the pain of mobility, leveraging our platform. So obviously you can run your apps as-is — Oracle applications — on the public cloud, like Oracle Cloud, and if you feel like you want to do it on-prem, we can do it on-prem, that kind of stuff. And to protect the investment for the customers, we do have brownfield support. That means you can have the database running on your Exadata, and you can move capacity — meaning tier-two, tier-three environments — onto Nutanix using our Time Machine technology. So we give the choice to customers. >>So thinking about this truly virtualized DB: what are some of the things you're hearing from customers here at .NEXT Copenhagen? What are their pain points? I mean, in addition to those four P's, what are some of the next-generation problems that you're trying to solve here? >>So first of all, the customers come and acknowledge that this is true database virtualization. What happened is, virtualization was all about compute. And when we solved the compute virtualization problem, you threw in a database server and then tried to run the databases — you're not really solving the problem of the data. With Nutanix, our DNA is in data.
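The Time Machine capability mentioned above, and the "restore to any point in time" requirement from the four P's, rest on a standard technique: pick the newest snapshot taken at or before the target time, then roll forward through transaction logs. A generic sketch of that technique — not Era Time Machine's internals:

```python
# Plan a point-in-time restore from a sorted list of snapshot timestamps:
# base snapshot = newest one at or before the target; logs replay the rest.
from bisect import bisect_right

def plan_restore(snapshot_times, target):
    """snapshot_times: sorted epoch seconds of snapshots.
    Returns (base_snapshot, log_replay_window)."""
    i = bisect_right(snapshot_times, target)
    if i == 0:
        raise ValueError("no snapshot precedes the target time")
    base = snapshot_times[i - 1]
    return base, (base, target)    # replay logs from base up to the target

snaps = [100, 200, 300]            # e.g. periodic snapshots
base, window = plan_restore(snaps, 250)
print(base, window)  # 200 (200, 250): start from the 200 snapshot, replay 50s of log
```

Snapshots alone give restore points only where a snapshot exists; it is the log replay between snapshots that makes *any* point in time reachable.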
So we started out by pioneering storage virtualization, and then extended to files and objects. Now we're extending into databases — making application-native virtualization, database virtualization — leveraging the storage virtualization and combining that with compute virtualization. We think that we have made an honest effort to virtualize databases. Now, the trend that I see is: everyone is moving — everyone wants a cloud-like experience. It's not that they want to go to the cloud, but they want the cloud-like agility: that one-click simplicity, that consumer-grade experience for the databases. "I would like to manage my databases in a self-service manner." So we took both these dimensions. We made an honest effort to make the databases truly virtualized — that's the copy data management and all of that — and then coupled it with how cloud works: the ability to provision self-service, the ability to manage your backups self-service, the ability to patch self-service. And customers love it, and they want us to take it to new engines. One of the other things that we see get traction with Era is this: all the new databases generally are Postgres and the like, but there's a lot of data on Sybase, there's a lot of data on MS SQL, there's a lot of data on DB2, too. Why don't they enjoy the same kind of experience for those databases? What did they do wrong? So can we give those databases the cloud-like experience, and true virtualization, on the platform? That's what customers are asking for. Obviously, they will ask for more and more — DR kinds of facilities and other stuff — and those are in the roadmap that we will be able to deliver. >>One
of the questions we've had this week: as Nutanix builds out some of these application software pieces, not just infrastructure software, the go-to-market tends to be a little bit different. We had an interesting conversation with Wipro — they're wrapping a service around Era — so that seems like a really good way to reach customers that might not even know Nutanix. Tell us, you know, how is that going? Is there an overlay sales force? Is it some of the strategic channel and partnership engagements? You know, because this is not the traditional Nutanix. >>So obviously Nutanix is known — and made its name and fame — for infrastructure as a service. So it's really a challenge for our salespeople to talk the database language. That was the doubt that I heard when I kind of started my journey at Nutanix: OK, we will build a product, but how are you going to sell it? Do we get that kind of sales force? But believe me, we're making multimillion-dollar deals, mainly led by the application-centricness. So I could talk about federal government agencies that made purchases because it was a differentiation for them. We're talking about a big telco company in Europe trying to replace their big database appliances, because Era makes the difference — we're providing almost 2x the value at almost half the price. So the pain point is real. The question is, can we translate that and connect with the right kind of customer? So we do have a sales overlay for my division — they speak the database language. Obviously we're very early in the game, so we will have a select few people in highly dense or important geographic regions who go after that. But I also work with channels, work with partners — GSIs like Wipro, HCL, and others, that kind of stuff — and find the best people to leverage and take this whole thing to practice as a solution. In fact, companies like the GSIs — these are people who can take Era and offer a managed database service, right? So we have a product; people can build a cloud with it. But with a partner, they can offer it. In a word: why do you want to go to the public cloud? I can provide the same cloud
managed database service on an OpEx model, that kind of stuff. So we're kind of firing on all cylinders in this sense, but very selectively, very focused. And I really believe that customers will understand this message: that Nutanix is not just the infrastructure, but it's a cloud — it's a cloud platform — where I can layer products the way Microsoft layered the Office Suite on Microsoft's operating system. Think about that. That's the kind of full power that we think we can make happen. >>And who are — you know, you said you're going in very targeted. Who are these target customers? Without naming names, what kinds of businesses are they? You know, how big are they? What kinds of challenges are they looking at? >>Well, the early customers — we're hardly in the third quarter of this business — but financial services is big. The pain point of data mismanagement is so acute there; capacity limitation is a huge thing. They are spending hundreds of millions of dollars on this, that kind of stuff — and can they extract efficiencies out of this whole investment? The second thing is manufacturing and telco. And obviously federal is one of the biggest friends of Nutanix, and I happened to pitch in, and the agencies loved it. And they said: is it real? Let's do a real demo. And then, let's make it happen. They actually tested the product and they are taking it. So the ERP space — where they run Oracle, where they run big SQL Server, that kind of stuff. This is what we're seeing.
But when we talked to all these customers on, I talked to see Ables and City Walls. They love it. They wanted to say that Hey, Kanna, how around more engines? Right? So that's one will live. But more importantly, they do have practices. They have their closest vehicles that they want to have single pane of management, off era managing data basis across. So the multi cluster capability, what we call that's like equal and a prison central which manage multiple excesses. They weren't error to manage multiple clusters that manage daily basis, right? That's number one. That's big for a product with in one year that we regard to that stage. Second thing was, obviously, people and press customers expect rule rule based access control. But this is data, so it's not a simple privilege, and, uh, you would define the roles and religious and then get it over kind of stuff. You do want to know who is accessing the data, whether they can access the data and where they can accident. We want to give them freedom to create clones and data kind of act. Give the access to data, but in a country manor so they can clone on their cure. Clusters there need to file a huge big ticket with Wait for two weeks. They can have that flexibility, but they can manage the data at that particular fear class. So this is what we call D a M Data access management. It's like a dam on the like construct on the river, control flow of the water and then channel is it to the right place and right. But since Canister, so that's what we're trying to do for data. That's the second big thing that we look for in the attitude. Otto. Obviously, there's a lot off interest on engines. Expand both relation in Cecil has no sequel are We are seeing huge interest in recipe. Hannah. We're going to do it in a couple of months. You'll have take review monger. Dubious. The big big guy in no sequel space will expand that from long. 
from MongoDB to MarkLogic and other stuff. And even DB2 — there's a lot of interest. I'm just looking for committed customers who are willing to put the dollars on the table, and we're going to roll it out. That's the beauty of Era: we're not just talking about cloud-native databases — just Postgres and that kind of stuff. All this innovation that happened over the last 30-40 years — we can renew it for the new age. >>Great. Well, Bala, thank you so much for coming on theCUBE. >>Thank you. >>I'm Rebecca Knight, for Stu Miniman. Stay tuned for more of theCUBE's live coverage of Nutanix .NEXT.